Mar 13, 2013

GMPLS Operation and Deployment Challenges


GMPLS extends the forwarding, traffic engineering, and quality-of-service (QoS) capabilities of MPLS beyond packet-based networks by creating virtual label-switched paths (LSPs) across a network of label switching routers (LSRs) to optical network devices that use time-division multiplexing (TDM), fiber switching, and lambda switching. In a GMPLS network it is therefore possible to find and provision end-to-end paths that traverse different network types. For example, a packet/cell-based LSP can be nested in a TDM-based LSP for transport over a SONET network. The TDM-based LSP can similarly be nested in a lambda-based LSP for transport over a wavelength network. Multiple lambda switch-capable LSPs can in turn be nested within a fiber switch-capable LSP set up between two fiber switching elements. This forwarding hierarchy of nested LSPs allows service providers to transparently send different types of traffic over various types of network segments.
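The nesting rule behind this hierarchy can be sketched in a few lines (a toy model, not any real GMPLS implementation; the type names and their ordering are illustrative):

```python
# Switching capabilities, finest to coarsest, mirroring the forwarding
# hierarchy described above: packet -> TDM -> lambda -> fiber.
HIERARCHY = ["packet", "tdm", "lambda", "fiber"]

def can_nest(inner: str, outer: str) -> bool:
    """An inner LSP may only be carried by an LSP of a coarser type."""
    return HIERARCHY.index(inner) < HIERARCHY.index(outer)

# A packet LSP nested in a TDM LSP (SONET), nested in a lambda LSP,
# nested in a fiber LSP -- every adjacent pair must be a valid nesting.
path = ["packet", "tdm", "lambda", "fiber"]
assert all(can_nest(i, o) for i, o in zip(path, path[1:]))
assert not can_nest("lambda", "tdm")   # the reverse nesting is invalid
```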

GMPLS introduces the Link Management Protocol (LMP) to manage and maintain the health of the control and data planes between two neighboring nodes. LMP is an IP-based protocol; GMPLS also defines extensions to the Resource Reservation Protocol Traffic Engineering (RSVP-TE) and Constraint-Based Label Distribution Protocol (CR-LDP) signaling protocols.

GMPLS provides the ability to automate many of the network functions that are directly related to operational complexities, including:

• End-to-end provisioning of services

• Network resource discovery

• Bandwidth assignment

• Service creation


Traffic engineering parameters relating to SONET protection support, available bandwidth, route diversity, and QoS are distributed throughout the network. This allows every node in the network to have full visibility and configuration status of every other node. This ultimately provides an intelligent optical network.

As service providers introduce new network elements into their networks, add or remove facilities, or turn up new circuits, the control plane will automatically distribute and update the network with the new information. Contrast this with the operationally intensive manual upgrades and updates performed today. Provisioning of connections often requires a substantial amount of coordination among operations staff located throughout the network. Capacity is assessed, optimal connection and restoration paths are determined, and the connection must be fully tested after it is established.

In contrast with operationally intensive manual upgrades and updates, GMPLS uses advanced routing features, including the Open Shortest Path First (OSPF) protocol and Intermediate System-to-Intermediate System (IS-IS) protocol and signaling protocols such as RSVP and CR-LDP to build intelligence into the network. The network can then effectively self-discover to dynamically advertise the availability or lack of availability of resources. With such capabilities, multihop connections with optical routes and backup paths can be established in a single provisioning step.

Cisco Dynamic Packet Transport (DPT) / Resilient Packet Ring (RPR)

  • DPT/RPR uses two symmetric counter-rotating fiber rings. Each fiber ring can be used concurrently to pass both data and control packets, and data can be sent on both rings simultaneously. The rings are referred to as “counter-rotating” because traffic travels in opposite directions on them.
  • To distinguish between the two rings, one fiber ring is referred to as the “inner” ring and the other as the “outer” ring. By convention, the outer ring sends traffic clockwise while the inner ring sends traffic counter-clockwise.
  • At the same time as data is sent (downstream) on one ring, a corresponding control packet is sent (upstream) around on the other ring. Having control packets traveling in the opposite direction on a separate ring makes it possible to restore service more quickly in the event of a failure. 
  • DPT/RPR uses the entire concatenated payload at the specified line rate. For example, at OC-48/STM-16 each fiber ring carries the full 2.5 gigabits per second (minus the SONET/SDH framing overhead bits), giving a total bandwidth of 2 × 2.5 = 5.0 gigabits per second across the DPT rings.

Mar 12, 2013

SONET Transport Hierarchy


SONET Transport Hierarchy

Each level of the hierarchy terminates its corresponding overhead fields in the SONET frame, as follows:

Section

A section is a single fiber run that can be terminated by a network element (Line or Path) or an optical regenerator.
The main function of the section layer is to properly format the SONET frames, and to convert the electrical signals to optical signals. Section Terminating Equipment (STE) can originate, access, modify, or terminate the section header overhead. (A standard STS-1 frame is nine rows by 90 bytes. The first three bytes of each row comprise the Section and Line header overhead.)
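The frame dimensions above make the STS-1 rates easy to verify (simple arithmetic, assuming the standard 8000 frames per second):

```python
# STS-1 framing arithmetic: 9 rows x 90 columns, 8000 frames/s.
ROWS, COLS, FRAMES_PER_SEC = 9, 90, 8000

frame_bytes = ROWS * COLS                        # 810 bytes per frame
line_rate = frame_bytes * 8 * FRAMES_PER_SEC     # basic STS-1 line rate
overhead_bytes = ROWS * 3                        # first 3 columns: section + line overhead
spe_rate = (frame_bytes - overhead_bytes) * 8 * FRAMES_PER_SEC

assert line_rate == 51_840_000                   # 51.84 Mb/s
assert spe_rate == 50_112_000                    # 50.112 Mb/s synchronous payload envelope
```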

Line

Line-Terminating Equipment (LTE) originates or terminates one or more sections of a line signal. The LTE does the synchronization and multiplexing of information on SONET frames. Multiple lower-level SONET signals can be mixed together to form higher-level SONET signals. An Add/Drop Multiplexer (ADM) is an example of LTE.

Path

Path-Terminating Equipment (PTE) interfaces non-SONET equipment to the SONET network. At this layer, the payload is mapped and demapped into the SONET frame. For example, an STS PTE can assemble 28 1.544 Mbps DS1 signals and insert path overhead to form an STS-1 signal.
This layer is concerned with end-to-end transport of data.

Selective Packet Discard (SPD)


Overview

Selective Packet Discard (SPD) is a mechanism to manage the process-level input queues on the Route Processor (RP). The goal of SPD is to give priority to routing protocol packets and other important traffic, such as Layer 2 keepalives, during periods of process-level queue congestion.
Historically, on platforms such as the Cisco 7x00 and non-Cisco Express Forwarding (CEF) 7500 systems, significant numbers of transit packets were forwarded by the Route Processor in order to populate the fast switching cache. Consequently, SPD was required in this case to prioritize the routing protocol packets over the transit packets which share the same queue.
Currently, on the Cisco 12000 Series Internet Router and on the 7500 running CEF, only traffic destined to the router itself is sent to process level. In this case, SPD is used to prioritize routing protocol packets when management traffic such as Simple Network Management Protocol (SNMP) is present or when a Denial of Service (DoS) attack sending traffic to the RP is occurring.

The SPD Process

On the Cisco 12000 Series, when a line card determines that an incoming packet needs to be punted to the RP for processing, the packet travels across the switch fabric as Cisco Cells and is eventually received by the Cisco Cell Segmentation and Reassembly (CSAR) Field Programmable Gate Array (FPGA).
The CSAR handles the traffic between the switch fabric and the RP CPU, and this is where the SPD checks are performed. This applies to IP packets, Connectionless Network Service (CLNS) packets, Layer 2 keepalives, and similar packets punted to the RP. SPD makes two checks and can potentially drop a packet at either of these two stages:
  • SPD state check
  • Input queue check

SPD State Check

The IP process queue on the RP is divided into two parts: a general packet queue and a priority queue. Packets placed in the general packet queue are subject to the SPD state check; those placed in the priority queue are not. Packets that qualify for the priority queue are high-priority packets, such as those of IP precedence 6 or 7, and should never be dropped. The non-qualifiers, however, can be dropped here depending on the length of the general packet queue and on the SPD state. The general packet queue can be in one of three states, and low-priority packets are serviced differently in each:
  • NORMAL: queue size <= min
  • RANDOM DROP: min < queue size <= max
  • FULL DROP: queue size > max
In the NORMAL state, neither well-formed nor malformed packets are dropped.
In the RANDOM DROP state, we randomly drop well-formed packets. If aggressive mode is configured, we drop all malformed packets; otherwise, we treat them as well-formed packets.
Note: These random drops are called SPD flushes. Basically, when the interface gets overloaded, flushes occur. Buffer misses cause the flush counter to increment.
In the FULL DROP state, we drop all well-formed and malformed packets. The minimum (default 73) and maximum (default 74) thresholds are derived from the smallest hold-queue on the chassis, but can be overridden with the global commands ip spd queue min-threshold and ip spd queue max-threshold.
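The three states and the resulting drop decision can be sketched as follows (not Cisco's implementation; the 50% drop probability in the RANDOM DROP state is invented for illustration):

```python
import random

def spd_state(qlen: int, min_thr: int = 73, max_thr: int = 74) -> str:
    """Classify the general packet queue against the default thresholds."""
    if qlen <= min_thr:
        return "NORMAL"
    if qlen <= max_thr:
        return "RANDOM DROP"
    return "FULL DROP"

def should_drop(qlen, well_formed, aggressive=False, rng=random.random):
    state = spd_state(qlen)
    if state == "NORMAL":
        return False                   # never drop in NORMAL
    if state == "FULL DROP":
        return True                    # drop everything
    if not well_formed and aggressive:
        return True                    # aggressive mode drops all malformed packets
    return rng() < 0.5                 # random drop (an "SPD flush"); probability illustrative

assert spd_state(50) == "NORMAL"
assert spd_state(74) == "RANDOM DROP"
assert spd_state(100) == "FULL DROP"
assert should_drop(74, well_formed=False, aggressive=True)
assert not should_drop(10, well_formed=True)
```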

Layer Two Tunneling Protocol - Version 3 (L2TPv3) - ICRQ

Incoming-Call-Request (ICRQ)

Incoming-Call-Request (ICRQ) is the control message sent by an LCCE to a peer when an incoming call is detected (although the ICRQ may also be sent as a result of a local event).  It is the first in a three-message exchange used for establishing a session via an L2TP control connection.

The ICRQ is used to indicate that a session is to be established between an LCCE and a peer.  The sender of an ICRQ provides the peer with parameter information for the session.  However, the sender makes no demands about how the session is terminated at the peer (i.e., whether the L2 traffic is processed locally, forwarded, etc.).

   The following AVPs MUST be present in the ICRQ:
  •     Message Type
  •     Local Session ID
  •     Remote Session ID
  •     Serial Number
  •     Pseudowire Type
  •     Remote End ID
  •     Circuit Status
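As a quick illustration, a receiver could validate the mandatory AVP set like this (a toy dictionary model, not a real L2TPv3 parser; the example values are invented):

```python
# AVPs the text lists as mandatory in an ICRQ.
REQUIRED_ICRQ_AVPS = {
    "Message Type", "Local Session ID", "Remote Session ID",
    "Serial Number", "Pseudowire Type", "Remote End ID", "Circuit Status",
}

def missing_avps(message: dict) -> set:
    """Return the mandatory AVPs absent from a received ICRQ."""
    return REQUIRED_ICRQ_AVPS - message.keys()

icrq = {"Message Type": "ICRQ", "Local Session ID": 7, "Remote Session ID": 0,
        "Serial Number": 1, "Pseudowire Type": "Ethernet",
        "Remote End ID": "ce1", "Circuit Status": "up"}
assert not missing_avps(icrq)                         # complete message
assert "Serial Number" in missing_avps({"Message Type": "ICRQ"})
```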

How BGP Graceful Restart Preserves Prefix Information During a Restart

When a router that is capable of BGP Graceful Restart loses connectivity, the following happens to the restarting router:
1. The router reestablishes BGP sessions with other routers and relearns the BGP routes from those routers that are also capable of Graceful Restart. The restarting router waits to receive updates from the neighboring routers. When the neighboring routers send end-of-Routing Information Base (end-of-RIB) markers to indicate that they are done sending updates, the restarting router starts sending its own updates.
2. The restarting router accesses the checkpoint database to find the label that was assigned for each prefix. If it finds the label, it advertises it to the neighboring router. If it does not find the label, it allocates a new label and advertises it.
3. The restarting router removes any stale prefixes after a timer for stale entries expires.
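Step 2 amounts to a lookup-or-allocate against the checkpoint database, roughly as follows (an illustrative sketch; the names and the label range are invented):

```python
def label_for(prefix, checkpoint, allocator):
    """Reuse the checkpointed (pre-restart) label if one exists,
    otherwise allocate a fresh label and record it."""
    if prefix in checkpoint:
        return checkpoint[prefix]
    label = next(allocator)
    checkpoint[prefix] = label
    return label

checkpoint = {"10.0.0.0/8": 100}          # label saved before the restart
allocator = iter(range(200, 300))         # toy label allocator

assert label_for("10.0.0.0/8", checkpoint, allocator) == 100      # reused
assert label_for("192.168.0.0/16", checkpoint, allocator) == 200  # newly allocated
```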

When a peer router that is capable of BGP Graceful Restart encounters a restarting router, it does the following:
1. The peer router sends all of its routing updates to the restarting router. When it has finished sending updates, the peer router sends an end-of-RIB marker to the restarting router.
2. The peer router does not immediately remove the BGP routes learned from the restarting router from its BGP routing table. As it relearns the prefixes from the restarting router, the peer refreshes the stale routes if the new prefix and label information matches the old information.

Layer 2 VPNs Cisco IOS MPLS Virtual Private LAN Service

The signaling requirements of VPLS:

The virtual circuit setup uses the same LDP signaling mechanism defined for point-to-point services. Using a directed LDP session, each provider edge advertises a virtual circuit label mapping that is used as part of the label stack imposed on the Ethernet frames by the ingress provider edge during packet forwarding.

The reachability information distributed in a VPLS

Cisco VPLS does not require the exchange of reachability (MAC addresses) information via a signaling protocol. This information is learned from the data plane using standard address learning, aging, and filtering mechanisms defined for Ethernet bridging. However, the LDP signaling used for setting up and tearing down the virtual circuits can be used to indicate to a remote provider edge that some or all MAC addresses learned over a virtual circuit need to be withdrawn from the VSI. This mechanism provides a convergence optimization over the normal address aging that would eventually flush the invalid addresses.
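The withdrawal optimization can be pictured with a toy VSI table (illustrative only; the MAC addresses and VC names are invented):

```python
class VSI:
    """Toy virtual switching instance: data-plane MAC learning plus
    the LDP-driven MAC withdrawal described above."""

    def __init__(self):
        self.macs = {}                        # mac -> virtual circuit

    def learn(self, mac, vc):
        self.macs[mac] = vc                   # standard source-MAC learning

    def withdraw(self, vc):
        # Flush every MAC learned over a withdrawn VC at once, instead
        # of waiting for the entries to age out (convergence optimization).
        self.macs = {m, } if False else {m: v for m, v in self.macs.items() if v != vc}

vsi = VSI()
vsi.learn("aa:bb:cc:00:00:01", "vc-to-pe2")
vsi.learn("aa:bb:cc:00:00:02", "vc-to-pe3")
vsi.withdraw("vc-to-pe2")
assert list(vsi.macs) == ["aa:bb:cc:00:00:02"]
```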

THE G.709 OPTICAL TRANSPORT NETWORK - Optical Data Unit (ODU)

Optical Data Unit (ODU)

The ODU overhead is broken into several fields: RES, PM, TCMi, TCM ACT, FTFL, EXP, GCC1/GCC2 and APS/PCC. The reserved (RES) bytes are undefined and are set aside for future applications.

  • The path monitoring (PM) field is similar to the SM field described above. It contains the TTI, BIP-8, BEI, BDI and Status (STAT) sub-fields.
  • There are six tandem connection monitoring (TCMi) fields that define the ODU TCM sub-layer, each containing TTI, BIP-8, BEI/BIAE, BDI and STAT sub-fields associated to each TCM level (i=1 to 6). The STAT sub-field is used in the PM and TCMi fields to provide an indication of the presence or absence of maintenance signals.
  • The tandem connection monitoring activation/deactivation (TCM ACT) field is currently undefined in the standards. The fault type and fault location reporting communication channel (FTFL) field is used to create a message spread over a 256-byte multiframe. It provides the ability to send forward and backward path-level fault indications.
  • The experimental (EXP) field is a field that is not subject to standards and is available for network operator applications.
  • General communication channels 1 and 2 (GCC1/GCC2) fields are very similar to the GCC0 field except that each channel is available in the ODU.
  • The automatic protection switching and protection communication channel (APS/PCC) supports up to eight levels of nested APS/PCC signals, which are associated to a dedicated-connection monitoring level depending on the value of the multiframe.

CRC Troubleshooting Guide for ATM Interfaces

Reasons for ATM CRC Errors

The following are some potential reasons for ATM CRC errors:

  • Dropped cells due to traffic policing in the ATM cloud on one or more VCs attached to the ATM interface.
  • Noise, gain hits, or other transmission problems on the data-link equipment.
  • A faulty or failing ATM interface.

The show interfaces command output displays the CRC error count. These errors indicate that when the SAR reassembles the packet and checks the CRC, the calculated CRC value does not match the value in the reassembled packet's CRC field.
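The reassembly check amounts to recomputing the CRC over the reassembled payload and comparing it with the trailer value. The sketch below uses zlib.crc32 as a stand-in for the AAL5 CRC-32 and ignores the real trailer layout:

```python
import zlib

def crc_ok(payload: bytes, trailer_crc: int) -> bool:
    """Recompute the CRC over the reassembled payload and compare it
    with the value carried in the trailer."""
    return zlib.crc32(payload) == trailer_crc

pdu = b"hello, atm"
good = zlib.crc32(pdu)
assert crc_ok(pdu, good)
# A dropped or corrupted cell changes the reassembled payload,
# so the recomputed CRC no longer matches -- a CRC error is counted.
assert not crc_ok(pdu[:-1], good)
```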

Mar 11, 2013

HDLC Operational Modes


HDLC offers three different modes of operation. These three modes of operations are:
  • Normal Response Mode (NRM)
  • Asynchronous Response Mode (ARM)
  • Asynchronous Balanced Mode (ABM)
Normal Response Mode
This is the mode in which the primary station initiates transfers to the secondary station. The secondary station can transmit a response when, and only when, it is instructed to do so by the primary station. In other words, the secondary station must receive explicit permission from the primary station before transferring a response. After receiving permission, the secondary station initiates its transmission, which may be much more than just an acknowledgment of a frame; it may in fact be more than one information frame. Once the last frame is transmitted by the secondary station, it must once again wait for explicit permission from the primary station before transferring anything. Normal Response Mode is only used within an unbalanced configuration.

Asynchronous Response Mode
In this mode, the primary station doesn't initiate transfers to the secondary station. In fact, the secondary station does not have to wait for explicit permission from the primary station to transfer frames, and those frames may be more than just acknowledgment frames; they may contain data or control information regarding the status of the secondary station. This mode can reduce overhead on the link, as no frames need to be transferred to give the secondary station permission to initiate a transfer. However, some limitations do exist. Because this mode is asynchronous, the secondary station must wait until it detects an idle channel before it can transfer any frames; this applies when the ARM link is operating at half-duplex. If the ARM link is operating at full-duplex, the secondary station can transmit at any time. In this mode, the primary station still retains responsibility for error recovery, link setup, and link disconnection.

Asynchronous Balanced Mode
This mode uses combined stations. There is no need for permission on the part of any station in this mode. This is because combined stations do not require any sort of instructions to perform any task on the link.
Normal Response Mode is used most frequently in multi-point lines, where the primary station controls the link. Asynchronous Response Mode is better for point to point links, as it reduces overhead. Asynchronous Balanced Mode is not used widely today.
The "asynchronous" in both ARM and ABM does not refer to the format of the data on the link. It refers to the fact that any given station can transfer frames without explicit permission or instruction from any other station. 
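The NRM permission rule can be demonstrated with a toy secondary station (illustrative only; real HDLC carries permission in the poll/final bits of the control field):

```python
class Secondary:
    """Toy NRM secondary: transmits only after a poll from the primary."""

    def __init__(self, frames):
        self.frames = list(frames)
        self.polled = False

    def poll(self):
        self.polled = True               # explicit permission from the primary

    def transmit(self):
        if not self.polled:
            return []                    # NRM: no permission, no transmission
        self.polled = False              # permission is consumed by the burst
        sent, self.frames = self.frames, []
        return sent

s = Secondary(["I-frame 1", "I-frame 2"])
assert s.transmit() == []                # nothing until the primary polls
s.poll()
assert s.transmit() == ["I-frame 1", "I-frame 2"]   # may be several I-frames
assert s.transmit() == []                # must wait for the next poll
```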

Class-Based Tunnel Selection: CBTS

Class-Based Tunnel Selection: CBTS

  • EXP-based selection between multiple tunnels to same destination
  • Local mechanism at head-end (no IGP extensions)
  • Tunnel master bundles tunnel members
  • Tunnel selection configured on tunnel master (auto-route, etc.)
  • Bundle members configured with EXP values to carry
  • Bundle members may be configured as default
  • Supports VRF traffic, IP-to-MPLS and MPLS-to-MPLS switching paths

Reference:
  • http://meetings.apnic.net/__data/assets/pdf_file/0010/45010/MPLS-TE.pdf

Mar 10, 2013

Sink Holes - Understand And Analyze Your Network

Sinkhole Routers/Networks

•Sinkholes are a topological security feature—somewhat analogous to a honeypot
•Router or workstation built to suck in traffic and assist in analyzing attacks (original use)
•Used to redirect attacks away from the customer—working the attack on a router built to withstand the attack
•Used to monitor attack noise, scans, data from misconfiguration and other activity (via the advertisement of default or unused IP space)
•Traffic is typically diverted via BGP route advertisements and policies
•Leverage instrumentation in a controlled environment—Pull the traffic past analyzers/analysis tools

Why Sinkholes?

•They work! Providers, enterprise operators and researchers use them in their network for data collection and analysis
•More uses are being found through experience and individual innovation
•Deploying sinkholes correctly takes preparation

BGP Trigger

•Leverage the same BGP technique used for RTBH
•Dedicated trigger router redistributes a more specific route for the destination being rerouted - Next-hop set via route-map
•All BGP-speaking routers receive update
•Complex design can use multiple route-maps and next-hops to provide very flexible designs

Anycast and Sinkholes

•Sinkholes are designed to pull in traffic, potentially large volumes
•Optimal placement in the network requires mindful integration and can have substantial impact on network performance and availability
•A single sinkhole might require major re-engineering of the network
•Anycast sinkholes provide a means to distribute the load throughout the network



MPLS VPN - Route Target Rewrite

Prerequisites for MPLS VPN - Route Target Rewrite


The MPLS VPN - Route Target Rewrite feature requires the following:

You should know how to configure Multiprotocol Virtual Private Networks (MPLS VPNs).

You need to configure your network to support interautonomous systems (Inter-AS) with different route target (RT) values in each autonomous system (AS).

You need to identify the RT replacement policy and target router for each AS.

Route Target Replacement Policy


Routing policies for a peer include all configurations that may impact inbound or outbound routing table updates. The MPLS VPN - Route Target Rewrite feature can influence routing table updates by allowing the replacement of route targets on inbound and outbound BGP updates. Route targets are carried as extended community attributes in BGP Virtual Private Network IP Version 4 (VPNv4) updates. Route target extended community attributes are used to identify a set of sites and VPN routing/forwarding instances (VRFs) that can receive routes with a configured route target.

In general, ASBRs perform route target replacement at autonomous system borders when the ASBRs exchange VPNv4 prefixes. You can also configure the MPLS VPN - Route Target Rewrite feature on PE routers and RR routers.

Figure 1 shows an example of route target replacement on ASBRs in an MPLS VPN Inter-AS topology. This example includes the following configurations:

PE1 is configured to import and export RT 100:1 for VRF VPN1.

PE2 is configured to import and export RT 200:1 for VRF VPN2.

ASBR1 is configured to rewrite all inbound VPNv4 prefixes with RT 200:1 to RT 100:1.

ASBR2 is configured to rewrite all inbound VPNv4 prefixes with RT 100:1 to RT 200:1.

Figure 1 Route Target Replacement on ASBRs in an MPLS VPN Inter-AS Topology
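ASBR1's inbound rewrite from this example can be pictured as a straightforward substitution over the RT extended communities of each received VPNv4 update (a toy sketch; on a real router this is expressed with route-map policy):

```python
def rewrite_rts(ext_comms, rules):
    """Replace matching route-target extended communities on an
    inbound VPNv4 update; non-matching RTs pass through unchanged."""
    return [rules.get(rt, rt) for rt in ext_comms]

# ASBR1: rewrite all inbound VPNv4 prefixes with RT 200:1 to RT 100:1.
asbr1_rules = {"RT:200:1": "RT:100:1"}

update = ["RT:200:1", "RT:300:5"]
assert rewrite_rts(update, asbr1_rules) == ["RT:100:1", "RT:300:5"]
```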


Figure 2 shows an example of route target replacement on route reflectors in an MPLS VPN Inter-AS topology. This example includes the following configurations:

EBGP is configured on the route reflectors.

EBGP and IBGP IPv4 label exchange is configured between all BGP routers.

Peer groups are configured on the route reflectors.

PE2 is configured to import and export RT 200:1 for VRF VPN2.

PE2 is configured to import and export RT 200:2 for VRF VPN3.

PE1 is configured to import and export RT 100:1 for VRF VPN1.

RR1 is configured to rewrite all inbound VPNv4 prefixes with RT 200:1 or RT 200:2 to RT 100:1.

RR2 is configured to rewrite all inbound prefixes with RT 100:1 to RT 200:1 and RT 200:2.

Figure 2 Route Target Rewrite on Route Reflectors in an MPLS VPN Inter-AS Topology


Remote Trigger Black Hole Filtering

Remotely Triggered Blackhole Filtering

  • We will use BGP to trigger a network wide response to an attack 
  • A simple static route and BGP will enable a network-wide destination address blackhole as fast as iBGP can update the network 
  • This provides a tool that can be used to respond to security related events and forms a foundation for other remote triggered uses 
  • Often referred to as RTBH


Step 1: Prepare All the Routers with Trigger

  • Select a small block that will not be used for anything other than blackhole filtering; Test-Net (192.0.2.0/24) is optimal since it should not be in use
  • Put a static route for a /32 from Test-Net (192.0.2.0/24) to Null0 on every edge router in the network

ip route 192.0.2.1 255.255.255.255 Null0 


Step 2: Prepare the Trigger Router


  • The Trigger Router Is the Device That Will Inject the iBGP Announcement into the ISP’s Network
  • Should be part of the iBGP mesh—but does not have to accept routes
  • Can be a separate router (recommended) 
  • Can be a production router 
  • Can be a workstation with Zebra/Quagga (interface with Perl scripts and other tools)

Step 3: Activate the Blackhole

  • Add a static route to the destination to be blackholed; the static is added with the “tag 66” to keep it separate from other statics on the router
ip route 172.19.61.1 255.255.255.255 Null0 tag 66
  • BGP advertisement goes out to all BGP speaking routers
  • Routers receive the BGP update and “glue” it to the existing static route; due to recursion, the next-hop is now Null0
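The "glue" step is plain recursive next-hop resolution, which can be sketched with a toy RIB (addresses are the ones from the example configurations):

```python
# Pre-provisioned static on every edge router (Step 1).
rib = {"192.0.2.1/32": "Null0"}

def resolve(prefix, rib):
    """Follow recursive next-hops until we reach an interface (e.g. Null0)."""
    hop = rib[prefix]
    while hop + "/32" in rib:
        hop = rib[hop + "/32"]
    return hop

# Step 3: the trigger router advertises the victim /32 with
# next-hop 192.0.2.1, which every router already resolves to Null0.
rib["172.19.61.1/32"] = "192.0.2.1"
assert resolve("172.19.61.1/32", rib) == "Null0"   # traffic is dropped at the edge
```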

Customer Is DoSed (After): Packet Drops Pushed to the Edge


MPLS Traffic Engineering - DiffServ Aware (DS-TE)

MPLS Traffic Engineering - DiffServ Aware (DS-TE) Background and Overview

MPLS traffic engineering allows constraint-based routing (CBR) of IP traffic. One of the constraints satisfied by CBR is the availability of required bandwidth over a selected path. DiffServ-aware Traffic Engineering extends MPLS traffic engineering to enable you to perform constraint-based routing of "guaranteed" traffic, which satisfies a more restrictive bandwidth constraint than that satisfied by CBR for regular traffic. The more restrictive bandwidth is termed a sub-pool, while the regular TE tunnel bandwidth is called the global pool. (The sub-pool is a portion of the global pool. In the new IETF-Standard, the global pool is called BC0 and the sub-pool is called BC1. These are two of an eventually available eight Class Types). This ability to satisfy a more restrictive bandwidth constraint translates into an ability to achieve higher Quality of Service performance in terms of delay, jitter, or loss for the guaranteed traffic.

For example, DS-TE can be used to ensure that traffic is routed over the network so that, on every link, there is never more than 40 per cent (or any assigned percentage) of the link capacity of guaranteed traffic (for example, voice), while there can be up to 100 per cent of the link capacity of regular traffic. Assuming that QoS mechanisms are also used on every link to queue guaranteed traffic separately from regular traffic, it then becomes possible to enforce separate "overbooking" ratios for guaranteed and regular traffic. In fact, for the guaranteed traffic it becomes possible to enforce no overbooking at all—or even an underbooking—so that very high QoS can be achieved end-to-end for that traffic, even while for the regular traffic a significant overbooking continues to be enforced.

Also, through the ability to enforce a maximum percentage of guaranteed traffic on any link, the network administrator can directly control the end-to-end QoS performance parameters without having to rely on over-engineering or on expected shortest path routing behavior. This is essential for transport of applications that have very high QoS requirements such as real-time voice, virtual IP leased line, and bandwidth trading, where over-engineering cannot be assumed everywhere in the network.

The new IETF-standard DS-TE functionality expands the means for allocating constrained bandwidth into two distinct models, called the "Russian Dolls Model" and the "Maximum Allocation Model". They differ from each other as follows:
Table 1 Bandwidth Constraint Model Capabilities

                        Achieves      Ensures Isolation across      Protects against QoS Degradation...
                        Bandwidth     Class Types
  MODEL                 Efficiency    No Preemption   Preemption    ...Premium Class Type   ...all other Class Types
  Maximum Allocation    Yes           Yes             Yes           Yes                     No
  Russian Dolls         Yes           No              Yes           Yes                     Yes

Therefore, in practice, a network administrator might prefer to use:

the Maximum Allocation Model when isolation must be ensured across all Class Types without the use of preemption, and some QoS degradation of Class Types other than the Premium Class is acceptable.

the Russian Dolls Model when QoS degradation of all Class Types must be prevented and preemption can be imposed.
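As a rough illustration of the trade-off, here is a toy admission check for a single link with a 100 Mb/s global pool (BC0) and a 40 Mb/s sub-pool (BC1). The formulas are simplified from the actual IETF models (the MAM sketch omits the aggregate link limit) and the numbers are invented:

```python
BC0, BC1 = 100, 40   # Mb/s; global pool and sub-pool (40% of the link)

def rdm_admit(ct0_used, ct1_used, ct, bw):
    """Russian Dolls (simplified): CT1 <= BC1 and CT0 + CT1 <= BC0."""
    ct0 = ct0_used + (bw if ct == 0 else 0)
    ct1 = ct1_used + (bw if ct == 1 else 0)
    return ct1 <= BC1 and ct0 + ct1 <= BC0

def mam_admit(ct0_used, ct1_used, ct, bw):
    """Maximum Allocation (simplified): each Class Type against its own constraint."""
    if ct == 0:
        return ct0_used + bw <= BC0
    return ct1_used + bw <= BC1

# 35 Mb/s of guaranteed (CT1) traffic fits the 40 Mb/s sub-pool under both models...
assert rdm_admit(0, 0, ct=1, bw=35) and mam_admit(0, 0, ct=1, bw=35)
# ...but with 70 Mb/s of regular (CT0) load already reserved, RDM rejects the
# CT1 LSP (no isolation without preemption), while MAM still admits it.
assert not rdm_admit(70, 0, ct=1, bw=35)
assert mam_admit(70, 0, ct=1, bw=35)
```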

Frame Relay Local Management Interface Optional Extensions

Optional LMI Extensions

The LMI specification also defines several optional extensions:
  • Global addressing convention
  • Multicast capability
  • A simple flow control mechanism
  • Ability for the network to communicate a PVC's CIR to the subscriber in a Status message
  • A new message type that allows the network to announce PVC status changes without prompting from the subscriber
Implementors may build any, all, or none of these features into their networks.

Global Addressing

The global addressing convention defines a simple commitment from the operator of a network that DLCIs will remain unique throughout the network. In a globally addressed network, each DLCI identifies a subscriber device uniquely.
For a few years Frame Relay networks will remain small enough that they won't need to implement extended addressing to use the global addressing feature. As networks grow and interconnect, any trend toward global addressing will probably require use of extended addresses.

Multicasting

The LMI multicast capability adapts a popular feature from the LAN world. It reserves a block of DLCIs (1019 to 1022) as multicast groups, so that a subscriber wishing to transmit a message to all members of a group need transmit the message only once, on the multicast DLCI.
The multicasting feature requires a new information element, Multicast Status, in the full LMI Status message. The Multicast Status element is similar in most respects to the PVC Status IE, but it includes a field for the source DLCI transmitting over the multicast group. It also omits the function of the R bit (see below), since a multicast group may use several paths with different congestion conditions.

Flow Control

The optional LMI flow control capability provides a way for the network to report congestion to the subscriber. The flow control feature uses the optional R bit in the PVC Status information element as a "Receive-Not-Ready" signal for the PVC whose status is being reported. A 1 in the R bit indicates congestion; a 0 indicates no congestion.
On networks where LMI is fully implemented, this feature improves on the ECN bits of the basic Frame Relay protocol because the LMI heartbeat process guarantees that PVC Status elements will reach the subscriber periodically. Of course, according to the laissez faire practice of Frame Relay, the subscriber may or may not have implemented the feature, and may or may not choose to act on the information.
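The R-bit semantics reduce to a single-bit test (the bit position below is invented for illustration; it does not reflect the actual LMI octet layout):

```python
R_BIT = 0x02   # hypothetical position of the R ("Receive-Not-Ready") bit

def congested(status_byte: int) -> bool:
    """1 in the R bit indicates congestion on the reported PVC; 0 means none."""
    return bool(status_byte & R_BIT)

assert congested(0b0000_0010)        # R bit set: Receive-Not-Ready
assert not congested(0b0000_0000)    # R bit clear: no congestion
```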

Communicating the Minimum Bandwidth Available

The next optional feature uses the three reserved octets at the end of the PVC Status information element to communicate the minimum bandwidth available on the network to the PVC.
In most implementations, this number will be the PVC's CIR. However, clever implementors and operators may begin to use this feature to respond to changing traffic conditions by dynamically increasing or decreasing the bandwidth available to individual PVCs.
The specification neither encourages nor forbids such practices.

Status Update Message

The final optional feature of LMI allows the network to communicate changes in a PVC's status by means of a message type called Status Update without first receiving a Status Enquiry from the subscriber.
The Status Update contains only PVC Status and Multicast Status information elements, so it cannot function in the heartbeat process. Further, it contains Status elements for only those PVCs and multicast groups whose status has changed.
Changes reported include:
  • Deletion of a PVC or multicast group (reported by setting the optional D bit of the Status element)
  • Changes in the minimum bandwidth allocated to a PVC
  • Activation or deactivation of a PVC (indicated by setting or clearing the A bit)
  • Flow control information (changes in congestion status, signalled by setting or resetting the R bit). Besides improving flow control, this feature allows LMI signalling over network-to-network Frame Relay connections where neither partner functions as a subscriber device

Inter-Autonomous System Connectivity: Another Application of Tunnels

Carrier Supporting Carrier

Carrier supporting carrier (CsC) is a two-layer IP VPN solution designed to allow a backbone carrier to use MPLS VPN (or L2TPv3) to carry traffic belonging to customers' carriers that use MPLS VPNs.
Before looking at the solution, it is a good idea to understand the problem being solved . An MPLS PE router holds all the routes of all the sites to which it connects. In a normal scenario, although this number can be large, the expectation is that an individual VPN would require at most hundreds or perhaps thousands of entries in a VRF. However, if the customer is itself an ISP, carrying routes belonging to their customers, the potential exists to require the backbone PE to carry an impossibly large number of routes. The CsC solution addresses this issue.
CsC is based on the observation that the label-switched domain of an MPLS VPN network (that is, the backbone network) only needs routing information to reach provider (P) routers; the customer routing domain is invisible to the core.
In a CsC scenario, the ISP needs to share the global routing table with the backbone carrier only. The CsC backbone routers (labeled CSC-PE1, CSC-P, and CSC-PE2 in Figure 5-13) carry the next-hop routes for the ISP carrier networks so that an LSP exists between ISP sites. Note that the next-hop routes should not be aggregated because that would break the end-to-end LSP.
Figure 5-13. CsC Topology

Figure 5-13 shows the CsC topology.
The major differences between CsC and a standard MPLS VPN solution are as follows:
  • The CE-PE interfaces use MPLS.
  • CE and PE routers exchange both routes and labels.
  • Packets on the CsC backbone carry three labels on their label stack.
Figure 5-13 shows CsC data-plane operation, specifically the label stack of a packet as it traverses the ISP and backbone carrier networks:
  1. CE1 sends a packet with a destination address in the 10.2.0.0/24 network. The next hop for this address is PE1.
  2. PE1 pushes two labels: the VPN identifier, 20, and the next-hop label announced by P1, 31.
  3. P1 does a label swap and forwards the packet with outer label value of 33.
  4. CSC-CE1 does a label swap and forwards the packet with outer label 26. This label was announced by CSC-PE1.
  5. CSC-PE1 removes label 26 and pushes two labels onto the stack. Label 19 is the VPN identifier that identifies the ISP's VRF. Label 36 is the value announced by the CSC-P router, which is the next hop on the CSC backbone network.
  6. CSC-P performs a PHP operation and forwards the packet with outer label value 19.
  7. CSC-PE2 matches the incoming label value to the correct VRF and pushes label 48 before forwarding to CSC-CE2.
  8. CSC-CE2 does a PHP and forwards the packet with outer label value of 20.
  9. PE2 matches the incoming label value to the correct VRF and forwards an IP packet to the customer router, CE2.
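The label-stack manipulations in Steps 2 through 9 can be traced with a toy model; the label values are those from the Figure 5-13 example, and push/swap/pop are the only operations each LSR performs on the stack.

```python
# Toy trace of the CsC data plane in Steps 2-9; label values are those
# from the Figure 5-13 example. The top of the stack is the last element.

def push(stack, label): return stack + [label]
def pop(stack):         return stack[:-1]
def swap(stack, label): return stack[:-1] + [label]

stack = push(push([], 20), 31)          # 2. PE1 pushes VPN label 20, next-hop label 31
stack = swap(stack, 33)                 # 3. P1 swaps the outer label to 33
stack = swap(stack, 26)                 # 4. CSC-CE1 swaps to 26, learned from CSC-PE1
stack = push(push(pop(stack), 19), 36)  # 5. CSC-PE1 pops 26, pushes VPN 19 + next-hop 36
stack = pop(stack)                      # 6. CSC-P performs PHP, exposing label 19
stack = push(pop(stack), 48)            # 7. CSC-PE2 matches 19 to the VRF, pushes 48
stack = pop(stack)                      # 8. CSC-CE2 performs PHP, exposing label 20
assert stack == [20]                    # 9. PE2 matches 20 to the VRF, forwards IP
```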
Figure 5-13 also illustrates the control-plane operation:
  1. PE1 and PE2 exchange labels and VPNv4 routes using MP-BGP. The labels identify customer VRFs.
  2. PE1, P1, and CSC-CE1 exchange labels using LDP. These labels identify the next-hop FEC.
  3. CSC-CE1 and CSC-PE1 exchange labels and routes. There are two ways to do this. The first uses LDP; the second, specified in RFC 3107, uses external BGP (eBGP) to exchange IPv4 routes and labels.
  4. CSC-PE1 and CSC-PE2 exchange labels and VPNv4 routes using MP-BGP. These labels identify customer VRFs.
  5. CSC-PE1, CSC-P, and CSC-PE2 exchange labels using LDP.


SONET Transport Hierarchy

Each level of the hierarchy terminates its corresponding overhead fields in the SONET frame, as follows:

Section

A section is a single fiber run that can be terminated by a network element (Line or Path) or an optical regenerator.
The main function of the section layer is to properly format the SONET frames, and to convert the electrical signals to optical signals. Section Terminating Equipment (STE) can originate, access, modify, or terminate the section header overhead. (A standard STS-1 frame is nine rows by 90 bytes. The first three bytes of each row comprise the Section and Line header overhead.)
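The frame dimensions quoted in the parenthetical above fix the STS-1 rates by simple arithmetic, which the following sketch verifies:

```python
ROWS, COLS = 9, 90        # a standard STS-1 frame: 9 rows by 90 bytes
FRAME_RATE = 8000         # frames per second

frame_bytes = ROWS * COLS                     # 810 bytes per frame
line_rate = frame_bytes * 8 * FRAME_RATE      # 51,840,000 bps = 51.84 Mbps
transport_overhead = ROWS * 3                 # first 3 columns: section + line overhead
spe_bytes = frame_bytes - transport_overhead  # 783-byte synchronous payload envelope
```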

Line

Line-Terminating Equipment (LTE) originates or terminates one or more sections of a line signal. The LTE does the synchronization and multiplexing of information on SONET frames. Multiple lower-level SONET signals can be mixed together to form higher-level SONET signals. An Add/Drop Multiplexer (ADM) is an example of LTE.

Path

Path-Terminating Equipment (PTE) interfaces non-SONET equipment to the SONET network. At this layer, the payload is mapped and demapped into the SONET frame. For example, an STS PTE can assemble 28 1.544-Mbps DS1 signals and insert path overhead to form an STS-1 signal.
This layer is concerned with end-to-end transport of data.
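As a sanity check on the DS1 mapping (the standard VT1.5 mapping carries 28 DS1s per STS-1), the tributaries fit comfortably inside the STS-1 payload; the 774-byte figure below is the payload envelope minus its one-column path overhead.

```python
DS1_RATE = 1_544_000                 # bps per DS1
N_DS1 = 28                           # DS1s carried in one STS-1 (via VT1.5 mapping)

aggregate = N_DS1 * DS1_RATE         # 43,232,000 bps of tributary traffic
payload_capacity = 774 * 8 * 8000    # 49,536,000 bps available after path overhead
assert aggregate < payload_capacity  # the DS1s fit, with room for mapping overhead
```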

Cisco CRS-1 Multishelf System Hardware Overview

Cisco CRS-1 16-Slot Line Card Chassis


The LCC is a mechanical enclosure that houses modular services cards (MSCs) and their associated physical layer interface modules (PLIMs), switch fabric cards (SFCs), route processor (RP) cards, and distributed route processor (DRP) cards. The LCC is bolted to the facility floor and does not require an external rack. The LCC contains its own power and cooling systems. A minimum of two LCCs are required to configure a multishelf system.

Cisco CRS-1 Fabric Card Chassis


The FCC is a mechanical enclosure that houses switch fabric cards (SFCs) and shelf controller Gigabit Ethernet (SCGE) cards (2-port or 22-port) in the front of the chassis. The rear of the chassis houses the optical interface modules (OIMs) and the OIM light emitting diode (LED) monitoring card (OIM-LED). The FCC is bolted to the facility floor and does not require an external rack. The FCC contains its own power and cooling systems. At least one FCC is required to configure a multishelf system.

MPLS VPN QoS Design

Customer Edge QoS Design Considerations

In addition to the full-mesh implication of MPLS VPNs, these considerations should be kept in mind when designing MPLS VPN CE QoS:

  • Layer 2 access (link-specific) QoS design
  • Service-provider service-level agreements (SLA)
  • Enterprise-to-service provider mapping models

http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/VPNQoS.pdf

QoS DSCP for Call-Signaling

Call-Signaling Traffic


The following are key QoS requirements and recommendations for Call-Signaling traffic:

Call-Signaling traffic should be marked as DSCP CS3 per the QoS Baseline (during migration, it may also be marked with the legacy value of DSCP AF31).

A guaranteed bandwidth of 150 bps (plus Layer 2 overhead) per phone is required for voice control traffic; more may be required, depending on the call-signaling protocol(s) in use.
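A first-order provisioning estimate follows directly from that per-phone figure. The 20 percent Layer 2 overhead factor below is a placeholder assumption, not a Cisco-published value; substitute the actual framing overhead of the access link.

```python
def signaling_bw(phones, per_phone_bps=150, l2_overhead=1.2):
    """Rough minimum bandwidth to guarantee for call signaling.
    l2_overhead=1.2 (20%) is an assumed placeholder, not a Cisco figure."""
    return phones * per_phone_bps * l2_overhead

signaling_bw(100)   # 100 phones -> 18,000 bps, before protocol-specific adjustments
```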

Call-Signaling traffic was originally marked by Cisco IP Telephony equipment to DSCP AF31. However, the Assured Forwarding classes, as defined in RFC 2597, were intended for flows that could be subject to markdown and - subsequently - the aggressive dropping of marked-down values. Marking down and aggressively dropping Call-Signaling could result in noticeable delay-to-dial-tone (DDT) and lengthy call setup times, both of which generally translate to poor user experiences.

The QoS Baseline changed the marking recommendation for Call-Signaling traffic to DSCP CS3 because Class Selector code points, as defined in RFC 2474, were not subject to markdown/aggressive dropping. Some Cisco IP Telephony products have already begun transitioning to DSCP CS3 for Call-Signaling marking. In this interim period, both code-points (CS3 and AF31) should be reserved for Call-Signaling marking until the transition is complete.
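The two code points are adjacent in the DSCP space, which a quick binary check makes clear:

```python
CS3  = 0b011000   # Class Selector 3 (RFC 2474), decimal 24
AF31 = 0b011010   # Assured Forwarding 31 (RFC 2597), decimal 26

assert CS3 == 24 and AF31 == 26
# The 6-bit DSCP sits in the upper bits of the IP header's ToS/DS byte:
assert CS3 << 2 == 0x60 and AF31 << 2 == 0x68
```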

Many Cisco IP phones use Skinny Call-Control Protocol (SCCP) for call signaling. SCCP is a relatively lightweight protocol that requires only a minimal amount of bandwidth protection. However, newer versions of CallManager and SCCP have added functionality that requires new message sets, yielding higher bandwidth consumption. Cisco signaling bandwidth design recommendations have been adjusted to match. The Network Infrastructure chapter of the IPT SRND contains the relevant details, available at: http://www.cisco.com/en/US/products/sw/voicesw/ps556/products_implementation_design_guides_list.html.

Other call-signaling protocols include (but are not limited to) H.323, H.225, Session Initiation Protocol (SIP), and Media Gateway Control Protocol (MGCP). Each call-signaling protocol has unique TCP/UDP ports and traffic patterns that should be taken into account when provisioning QoS policies for them.

Metro Ethernet Forum (MEF) Services

MEF Services Overview

MEF Ethernet Services are defined as connectivity services provided by a Service Provider's Carrier Ethernet Networks (CENs) to Customer Edge (CE) devices. The connectivity service is modeled by an Ethernet Virtual Connection (EVC). The EVC is defined as an association of two or more User Network Interfaces (UNIs) that limits the exchange of Service Frames to UNIs in the EVC. An Ethernet Service [9] [10] consists of an Ethernet Service Type and is associated with one or more Bandwidth Profile(s) and supports one or more Classes of Service. A service is also associated with a list of Layer Two Control Protocols, such as Spanning Tree Protocol or Link Aggregation Control Protocol, and a set of actions that specify how they should be handled.
MEF Service Types
Ethernet Service Types can be used to create a broad range of Subscriber services. The service types are characterized by their required connectivity [10]. The following service types have been defined to date:
  • The Ethernet Line Service (E-Line Service) uses a Point-to-Point EVC.
  • The Ethernet LAN Service (E-LAN Service) uses a Multipoint-to-Multipoint EVC.
  • The Ethernet Tree Service (E-Tree Service) uses a Rooted-Multipoint EVC.
MEF E-Line Service
E-Line service types require Point-to-Point (P2P) connectivity, as illustrated in Figure 2. In a Point-to-Point EVC, exactly two UNIs must be associated with one another. An ingress Service Frame mapped to the EVC at one UNI can only result in an egress Service Frame at the other associated UNI.
Figure 2 - Point-to-Point EVC.
MEF E-LAN Service
E-LAN service types require Multipoint-to-Multipoint (MP2MP) connectivity, as illustrated in Figure 3. In a Multipoint EVC, two or more UNIs are associated with one another. An ingress Service Frame mapped to the EVC at one of the UNIs can only result in an egress Service Frame at one or more of the associated UNIs.
Figure 3 - Multipoint-to-Multipoint EVC.
MEF E-Tree Service
E-Tree service types require Rooted-Multipoint (RMP) connectivity, as illustrated in Figure 4. In a Rooted-Multipoint EVC, one or more of the UNIs must be designated as a Root, and each of the other UNIs must be designated as a Leaf. A single Root has connectivity to all the Leaves. An ingress Service Frame mapped to the EVC at a Root UNI may be delivered to one or more of the associated UNIs (either Root or Leaf) in the EVC. An ingress Service Frame mapped to the EVC at a Leaf UNI can only result in an egress Service Frame at one, some, or all of the Root UNIs.
Figure 4 - Rooted-Multipoint EVC.
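The forwarding constraints of the three service types reduce to a simple rule, sketched below; the function and role names are illustrative, not MEF terminology.

```python
def may_forward(evc_type, src_role, dst_role):
    """Whether a frame entering at src_role's UNI may exit at dst_role's UNI."""
    if evc_type in ("E-Line", "E-LAN"):
        return True        # any UNI in the EVC may reach any other
    if evc_type == "E-Tree":
        # The only prohibited combination is leaf-to-leaf forwarding
        return not (src_role == "leaf" and dst_role == "leaf")
    raise ValueError(f"unknown EVC type: {evc_type}")

assert may_forward("E-Tree", "root", "leaf")   # a Root reaches every Leaf
assert may_forward("E-Tree", "leaf", "root")   # a Leaf reaches the Roots
assert not may_forward("E-Tree", "leaf", "leaf")
```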