Mar 9, 2013

Simple Network Management Protocol (SNMP) Operations

SNMPv1 Protocol Operations

SNMP is a simple request/response protocol. The network-management system issues a request, and managed devices return responses. This behavior is implemented by using one of four protocol operations: Get, GetNext, Set, and Trap. The Get operation is used by the NMS to retrieve the value of one or more object instances from an agent. If the agent responding to the Get operation cannot provide values for all the object instances in a list, it does not provide any values. The GetNext operation is used by the NMS to retrieve the value of the next object instance in a table or a list within an agent. The Set operation is used by the NMS to set the values of object instances within an agent. The Trap operation is used by agents to asynchronously inform the NMS of a significant event.
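The Get and GetNext semantics above can be sketched as a toy agent: a MIB of object instances keyed by OID, where Get is all-or-nothing and GetNext walks the lexicographic OID ordering. The OIDs and values below are invented for illustration and do not come from a real device.

```python
# Toy model of SNMPv1 Get and GetNext against an agent's MIB,
# illustrating lexicographic OID ordering and the all-or-nothing
# behavior of Get. OIDs/values are illustrative only.

MIB = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "Linux router",   # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 123456,           # sysUpTime.0
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "core-rtr-1",     # sysName.0
}

def get(oids):
    """SNMPv1 Get: if any requested instance is missing, return no values."""
    if any(oid not in MIB for oid in oids):
        return None                      # the agent answers with an error instead
    return {oid: MIB[oid] for oid in oids}

def get_next(oid):
    """Return the first object instance lexicographically after `oid`."""
    for candidate in sorted(MIB):
        if candidate > oid:
            return candidate, MIB[candidate]
    return None                          # end of MIB
```

Walking the MIB with repeated `get_next` calls is exactly how an NMS traverses a table whose row indices it does not know in advance.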

SNMPv2 Protocol Operations

The Get, GetNext, and Set operations used in SNMPv1 are exactly the same as those used in SNMPv2. However, SNMPv2 adds and enhances some protocol operations. The SNMPv2 Trap operation, for example, serves the same function as that used in SNMPv1, but it uses a different message format and is designed to replace the SNMPv1 Trap.
SNMPv2 also defines two new protocol operations: GetBulk and Inform. The GetBulk operation is used by the NMS to efficiently retrieve large blocks of data, such as multiple rows in a table. GetBulk fills a response message with as much of the requested data as will fit. The Inform operation allows one NMS to send trap information to another NMS and to then receive a response. In SNMPv2, if the agent responding to GetBulk operations cannot provide values for all the variables in a list, it provides partial results.
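The GetBulk behavior described above, including the partial-result semantics, can be sketched as a bounded lexicographic walk: the agent returns up to a requested number of bindings and simply stops early at the end of the data rather than erroring. The table contents and OID layout are invented for illustration.

```python
# Toy GetBulk over a sorted OID table: walk lexicographically from the
# requested OID, returning at most `max_repetitions` bindings. Unlike
# SNMPv1 Get, running past the end of the data yields a partial result
# rather than an error. Contents are illustrative only.

TABLE = {(1, 3, 6, 1, 2, 1, 2, 2, 1, 1, i): f"ifIndex.{i}" for i in (1, 2, 3)}

def get_bulk(start_oid, max_repetitions):
    results = []
    oid = start_oid
    ordered = sorted(TABLE)
    for _ in range(max_repetitions):
        following = [c for c in ordered if c > oid]
        if not following:
            break                        # end of table: partial result is fine
        oid = following[0]
        results.append((oid, TABLE[oid]))
    return results
```

Asking for ten rows when only three exist returns the three available rows, which is the "partial results" behavior the text contrasts with SNMPv1.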

SNMP Interoperability

As presently specified, SNMPv2 is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2 messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2 also uses two protocol operations that are not specified in SNMPv1. To bridge this gap, RFC 1908 defines two possible SNMPv1/v2 coexistence strategies: proxy agents and bilingual network-management systems.

Proxy Agents

An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1 managed devices, as follows:
  • An SNMPv2 NMS issues a command intended for an SNMPv1 agent.
  • The NMS sends the SNMP message to the SNMPv2 proxy agent.
  • The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged.
  • GetBulk messages are converted by the proxy agent to GetNext messages and then are forwarded to the SNMPv1 agent.
  • The proxy agent maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the NMS.
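The forwarding rules above amount to a small translation table: pass Get/GetNext/Set through unchanged, downgrade GetBulk, and re-map v1 traps toward the NMS. The PDU representation below is invented for illustration, not an actual SNMP encoding.

```python
# Sketch of the RFC 1908 proxy-agent forwarding rules listed above.
# PDUs are modeled as (type, payload) pairs for illustration only.

def proxy_to_v1(pdu_type, payload):
    """Translate an SNMPv2 PDU for delivery to an SNMPv1 agent."""
    if pdu_type in ("Get", "GetNext", "Set"):
        return pdu_type, payload            # forwarded unchanged
    if pdu_type == "GetBulk":
        return "GetNext", payload           # downgraded for the v1 agent
    raise ValueError(f"not proxied toward v1 agents: {pdu_type}")

def proxy_to_v2(pdu_type, payload):
    """Translate an SNMPv1 notification for delivery to an SNMPv2 NMS."""
    if pdu_type == "Trap-v1":
        return "Trap-v2", payload           # re-encoded in the v2 trap format
    raise ValueError(f"not proxied toward the NMS: {pdu_type}")
```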

IP Multicast VPN Routing and Forwarding and Multicast Domains


Multicast VPN introduces multicast routing information to the VPN routing and forwarding table. When a PE router receives multicast data or control packets from a customer-edge (CE) router, forwarding is performed according to the information in the Multicast VRF (MVRF).

A set of Multicast VPN Routing and Forwarding instances that can send multicast traffic to each other constitutes a multicast domain. For example, the multicast domain for a customer that wanted to send certain types of multicast traffic to all global employees would consist of all CE routers associated with that enterprise.

Multicast Distribution Trees

Multicast VPN establishes a static default MDT for each multicast domain. The default MDT defines the path used by PE routers to send multicast data and control messages to every other PE router in the multicast domain.

Multicast VPN also supports the dynamic creation of MDTs for high-bandwidth transmission. Data MDTs are a feature unique to Cisco IOS software. Data MDTs are intended for high-bandwidth sources such as full-motion video inside the VPN to ensure optimal traffic forwarding in the MPLS VPN core. The threshold at which the data MDT is created can be configured on a per-router or a per-VRF basis. When the multicast transmission exceeds the defined threshold, the sending PE router creates the data MDT and sends a User Datagram Protocol (UDP) message that contains information about the data MDT to all routers in the default MDT. The statistics to determine whether a multicast stream has exceeded the data MDT threshold are examined once every 10 seconds. If multicast distributed switching is configured, the time period can be up to twice as long.

Data MDTs are created only for (S, G) multicast route entries within the VRF multicast routing table. They are not created for (*, G) entries regardless of the value of the individual source data rate.
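The trigger logic above can be sketched as a periodic check: stream rates are sampled (nominally every 10 seconds), and only (S, G) entries whose rate exceeds the configured threshold are moved to a data MDT, while (*, G) entries never are. The threshold value, addresses, and group names below are invented.

```python
# Sketch of the data-MDT trigger described above. Only (S, G) entries
# above the threshold qualify; (*, G) entries never do, regardless of
# rate. All values are illustrative, not real defaults.

THRESHOLD_KBPS = 1000          # assumed per-VRF data MDT threshold

def wants_data_mdt(entry):
    """entry: (source, group, rate_kbps); source None means (*, G)."""
    source, _group, rate_kbps = entry
    if source is None:
        return False           # (*, G): always stays on the default MDT
    return rate_kbps > THRESHOLD_KBPS

streams = [
    ("10.1.1.1", "239.1.1.1", 4000),   # high-rate (S, G): gets a data MDT
    ("10.1.1.2", "239.1.1.2", 200),    # low-rate (S, G): stays on default MDT
    (None,       "239.1.1.3", 9000),   # (*, G): stays on default MDT
]
selected = [s for s in streams if wants_data_mdt(s)]
```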

In the following example, a service provider has a multicast customer with offices in San Jose, New York, and Dallas. A one-way multicast presentation is occurring in San Jose. The service provider network supports all three sites associated with this customer, in addition to the Houston site of a different enterprise customer.

The default MDT for the enterprise customer consists of provider routers P1, P2, and P3 and their associated PE routers. PE4 is not part of the default MDT, because it is associated with a different customer. Figure 1 shows that no data flows along the default MDT, because no one outside of San Jose has joined the multicast.

Figure 1 Default Multicast Distribution Tree Overview

An employee in New York joins the multicast session. The PE router associated with the New York site sends a join request that flows across the default MDT for the customer's multicast domain, regardless of whether the VRF is configured for sparse mode, bidirectional PIM, or SSM; the domain contains both the Dallas and San Jose sites. PE1, the PE router associated with the multicast session source, receives the request. Figure 2 depicts the PE router forwarding the request to the CE router associated with the multicast source (CE1a).

Figure 2 Initializing the Data MDT

The CE router (CE1a) begins to send the multicast data to the associated PE router (PE1), which sends the multicast data along the default MDT. Immediately after sending the multicast data, PE1 recognizes that the multicast data exceeds the bandwidth threshold at which a data MDT should be created. Therefore, PE1 creates a data MDT, sends a message to all routers using the default MDT that contains information about the data MDT, and, three seconds later, begins sending the multicast data for that particular stream using the data MDT. Only PE2 has interested receivers for this source, so only PE2 will join the data MDT and receive traffic on it.

PE routers maintain PIM relationships with other PE routers over the default MDT, as well as PIM relationships with their directly attached CE routers.

Figure 3 depicts the final flow of multicast data from the multicast sender in San Jose to the multicast client in New York. Multicast data sent from the sender in San Jose is delivered in its original format to its associated PE router (PE1) using sparse mode, bidirectional PIM, or SSM. PE1 then encapsulates the multicast data and sends it across the data MDT using the configured MDT data groups. The mode used to deliver the multicast data across the data MDT is determined by the service provider and has no direct correlation with the mode used by the customer. The PE router in New York (PE2) receives the data along the data MDT, decapsulates each packet, and forwards it in its original format toward the multicast client using the mode configured by the customer.

Figure 3 Multicast Distribution Tree with VRFs

PPPoA Architecture Deployment Methods

How the Service Destination is Reached

In PPPoA architectures, the service destination can be reached in different ways. Some of the most commonly deployed methods are:
  • Terminating PPP sessions at the service provider
  • L2TP Tunneling
  • Using SSG
In all three methods there is a fixed set of PVCs defined from the CPE to the DSLAM that is switched to a fixed set of PVCs on the aggregation router. The PVCs are mapped from the DSLAM to the aggregation router through an ATM cloud.
The service destination can also be reached using other methods, such as PPPoA with SVCs or Multiprotocol Label Switching/Virtual Private Network (MPLS/VPN). These methods are beyond the scope of this document and will be discussed in separate papers.

Terminating PPP at Aggregation

The PPP sessions initiated by the subscriber are terminated at the service provider, which authenticates users using either a local database on the router or RADIUS servers. After the user is authenticated, IPCP negotiation takes place and an IP address is assigned to the CPE. Once the IP address has been assigned, a host route is established both on the CPE and on the aggregation router. The IP addresses allocated to the subscriber, if publicly routable, are advertised to the edge router, which is the gateway through which the subscriber accesses the Internet. If the IP addresses are private, the service provider translates them before advertising them to the edge router.

IS-IS DIS Election

Election of the DIS

On a LAN, one of the routers elects itself the DIS, based on interface priority (the default is 64). 

If all interface priorities are the same, the router with the highest subnetwork point of attachment (SNPA) is selected. 

The SNPA is the MAC address on a LAN, and the local data link connection identifier (DLCI) on a Frame Relay network. 

If the SNPA is a DLCI and is the same at both sides of a link, the router with the higher system ID becomes the DIS. 

Every IS-IS router interface is assigned both an L1 priority and an L2 priority in the range from 0 to 127.

The DIS election is preemptive (unlike OSPF). If a new router boots on the LAN with a higher interface priority, the new router becomes the DIS. It purges the old pseudonode LSP and floods a new set of LSPs.
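The election rules above reduce to comparing tuples: highest interface priority wins, ties are broken by highest SNPA, and then by highest system ID. A minimal sketch, using invented router names and addresses:

```python
# Sketch of IS-IS DIS election: highest priority wins; ties fall back
# to highest SNPA (MAC address or DLCI), then highest system ID.
# Router data is illustrative only.

def elect_dis(routers):
    """routers: list of (name, priority, snpa, system_id) tuples."""
    return max(routers, key=lambda r: (r[1], r[2], r[3]))[0]

lan = [
    ("R1", 64, "0000.0c11.1111", "1921.6800.0001"),
    ("R2", 64, "0000.0c22.2222", "1921.6800.0002"),  # higher SNPA wins the tie
    ("R3", 40, "0000.0c33.3333", "1921.6800.0003"),  # lower priority, never wins
]
```

Because the comparison is re-run whenever the neighbor set changes, a newly booted router with a higher priority immediately displaces the incumbent, which models the preemptive behavior noted above.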

G.709 OPTICAL TRANSPORT NETWORK - Optical Payload Unit (OPU)


The optical transport network (OTN) was created with the intention of combining the benefits of SONET/SDH technology with the bandwidth expansion capabilities offered by dense wavelength-division multiplexing (DWDM) technology.

In addition to further enhancing the support for operations, administration, maintenance, and provisioning (OAM&P) functions of SONET/SDH in DWDM networks, the purpose of the ITU G.709 standard (based on ITU G.872) is threefold.

First, it defines the optical transport hierarchy of the OTN; second, it defines the functionality of its overhead in support of multiwavelength optical networks; and third, it defines its frame structures, bit rates and formats for mapping client signals.

Optical Payload Unit (OPU)

In order to begin describing the OTN as defined by the ITU G.709 standard, we must first enumerate its critical elements, their termination points, and the way they relate to one another in terms of hierarchy and function.

The primary overhead field associated with the OPU is the payload structure identifier (PSI).

This is a 256-byte multiframe whose first byte is defined as the payload type (PT). The remaining 255 bytes are currently reserved. The other fields in the OPU overhead depend on the mapping capabilities associated with the OPU.
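The PSI layout just described can be sketched directly: 256 bytes, the first carrying the PT and the remainder reserved. The PT code used below is an arbitrary placeholder, not a value taken from G.709.

```python
# Sketch of the OPU payload structure identifier (PSI): a 256-byte
# multiframe whose byte 0 is the payload type (PT) and whose remaining
# 255 bytes are reserved. The PT value is a placeholder.

def build_psi(payload_type):
    psi = bytearray(256)       # bytes 1..255 stay 0x00 (reserved)
    psi[0] = payload_type      # byte 0 carries the PT
    return bytes(psi)

def parse_psi(psi):
    if len(psi) != 256:
        raise ValueError("PSI multiframe must be exactly 256 bytes")
    return {"payload_type": psi[0], "reserved": psi[1:]}

psi = build_psi(0x42)          # 0x42: arbitrary placeholder PT code
```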

For an asynchronous mapping (the client signal and OPU clock are different), justification control (JC) bytes are available to compensate for clock rate differences. For a purely synchronous mapping (client source and OPU clock are the same), the JC bytes become reserved. Further details on mapping are available in ITU G.709.


Advantages and Disadvantages of PPPoA Architecture

PPP over ATM adaptation layer 5 (AAL5) (RFC 2364) uses AAL5 as the framed protocol and supports both PVCs and SVCs. PPPoA was primarily implemented as part of ADSL. It relies on RFC 1483, operating in either Logical Link Control-Subnetwork Access Protocol (LLC-SNAP) or VC-Mux mode. A customer premises equipment (CPE) device encapsulates the PPP session based on this RFC for transport across the ADSL loop and the digital subscriber line access multiplexer (DSLAM).


PPPoA architecture inherits most of the advantages of PPP used in the Dial model. Some of the key points are listed below.

• Advantages

- Per session authentication based on Password Authentication Protocol (PAP) or Challenge Handshake Authentication Protocol (CHAP). This is the greatest advantage of PPPoA as authentication overcomes the security hole in a bridging architecture.

- Per session accounting is possible, which allows the service provider to charge the subscriber based on session time for various services offered. Per session accounting enables a service provider to offer a minimum access level for minimal charge and then charge subscribers for additional services used.

- IP address conservation at the CPE. This allows the service provider to assign only one IP address for a CPE, with the CPE configured for network address translation (NAT). All users behind one CPE can use a single IP address to reach different destinations. IP management overhead for the Network Access Provider/Network Services Provider (NAP/NSP) for each individual user is reduced while conserving IP addresses. Additionally, the service provider can provide a small subnet of IP addresses to overcome the limitations of port address translation (PAT) and NAT.

- NAPs/NSPs provide secure access to corporate gateways without managing end-to-end PVCs and using Layer 3 routing or Layer 2 Forwarding/Layer 2 Tunneling Protocol (L2F/L2TP) tunnels. Hence, they can scale their business models for selling wholesale services.

- Troubleshooting individual subscribers. The NSP can easily identify which subscribers are on or off based on active PPP sessions, rather than troubleshooting entire groups as is the case with bridging architecture.

- The NSP can oversubscribe by deploying idle and session timeouts using an industry standard Remote Authentication Dial-In User Service (RADIUS) server for each subscriber.

- Highly scalable, as a very high number of PPP sessions can be terminated on an aggregation router. Authentication, authorization, and accounting can be handled for each user using external RADIUS servers.

- Optimal use of features on the Service Selection Gateway (SSG).

• Disadvantages

- Only a single session per CPE on one virtual channel (VC). Since the username and password are configured on the CPE, all users behind the CPE for that particular VC can access only one set of services. Users cannot select different sets of services, although using multiple VCs and establishing different PPP sessions on different VCs is possible.

- Increased complexity of the CPE setup. Help desk personnel at the service provider need to be more knowledgeable. Since the username and password are configured on the CPE, the subscriber or the CPE vendor will need to make setup changes. Using multiple VCs increases configuration complexity. This, however, can be overcome by an autoconfiguration feature which is not yet released.

- The service provider needs to maintain a database of usernames and passwords for all subscribers. If tunnels or proxy services are used, then the authentication can be done on the basis of the domain name and the user authentication is done at the corporate gateway. This reduces the size of the database that the service provider has to maintain.

- If a single IP address is provided to the CPE and NAT/PAT is implemented, certain applications such as IPTV, which embed IP information in the payload, will not work. Additionally, if an IP subnet feature is used, an IP address also has to be reserved for the CPE.
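The NAT/PAT limitation in the last point can be made concrete: the translator rewrites the IP header but not the application payload, so a peer sees an unreachable private address embedded in the data. The addresses and packet model below are invented (RFC 1918 space inside, documentation prefixes outside).

```python
# Sketch of why NAT breaks applications that embed IP addresses in
# the payload: only the header is rewritten. Addresses are examples.

def nat_translate(packet, inside_addr="10.0.0.5", outside_addr="203.0.113.9"):
    """Rewrite only the header source address, as plain NAT does."""
    translated = dict(packet)
    if translated["src"] == inside_addr:
        translated["src"] = outside_addr
    return translated

pkt = {
    "src": "10.0.0.5",
    "dst": "198.51.100.1",
    "payload": "CONNECT-BACK 10.0.0.5",   # the app embeds its own address
}
out = nat_translate(pkt)
# The header is translated, but the payload still leaks the private
# address, which the remote peer cannot reach.
```

Working around this requires an application-aware translator (an ALG) that also rewrites the payload, which plain NAT/PAT does not do.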

MPLS Traffic Engineering Components

Traffic Engineering Components
• Information distribution
• Path selection/calculation
• Path setup
• Trunk admission control
• Forwarding traffic onto the tunnel
• Path maintenance

Carrier supporting carrier (CSC) feature using the IP Solution Center (ISC) provisioning process

To configure the CSC network to exchange routes and carry labels between the backbone carrier provider edge (CSC-PE) routers and the customer carrier customer edge (CSC-CE) routers, use Label Distribution Protocol (LDP) to carry the labels and an Internal Gateway Protocol (IGP) to carry the routes.


A routing protocol is required between the CSC-PE and CSC-CE routers that connect the backbone carrier to the customer carrier. The routing protocol enables the customer carrier to exchange IGP routing information with the backbone carrier. RIP, OSPF, or static routing can be selected as the routing protocol.

Label Distribution Protocol (LDP) is required between the CSC-PE and CSC-CE routers that connect the backbone carrier to the customer carrier. LDP is also required on the CSC-PE to CSC-CE interface for VPN routing/forwarding (VRF).

• IPv4 BGP Label Distribution

BGP takes the place of an IGP and LDP in a VPN routing/forwarding instance (VRF) table. You can use BGP to distribute both routes and MPLS labels. Using a single protocol instead of two simplifies configuration and troubleshooting.

BGP is the preferred routing protocol for connecting two ISPs, mainly because of its routing policies and ability to scale. ISPs commonly use BGP between two providers. This feature enables those ISPs to use BGP.

When BGP (both EBGP and IBGP) distributes a route, it can also distribute an MPLS label that is mapped to that route. The MPLS label mapping information for the route is carried in the BGP update message that contains the information about the route. If the next hop is not changed, the label is preserved.

IS-IS Designated Intermediate System (DIS) Tasks

On broadcast multi-access networks, a single router is elected as the DIS. There is no backup DIS elected. The DIS is the router that creates the pseudonode and acts on behalf of the pseudonode.

Two major tasks are performed by the DIS:

1. Creating and updating the pseudonode LSP for reporting links to all systems on the broadcast subnetwork. See the Pseudonode LSP section for more information.

2. Flooding LSPs over the LAN.

Flooding over the LAN means that the DIS sends periodic complete sequence number protocol data units (CSNPs), every 10 seconds by default, summarizing the following information for each LSP:

  • Sequence number
  • Remaining lifetime

The DIS is responsible for flooding. It creates and floods a new pseudonode LSP for each routing level in which it is participating (Level 1 or Level 2) and for each LAN to which it is connected. A router can be the DIS for all connected LANs or a subset of connected LANs, depending on the IS-IS priority or the Layer 2 address. The DIS will also create and flood a new pseudonode LSP when a neighbor adjacency is established, torn down, or the refresh interval timer expires. The DIS mechanism reduces the amount of flooding on LANs.
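The CSNP mechanism above can be sketched as a periodic database summary that neighbors compare against their own copies: any LSP that is missing locally, or present with a lower sequence number, must be requested. The LSP IDs and database layout below are invented.

```python
# Sketch of the DIS's CSNP advertisement: summarize every LSP in the
# database (sequence number and remaining lifetime), and let a neighbor
# detect missing or stale LSPs by comparison. Data is illustrative.

def build_csnp(lsdb):
    """lsdb: {lsp_id: {"seq": int, "lifetime": int}} -> summary dict."""
    return {lsp_id: (lsp["seq"], lsp["lifetime"])
            for lsp_id, lsp in sorted(lsdb.items())}

def missing_or_stale(csnp, local_lsdb):
    """LSP IDs a neighbor must request after hearing the CSNP."""
    wanted = []
    for lsp_id, (seq, _lifetime) in csnp.items():
        local = local_lsdb.get(lsp_id)
        if local is None or local["seq"] < seq:
            wanted.append(lsp_id)
    return wanted
```

This comparison is why periodic CSNPs reduce flooding: instead of every router re-flooding every LSP, neighbors only request what the summary shows they lack.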

What does r RIB-Failure mean in the show ip bgp command output?

When BGP tries to install the best-path prefix into the Routing Information Base (RIB) (that is, the IP routing table), the RIB might reject the BGP route for any of these reasons:

1. A route with a better administrative distance is already present in the routing table. For example, a static route already exists in the IP routing table.

2. Memory failure.

3. The number of routes in VPN routing/forwarding (VRF) exceeds the route-limit configured under the VRF instance.

In such cases, the prefixes that are rejected for these reasons are identified by r RIB-Failure in the show ip bgp command output and are not advertised to the peers. This feature was first made available in Cisco IOS Software Release 12.2(08.05)T.
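The first and third rejection reasons above can be sketched as a RIB install check: a BGP best path loses to an existing route with a better (lower) administrative distance, and a full VRF rejects new routes outright. The distances follow common defaults (static 1, eBGP 20); everything else is invented.

```python
# Sketch of two of the RIB-failure checks listed above: better admin
# distance already present, and VRF route limit exceeded. The RIB is
# modeled as a plain dict; values are illustrative.

BGP_DISTANCE = 20              # common default for eBGP routes

def try_install(prefix, rib, vrf_route_limit):
    """Return (installed, reason) for a BGP best path offered to `rib`."""
    existing = rib.get(prefix)
    if existing is not None and existing["distance"] < BGP_DISTANCE:
        return False, "r RIB-Failure: better admin distance present"
    if len(rib) >= vrf_route_limit:
        return False, "r RIB-Failure: VRF route limit exceeded"
    rib[prefix] = {"proto": "bgp", "distance": BGP_DISTANCE}
    return True, "installed"
```

Note that the rejected prefix still appears in the BGP table; it is only the RIB installation (and, as stated above, advertisement to peers) that fails.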

Qnet symlink manager (QSM)

Qnet is a QNX Neutrino protocol for communication between processes residing on different nodes. It enables IPC to work across nodes. For example, LWM uses Qnet transparently to enable inter-node communication.

The use of a symbolic link (symlink) gives the Qnet protocol location transparency. When a process needs to communicate with another process, it uses the symlink associated with the service and does not need to know where the service is located. A server process registers with the Qnet symlink manager (QSM) and publishes its service using a symlink.
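The location transparency described above can be sketched as a name registry: a server publishes its service under a symlink-style name, and clients resolve that name without knowing which node hosts it. The registry API, node names, and paths below are invented for illustration and are not the actual QSM interface.

```python
# Sketch of symlink-based location transparency: clients resolve a
# service name; the hosting node is hidden behind the registry.
# Names and paths are illustrative, not real QNX paths.

REGISTRY = {}

def register(service, node, path):
    """Server side: publish `service` as a link to its node-local path."""
    REGISTRY[service] = f"/net/{node}{path}"

def resolve(service):
    """Client side: look up the symlink without knowing the node."""
    return REGISTRY[service]

register("display", "node2", "/dev/lwm")
```

If the service later moves, re-registering it under the same name redirects clients transparently, which is the point of indirecting through the symlink.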

NTP Version 4

According to the NTP Version 4 Release Notes found in release.htm, the new features of version four (as compared to version three) are:

  • Use of floating-point arithmetic instead of fixed-point arithmetic
  • Redesigned clock discipline algorithm that improves accuracy, handling of network jitter, and polling intervals
  • Support for the nanokernel kernel implementation, which provides nanosecond precision as well as improved algorithms
  • Public-key cryptography, known as Autokey, which avoids the need for common secret keys
  • Automatic server discovery (manycast mode)
  • Fast synchronization at startup and after network failures (burst mode)
  • New and revised drivers for reference clocks
  • Support for new platforms and operating systems