Oct 12, 2007

GigaMedia Shares Surge 14% as Bear Stearns Rates Stock "Outperform"



(CNA, Taipei, Oct. 12, 2007) According to Bloomberg, shares of GigaMedia (US-GIGM), a Taiwan-based online gaming and gambling operator listed on NASDAQ, surged Thursday after Bear Stearns initiated coverage of the company with an "outperform" rating, driving the stock to a record high. Bear Stearns analyst James Rhee, based in Hong Kong, said that as the European Union loosens its gambling regulations, GigaMedia stands to win customers there. He noted that GigaMedia, which started out as a high-speed Internet service provider, has sold off underperforming businesses and used the proceeds to complete three major acquisitions. "GigaMedia has executed impressively in two high-growth, high-margin businesses, video games and online gambling, which is why it is so well regarded," Rhee said. He projects that online game revenue will grow 83% annually through 2010, while gambling-service sales will grow 26% per year over the same period. GigaMedia shares jumped $2.49, or 14%, to $20.39; the Taipei-headquartered company's stock has more than doubled this year. Rhee expects the shares to reach $27 by the end of next year.

Frame Relay Local Management Interface

The Local Management Interface (LMI) is a set of enhancements to the basic Frame Relay specification. The LMI was developed in 1990 by Cisco Systems, StrataCom, Northern Telecom, and Digital Equipment Corporation. It offers a number of features (called extensions) for managing complex internetworks. Key Frame Relay LMI extensions include global addressing, virtual circuit status messages, and multicasting.

The LMI global addressing extension gives Frame Relay data-link connection identifier (DLCI) values global rather than local significance. DLCI values become DTE addresses that are unique in the Frame Relay WAN. The global addressing extension adds functionality and manageability to Frame Relay internetworks. Individual network interfaces and the end nodes attached to them, for example, can be identified by using standard address-resolution and discovery techniques. In addition, the entire Frame Relay network appears to be a typical LAN to routers on its periphery.

LMI virtual circuit status messages provide communication and synchronization between Frame Relay DTE and DCE devices. These messages are used to periodically report on the status of PVCs, which prevents data from being sent into black holes (that is, over PVCs that no longer exist).

The LMI multicasting extension allows multicast groups to be assigned. Multicasting saves bandwidth by allowing routing updates and address-resolution messages to be sent only to specific groups of routers. The extension also transmits reports on the status of multicast groups in update messages.

Frame Relay Network Implementation

A common private Frame Relay network implementation is to equip a T1 multiplexer with both Frame Relay and non-Frame Relay interfaces. Frame Relay traffic is forwarded out the Frame Relay interface and onto the data network. Non-Frame Relay traffic is forwarded to the appropriate application or service, such as a private branch exchange (PBX) for telephone service or to a video-teleconferencing application.

A typical Frame Relay network consists of a number of DTE devices, such as routers, connected to remote ports on multiplexer equipment via traditional point-to-point services such as T1, fractional T1, or 56-kbps circuits. An example of a simple Frame Relay network is shown in Figure 10-3.

Figure 10-3 A Simple Frame Relay Network Connects Various Devices to Different Services over a WAN


The majority of Frame Relay networks deployed today are provisioned by service providers that intend to offer transmission services to customers. This is often referred to as a public Frame Relay service. Frame Relay is implemented in both public carrier-provided networks and in private enterprise networks. The following sections examine the two methodologies for deploying Frame Relay.

Public Carrier-Provided Networks

In public carrier-provided Frame Relay networks, the Frame Relay switching equipment is located in the central offices of a telecommunications carrier. Subscribers are charged based on their network use but are relieved from administering and maintaining the Frame Relay network equipment and service.

Generally, the DCE equipment also is owned by the telecommunications provider.
DTE equipment either will be customer-owned or perhaps will be owned by the telecommunications provider as a service to the customer.

The majority of today's Frame Relay networks are public carrier-provided networks.

Private Enterprise Networks

More frequently, organizations worldwide are deploying private Frame Relay networks. In private Frame Relay networks, the administration and maintenance of the network are the responsibilities of the enterprise (a private company). All the equipment, including the switching equipment, is owned by the customer.

Frame Relay Frame Formats

To understand much of the functionality of Frame Relay, it is helpful to understand the structure of the Frame Relay frame. Figure 10-4 depicts the basic format of the Frame Relay frame, and Figure 10-5 illustrates the LMI version of the Frame Relay frame.

Flags indicate the beginning and end of the frame. Three primary components make up the Frame Relay frame: the header and address area, the user-data portion, and the frame check sequence (FCS). The address area, which is 2 bytes in length, comprises 10 bits representing the actual circuit identifier and 6 bits of fields related to congestion management. This identifier commonly is referred to as the data-link connection identifier (DLCI). Each of these is discussed in the descriptions that follow.

Standard Frame Relay Frame

Standard Frame Relay frames consist of the fields illustrated in Figure 10-4.

Figure 10-4 Five Fields Comprise the Frame Relay Frame


The following descriptions summarize the basic Frame Relay frame fields illustrated in Figure 10-4.

•Flags—Delimits the beginning and end of the frame. The value of this field is always the same and is represented either as the hexadecimal number 7E or as the binary number 01111110.

•Address—Contains the following information:

–DLCI—The 10-bit DLCI is the essence of the Frame Relay header. This value represents the virtual connection between the DTE device and the switch. Each virtual connection that is multiplexed onto the physical channel will be represented by a unique DLCI. The DLCI values have local significance only, which means that they are unique only to the physical channel on which they reside. Therefore, devices at opposite ends of a connection can use different DLCI values to refer to the same virtual connection.

–Extended Address (EA)—The EA bit indicates whether the current byte is the last octet of the Address field: if the EA bit is set to 1, the current byte is the last DLCI octet. Although current Frame Relay implementations all use a two-octet Address field, this mechanism does allow longer DLCIs to be used in the future. The eighth bit of each byte of the Address field is used to indicate the EA.

–C/R—The C/R is the bit that follows the most significant DLCI byte in the Address field. The C/R bit is not currently defined.

–Congestion Control—This consists of the 3 bits that control the Frame Relay congestion-notification mechanisms. These are the FECN, BECN, and DE bits, which are the last 3 bits in the Address field.

Forward-explicit congestion notification (FECN) is a single-bit field that can be set to a value of 1 by a switch to indicate to an end DTE device, such as a router, that congestion was experienced in the direction of the frame transmission from source to destination. The primary benefit of the use of the FECN and BECN fields is the capability of higher-layer protocols to react intelligently to these congestion indicators. Today, DECnet and OSI are the only higher-layer protocols that implement these capabilities.

Backward-explicit congestion notification (BECN) is a single-bit field that, when set to a value of 1 by a switch, indicates that congestion was experienced in the network in the direction opposite of the frame transmission from source to destination.

Discard eligibility (DE) is set by the DTE device, such as a router, to indicate that the marked frame is of lesser importance relative to other frames being transmitted. Frames that are marked as "discard eligible" should be discarded before other frames in a congested network. This allows for a basic prioritization mechanism in Frame Relay networks.

•Data—Contains encapsulated upper-layer data. Each frame in this variable-length field includes a user data or payload field that will vary in length up to 16,000 octets. This field serves to transport the higher-layer protocol packet (PDU) through a Frame Relay network.

•Frame Check Sequence—Ensures the integrity of transmitted data. This value is computed by the source device and verified by the receiver to ensure integrity of transmission.
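The bit layout just described can be made concrete with a short parser. The sketch below is illustrative only (field positions follow the standard two-octet address layout summarized above) and is not drawn from any particular implementation:

```python
def parse_fr_address(octets: bytes) -> dict:
    """Unpack a two-byte Frame Relay address field.

    Octet 1: DLCI high 6 bits | C/R | EA (= 0, more address bytes follow)
    Octet 2: DLCI low 4 bits | FECN | BECN | DE | EA (= 1, last address byte)
    """
    assert len(octets) == 2
    b1, b2 = octets[0], octets[1]
    dlci = ((b1 >> 2) << 4) | (b2 >> 4)  # 6 high bits + 4 low bits = 10-bit DLCI
    return {
        "dlci": dlci,
        "cr":   (b1 >> 1) & 1,  # command/response (not currently defined)
        "fecn": (b2 >> 3) & 1,  # forward-explicit congestion notification
        "becn": (b2 >> 2) & 1,  # backward-explicit congestion notification
        "de":   (b2 >> 1) & 1,  # discard eligibility
        "ea":   b2 & 1,         # extended address; 1 marks the last octet
    }

# Example: DLCI 100 with the BECN bit set encodes as 0x18 0x45
fields = parse_fr_address(bytes([0x18, 0x45]))
```

The same ten DLCI bits split 6/4 across the two octets, which is why devices at opposite ends of a connection can each interpret the field locally.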

LMI Frame Format

Frame Relay frames that conform to the LMI specifications consist of the fields illustrated in Figure 10-5.

Figure 10-5 Nine Fields Comprise the Frame Relay Frame That Conforms to the LMI Format


The following descriptions summarize the fields illustrated in Figure 10-5.

•Flag—Delimits the beginning and end of the frame.

•LMI DLCI—Identifies the frame as an LMI frame instead of a basic Frame Relay frame. The LMI-specific DLCI value defined in the LMI consortium specification is DLCI = 1023.

•Unnumbered Information Indicator—Sets the poll/final bit to zero.

•Protocol Discriminator—Always contains a value indicating that the frame is an LMI frame.

•Call Reference—Always contains zeros. This field currently is not used for any purpose.

•Message Type—Labels the frame as one of the following message types:

–Status-inquiry message—Allows a user device to inquire about the status of the network.

–Status message—Responds to status-inquiry messages. Status messages include keepalives and PVC status messages.

•Information Elements—Contains a variable number of individual information elements (IEs). IEs consist of the following fields:

–IE Identifier—Uniquely identifies the IE.

–IE Length—Indicates the length of the IE.

–Data—Consists of 1 or more bytes containing encapsulated upper-layer data.

•Frame Check Sequence (FCS)—Ensures the integrity of transmitted data.
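Because LMI frames are distinguished purely by their reserved DLCI, a receiver can classify incoming frames with the same bit arithmetic used for the standard frame. A minimal sketch, assuming the two-octet address layout and the consortium LMI DLCI of 1023:

```python
LMI_DLCI = 1023  # reserved DLCI from the consortium LMI specification

def dlci_of(frame: bytes) -> int:
    """Extract the 10-bit DLCI from the two address octets that
    follow the opening 0x7E flag of a Frame Relay frame."""
    b1, b2 = frame[1], frame[2]  # frame[0] is the 0x7E flag
    return ((b1 >> 2) << 4) | (b2 >> 4)

def is_lmi(frame: bytes) -> bool:
    return dlci_of(frame) == LMI_DLCI

# DLCI 1023 encodes as address octets 0xFC 0xF1 (EA set in the second octet)
assert is_lmi(bytes([0x7E, 0xFC, 0xF1]))
```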

Summary
Frame Relay is a networking protocol that operates at the bottom two layers of the OSI reference model: the physical and data link layers. It is an example of packet-switching technology, which enables end stations to dynamically share network resources.

Frame Relay devices fall into the following two general categories:

•Data terminal equipment (DTEs), which include terminals, personal computers, routers, and bridges

•Data circuit-terminating equipment (DCEs), which transmit the data through the network and are often carrier-owned devices (although, increasingly, enterprises are buying their own DCEs and implementing them in their networks)

Frame Relay networks transfer data using one of the following two connection types:

•Switched virtual circuits (SVCs), which are temporary connections that are created for each data transfer and then are terminated when the data transfer is complete (SVCs are not widely used)

•Permanent virtual circuits (PVCs), which are permanent connections

The DLCI is a value assigned to each virtual circuit and DTE device connection point in the Frame Relay WAN. Two different connections can be assigned the same value within the same Frame Relay WAN—one on each side of the virtual connection.

In 1990, Cisco Systems, StrataCom, Northern Telecom, and Digital Equipment Corporation developed a set of Frame Relay enhancements called the Local Management Interface (LMI). The LMI enhancements offer a number of features (referred to as extensions) for managing complex internetworks, including the following:

•Global addressing

•Virtual circuit status messages

•Multicasting

Cisco EtherChannel Technology

Introduction

The increasing deployment of switched Ethernet to the desktop can be attributed to the proliferation of bandwidth-intensive intranet applications. Any-to-any communications of new intranet applications such as video to the desktop, interactive messaging, and collaborative white-boarding are increasing the need for scalable bandwidth within the core and at the edge of campus networks. At the same time, mission-critical applications call for resilient network designs. With the wide deployment of faster switched Ethernet links in the campus, users need to either aggregate their existing resources or upgrade the speed in their uplinks and core to scale performance across the network backbone.

Cisco EtherChannel® technology builds upon standards-based 802.3 full-duplex Fast Ethernet to provide network managers with a reliable, high-speed solution for the campus network backbone. EtherChannel technology provides bandwidth scalability within the campus by providing up to 800 Mbps, 8 Gbps, or 80 Gbps of aggregate bandwidth for a Fast EtherChannel, Gigabit EtherChannel, or 10 Gigabit EtherChannel connection, respectively. Each of these connection speeds can vary in amounts equal to the speed of the links used (100 Mbps, 1 Gbps, or 10 Gbps). Even in the most bandwidth-demanding situations, EtherChannel technology helps aggregate traffic and keep oversubscription to a minimum, while providing effective link-resiliency mechanisms.

Cisco EtherChannel Benefits

Cisco EtherChannel technology provides a solution for network managers who require higher bandwidth between servers, routers, and switches than single-link Ethernet technology can provide.

Cisco EtherChannel technology provides incremental scalable bandwidth and the following benefits:

•Standards-based—Cisco EtherChannel technology builds upon IEEE 802.3-compliant Ethernet by grouping multiple, full-duplex point-to-point links together. EtherChannel technology uses IEEE 802.3 mechanisms for full-duplex autonegotiation and autosensing, when applicable.

•Multiple platforms—Cisco EtherChannel technology is flexible and can be used anywhere in the network that bottlenecks are likely to occur. It can be used in network designs to increase bandwidth between switches and between routers and switches—as well as providing scalable bandwidth for network servers, such as large UNIX servers or PC-based Web servers.

•Flexible incremental bandwidth—Cisco EtherChannel technology provides bandwidth aggregation in multiples of 100 Mbps, 1 Gbps, or 10 Gbps, depending on the speed of the aggregated links. For example, network managers can deploy EtherChannel technology that consists of pairs of full-duplex Fast Ethernet links to provide more than 400 Mbps between the wiring closet and the data center. In the data center, bandwidths of up to 800 Mbps can be provided between servers and the network backbone to provide large amounts of scalable incremental bandwidth.

•Load balancing—Cisco EtherChannel technology is composed of several Fast Ethernet links and is capable of load balancing traffic across those links. Unicast, broadcast, and multicast traffic is evenly distributed across the links, providing higher performance and redundant parallel paths. When a link fails, traffic is redirected to the remaining links within the channel without user intervention and with minimal packet loss.

•Resiliency and fast convergence—When a link fails, Cisco EtherChannel technology provides automatic recovery by redistributing the load across the remaining links. When a link fails, Cisco EtherChannel technology redirects traffic from the failed link to the remaining links in less than one second. This convergence is transparent to the end user—no host protocol timers expire, so no sessions are dropped.

•Ease of management—Cisco EtherChannel technology takes advantage of Cisco experience developed over the years in troubleshooting and maintaining Ethernet networks. Existing network probes can be used for traffic management and troubleshooting, and management applications such as CiscoWorks and third-party management applications are now EtherChannel-aware.

•Transparent to network applications—Cisco EtherChannel technology does not require changes to networked applications. When EtherChannel technology is used within the campus, switches and routers provide load balancing across multiple links transparently to network users. To support EtherChannel technology on enterprise-class servers and network interface cards, smart software drivers can coordinate distribution of loads across multiple network interfaces.

•Compatible with Cisco IOS® Software—Cisco EtherChannel connections are fully compatible with Cisco IOS virtual LAN (VLAN) and routing technologies. The Inter-Switch Link (ISL) VLAN Trunking Protocol (VTP) can carry multiple VLANs across an EtherChannel link, and routers attached to EtherChannel trunks can provide full multiprotocol routing with support for hot standby using the Hot Standby Router Protocol (HSRP).

•100 Megabit, 1 Gigabit, and 10 Gigabit Ethernet-ready—Cisco EtherChannel technology is available in all Ethernet link speeds. EtherChannel technology allows network managers to deploy networks that will scale smoothly with the availability of next-generation, standards-based Ethernet link speeds.

•Interoperability with Coarse Wavelength Division Multiplexing (CWDM) Gigabit Interface Converters (GBICs)—By simultaneously implementing Gigabit EtherChannel and CWDM technologies, network managers can increase the bandwidth of their links without having to invest in new long runs of fiber. CWDM technologies allow the traffic aggregated by the Cisco EtherChannel link to be multiplexed on to a single strand of fiber.

Cisco EtherChannel Components

Cisco EtherChannel technology is a trunking technology based on grouping several full-duplex 802.3 Ethernet links to provide fault-tolerant, high-speed links between switches, routers, and servers. It is based on proven industry-standard technology—it has been extended from the EtherChannel technology offered by Kalpana in its switches in the early 1990s, and provides load sharing across multiple Fast Ethernet links while providing redundancy and subsecond convergence times.

Cisco EtherChannel technology consists of the following key elements:

•Ethernet links—Cisco EtherChannel connections can consist of one to eight industry-standard Ethernet links to load share traffic, providing up to 80 Gbps of usable bandwidth when eight 10 Gigabit Ethernet links are aggregated. EtherChannel connections can interconnect LAN switches, routers, servers, and clients. Because load balancing is integrated with Cisco Catalyst® LAN switch architectures, there is no performance degradation for adding links to a channel—high throughput and low latencies can be maintained while gaining more available bandwidth. EtherChannel technology provides link resiliency within a channel—if links fail, the traffic is immediately directed to the remaining links. Finally, EtherChannel technology is not dependent on any type of media—it can be used with Ethernet running on existing unshielded twisted pair (UTP) wiring, or single-mode and multimode fiber.

•Cisco EtherChannel technology is a standard feature across the entire Cisco Catalyst series of switches and Cisco IOS® Software-based routers. The load-sharing algorithms used vary between platforms, allowing for decisions based on source or destination Media Access Control (MAC) addresses, IP addresses, or Transmission Control Protocol/User Datagram Protocol (TCP/UDP) port numbers.

•Redundancy—Cisco EtherChannel technology does not require the use of 802.1D Spanning-Tree Protocol to maintain a topology state within the channel. Rather, it uses a peer-to-peer control protocol that provides autoconfiguration and subsecond convergence times for parallel links, yet allows higher-level protocols (such as Spanning-Tree Protocol) or existing routing protocols to maintain topology. This approach allows EtherChannel technology to use the recovery features of the network without adding complexity or creating incompatibilities with third-party equipment or software. Because the Spanning-Tree Protocol operation is completely standards-based, network managers can use their existing network topologies, augmenting bandwidth by installing EtherChannel technology where single Ethernet links were previously installed.

•Management—Cisco EtherChannel technology is easily configured by a command-line interface (CLI) or by Simple Network Management Protocol (SNMP) applications such as CiscoWorks. A network manager needs to identify and define the number of ports that will make up the channel, and then connect the devices. CiscoWorks for Switched Internetworks will graphically display EtherChannel connections between devices, collect statistics for both individual Ethernet links within the channel, and aggregate statistics for the EtherChannel connection. An integral benefit of EtherChannel technology is the ability to detect, report, and prevent the use of incorrectly paired interfaces within the channel. These may include interfaces that are not configured for full-duplex operation, have mismatched link speeds, or are incorrectly wired. Consistency checks are completed before the activation of a channel to help ensure network integrity.
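As noted above, the exact load-sharing algorithm varies by platform. The sketch below illustrates the general idea behind low-order-bit hashing: XOR two address values (MAC addresses, IP addresses, or TCP/UDP port numbers, depending on platform policy) and keep just enough bits to index a member link. It is a simplified model, not the actual Catalyst algorithm:

```python
def channel_link(src: int, dst: int, n_links: int) -> int:
    """Pick a member link by XOR-ing the low-order bits of two address
    values. n_links is assumed to be a power of two (1, 2, 4, or 8),
    which is what makes low-order-bit hashing distribute flows evenly."""
    assert n_links in (1, 2, 4, 8)
    return (src ^ dst) & (n_links - 1)

# All packets of a given flow hash to the same link, preserving ordering
link = channel_link(0x00A0C912, 0x00D0B731, 4)
```

A hash keyed on flow identifiers (rather than round-robin) is what lets EtherChannel balance load without reordering packets within a conversation.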

Cisco EtherChannel Topologies

The following diagrams show some common applications of Cisco EtherChannel technology and how they solve the bandwidth requirements of today's networks. Fast EtherChannel and Fast Ethernet links will be used throughout these examples.

Figure 1 shows a network using Cisco EtherChannel connections. The bandwidth between the wiring closets and the data center has been doubled, from 200 Mbps to 400 Mbps. In addition to the increased bandwidth, the resiliency within the channel provides for subsecond convergence if one of the links fails.

Figure 1

Scaling Performance Between Wiring Closets and the Data Center



Figure 2 shows a topology where the network manager has increased bandwidth between the data center and the wiring closet to an aggregate of 800 Mbps, but has also used the physical diversity of the fiber plant to decrease the chances of a network outage. Using a Cisco EtherChannel connection consisting of four Fast Ethernet links, two fiber runs on the east side of the building provide 400 Mbps, and the west side of the building provides the remaining 400 Mbps. In this example, in the event of a fiber cut on one side of the building, the remaining side will pick up the traffic in less than one second, without wiring closet clients losing sessions.

Figure 2

Scaling Bandwidth with Resilience



Figure 3 shows a configuration where a switch has been configured with two Cisco EtherChannel connections consisting of two links each. Because these are separate channels, Spanning-Tree Protocol will block the second channel to avoid the looped topology. This design is applicable where EtherChannel connections are resident on separate line cards within the switch for resiliency.

Figure 3

Resilience with Cisco EtherChannel Technology Using Spanning-Tree Protocol



Figure 4 shows a complete network design based on Cisco EtherChannel technology. As in the previous examples, links from the wiring closets are brought into the data center using 400 Mbps channels, providing bandwidth and resiliency. In the data center, routers are interconnected with EtherChannel connections, providing improved performance by having more bandwidth available to route between subnets. Here the router is configured with two dual-link EtherChannel connections to provide 400 Mbps of bandwidth on each subnet. The EtherChannel technology provides load balancing across two links within the channel based on IP addresses, and the links within the channel can use ISL encapsulation to support multiple subnets per link. The last component in this network design is a server attached via a four-link EtherChannel connection, which provides 800 Mbps of bandwidth to the network. Typical platforms that would require such bandwidth would be high-end Pentium Pro servers, enterprise servers, and high-end graphics imaging and rendering servers. As shown in Figure 4, the server is connected via a multiple-link EtherChannel connection—an excellent match for the bandwidth needs of locally attached users and the users serviced via the router.

Figure 4

Cisco EtherChannel Technology Interconnecting Servers, Switches, and Routers Across the Campus



Figure 5 shows a sample network where Gigabit links are used with Gigabit EtherChannel and CWDM technologies. In Figure 5, four Gigabit Ethernet links have been combined for a total aggregate bandwidth of 4 Gbps. Without incorporating CWDM technologies into the solution, four runs of fiber would need to be installed between the two campus points of presence (POPs). By employing CWDM GBICs and two CWDM add/drop multiplexers, the number of fiber runs can be reduced to one. This translates into significant savings, depending on the distance to be spanned by the EtherChannel connection.

Figure 5

Cisco EtherChannel Technology over CWDM



Summary

Cisco EtherChannel technology leverages standards-based Ethernet links used in a parallel topology, taking advantage of existing technology to provide the additional bandwidth that network backbones require. EtherChannel technology provides flexible, scalable bandwidth with resiliency and load sharing across links for switches, router interfaces, and servers. EtherChannel technology provides the tools for network managers to build high-speed solutions for their campus network backbones, while using the existing cabling and network device infrastructure. EtherChannel technology can aggregate all available Ethernet link speeds, from 10 Mbps to 10 Gbps.

Operation of Multicast Source Discovery Protocol (MSDP)

If you have ever studied these books seriously, you may recognize the source of this article: it comes from a site where book content can be read online, and the excerpt below is from Routing TCP/IP, Volume II (CCIE Professional Development). I think this explanation of MSDP is quite important for everyone, because most people have never actually used MSDP—but candidates preparing for the SP CCIE have no choice but to study it carefully!

...(excerpt omitted)

Operation of Multicast Source Discovery Protocol (MSDP)

The purpose of MSDP is, as the name states, to discover multicast sources in other PIM domains. The advantage of running MSDP is that your own RPs exchange source information with RPs in other domains; your group members do not have to be directly dependent on another domain's RP.

NOTE

You will see in some subsequent case studies how MSDP can prove useful for sharing source information within a single domain, too.

MSDP uses TCP (port 639) for its peering connections. As with BGP, using point-to-point TCP peering means that each peer must be explicitly configured. When a PIM DR registers a source with its RP, as illustrated in Figure 7-8, the RP sends a Source Active (SA) message to all of its MSDP peers.

Figure 7-8. RPs Advertise Sources to Their MSDP Neighbors with Source Active Messages



The SA contains the following:

•The address of the multicast source
•The group address to which the source is sending
•The IP address of the originating RP

Each MSDP peer that receives the SA floods the SA to all of its own peers downstream from the originator. In some cases, such as the RPs in AS 6 and AS 7 of Figure 7-8, an RP may receive a copy of an SA from more than one MSDP peer. To prevent looping, the RP consults the BGP next-hop database to determine the next hop toward the SA's originator. If both MBGP and unicast BGP are configured, MBGP is checked first, and then unicast BGP. That next-hop neighbor is the RPF peer for the originator, and SAs received from the originator on any interface other than the interface to the RPF peer are dropped. The SA flooding process is, therefore, called peer RPF flooding. Because of the peer RPF flooding mechanism, BGP or MBGP must be running in conjunction with MSDP.
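The peer RPF rule can be modeled as a simple accept/drop check. In the sketch below, the `next_hop_toward` mapping is a hypothetical stand-in for the (M)BGP next-hop lookup toward an SA's originating RP:

```python
# Illustrative model of MSDP peer-RPF flooding; not router code.

def make_rpf_checker(next_hop_toward):
    """next_hop_toward maps an originating RP address to the MSDP peer
    that lies on the BGP best path toward that RP (the RPF peer)."""
    def accept_sa(originator: str, received_from_peer: str) -> bool:
        # SAs arriving from any peer other than the RPF peer are
        # dropped, which prevents SA messages from looping.
        return next_hop_toward.get(originator) == received_from_peer
    return accept_sa

accept_sa = make_rpf_checker({"10.5.4.3": "peer-as6"})
accept_sa("10.5.4.3", "peer-as6")  # accepted and flooded onward
accept_sa("10.5.4.3", "peer-as7")  # duplicate copy, dropped
```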

When an RP receives an SA, it checks whether there are any members of the SA's group in its domain by checking whether there are interfaces on the group's (*, G) outgoing interface list. If there are no group members, the RP does nothing. If there are group members, the RP sends an (S, G) join toward the source. As a result, a branch of the source tree is constructed across AS boundaries to the RP. As multicast packets arrive at the RP, they are forwarded down its own shared tree to the group members in the RP's domain. The members' DRs then have the option of joining the shortest path tree (SPT) to the source using standard PIM-SM procedures.

The originating RP continues to send periodic SAs for the (S, G) every 60 seconds for as long as the source is sending packets to the group. When an RP receives an SA, it has the option to cache the message. Suppose, for example, that an RP receives an SA for (172.16.5.4, 228.1.2.3) from originating RP 10.5.4.3. The RP consults its mroute table and finds that there are no active members for group 228.1.2.3, so it passes the SA message to its peers downstream of 10.5.4.3 without caching the message. If a host in the domain then sends a join to the RP for group 228.1.2.3, the RP adds the interface toward the host to the outgoing interface list of its (*, 228.1.2.3) entry. Because the previous SA was not cached, however, the RP has no knowledge of the source. Therefore, the RP must wait until the next SA message is received before it can initiate a join to the source.

If, on the other hand, the RP is caching SAs, the router will have an entry for (172.16.5.4, 228.1.2.3) and can join the source tree as soon as a host requests a join. The trade-off here is that in exchange for reducing the join latency, memory is consumed caching SA messages that may or may not be needed. If the RP belongs to a very large MSDP mesh, and there are large numbers of SAs, the memory consumption can be significant.

By default, Cisco IOS Software does not cache SAs. You can enable caching with the command ip msdp cache-sa-state. To help alleviate possible memory stress, you can link the command to an extended access list that specifies what (S, G) pairs to cache.

If an RP has an MSDP peer that is caching SAs, you can reduce the join latency at the RP without turning on caching by using SA Request and SA Response messages. When a host requests a join to a particular group, the RP sends an SA Request message to its caching peer(s). If a peer has cached source information for the group in question, it sends the information to the requesting RP with an SA Response message. The requesting RP uses the information in the SA Response but does not forward the message to any other peers. If a noncaching RP receives an SA Request, it sends an error message back to the requestor.

To enable a Cisco router to send SA Request messages, use the ip msdp sa-request command to specify the IP address or name of a caching peer. You can use the command multiple times to specify multiple caching peers.
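The caching trade-off described above amounts to a difference in join latency. The toy model below (not router code) captures it: a caching RP can join immediately, while a non-caching RP may wait up to one 60-second SA interval:

```python
SA_INTERVAL = 60.0  # seconds between periodic SA messages

class Rp:
    """Toy model of an MSDP RP with optional SA caching."""
    def __init__(self, cache_sa_state: bool):
        self.cache_sa_state = cache_sa_state
        self.sa_cache = {}  # (source, group) -> originating RP

    def receive_sa(self, source, group, originator):
        if self.cache_sa_state:
            self.sa_cache[(source, group)] = originator

    def join_latency(self, source, group, time_since_last_sa: float) -> float:
        """Extra delay before this RP can send an (S, G) join."""
        if (source, group) in self.sa_cache:
            return 0.0                           # source already known
        return SA_INTERVAL - time_since_last_sa  # wait for the next SA

caching, noncaching = Rp(True), Rp(False)
for rp in (caching, noncaching):
    rp.receive_sa("172.16.5.4", "228.1.2.3", "10.5.4.3")

caching.join_latency("172.16.5.4", "228.1.2.3", 10.0)     # 0.0
noncaching.join_latency("172.16.5.4", "228.1.2.3", 10.0)  # 50.0
```

The memory cost is the `sa_cache` itself, which is why large MSDP meshes with many SAs make caching expensive.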

...(excerpt omitted)

Deploying Control Plane Policing

PROTECTING THE ROUTE PROCESSOR

A router can be logically divided into four functional components or planes:
1. Data Plane
2. Management Plane
3. Control Plane
4. Services Plane

The vast majority of traffic travels through the router via the data plane; however, the Route Processor must handle certain packets, such as routing updates, keepalives, and network management. This is often referred to as control and management plane traffic.

Because the Route Processor is critical to network operations, any service disruption to the Route Processor or the control and management planes can result in business-impacting network outages. A DoS attack targeting the Route Processor, which can be perpetrated either inadvertently or maliciously, typically involves high rates of punted traffic that result in excessive CPU utilization on the Route Processor itself. This type of attack, which can be devastating to network stability and availability, may display the following symptoms:

• High Route Processor CPU utilization (near 100%)

• Loss of line protocol keepalives and routing protocol updates, leading to route flaps and major network transitions

• Interactive sessions via the Command Line Interface (CLI) are slow or completely unresponsive due to high CPU utilization

• Route Processor resource exhaustion—resources such as memory and buffers are unavailable for legitimate IP data packets

• Packet queue backup, which leads to indiscriminate drops (or drops due to lack of buffer resources) of other incoming packets

CPP addresses the need to protect the control and management planes, ensuring routing stability, availability, and packet delivery.

It uses a dedicated control-plane configuration via the Modular QoS CLI (MQC) to provide filtering and rate limiting capabilities for control plane packets.

Figure 1 illustrates the flow of packets from various interfaces. Packets destined to the control plane are subject to control plane policy checking, as depicted by the control plane services block.

Figure 1. Packet Flow



COMMAND SYNTAX

CPP leverages MQC to define traffic classification criteria and to specify configurable policy actions for the classified traffic. Traffic of interest must first be identified via class maps, which define the packets belonging to a particular traffic class. Once classified, enforceable policy actions for the identified traffic are created with policy maps. The control-plane global command then attaches the service policies to the control plane itself.

There are four steps required to configure CPP:

1. Define packet classification criteria
router(config)#class-map
router(config-cmap)#match

2. Define a service policy
router(config)#policy-map
router(config-pmap)#class
router(config-pmap-c)#police <cir | rate> conform-action <action> exceed-action <action>
 cir Committed information rate (bits per second)
 rate Specify policy rate in packets per second (pps)

3. Enter control-plane configuration mode
router(config)#control-plane

* When using the `match protocol` classification criterion, ARP is the only protocol supported. All other protocols require an ACE entry for classification purposes.

4. Apply QoS policy

service-policy {input | output}
 input Assign policy-map to the input of an interface
 output Assign policy-map to the output of an interface
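Putting the four steps together, a minimal sketch might look as follows (the ACL number, class and policy names, and police rate are hypothetical):

router(config)#access-list 140 permit tcp any any eq telnet
router(config)#class-map match-all CPP-TELNET
router(config-cmap)#match access-group 140
router(config)#policy-map CPP-POLICY
router(config-pmap)#class CPP-TELNET
router(config-pmap-c)#police 80000 conform-action transmit exceed-action drop
router(config)#control-plane
router(config-cp)#service-policy input CPP-POLICY

This rate-limits Telnet traffic punted to the Route Processor to 80 kbps, dropping the excess before it can consume CPU.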

...(omitted)

MPLS FAQ For Beginners

...(omitted)

Q. What protocol and port numbers do LDP and TDP use to distribute labels to LDP/TDP peers?

A. LDP uses TCP port 646, and TDP uses TCP port 711. These ports are opened on the router interface only when mpls ip is configured on the interface. The use of TCP as a transport protocol results in reliable delivery of LDP/TDP information with robust flow control and congestion handling mechanisms.
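As described above, the port is opened once mpls ip is configured on the interface. A minimal sketch (the interface name is hypothetical):

router(config)#mpls label protocol ldp
router(config)#interface Serial0/0
router(config-if)#mpls ip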

...(omitted)

CISCO IOS NETFLOW OVERVIEW

Honestly, while digging for these key points I have come to believe the people writing the CCIE exam questions have gone nearly insane. As you can see, almost none of the material I am finding now comes from the Cisco Press books; it comes instead from passages in Cisco seminar PPT/PDF files or the Documentation CD from various eras. In other words, if you have no braindumps and have not read these extra Cisco tech slides, there is a good chance you will face questions and answers you have never seen before (and even if you had seen them, you could hardly remember them in this much detail...). I suspect this is why the braindump market is so large: sitting the exam without them is throwing money away, because at this point it is no longer a test of skill. At the root of it, the question writers put no effort into designing questions that test a candidate's ability; they simply copy and paste from all sorts of tech material, which is why the CCIE Written questions and answers so often fail to match up. I spent a lot of time compiling this, so forgive me for venting a little~

Cisco IOS NetFlow Origination

• Developed and patented at Cisco® Systems in 1996
• NetFlow is now the primary network accounting technology in the industry
=> Answers questions regarding IP traffic: who, what, where, when, and how
• Provides a detailed view of network behavior

...(omitted)

NetFlow Principles

• Inbound traffic only today
• Unidirectional flow
=> Accounts for both transit traffic and traffic destined for the router
• Works with Cisco Express Forwarding or fast switching
=> Not a switching path
• Supported on all interfaces and Cisco IOS Software hardware products
=> Returns the sub-interface information in the flow records

...(omitted)

NetFlow Versions

Version 1: Original
Version 5: Standard and most common
Version 7: Specific to Cisco Catalyst 6500 and 7600 Series Switches. Similar to Version 5, but does not include AS, interface, TCP flag, and ToS information
Version 8: Choice of eleven aggregation schemes. Reduces resource usage
Version 9: Flexible, extensible export format to enable easier support of additional fields and technologies; now emerging for MPLS, multicast, and BGP next hop
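As a minimal sketch, NetFlow can be enabled on an interface and exported in Version 5 format as follows (the interface, collector address, and UDP port are hypothetical):

router(config)#interface FastEthernet0/0
router(config-if)#ip route-cache flow
router(config)#ip flow-export version 5
router(config)#ip flow-export destination 192.168.1.50 9996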

...(omitted)

Oct 11, 2007

Cisco IP/MPLS Interprovider Solution Deployment Overview

...(omitted)

Inter-AS/Interprovider specification in RFC2547bis

IETF, RFC2547bis, Paragraph 10 :
10A: Simple IP interconnect: the other network looks like a CE for each cross-SP VPN

10B: Trusted MPLS interconnect: one logical connection for all VPNs, but VPN routes still have to be maintained on provider border routers

10C: Trusted and even more scalable MPLS interconnect: provider border routers do not have to maintain VPN routes

...(omitted)

Autonomous system interconnect using content identification and validation

...(omitted)

[0010] The industry has standardized on a few Inter-Autonomous System (AS) models that the service providers may deploy. The current industry standards for Inter-AS solutions include the models defined as 10a, 10b, and 10c.

[0011] The first model defined and deployed by many service providers is the 10a model. The 10a model requires the provider to build on their ASBR a VRF per VPN, a unique peering interface per VRF, and a unique routing process per VRF. The peer ASBR does the same thereby creating a one-for-one relationship between the two ASBR's. The advantages of the 10a model include discrete interfaces facilitating QoS mechanisms and explicit resource management methods that protect the memory and processing resources. Likewise, the exposure of the ASBR and the attached network is limited.

[0012] The second model defined and deployed by a few service providers is the 10b model. The 10b model only requires the provider to build a single interface for each peer and a single routing process on the interface. The routing process (MP-BGP) is able to maintain the segregation of VPN prefixes without having to use discrete VRF's per enterprise VPN. The advantages include less memory consumption for the routing prefixes and interfaces, less processor consumption for the routing process, and automatic VPN session binding between the ASBR's.

[0013] The third model defined and rarely deployed by service providers is the 10c method. The 10c model only requires the provider to build a single interface for each peer and a single routing process on the interface. A routing process (MP-BGP) is able to maintain the segregation of VPN prefixes without requiring a presence on the ASBR. The advantages include even less memory consumption for the routing prefixes since the VPN prefixes are passed around the ASBR. The ASBR has even less processor consumption since the ASBR serves as a core device providing connectivity between the two AS's.

[0014] The two most commonly used models, 10a and 10b, have orthogonal capabilities. Where 10a is strong, 10b is weak and vice versa. Table 1 provides a synopsis of the existing solutions.

TABLE 1
ASBR            10a         10b        10c
Routing         Many        One        One
Interfaces      Many        One        One
Memory          Per-prefix  Per-label  Per-label
QoS             Per-VPN     Global     Global
Configuration   Manual      Dynamic    Dynamic
Resource        Strong      Weak       Weak
Security        Strong      Weak       Very weak

[0015] Routing processes are complex state machines that keep track of the prefixes and the paths to reach the prefixes. Routing processes can be constrained by a number of factors such as the number of peers or adjacencies, the number of routing entries, and the number of potentially viable paths for each routing entry. As the number of prefixes and interfaces increase, the computation complexity increases thereby requiring more processor schedule time. Excessive computational routing complexity on the ASBR may impact any or all the VPN's. As shown in Table 1, the 10a method requires many routing processes, while the 10b and 10c methods require a single routing process.

[0016] Interfaces consume memory constructs and typically require an operator to configure the interface and the associate peer entity. The cost of a VPN interface is usually not too cumbersome in an Inter-AS solution as the number of VPNs is typically small. Nevertheless, the interface must be created and correctly associated with the appropriate customer. The 10a method requires many interfaces, while the 10b and 10c methods require a single interface.

[0017] Memory is allocated for VPN prefixes. VPN prefixes can create a resource burden on the ASBR. The number of prefixes is not directly controlled by a single provider or customer, but by the aggregate set of operators and customers. For this reason, memory allocated for VPN prefixes may be very precious. The 10a method requires memory on a per-prefix basis, while the 10b and 10c methods require memory on a per-label and per-prefix basis.

[0018] The customers of the MPLS VPN are particularly interested in QoS, especially at provider boundaries where SLA's tend to be difficult to enforce. Each enterprise has unique QoS requirements that may be difficult to handle in aggregate; however, provisioning a QoS model per customer is also a challenge especially when there is no discrete point where the QoS model may be applied. The 10a method allows QoS on a per-VPN basis, whereas the 10b and 10c methods only allow QoS on a global basis.

[0019] The Inter-AS model requires a configuration that establishes a relationship between the ASBR's for each VPN. The configuration should be simple to implement and should be easy to replicate. All methods require manual configuration, either through CLI or a management tool, although 10a has additional configuration burden due to the number of VRFs/interfaces required.

[0020] Resources (memory, interfaces, and processor schedule time) are precious for a service provider. In particular, the provider is interested in conducting "One Time Provisioning" for many services. In addition, the management of the allocated resources can become a burden. To minimize the Operation Expenditures, the provider will frequently over-provision many of the components in a solution if the Capital Costs of the components are negligible. On the contrary, the expensive components are monitored closely and judiciously allocated. Resource management plays a critical role ensuring SLA's are met. The 10a method provides strong resource management, while the 10b and 10c methods provide weak resource management.

[0021] Closely related with resource management is security. Security requirements permeate the solution such that the provider can protect their assets, their ability to provide services, as well as one customer from another customer. Security is based on a risk management model where the law of diminishing returns plays a critical role. The cost of security (capital costs, functional costs, operational costs) must be balanced against the potential risk (liability costs, credibility, etc.). Clearly, failure to address the security requirements of a solution makes the previous points highlighted somewhat pointless. The 10a method provides for strong security, while the 10b method provides weaker security and the 10c method provides even weaker security than the 10b method.

[0022] Conventional mechanisms such as those explained above suffer from a variety of deficiencies. One such deficiency is that the conventional 10a model consumes more resources on the ASBR which limits the scalability of the model. Resources include establishment of routing entries, interfaces, and routing processes. Routing entries and interfaces consume memory while routing processes consume processing resources. In addition, each of the constructs must be manually configured per customer.

...

[0064] One method of controlling the number of prefixes received from the peer ASBR is to bound the memory space allocated to the VPN. This is accomplished in the Inter-AS 10a model by only accepting a certain number of prefixes for the VRF associated with the customer. The identification of customer prefixes is determined by the specific routing adjacency with the peer ASBR (e.g. unique OSPF process or address family for BGP, EIGRP, or RIP). In the 10b model, there is no means of automatically identifying a customer's set of prefixes in the global LFIB. Each VPN prefix is tagged with the BGP next-hop, the Route Distinguisher (RD), and one or more Route Targets (RT). The BGP next-hop is not unique per customer and an administrative domain operator frequently uses multiple RD's for a single MPLS VPN customer. The only element that may uniquely define a customer's set of prefixes is the RT. The approach to bounding the set of VPN prefixes is to allocate memory for the customer's set of prefixes and to populate the memory by matching a subset of the RT values received via the BGP VPNv4 updates. A potential technique for accomplishing this is to partition the LFIB space on a per customer basis. The ASBR will receive VPNv4 prefixes, match those with a specified RT value for a given VPN LFIB memory allocation, and build a VPNv4 label switching entry in the partitioned LFIB. This prevents excessive VPN prefixes received from the peer ASBR from consuming a local ASBR's memory. The memory partition for a single VPN might be exhausted; however, the problem is contained to this individual VPN.

...(omitted)

MPLS VPN - Route Target Rewrite

The MPLS VPN—Route Target Rewrite feature allows the replacement of route targets on incoming and outgoing Border Gateway Protocol (BGP) updates. Typically, Autonomous System Border Routers (ASBRs) perform the replacement of route targets at autonomous system boundaries. Route Reflectors (RRs) and provider edge (PE) routers can also perform route target replacement.

The main advantage of the MPLS VPN - Route Target Rewrite feature is that it keeps the administration of routing policy local to the autonomous system.

Prerequisites for MPLS VPN - Route Target Rewrite

The MPLS VPN - Route Target Rewrite feature requires the following:

•You should know how to configure Multiprotocol Label Switching Virtual Private Networks (MPLS VPNs).

•You need to configure your network to support Inter-AS (interautonomous system) VPNs with different route target (RT) values in each autonomous system.

•You need to identify the RT replacement policy and target router for each autonomous system.

Restrictions for MPLS VPN - Route Target Rewrite

You can apply multiple replacement rules using the route-map continue clause. The MPLS VPN - Route Target Rewrite feature does not support the continue clause on outbound route maps.

Information About MPLS VPN - Route Target Rewrite

To configure the MPLS VPN - Route Target Rewrite feature, you need to understand the following concepts:

• Route Target Replacement Policy

• Route Maps and Route Target Replacement

Route Target Replacement Policy

Routing policies for a peer include all configurations that may impact inbound or outbound routing table updates. The MPLS VPN - Route Target Rewrite feature can influence routing table updates by allowing the replacement of route targets on inbound and outbound BGP updates. Route targets are carried as extended community attributes in BGP Virtual Private Network IP Version 4 (VPNv4) updates. Route target extended community attributes are used to identify a set of sites and VPN routing and forwarding (VRF) instances that can receive routes with a configured route target.

In general, ASBRs perform route target replacement at autonomous system borders when the ASBRs exchange VPNv4 prefixes. You can also configure the MPLS VPN - Route Target Rewrite feature on PE routers and RR routers.

Figure 1 shows an example of route target replacement on ASBRs in an MPLS VPN Inter-autonomous system topology. This example includes the following configurations:

•PE1 is configured to import and export RT 100:1 for VRF VPN1.

•PE2 is configured to import and export RT 200:1 for VRF VPN2.

•ASBR1 is configured to rewrite all inbound VPNv4 prefixes with RT 200:1 to RT 100:1.

•ASBR2 is configured to rewrite all inbound VPNv4 prefixes with RT 100:1 to RT 200:1.

Figure 1 Route Target Replacement on ASBRs in an MPLS VPN Inter-AS Topology
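The ASBR1 rewrite described above (inbound RT 200:1 rewritten to RT 100:1) might be sketched as follows; the extcommunity-list number, route-map name, AS number, and peer address are hypothetical:

router(config)#ip extcommunity-list 1 permit rt 200:1
router(config)#route-map RT-REWRITE permit 10
router(config-route-map)#match extcommunity 1
router(config-route-map)#set extcomm-list 1 delete
router(config-route-map)#set extcommunity rt 100:1 additive
router(config)#route-map RT-REWRITE permit 20
router(config)#router bgp 100
router(config-router)#address-family vpnv4
router(config-router-af)#neighbor 10.0.0.2 route-map RT-REWRITE in

The empty permit 20 clause passes all other VPNv4 prefixes through unmodified.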


Figure 2 shows an example of route target replacement on route reflectors in an MPLS VPN Inter-autonomous system topology. This example includes the following configurations:

•EBGP is configured on the route reflectors.

•EBGP and IBGP IPv4 label exchange is configured between all BGP routers.

•Peer groups are configured on the route reflectors.

•PE2 is configured to import and export RT 200:1 for VRF VPN2.

•PE2 is configured to import and export RT 200:2 for VRF VPN3.

•PE1 is configured to import and export RT 100:1 for VRF VPN1.

•RR1 is configured to rewrite all inbound VPNv4 prefixes with RT 200:1 or RT 200:2 to RT 100:1.

•RR2 is configured to rewrite all inbound prefixes with RT 100:1 to RT 200:1 and RT 200:2.

Figure 2 Route Target Rewrite on Route Reflectors in an MPLS VPN Inter-autonomous system Topology


...(omitted)

Using IS-IS ATT-Bit Control Feature

Using the IS-IS Attach-Bit Control Feature


Introduction

In Intermediate System-to-Intermediate System (IS-IS) networks, routing inter-area traffic from Layer 1 areas is accomplished by sending the traffic to the nearest Layer 1/Layer 2 router. A Layer 1/Layer 2 router identifies itself by setting an attach-bit (ATT-bit) in its Layer 1 link-state packet (LSP). In some situations, however, it might not be desirable to set the ATT-bit. For example, if there are multiple Layer 1/Layer 2 routers within a Layer 1 area and one of the Layer 1/Layer 2 routers loses its backbone connection, continuing to send inter-area traffic to this Layer 1/Layer 2 router can cause the traffic to be dropped. Cisco IOS® Software now introduces a new capability to allow network administrators to control when a Layer 1/Layer 2 router should set the ATT bit and avert dropped traffic.

Overview

In networks running hierarchical routing protocols—IS-IS or Open Shortest Path First (OSPF) Protocol, for example—it is beneficial, for redundancy purposes, to have multiple paths reach the backbone area from a local area. If one of the paths is lost to the backbone area, the other path can continue to be used for forwarding inter-area traffic. With IS-IS, routing the inter-area traffic is accomplished by sending the traffic to the closest Layer 1/Layer 2 router. Layer 1/Layer 2 routers identify themselves by setting the ATT-bit in their Layer 1 LSPs. Upon receiving an LSP with the ATT-bit set, a Layer 1 router knows that the LSP originator is a Layer 1/Layer 2 router that can be used to route inter-area traffic. When there are multiple Layer 1/Layer 2 routers in one local area, the Layer 1 routers within that local area forward inter-area traffic to the nearest Layer 1/Layer 2 router (Figure 1).

In Figure 1, the network element (NE) devices in Area 1 are acting as Layer 1 routers. They use either Rtr1 or Rtr2 Layer 1/Layer 2 routers to forward the traffic destined to areas outside of their local area. Assume all the links have equal cost. NE1 would use Rtr1 because it is closer than Rtr2. On the other hand, NE3 would use Rtr2. NE2 would perform load balancing to Rtr1 and Rtr2 because they are equidistant to NE2.

Figure 1

Sample Connectionless Network Service (CLNS) Network Topology



Issue

With the introduction of the multi-area support feature, Layer 1/Layer 2 routers can connect to multiple Layer 1 areas. This has effectively reduced the number of Layer 1/Layer 2 routers needed because multiple Layer 1 areas can share one Layer 1/Layer 2 router. On the other hand, it can complicate networks. In earlier Cisco IOS Software implementations, a Layer 1/Layer 2 router would set the ATT-bit in its Layer 1 LSP if it connects to multiple Layer 1 areas. Thus, if the backbone connection is lost, the Layer 1/Layer 2 router would still set the ATT-bit in the Layer 1 LSP. Consequently, the Layer 1 devices associated with that Layer 1/Layer 2 router would continue sending inter-area traffic to the Layer 1/Layer 2 router and cause the traffic to be dropped. For example, in Figure 1, Rtr1 has connections to two Layer 1 areas in addition to the backbone area. If the connection between Rtr1 and its upstream router were lost, Rtr1 would still set the ATT-bit in its LSP. Consequently, NE1 would still send inter-area traffic to Rtr1. However, because Rtr1 has lost its connection to the L2 area, it uses Rtr2 to route inter-area traffic. This causes the traffic to be sent back to NE1. Thus, a routing loop—an undesirable situation—is formed.

To address this problem, Cisco IOS Software implements a new capability to allow users to have greater control of setting the ATT-bit. Instead of setting the ATT-bit whenever seeing other areas, a Cisco router can now set the ATT-bit based on the criteria specified in a route map. Users can use the "match" command associated with a route map to match a Connectionless Network Service (CLNS) area address. When the specified area address is not found in the CLNS routing table, the "match" condition fails, the route map is said to "not be satisfied," and the ATT-bit will not be set. A complete configuration example will be discussed in the "Feature Usage Examples" section.

Command Syntax

This new command is configured under "router isis". It enables the ATT-bit control capability.

router(config-router)#set-attach-bit route-map <route-map-name>

Here is an example of a route map (the name ATT_map is arbitrary).

!
clns filter-set BB_Area_Address permit 39.0000
!
route-map ATT_map permit 10
match clns address BB_Area_Address
!
Benefit

This procedure provides more control over setting the ATT-bit to avert the dropping of packets.

...(omitted)

Selective Packet Discard(SPD)

Selective Packet Discard

• When a link becomes saturated, you will drop packets. The problem is that you will drop any type of packet, including your routing protocol packets.

• Selective Packet Discard (SPD) will attempt to drop non-routing packets instead of routing packets when the link is overloaded.

ip spd enable

• Enabled by default in 11.2(5)P and later releases; available as an option in 11.1CA/CC.

A standardized way for mapping IP Packets into SONET/SDH payloads

...(omitted)

How does it work?

The Layer 2 protocol used by POS technology offers a standardized way for mapping IP packets into SONET/SDH payloads.

1. Data is first segmented into an IP datagram that includes a 20-byte IP header.

2. This datagram is encapsulated in Point-to-Point Protocol (PPP) packets, and framing information is added with High-Level Data Link Control (HDLC) framing.

3. Gaps between frames are filled with flags, set to value 7E.

4. Octet stuffing occurs if any flags or resultant escape characters (of value 7D) are found in the data.

5. The resulting data is scrambled, and mapped synchronously by octet into the SONET/SDH frame.



POS is defined by the Internet Engineering Task Force (IETF) in the following Request For Comment (RFC) documents:

RFC 1661: The Point-to-Point Protocol (PPP)
RFC 1662: PPP in HDLC-like Framing
RFC 2615: PPP over SONET/SDH
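On a Cisco router, a POS interface carrying PPP-encapsulated, scrambled traffic might be configured as follows (the interface and addressing are hypothetical):

router(config)#interface POS3/0
router(config-if)#ip address 10.1.1.1 255.255.255.252
router(config-if)#encapsulation ppp
router(config-if)#pos scramble-atm
router(config-if)#crc 32

Scrambling and the CRC length must match on both ends of the link, or the line protocol will not come up.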

...(omitted)

802.1q Tunneling

802.1q Tunneling

One of the enterprise's business requirements can entail sending multiple VLANs across the service provider's Metro Ethernet network. The enterprise can accomplish this via 802.1q tunneling, also known as QinQ. This chapter uses both names interchangeably.

802.1q tunneling is a tunneling mechanism that service providers can use to provide secure Ethernet VPN services to their customers. Ethernet VPNs using QinQ are possible because of the two-level VLAN tag scheme that QinQ uses. The outer VLAN tag is referred to as the service provider VLAN and uniquely identifies a given customer within the network of the service provider. The inner VLAN tag is referred to as the customer VLAN tag because the customer assigns it.

QinQ's use of double VLAN tags is similar to the label stack used in MPLS to enable Layer 3 VPNs and Layer 2 VPNs. It is also possible for multiple customer VLANs to be tagged using the same outer or service provider VLAN tag, thereby trunking multiple VLANs among customer sites. Note that by using two VLAN tags (outer and inner), you achieve a demarcation point between the domain of the customer and the domain of the service provider. The service provider can use any VLAN scheme it chooses to identify a given customer within its network. Similarly, the enterprise customer can independently decide on a VLAN scheme for the VLANs that traverse the service provider network without consulting the service provider.
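On a Cisco Catalyst switch, the customer-facing tunnel port might be configured as follows (the VLAN and interface numbers are hypothetical); VLAN 100 acts as the outer, service-provider tag:

switch(config)#vlan dot1q tag native
switch(config)#vlan 100
switch(config)#interface FastEthernet0/1
switch(config-if)#switchport access vlan 100
switch(config-if)#switchport mode dot1q-tunnel

Customer frames arriving on this port keep their inner 802.1q tag and are carried across the provider network inside VLAN 100.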

BGP Best Practices for ISPs(RFC 2827/BCP 38)

…(omitted)

RFC 2827/BCP 38

Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing

"Thou shalt only sendth and receiveth IP packets you have rights for"

Packets should be sourced from valid, allocated address space, consistent with the topology and space allocation

Guidelines for BCP38

Networks connecting to the Internet
=>Must use inbound and outbound packet filters to protect network

Configuration example:
=>Outbound—only allow my network source addresses out
=>Inbound—only allow specific ports to specific destinations in

Techniques for BCP 38 Filtering
.Static ACLs on the edge of the network
.Dynamic ACLs with AAA profiles
.Unicast RPF strict mode
.IP source guard
.Cable source verify (DHCP)

Using ACLs to Enforce BCP38

Static ACLs are the traditional method of ensuring that source addresses are not spoofed:

.Permit all traffic whose source address equals the allocation block
.Deny any other packet

Principles:
.Filter as close to the edge as possible
.Filter as precisely as possible
.Filter both source and destination where possible
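A sketch of such an edge ACL, assuming a hypothetical customer allocation of 192.0.2.0/24 attached via Serial0/0:

router(config)#ip access-list extended BCP38-IN
router(config-ext-nacl)#permit ip 192.0.2.0 0.0.0.255 any
router(config-ext-nacl)#deny ip any any
router(config)#interface Serial0/0
router(config-if)#ip access-group BCP38-IN in

Unicast RPF strict mode (ip verify unicast source reachable-via rx on the same interface) performs an equivalent check dynamically, without maintaining a static ACL.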

RFC3931 - Layer Two Tunneling Protocol - Version 3 (L2TPv3)

...(omitted)

1. Introduction


The Layer Two Tunneling Protocol (L2TP) provides a dynamic mechanism for tunneling Layer 2 (L2) "circuits" across a packet-oriented data network (e.g., over IP). L2TP, as originally defined in RFC 2661, is a standard method for tunneling Point-to-Point Protocol (PPP) [RFC1661] sessions. L2TP has since been adopted for tunneling a number of other L2 protocols. In order to provide greater modularity, this document describes the base L2TP protocol, independent of the L2 payload that is being tunneled.
The base L2TP protocol defined in this document consists of (1) the control protocol for dynamic creation, maintenance, and teardown of L2TP sessions, and (2) the L2TP data encapsulation to multiplex and demultiplex L2 data streams between two L2TP nodes across an IP network. Additional documents are expected to be published for each L2 data link emulation type (a.k.a. pseudowire-type) supported by L2TP (i.e., PPP, Ethernet, Frame Relay, etc.). These documents will contain any pseudowire-type specific details that are outside the scope of this base specification.

When the designation between L2TPv2 and L2TPv3 is necessary, L2TP as defined in RFC 2661 will be referred to as "L2TPv2", corresponding to the value in the Version field of an L2TP header. (Layer 2 Forwarding, L2F, [RFC2341] was defined as "version 1".) At times, L2TP as defined in this document will be referred to as "L2TPv3". Otherwise, the acronym "L2TP" will refer to L2TPv3 or L2TP in general.

...(omitted)

6.6. Incoming-Call-Request (ICRQ)


Incoming-Call-Request (ICRQ) is the control message sent by an LCCE to a peer when an incoming call is detected (although the ICRQ may also be sent as a result of a local event). It is the first in a three-message exchange used for establishing a session via an L2TP control connection.
The ICRQ is used to indicate that a session is to be established between an LCCE and a peer. The sender of an ICRQ provides the peer with parameter information for the session. However, the sender makes no demands about how the session is terminated at the peer (i.e., whether the L2 traffic is processed locally, forwarded, etc.).

The following AVPs MUST be present in the ICRQ:

.Message Type
.Local Session ID
.Remote Session ID
.Serial Number
.Pseudowire Type
.Remote End ID
.Circuit Status


The following AVPs MAY be present in the ICRQ:

.Random Vector
.Message Digest
.Assigned Cookie
.Session Tie Breaker
.L2-Specific Sublayer
.Data Sequencing
.Tx Connect Speed
.Rx Connect Speed
.Physical Channel ID

...(omitted)

Cisco IOS MPLS Virtual Private LAN Service(VPLS): Q&A

Cisco IOS MPLS
Virtual Private LAN Service


Q. What is VPLS?

A. VPLS stands for Virtual Private LAN Service, and is a VPN technology that enables Ethernet multipoint services (EMSs) over a packet-switched network infrastructure. VPN users get an emulated LAN segment that offers a Layer 2 broadcast domain. The end user perceives the service as a virtual private Ethernet switch that forwards frames to their respective destinations within the VPN. Ethernet is the technology of choice for LANs due to its relatively low cost and simplicity. Ethernet has also gained recent popularity as a metropolitan-area network (MAN or metro) technology.

VPLS helps extend the reach of Ethernet further to be used as a WAN technology. Other technologies also enable Ethernet across the WAN, such as Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over SONET/SDH, Ethernet bridging over ATM, and ATM LAN emulation (LANE). However, they provide only point-to-point connectivity, their mass deployment is limited by high levels of complexity, or they require dedicated network architectures that do not facilitate network convergence. Figure 1 shows the logical view of a VPLS connecting three sites. Each customer edge device requires a single connection to the network to get full connectivity to the remaining sites.

Figure 1

Logical View of a VPLS



Q. What does it mean that VPLS enables an EMS?

A. A multipoint technology allows a user to reach multiple destinations through a single physical or logical connection. This requires the network to make a forwarding decision based on the destination of the packet. Within the context of VPLS, this means that the network makes a forwarding decision based on the destination MAC address of the Ethernet frame. A multipoint service is attractive because fewer connections are required to achieve full connectivity between multiple points. An equivalent level of connectivity based on a point-to-point technology requires a much larger number of connections or the use of suboptimal packet forwarding.

Q. What are the main components of VPLS?

A. In its simplest form, a VPLS consists of several sites connected to provider edge devices implementing the emulated LAN service. These provider edge devices make the forwarding decisions between sites and encapsulate the Ethernet frames across a packet-switched network using a virtual circuit or pseudo wire. A virtual switching instance (VSI) is used at each provider edge to implement the forwarding decisions of each VPLS. The provider edges use a full mesh of Ethernet emulated circuits (or pseudowires) to forward the Ethernet frames between provider edges. Figure 2 illustrates the components of a VPLS that connects three sites.

Figure 2

VPLS Components



Q. How are packets forwarded in VPLS?

A. Ethernet frames are switched between provider edge devices using the VSI forwarding information. Provider edge devices acquire this information using the standard MAC address learning and aging functions used in Ethernet switching. The VSI forwarding information is updated with the MAC addresses learned from physical ports and other provider edge devices via virtual circuits. These functions imply that all broadcast, multicast, and destination-unknown MAC addresses are flooded over all ports and virtual circuits associated with a VSI. Provider edge devices use split-horizon forwarding on the virtual circuits to form a loop-free topology. In this way, the full mesh of virtual circuits provides direct connectivity between the provider edge devices in a VPLS, and no protocols have to be used to generate a loop-free topology (Spanning Tree Protocol, for example).

Q. What are the signaling requirements of VPLS?

A. Two functional components in VPLS involve signaling—provider edge discovery and virtual circuit setup. Cisco® VPLS currently relies on static configuration of provider edge associations within a VPLS. However, the architecture can be easily enhanced to support several discovery protocols, including Border Gateway Protocol (BGP), RADIUS, Label Distribution Protocol (LDP), or Domain Name System (DNS). The virtual circuit setup uses the same LDP signaling mechanism defined for point-to-point services. Using a directed LDP session, each provider edge advertises a virtual circuit label mapping that is used as part of the label stack imposed on the Ethernet frames by the ingress provider edge during packet forwarding.
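The static provider edge association referred to above might be sketched as follows in Cisco IOS (the VFI name, VPN ID, neighbor addresses, and VLAN are hypothetical):

router(config)#l2 vfi VPLS-RED manual
router(config-vfi)#vpn id 100
router(config-vfi)#neighbor 10.0.0.2 encapsulation mpls
router(config-vfi)#neighbor 10.0.0.3 encapsulation mpls
router(config)#interface Vlan100
router(config-if)#xconnect vfi VPLS-RED

Each listed neighbor results in a directed LDP session and a pseudowire, forming the full mesh between the three provider edges.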

Q. How is reachability information distributed in a VPLS?

A. Cisco VPLS does not require the exchange of reachability (MAC addresses) information via a signaling protocol. This information is learned from the data plane using standard address learning, aging, and filtering mechanisms defined for Ethernet bridging. However, the LDP signaling used for setting up and tearing down the virtual circuits can be used to indicate to a remote provider edge that some or all MAC addresses learned over a virtual circuit need to be withdrawn from the VSI. This mechanism provides a convergence optimization over the normal address aging that would eventually flush the invalid addresses.

Q. Can VPLS be implemented over any packet network?

A. VPLS has been initially specified and implemented over an MPLS transport. From a purely technical point of view, the provider edge devices implementing VPLS could also transport the Ethernet frames over an IP backbone using different encapsulations, including generic routing encapsulation (GRE), Layer 2 Tunneling Protocol (L2TP), and IP Security (IPSec).

Q. Are there any differences in the encapsulation of Ethernet frames across the packet network between VPLS and Any Transport over MPLS (AToM)?

A. No. VPLS relies on the same encapsulation defined for point-to-point Ethernet over MPLS. The frame preamble and frame check sequence (FCS) are removed, and the remaining payload is encapsulated with a control word, a virtual circuit label, and an Interior Gateway Protocol (IGP) or transport label.
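The resulting label stack can be sketched as follows. The label values, the all-zero control word, and the placeholder payload are illustrative only; the exact encapsulation is defined by the Ethernet-over-MPLS specification.

```python
import struct

def mpls_label(label, exp=0, s=0, ttl=255):
    # One 32-bit MPLS label stack entry: label(20) | EXP(3) | S(1) | TTL(8).
    return struct.pack("!I", (label << 12) | (exp << 9) | (s << 8) | ttl)

# Label values below are placeholders for illustration only.
transport_label = mpls_label(16)        # IGP/transport label (S=0, not bottom of stack)
vc_label = mpls_label(40, s=1)          # virtual circuit label, bottom of stack
control_word = struct.pack("!I", 0)     # 4-byte control word (flags/sequence zeroed)

# Payload is the Ethernet frame minus preamble and FCS.
ethernet_payload = b"\x00\x01\x02"      # placeholder bytes
packet = transport_label + vc_label + control_word + ethernet_payload
```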

Q. Is VPLS limited to Ethernet?

A. Even though most VPLS sites are expected to connect via Ethernet, they may connect using other Layer 2 technologies (ATM, Frame Relay, or Point-to-Point Protocol [PPP], for example). Sites connecting with non-Ethernet links exchange packets with the provider edge using a bridged encapsulation. The configuration requirements on the customer edge device are similar to the requirements for Ethernet Interworking in point-to-point Layer 2 services.

Q. Are there any scalability concerns with VPLS?

A. Packet replication and the amount of address information are the two main scaling concerns for the provider edge device. When packets need to be flooded (because of broadcast, multicast, or destination-unknown unicast address), the ingress provider edge needs to perform packet replication. As the number of provider edge devices in a VPLS increases, the number of packet copies that need to be generated increases. Depending on the hardware architecture, packet replication can have an important impact on processing and memory resources. In addition, the number of MAC addresses that may be learned from the data plane may grow rapidly if many hosts connect to the VPLS. This situation can be alleviated by avoiding large, flat, network domains in the VPLS.
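The scaling pressure is easy to quantify: a full mesh of n provider edges needs n(n-1)/2 pseudowires, and each flooded frame is replicated n-1 times at the ingress provider edge. A quick sketch:

```python
# Back-of-the-envelope scaling for a flat (non-hierarchical) VPLS
# with n provider edge devices.

def full_mesh_pseudowires(n):
    # Every PE pair needs a pseudowire.
    return n * (n - 1) // 2

def flood_copies(n):
    # The ingress PE replicates each flooded frame to every other PE.
    return n - 1

for n in (10, 50, 100):
    print(f"{n} PEs: {full_mesh_pseudowires(n)} pseudowires, "
          f"{flood_copies(n)} copies per flooded frame")
# 10 PEs: 45 pseudowires, 9 copies per flooded frame
# 50 PEs: 1225 pseudowires, 49 copies per flooded frame
# 100 PEs: 4950 pseudowires, 99 copies per flooded frame
```

The quadratic growth in pseudowires and the linear growth in replication load are the motivations for the hierarchical model discussed in the next answer.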

Q. What is hierarchical VPLS?

A. A hierarchical model can be used to improve the scalability characteristics of VPLS. Hierarchical VPLS (H-VPLS) reduces signaling overhead and packet replication requirements for the provider edge. Two types of provider edge devices are defined in this model—user-facing provider edge (u-PE) and network provider edge (n-PE). Customer edge devices connect to u-PEs directly and aggregate VPLS traffic before it reaches the n-PE, where the VPLS forwarding takes place based on the VSI. In this hierarchical model, u-PEs are expected to support Layer 2 switching and to perform normal bridging functions. Cisco VPLS uses 802.1Q Tunneling, a double 802.1Q or Q-in-Q encapsulation, to aggregate traffic between the u-PE and n-PE. The Q-in-Q trunk becomes an access port to a VPLS instance on an n-PE (Figure 3).

Figure 3

Hierarchical VPLS



Q. How does VPLS fit with metro Ethernet?

A. VPLS can play an important role in scaling metro Ethernet services by increasing geographical coverage and service capacity. The H-VPLS model allows service providers to interconnect dispersed metro Ethernet domains to extend the geographical coverage of the Ethernet service. H-VPLS helps scale metro Ethernet services beyond the 4000-subscriber limit imposed by the VLAN address space. Conversely, having an Ethernet access network contributes to the scalability of VPLS by distributing packet replication and reducing signaling requirements. Metro Ethernet and VPLS are complementary technologies that enable more sophisticated Ethernet service offerings.

Q. Is Cisco VPLS standards-based?

A. Cisco VPLS is based on the IETF draft draft-ietf-pppvpn-vpls-ldp, which has wide industry support. VPLS specifications are still under development at the IETF. There are two proposed VPLS drafts (draft-ietf-pppvpn-vpls-ldp and draft-ietf-l2vpn-vpls-bgp). There are no current plans to support both drafts.

Q. How does VPLS compare with Cisco AToM?

A. Cisco AToM provides a standards-based implementation that enables point-to-point Layer 2 services. VPLS complements the portfolio of Layer 2 services with a multipoint offering based on Ethernet. These two kinds of services impose different requirements on the provider edge devices. A point-to-point service relies on a virtual circuit (or pseudowire) that provider edges set up to transport Layer 2 frames between two attachment circuits. The mapping between attachment circuits and virtual circuits is static and one-to-one. A multipoint service requires the provider edge to perform a lookup on the frame contents (typically, MAC addresses) to determine the virtual circuit to be used to forward the frame to the destination. This lookup creates the multipoint nature of a VPLS. The virtual circuit signaling and encapsulation performed by the provider edge devices are the same for both services, and the operation of the provider devices in the core is independent of the type of service implemented at the edge.

Q. How does VPLS compare with MPLS VPNs?

A. VPLS and MPLS (Layer 3) VPN enable two very different services. VPLS offers a multipoint Ethernet service that can support multiple higher-level protocols. MPLS VPN also offers a multipoint service, but it is limited to the transport of IP traffic and all traffic that can be carried over IP. Both VPLS and MPLS VPN support multiple link technologies for the customer edge to provider edge connection (Ethernet, Frame Relay, ATM, PPP, and so on). VPLS, however, imposes additional requirements (bridged encapsulation) on the customer edge devices in order to support non-Ethernet links. MPLS VPN reduces the amount of IP routing design and operation required from the VPN user. VPLS leaves full control of IP routing to the VPN user. VPLS and MPLS VPN are two alternatives to implement a VPN. The selection of the appropriate VPN technology requires analysis of the specific service requirements of the VPN customer.

Q. Does VPLS preclude the use of the same network infrastructure for services such as Layer 3 VPNs (L3VPNs), point-to-point Layer 2 VPNs (L2VPNs), and Internet services?

A. No. MPLS allows service providers to deploy a converged network infrastructure that supports multiple services. Provider edge devices are required to implement the signaling and encapsulation requirements for any specific service. However, those devices do not have to be dedicated to a single service. Furthermore, the provider devices in the core of the network do not need to be aware of the service a packet is associated with. Provider devices are service- and customer-agnostic, giving the MPLS backbone unique scalability characteristics.

Q. Where can I find additional information on VPLS?
A. The following links provide additional information.
Cisco IOS® MPLS Page
http://www.cisco.com/en/US/tech/tk436/tk891/tech_protocol_family_home.html
http://www.cisco.com/en/US/products/hw/routers/ps368/products_white_paper09186a00801df1df.shtml
http://www.cisco.com/en/US/products/hw/routers/ps368/index.html

Understanding IS-IS Pseudonode LSP

Introduction

This Tech Note describes the pseudonode link-state packet (LSP). A pseudonode is a logical representation of the LAN that is generated by a Designated Intermediate System (DIS) on a LAN segment. The document also describes how this information is propagated to the routers.

What is the DIS?

On broadcast multi-access networks, a single router is elected as the DIS. There is no backup DIS elected. The DIS is the router that creates the pseudonode and acts on behalf of the pseudonode.

The DIS

There are two major tasks performed by the DIS:

- Creating and updating the pseudonode LSP for reporting links to all systems on the broadcast subnetwork. See the Pseudonode LSP section for more information.
- Flooding LSPs over the LAN.

The flooding over the LAN means that the DIS sends periodic complete sequence number protocol data units (CSNPs) (default setting of 10 seconds) summarizing the following information:

- LSP ID
- Sequence Number
- Checksum
- Remaining Lifetime

The DIS is responsible for flooding. It creates and floods a new pseudonode LSP for each routing level in which it is participating (Level 1 or Level 2) and for each LAN to which it is connected. A router can be the DIS for all connected LANs or a subset of connected LANs, depending on the IS-IS priority or the Layer 2 address. The DIS will also create and flood a new pseudonode LSP when a neighbor adjacency is established, torn down, or the refresh interval timer expires. The DIS mechanism reduces the amount of flooding on LANs.

Election of the DIS

On a LAN, one of the routers elects itself the DIS, based on interface priority (the default is 64). If all interface priorities are the same, the router with the highest subnetwork point of attachment (SNPA) is selected. The SNPA is the MAC address on a LAN, and the local data-link connection identifier (DLCI) on a Frame Relay network. If the SNPA is a DLCI and is the same at both sides of a link, the router with the higher system ID becomes the DIS. Every IS-IS router interface is assigned both an L1 priority and an L2 priority in the range of 0 to 127.
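The election order above can be sketched as a simple ordered comparison. Field names are illustrative; the string comparisons work here only because the SNPAs and system IDs share the same fixed-length hex format.

```python
# Sketch of DIS election: highest interface priority wins; ties are
# broken by the highest SNPA (MAC address); equal SNPAs (e.g. identical
# DLCIs on both sides of a link) fall back to the highest system ID.

def elect_dis(routers):
    # routers: list of dicts with 'priority', 'snpa', and 'system_id' keys
    return max(routers, key=lambda r: (r["priority"], r["snpa"], r["system_id"]))

lan = [
    {"name": "R1", "priority": 64, "snpa": "0000.0c11.1111", "system_id": "1921.6800.1001"},
    {"name": "R2", "priority": 64, "snpa": "0000.0c22.2222", "system_id": "1921.6800.1002"},
    {"name": "R3", "priority": 60, "snpa": "0000.0c33.3333", "system_id": "1921.6800.1003"},
]
print(elect_dis(lan)["name"])  # R2: priorities tie at 64, R2 has the higher SNPA
```

Because the election is preemptive, this comparison is effectively re-run whenever a new router appears on the LAN.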

The DIS election is preemptive (unlike OSPF). If a new router boots on the LAN with a higher interface priority, the new router becomes the DIS. It purges the old pseudonode LSP and floods a new set of LSPs.

What is the Pseudonode (PSN)?

In order to reduce the number of full mesh adjacencies between nodes on multiaccess links, the multiaccess link itself is modeled as a pseudonode. This is a virtual node, as the name implies. The DIS creates the pseudonode. All routers on the broadcast link, including the DIS, form adjacencies with the pseudonode. Below is a visual representation of the pseudonode.



In IS-IS, a DIS does not synchronize with its neighbors. After the DIS creates the pseudonode for the LAN, it sends Hello packets for each Level (1 and 2) every three seconds and CSNPs every ten seconds. The Hellos indicate that it is the DIS on the LAN for that level, and the CSNPs describe the summary of all the LSPs, including the LSP ID, sequence number, checksum, and remaining lifetime. The LSPs are always flooded to the Multicast address and the CSNP mechanism only corrects for any lost PDUs. For example, a router can ask the DIS for a missing LSP using a partial sequence number packet (PSNP) or, in turn, give the DIS a new LSP.

CSNPs are used to tell other routers about all the LSPs in one router's database, similar to an OSPF database descriptor packet. PSNPs are used to request an LSP and to acknowledge receipt of an LSP.

Pseudonode LSP

The pseudonode LSP is generated by the DIS. The DIS reports all LAN neighbors (including the DIS) in the pseudonode LSP with a metric of zero. All LAN routers, including the DIS, report connectivity to the pseudonode in their LSPs. This is similar in concept to the network LSA in OSPF.

...(略)

Configuring Redundancy for POS / APS

...(略)

K1/K2 Bytes
To discuss APS, you first need to understand how SONET uses the K1/K2 bytes in the line overhead (LOH).

The K1/K2 bytes form a 16-bit field. Table 2 lists the usage of each bit.

Table 2 – K1 Bit Descriptions

Bits 1 through 4
nnnn: Channel number associated with the command code.

Bits 5 through 8
1111 (0xF): Lockout of protection request.
1110 (0xE): Forced switch request.
1101 (0xD): SF - high priority request.
1100 (0xC): SF - low priority request.
1011 (0xB): SD - high priority request.
1010 (0xA): SD - low priority request.
1001 (0x9): Not used.
1000 (0x8): Manual switch request.
0111 (0x7): Not used.
0110 (0x6): Wait to restore request.
0101 (0x5): Not used.
0100 (0x4): Exercise request.
0011 (0x3): Not used.
0010 (0x2): Reverse request.
0001 (0x1): Do not revert request.
0000 (0x0): No request.

Note: Bit 1 is the low-order bit.

...(略)

MPLS Basic Traffic Engineering Using OSPF Configuration Example

Introduction

This document provides a sample configuration for implementing traffic engineering (TE) on top of an existing Multiprotocol Label Switching (MPLS) network using Frame Relay and Open Shortest Path First (OSPF). Our example implements two dynamic tunnels (automatically set up by the ingress Label Switch Routers [LSR]) and two tunnels that use explicit paths.

TE is a generic name corresponding to the use of different technologies to optimize the utilization of a given backbone capacity and topology.

MPLS TE provides a way to integrate TE capabilities (such as those used on Layer 2 protocols like ATM) into Layer 3 protocols (IP). MPLS TE uses extensions to existing protocols (Intermediate System-to-Intermediate System [IS-IS], Resource Reservation Protocol [RSVP], OSPF) to calculate and establish unidirectional tunnels that are set up according to the network constraints. Traffic flows are mapped onto the different tunnels depending on their destination.

Functional Components

IP tunnel interfaces
At Layer 2, an MPLS tunnel interface is the head of a label switched path (LSP); it is configured with a set of resource requirements, such as bandwidth and priority. At Layer 3, the LSP tunnel interface is the head-end of a unidirectional virtual link to the tunnel destination.

RSVP with TE extension
RSVP is used to establish and maintain LSP tunnels based on the calculated path using PATH and RSVP Reservation (RESV) messages. The RSVP protocol specification has been extended so that the RESV messages also distribute label information.

Link-State Interior Gateway Protocol (IGP) [IS-IS or OSPF with TE extension]
Used to flood topology and resource information from the link management module. IS-IS uses new Type-Length-Values (TLVs); OSPF uses type 10 Link-State Advertisements (also called Opaque LSAs).

MPLS TE path calculation module
Operates at the LSP head only and determines a path using information from the link-state database.

MPLS TE link management module
At each LSP hop, this module performs link call admission on the RSVP signaling messages, and bookkeeping of topology and resource information to be flooded by OSPF or IS-IS.

Label switching forwarding
Basic MPLS forwarding mechanism based on labels.

Network Diagram



Quick Configuration Guide

You can use the following steps to perform a quick configuration. Refer to MPLS Traffic Engineering and Enhancements for more detailed information.

Set up your network with the usual configuration. (In this case, we used Frame Relay.)

Note: It is mandatory to set up a loopback interface with an IP mask of 32 bits. This address will be used for the setup of the MPLS network and TE by the routing protocol. This loopback address must be reachable via the global routing table.

Set up a routing protocol for the MPLS network. It must be a link-state protocol (IS-IS or OSPF). In the routing protocol configuration mode, enter the following commands:

For IS-IS:
metric-style [wide | both]
mpls traffic-eng router-id LoopbackN
mpls traffic-eng [level-1 | level-2]


For OSPF:
mpls traffic-eng area X
mpls traffic-eng router-id LoopbackN

(LoopbackN must have a 255.255.255.255 mask.)

Enable MPLS TE. Enter ip cef (or ip cef distributed, if available, in order to enhance performance) in the general configuration mode. Enable MPLS (tag-switching ip) on each concerned interface. Enter mpls traffic-eng tunnels to enable MPLS TE.

Enable RSVP by entering ip rsvp bandwidth XXX on each concerned interface.

Set up tunnels to be used for TE. There are many options that can be configured for an MPLS TE tunnel, but the tunnel mode mpls traffic-eng command is mandatory. The tunnel mpls traffic-eng autoroute announce command announces the presence of the tunnel to the routing protocol.
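As an illustration, a dynamic tunnel head-end configured with these commands might look like the following sketch. The destination address, bandwidth, and priority values are placeholders and are not taken from the example network.

```
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 10.10.10.2
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng priority 2 2
 tunnel mpls traffic-eng bandwidth 512
 tunnel mpls traffic-eng path-option 1 dynamic
```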

Note: Do not forget to use ip unnumbered loopbackN for the IP address of the tunnel interfaces.

This configuration shows two dynamic tunnels (Pescara_t1 and Pescara_t3) with different bandwidth (and priorities) going from the Pescara router to the Pesaro router, and two tunnels (Pesaro_t158 and Pesaro_t159) using an explicit path going from Pesaro to Pescara.

...(略)

Oct 9, 2007

Remote Triggered Black Hole Filtering (RTBH)

INTRODUCTION

Black hole filtering is a flexible ISP security tool that routes packets to Null0 (that is, black-holes them). The Cisco ISP Essentials book covers the fundamentals of the single-router black hole routing technique; it does not cover the remote triggered black hole routing technique. Remote triggering via iBGP allows ISPs to activate a network-wide, destination-based black hole throughout their network. This technique is especially useful in some of the new ISP security classification, traceback, and reaction techniques. This supplement reviews, enhances, and adds to what is already in the book.

BLACK HOLE ROUTING AS A PACKET FILTER (FORWARDING TO NULL0)

Forwarding packets to Null0 is a common way to filter packets to a specific destination. This is often done by creating specific static host routes and pointing them to the pseudo-interface Null0. This technique is commonly referred to as black hole routing or black hole filtering. Null0 is a pseudo-interface that functions similarly to the null devices available on most operating systems. This interface is always up and can never forward or receive traffic. While Null0 is a pseudo-interface, within CEF it is not a valid interface; hence, whenever a route is pointed to Null0, packets to that destination are dropped by CEF and dCEF.

The null interface provides an alternative method of filtering traffic. You can avoid the overhead involved with using access lists by directing undesired network traffic to the null interface. The following example configures a null interface for the IP route 127.0.0.0/8 and the specific host 171.68.10.1 (subnet mask 255.255.255.255):

interface Null0
 no ip unreachables
!
ip route 127.0.0.0 255.0.0.0 Null0
ip route 171.68.10.1 255.255.255.255 Null0




The no ip unreachables command is used to prevent unnecessary ICMP Unreachable replies whenever traffic is passed to the Null0 interface. This minimizes the risk of the router getting overloaded with hundreds of pending ICMP Unreachable replies, so it is common practice to use the no ip unreachables command on the Null0 interface. Yet there may be cases where you want the router to respond with ICMP Unreachables. For these cases, use the ip icmp rate-limit unreachable command to minimize the risk of a router getting overloaded with ICMP Unreachable processing. This specific rate-limiting command adjusts the default of one ICMP Unreachable every 500 ms to a value between 1 ms and 4294967295 ms.

ip icmp rate-limit unreachable 2000
ip icmp rate-limit unreachable DF 2000


Black hole filtering uses the strength of the router's forwarding performance to drop blacklisted packets. A router's number-one job is to forward packets, not to filter packets. The black hole routing technique uses this packet forwarding power to drop all packets bound for sites on the black list. In the ASIC forwarding world, this black holing has zero impact on the performance of the router (packets black-holed to Null0 are cleared through a register clock). Software forwarding devices need some extra cycles to clear out each black-holed packet. If a software forwarding device is expected to do a lot of black hole work, consider a black hole shunt interface (see the section on black hole shunts).

There are two main limitations to the black hole filtering technique.

First, black hole filtering is L3 only, not L4, so access to all L4 services at a given site will be blocked. If selective L4 filtering is necessary, use extended ACLs. For example, if you wish to drop all packets to a specific destination, black hole filtering is applicable. But if you wish to drop only all Telnet packets to a destination, then black hole filtering is not applicable and an extended ACL is the optimum mitigation tool. Extended ACLs offer the fine L4 granularity needed to filter at the application level.

Second, it is hard to bypass or provide exceptions with the black hole filtering technique. Any organization that wishes to bypass the black list must actually find a way to bypass the filtering router's forwarding table. Compensating for either limitation is not a trivial task. Yet, with due consideration and planning, options are available for both.

REMOTE TRIGGERED BLACK HOLE FILTERING

Black hole filtering on a single router has been around the industry since the late 1980s. It is a useful tool on a single router. But how do you use this tool when you have a network of hundreds of routers? How do you log into hundreds of routers on the edge of a network and configure a black hole filter? The answer is: you don't. ISP engineers who respond to a security incident need to think of their key strength: routing. ISP engineers know how to route traffic, putting the traffic where they want it to flow through their network. Remote triggered black hole filtering uses that routing strength to trigger all the routers in the network with a routing update. The routing update, sent via iBGP by a trigger router, activates a pre-configured static route to create a black hole for the destination address.

Let's use an example to illustrate the concept and strength of this technique. Figure 2 illustrates an ISP's customer under attack by a DDoS. The DDoS is coming in from all the entry points of the ISP's network. These entry points can number from a few to thousands, depending on the size of the ISP. DDoS traffic far exceeds the customer's link, so the circuit saturates, causing either DoS flapping or collateral damage inside the POP. This collateral damage threatens other customers, the POP, and that section of the ISP's network. An immediate reaction is necessary to shift the packet drops from the customer's circuit and collateral routers to the edge of the network.



Remote triggered black hole filtering is used to push the packet drops off the customer/POP routers and shift them to the edge of the network. Figure 3 shows how an ISP uses a trigger router in the NOC to send an iBGP advertisement. This iBGP advertisement carries the prefix of the customer under attack, with metrics attached to ensure it becomes the preferred path. This iBGP "trigger" advertisement goes to all the iBGP-speaking routers in the ISP's network. These routers all have an unused prefix that points to Null0. The iBGP "trigger" advertisement has its next-hop set to this "Null0ed" prefix. When the iBGP trigger advertisement reaches a router, it gets glued to the static route, activating the Null0 black hole, and all traffic to the customer's prefix gets dropped on the edge of the ISP's network.

The key benefit in this situation is that dropping on the edge of the network mitigates the DDoS's aggregated traffic load. This gives the ISP and the customer time to work the attack without the worry of collateral damage to other customers.



REMOTE TRIGGERING SAFETY MEASURES

Remote triggering via iBGP requires the ISP to take some safety measures to ensure the iBGP trigger advertisement does not leak out and affect other networks. There are several ways this can be done. Applying the principle of Murphy's Law of Networking, it is recommended that an ISP implement several, if not all, of these safety measures.

- No-export BGP community.
The no-export community in BGP is a well-known value that most routers recognize by default. It should, when working properly, keep the prefix within the ISP (that is, no advertisements to peers).

- Extra community that filters.
The ISP can add a community that does the same as the no-export community. A BGP community filter is used with the ISP's peers to mark which communities are exported. This step helps prevent a leak by someone who is cleaning up the excess communities on the prefix and inadvertently filtering out the no-export community.

- Lower boundary on the egress prefix filter.
ISPs can place a lower boundary on the prefixes sent to their peers. For example, ISPs can block all prefixes more specific than a /24. This would filter any iBGP trigger advertisement between /25 and /32, which is the normal range of address blocks allocated to customers.
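As a sketch, such a lower boundary could be enforced in IOS with an egress prefix list. The list name, AS number, and neighbor address below are illustrative only.

```
ip prefix-list EGRESS-BOUNDARY seq 5 permit 0.0.0.0/0 le 24
!
router bgp 109
 neighbor 192.168.100.1 prefix-list EGRESS-BOUNDARY out
```

This permits prefixes of length /0 through /24 outbound and implicitly denies the /25 to /32 trigger routes.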

PREPARING THE NETWORK FOR REMOTE TRIGGERED BLACKHOLE FILTERING

It is imperative that ISPs prepare for remote triggered black hole filtering, practice the technique, and have everything ready long before using it to mitigate an attack. Fortunately, all the preparation steps involve non-intrusive configurations that have no impact on the operation of the network.

Step 1- Configure the Static Route to Null0 on All the Routers

The first of these preparation steps is the configuration of a static route on each of the routers that will be triggered. This is a prefix that will never be used in the network. It can be a block of addresses allocated from the RIR allocations, or it can be an RFC 1918 prefix. The author's favorite is to use the Test-Net: 192.0.2.0/24. Test-Net is an IANA allocation made for use in documentation. The idea was for documentation to use a block of addresses that would never get used, so that customers who copy the documentation will not mess up someone else's network. Hence, Test-Net is one of the IANA Designated Special Use Addresses (DUSA) that should never appear on the Internet, making it a great choice for the static route for remote triggered black hole filtering.

ip route 192.0.2.0 255.255.255.0 Null0

Step 2 – Prepare the Trigger Router

The trigger router does not have to be a big router. A Cisco 26XX or 36XX router configured as an iBGP route reflector client and accepting no routes works very well as a trigger router. In fact, the trigger router does not have to be a dedicated router; a production router can be used. For this example, we will be using a dedicated trigger router.

On the trigger router, iBGP is configured to redistribute static routes. That way, the "trigger" is an engineer or tool adding and removing static routes. A route-map is used to match the static tag and set all the metrics for the iBGP advertisement, so that all triggering is consistent and done the same way each time.

router bgp 109
redistribute static route-map static-to-bgp
!
route-map static-to-bgp permit 10
match tag 66
set ip next-hop 192.0.2.1
set local-preference 50
set community no-export 600:000
set origin igp
!
route-map static-to-bgp permit 20


In the above example, we match a static tag of 66. If matched, we set the iBGP next-hop to the Test-Net (pre-configured on the routers to point to Null0), set the local preference to 50 (to override the original customer advertisement), set the BGP community to no-export with a safety community of 600:000 (which blocks advertisement to peers), and finally set the origin to igp. This sets up the trigger router to be ready for the time when the ISP needs a rapid reaction.

Step 3 - Activation

The ISP adds a static route with a tag of 66 to activate the remote triggered black hole. In this example, we'll use 171.68.1.1 as the address under attack, so we add this static route with the tag of 66:

ip route 171.68.1.1 255.255.255.255 Null0 tag 66

The trigger router then sends an advertisement to all the iBGP-speaking routers in the network (see Figure 3). When the iBGP advertisement is received, the BGP RIB sees the local preference of 50 and selects this new path as the best path. The recursive lookup succeeds because there is a static route to this new path's next-hop (that is, the Test-Net). This iBGP best path is passed from the BGP RIB to the router's FIB. The FIB sees the prefix, the next-hop of Test-Net, and Test-Net's next-hop of Null0. It then glues them together (depending on the FIB technology used), resulting in 171.68.1.1 now having a next-hop of Null0.

This is visually illustrated in Figure 4.



One of the key advantages of remote triggered black holing is the number of prefixes that can be filtered. The limit is the size of the FIB that routers in the network can carry, which means thousands of black-holed prefixes can be added; it is just a matter of adding more static routes to the trigger router. Principles of aggregation can be used, but care needs to be taken to make sure the iBGP trigger advertisement is equal to, or more specific than, the original customer advertisement.
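The "equal to or more specific" check can be sketched with Python's standard ipaddress module. The customer prefix below is an illustrative assumption; only 171.68.1.1 comes from the example above.

```python
import ipaddress

def trigger_is_safe(trigger, customer):
    # The trigger prefix must be equal to, or more specific than,
    # the original customer advertisement it is meant to override.
    return ipaddress.ip_network(trigger).subnet_of(ipaddress.ip_network(customer))

print(trigger_is_safe("171.68.1.1/32", "171.68.0.0/16"))  # True: more specific
print(trigger_is_safe("171.68.0.0/15", "171.68.0.0/16"))  # False: less specific
```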

Step 4 – Removing Trigger Advertisement

The trigger advertisement will need to be removed when the attack is over or when the ISP wishes to move to a different mitigation technique. Removing the static route does this. The trigger router then sends an iBGP withdrawal to all its iBGP peers, which in turn withdraw the route from the BGP RIB, which then pulls the route from the router's FIB. This clears the path for the router's BGP RIB to select the original customer advertisement, placing that prefix as the best path and allowing the FIB to resume normal forwarding to the customer's network.