Dec 20, 2009

New CCIE Voice Locations in Asia


In recognition of the demand for CCIE skills development in Asia, the CCIE program is pleased to announce new CCIE Voice lab exam test center openings in three major cities:
  • Beijing
  • Hong Kong
  • Bangalore
The CCIE program serves a growing number of worldwide candidates by covering all major regions with the latest test center resources. In addition to the three new locations, CCIE Voice lab exam locations include:
  • San Jose
  • RTP (North Carolina)
  • Brussels
  • Sydney
  • Tokyo

Dec 16, 2009

Hierarchical Packet Fair Queueing (H-PFQ) vs Hierarchical Fair Service Curve (H-FSC).

While most of the previous research has focused on providing Quality of Service (QoS) on a per-session basis, there is a growing need to also support hierarchical link-sharing, or QoS guarantees for traffic aggregates (such as those belonging to the same organization, service provider, or application family). Supporting QoS for both single sessions and traffic aggregates is difficult because it requires the network to meet multiple QoS requirements at different granularities simultaneously. This problem is exacerbated by the fact that there are no formal models that specify all the requirements.

We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC). H-PFQ is the first algorithm to simultaneously support all three of real-time, adaptive best-effort, and link-sharing services. H-FSC allows more flexible resource management than H-PFQ by decoupling the delay and bandwidth allocation. From a conceptual point of view, H-FSC is the first algorithm that goes beyond Fair Queueing and still satisfies all important requirements in integrated services networks. To find out more about H-PFQ's and H-FSC's technical advantages, click here.

HFSC, along with a packet classifier and a graphical user interface program, has been implemented in both NetBSD 1.2D and FreeBSD 2.2.6. This implementation supports IETF Integrated Services (intserv) service models and RSVP signaling. It also supports most standard CAIRN testbed network hardware. It is currently under testing on CAIRN.

EMC Data Domain

Data Domain provides customers with a hardware appliance for disk-based backup data storage and recovery in a Disk Staging backup architecture. Built around serial SATA disk technology and the DD OS operating system software, it is more than just a low-cost, high-performance disk array with RAID 6. Its Capacity Optimized Storage technology greatly reduces the storage space required for the actual data, and its Data Invulnerability Architecture provides unprecedented multi-layered data protection. Capacity Optimized Storage is further applied to data replication when building a disaster-recovery site, freeing customers from the drawbacks of traditional data protection architectures. These three distinctive capabilities bring the per-GB storage cost close to that of automated tape backup equipment, while offering a more complete data protection mechanism and architecture than traditional approaches. Simply put, it is a product designed to meet the special requirements of professional backup and recovery equipment.


Capacity Optimized Storage
- Breaks backup data apart into segments
- Stores only the unique data segments, greatly reducing the required space and cost
- Compression ratios of up to 20:1 can be achieved in practice
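The effect of capacity-optimized storage can be illustrated with a toy deduplicating store. This is a sketch using fixed-size segments and SHA-256 fingerprints; Data Domain's actual segmentation and fingerprinting are proprietary:

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size segments and keep only unique ones."""
    store = {}    # fingerprint -> segment (the unique-segment store)
    recipe = []   # ordered fingerprints needed to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # store the segment only once
        recipe.append(fp)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Rebuild the original stream from the recipe."""
    return b"".join(store[fp] for fp in recipe)

# A backup stream where the same 4 KB block repeats 20 times dedupes 20:1.
backup = b"x" * 4096 * 20
store, recipe = dedupe(backup)
ratio = len(backup) / sum(len(c) for c in store.values())
print(ratio)   # 20.0
```

Repeated full backups of mostly unchanged data are exactly this kind of stream, which is why ratios like 20:1 are plausible in backup workloads.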

EMC Avamar



EMC Avamar is backup software with built-in data deduplication, designed for remote-office backup environments. Processing by a front-end agent removes redundancy from the backup data, dramatically reducing the network bandwidth consumed by backup transfers and the amount of storage media required.

Platform support is very broad: agents are available for Windows, Linux, the major Unix platforms (Solaris, HP-UX, AIX), Mac OS X, and VMware environments, and applications such as DB2, Exchange, SQL Server, and Oracle are also supported. It can fit most enterprise IT environments.

Centralized backup management
Avamar originated as Axion before Avamar was acquired by EMC. It is a typical centrally managed backup product with a client-server architecture. The Avamar server software must be installed on Red Hat Enterprise Linux AS/ES 3.0 on the IA32 platform, while client agent support is very broad. Once installed, an administrator can, from the Avamar server, trigger backup jobs on front-end systems running the agent, sending the specified data over the network to storage devices controlled by the Avamar server. Besides server-initiated backups, users on the front-end systems can start backups themselves, and administrators can log in to the Avamar server through a browser from other computers to carry out management tasks.

Since Avamar is essentially backup software, its features apart from deduplication resemble those of ordinary backup products, such as job status monitoring, reporting, and storage pool management, and it is fairly simple to operate. To simplify deployment and maintenance, EMC can ship Avamar preloaded on an application server, and the vendor has also validated hardware compatibility with several commodity servers from IBM, HP, and Dell.

Excellent deduplication efficiency
Redundant data elimination is Avamar's biggest distinguishing feature: block-level comparison effectively removes duplicated portions at the data level. Because the comparison is performed by the agent, the data sent over the network has already been deduplicated, greatly reducing bandwidth requirements. Moreover, the agent does not merely compare data on the front-end host; it compares the fingerprints of front-end data against the fingerprints of data already stored on the server, so the comparison scope is Avamar's entire storage area. This is what is called "global compression."

In testing, Avamar demonstrated impressive data reduction, making it one of the best performers in this round of tests. This may be related to its adjustable data segmentation: when analyzing files, the system can automatically segment and compare data using a "window" of 1-64KB, adapting to different data types. (By Chang Ming-De)
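The adjustable 1-64KB segmentation window described above suggests content-defined chunking, where segment boundaries are derived from the data itself, so an insertion early in a file does not shift every later segment boundary. A simplified sketch (the rolling hash, mask, and sizes here are illustrative, not Avamar's actual algorithm):

```python
def chunk_cdc(data: bytes, min_size: int = 64, max_size: int = 1024,
              mask: int = 0x3F):
    """Content-defined chunking: declare a segment boundary wherever a
    rolling hash of the bytes since the last boundary matches a bit
    pattern, bounded by a minimum and maximum segment size."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF   # cheap rolling accumulator
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append(data[start:])       # trailing partial segment
    return chunks

data = bytes(range(256)) * 16
chunks = chunk_cdc(data)
print(len(chunks), max(len(c) for c in chunks))  # count varies with content
```

Each segment would then be fingerprinted and looked up against the server's existing fingerprints, as in the "global compression" scheme described above.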

Dec 15, 2009

Riverbed Steelhead QoS Class Queue Methods

Optionally, select one of the following queue methods for the class from the drop-down list:
  • SFQ. Shared Fair Queueing (SFQ) is the default queue for all classes. Determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion, among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to each other, preventing bursty flows from starving other flows within the QoS class.
  • FIFO. Transmits all flows in the order that they are received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic and potentially to network control and signaling messages.
  • MX-TCP. Has very different use cases than the other queue parameters. MX-TCP also has secondary effects that you need to understand before configuring:
  1. When optimized traffic is mapped into a QoS class with the MX-TCP queuing parameter, the TCP congestion control mechanism for that traffic is altered on the Steelhead appliance. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the minimum guaranteed bandwidth configured on the QoS class.
  2. You can use MX-TCP to achieve high-throughput rates even when the physical medium carrying the traffic has high loss rates. For example, MX-TCP is commonly used for ensuring high throughput on satellite connections where a lower-layer loss recovery technique is not in use.
  3. Another use of MX-TCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates.
Important: Use caution when specifying MX-TCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, and does not decrease in the presence of network congestion. The Steelhead appliance always tries to transmit traffic at the specified rate. If no QoS mechanism (either parent classes on the Steelhead appliance, or another QoS mechanism in the WAN or WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MX-TCP not backing off to fairly share bandwidth.
When MX-TCP is configured as the queue parameter for a QoS class, the following parameters for that class are also affected:
  • Link share weight. The link share weight parameter has no effect on a QoS class configured with MX-TCP.
  • Upper limit. The upper limit parameter has no effect on a QoS class configured with MX-TCP.
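The SFQ behavior described at the top of this list, dropping from flows in round-robin fashion when the class queue overflows, can be sketched as a toy model (RiOS's actual implementation differs):

```python
from collections import deque

class SfqQueue:
    """Toy per-flow fair dropping: when the total number of queued
    packets exceeds the class queue length, drop one packet at a time,
    cycling round-robin across flows, so a bursty flow cannot starve
    the other flows in the class."""
    def __init__(self, limit: int):
        self.limit = limit
        self.flows = {}       # flow id -> deque of packets
        self.order = []       # flow ids in arrival order
        self.drop_ptr = 0     # round-robin position for drops

    def enqueue(self, flow, pkt):
        if flow not in self.flows:
            self.flows[flow] = deque()
            self.order.append(flow)
        self.flows[flow].append(pkt)
        while sum(len(q) for q in self.flows.values()) > self.limit:
            self._drop_round_robin()

    def _drop_round_robin(self):
        # Advance cyclically over flows, dropping from the head of the
        # next non-empty flow.
        n = len(self.order)
        for _ in range(n):
            f = self.order[self.drop_ptr % n]
            self.drop_ptr += 1
            if self.flows[f]:
                self.flows[f].popleft()
                return

q = SfqQueue(limit=4)
for pkt in range(4):
    q.enqueue("bursty", pkt)      # burst fills the class queue
q.enqueue("other", "p")           # overflow: the drop lands on "bursty"
print({f: len(d) for f, d in q.flows.items()})   # {'bursty': 3, 'other': 1}
```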

Riverbed Steelhead QoS Classification for the FTP Data Channel



QoS Classification for the FTP Data Channel
When configuring QoS classification for FTP, the QoS rules differ depending on whether the FTP data channel is using active or passive FTP. Active versus passive FTP determines whether the FTP client or the FTP server selects the port connection for use with the data channel, which has implications for QoS classification.

Active FTP Classification
With active FTP, the FTP client logs in and issues the PORT command, informing the server which port it must use to connect to the client for the FTP data channel. Next, the FTP server initiates the connection towards the client. From a TCP perspective, the server and the client swap roles: The FTP server becomes the client because it sends the SYN packet, and the FTP client becomes the server because it receives the SYN packet.
Although not defined in the RFC, most FTP servers use source port 20 for the active FTP data channel. For active FTP, configure a QoS rule on the server-side Steelhead appliance to match source port 20. On the client-side Steelhead appliance, configure a QoS rule to match destination port 20.


Passive FTP Classification
With passive FTP, the FTP client initiates both connections to the server. First, it requests passive mode by issuing the PASV command after logging in. Next, it requests a port number for use with the data channel from the FTP server. The server agrees to this mode, selects a random port number, and returns it to the client. Once the client has this information, it initiates a new TCP connection for the data channel to the server-assigned port. Unlike active FTP, there is no role swapping and the FTP client initiates the SYN packet for the data channel.
It is important to note that the FTP client receives a random port number from the FTP server. Because the FTP server cannot return a consistent port number to use with the FTP data channel, RiOS does not support QoS Classification for passive FTP in versions earlier than RiOS v4.1.8, v5.0.6, or v5.5.1. Newer RiOS releases support passive FTP and the QoS Classification configuration for passive FTP is the same as active FTP.
When configuring QoS Classification for passive FTP, port 20 on both the server and client-side Steelhead appliances simply means the port number being used by the data channel for passive FTP, as opposed to the literal meaning of source or destination port 20.
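Because the passive data port is negotiated inside the control channel, the client (and anything classifying the traffic) must extract it from the server's 227 reply. A minimal sketch of that parsing, using the reply format from RFC 959:

```python
import re

def parse_pasv(reply: str):
    """Extract the server's data-channel IP and port from a PASV reply,
    e.g. '227 Entering Passive Mode (192,168,1,10,19,137)'.
    The port is encoded as two bytes: port = p1 * 256 + p2."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (192,168,1,10,19,137)"))
# ('192.168.1.10', 5001)
```

Since this port is random per transfer, a classifier must track the control channel to learn it, which is why early RiOS versions could not classify passive FTP.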

Riverbed Steelhead Adaptive Data Streamlining Modes

  • Default. This setting is enabled by default and works for most implementations. The default setting:
  1. Provides the most data reduction.
  2. Reduces random disk seeks and improves disk throughput by discarding very small data margin segments that are no longer necessary. This Margin Segment Elimination (MSE) process provides network-based disk defragmentation.
  3. Writes large page clusters.
  4. Monitors the disk write I/O response time to provide more throughput.
  • SDR-Adaptive. Specify this setting to include the default settings and also:
  1. Balance writes and reads.
  2. Monitor both read and write disk I/O response and, based on statistical trends, employ a blend of disk-based and non-disk-based data reduction techniques to enable sustained throughput during periods of high disk-intensive workloads. Important: Use caution with this setting, particularly when you are optimizing CIFS or NFS with prepopulation. Contact Riverbed Technical Support for more information.
  • SDR-M. Performs data reduction entirely in memory, which prevents the Steelhead appliance from reading and writing to and from the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. SDR-M is most efficient when used between two identical high-end Steelhead appliance models; for example, 6050 - 6050. When used between two different Steelhead appliance models, the smaller model limits the performance. Important: You cannot use peer data store synchronization with SDR-M.

      Riverbed Steelhead In-Path Rule - Neural Framing Mode

      Optionally, if you have selected Auto-Discover or Fixed Target, you can select a neural framing mode for the in-path rule. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer. You can specify the following neural framing settings:
      • Never. Never use the Nagle algorithm. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used.
      • Always. Always use the Nagle algorithm. All data is passed to the codec which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but are not used.
      • TCP Hints. This is the default setting which is based on the TCP hints. If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but are not used.
      • Dynamic. Dynamically adjust the Nagle parameters. In this option, the system discerns the optimum algorithm for a particular type of traffic and switches to the best algorithm based on traffic characteristic changes.
      For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance.
      To configure neural framing for an FTP data channel, define an in-path rule with the destination port 20 and set its optimization policy. To configure neural framing for a MAPI data channel, define an in-path rule with the destination port 7830 and set its optimization policy.
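The TCP Hints rule above amounts to a simple flush decision. A sketch (the MSS constant and function name are this example's own, and this is a simplification of RiOS's actual heuristics):

```python
MSS = 1460  # assumed maximum segment size, in bytes

def should_flush(payload_len: int, psh_flag: bool) -> bool:
    """TCP-hints framing: encode the buffered data immediately when the
    received packet is a partial frame (smaller than a full segment) or
    the sender set the TCP PUSH flag; otherwise keep coalescing."""
    return psh_flag or payload_len < MSS

print(should_flush(1460, False))  # False: full segment, keep coalescing
print(should_flush(512, False))   # True: partial frame hints end of burst
print(should_flush(1460, True))   # True: PUSH flag set
```

The trade-off this decision balances is the one the text describes: flushing too early hurts SDR fingerprinting and compression, while flushing too late adds latency to the connection.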

      Dec 14, 2009

      TCP Vegas

The TCP sender uses the RTT to estimate the length of the queue along the path from sender to receiver, and adjusts the congestion window accordingly. The main modifications are three:
1. Slow Start: cwnd doubles only about once every 2 RTTs;
2. Congestion Avoidance: Vegas computes Diff by comparing the expected rate against the actual sending rate, and keeps Diff between alpha and beta. If Diff < alpha, the sending rate is increased; conversely, if Diff > beta, the sending rate is decreased;
3. Observed RTT values are used to determine whether a packet has timed out.
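Point 2 can be sketched as one congestion-avoidance window update, following the standard Vegas formulation (the alpha/beta values here, in segments, are illustrative):

```python
def vegas_update(cwnd: float, base_rtt: float, rtt: float,
                 alpha: float = 1.0, beta: float = 3.0) -> float:
    """One TCP Vegas congestion-avoidance step.
    Expected = cwnd / BaseRTT  (rate if queues were empty)
    Actual   = cwnd / RTT      (measured rate)
    Diff = (Expected - Actual) * BaseRTT estimates how many segments
    this flow has sitting in network queues."""
    expected = cwnd / base_rtt
    actual = cwnd / rtt
    diff = (expected - actual) * base_rtt
    if diff < alpha:        # too little queued: speed up
        return cwnd + 1
    if diff > beta:         # too much queued: slow down
        return cwnd - 1
    return cwnd             # between alpha and beta: hold steady

# RTT equal to BaseRTT means no queueing, so Vegas grows the window.
print(vegas_update(10, 0.1, 0.1))  # 11
```

Unlike Reno, which reacts only after loss, this rule reacts to rising RTT, which is exactly the queue-length probing described above.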

      資料來源: http://admin.csie.ntust.edu.tw/IEET/syllabus/course/962_CS5021701_106_5pyq5YiG5oiQ57i+6auY5L2OMS5wZGY=.pdf

      Dec 13, 2009

      When Are ICMP Redirects Sent?

      How ICMP Redirect Messages Work

      ICMP redirect messages are used by routers to notify the hosts on the data link that a better route is available for a particular destination.
For example, the two routers R1 and R2 are connected to the same Ethernet segment as Host H. The default gateway for Host H is configured to use router R1. Host H sends a packet to router R1 to reach the destination on remote branch office Host 10.1.1.1.
      Router R1, after it consults its routing table, finds that the next-hop to reach Host 10.1.1.1 is router R2.
      Now router R1 must forward the packet out the same Ethernet interface on which it was received. Router R1 forwards the packet to router R2 and also sends an ICMP redirect message to Host H.
      This informs the host that the best route to reach Host 10.1.1.1 is by way of router R2.
      Host H then forwards all the subsequent packets destined for Host 10.1.1.1 to router R2.
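The decision R1 makes can be sketched as a simplified predicate (interfaces and addresses follow the example above; real routers apply further checks, such as whether redirects are administratively disabled):

```python
from ipaddress import ip_address, ip_network

def should_send_redirect(in_iface: str, out_iface: str,
                         src_ip: str, next_hop: str, lan: str) -> bool:
    """A router sends an ICMP redirect when it forwards a packet out
    the same interface it arrived on, and both the packet's source and
    the better next hop sit on that shared subnet."""
    net = ip_network(lan)
    return (in_iface == out_iface
            and ip_address(src_ip) in net
            and ip_address(next_hop) in net)

# Host H (10.0.0.5) sends via R1; R1's route to 10.1.1.1 points at
# R2 (10.0.0.2) on the same Ethernet segment, so R1 redirects Host H.
print(should_send_redirect("eth0", "eth0", "10.0.0.5", "10.0.0.2",
                           "10.0.0.0/24"))   # True
```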

      Dec 12, 2009

      Markov Model


      What is a Markov Model?
      Markov models are some of the most powerful tools available to engineers and scientists for analyzing complex systems. This analysis yields results for both the time dependent evolution of the system and the steady state of the system.

      For example, in Reliability Engineering, the operation of the system may be represented by a state diagram, which represents the states and rates of a dynamic system. This diagram consists of nodes (representing a possible state of the system, which is determined by the states of the individual components & sub-components) connected by arrows (representing the rate at which the system operation transitions from one state to the other state). Transitions may be determined by a variety of possible events, for example the failure or repair of an individual component. A state-to-state transition is characterized by a probability distribution. Under reasonable assumptions, the system operation may be analyzed using a Markov model.
      A Markov model analysis can yield a variety of useful performance measures describing the operation of the system. These performance measures include the following:

      • system reliability
      • availability
      • mean time to failure (MTTF)
      • mean time between failures (MTBF)
      • the probability of being in a given state at a given time
      • the probability of repairing the system within a given time period (maintainability)
      • the average number of visits to a given state within a given time period
      and many other measures.

      The name Markov model is derived from one of the assumptions which allows this system to be analyzed; namely the Markov property. The Markov property states: given the current state of the system, the future evolution of the system is independent of its history. The Markov property is assured if the transition probabilities are given by exponential distributions with constant failure or repair rates. In this case, we have a stationary, or time homogeneous, Markov process. This model is useful for describing electronic systems with repairable components, which either function or fail. As an example, this Markov model could describe a computer system with components consisting of CPUs, RAM, network card and hard disk controllers and hard disks.
      The assumptions on the Markov model may be relaxed, and the model may be adapted, in order to analyze more complicated systems. Markov models are applicable to systems with common cause failures, such as an electrical lightning storm shock to a computer system. Markov models can handle degradation, as may be the case with a mechanical system. For example, the mechanical wear of an aging automobile leads to a non-stationary, or non-homogeneous, Markov process, with the transition rates being time dependent. Markov models can also address imperfect fault coverage, complex repair policies, multi-operational-state components, induced failures, dependent failures, and other sequence dependent events.
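As a concrete instance of the stationary model above, consider a single repairable component with constant failure and repair rates (the rates below are illustrative, not from any datasheet):

```python
# Two-state repairable system: Up --lam--> Down, Down --mu--> Up.
# Constant (exponential) rates make this a stationary Markov process.
lam = 1e-4   # failure rate, per hour
mu = 1e-1    # repair rate, per hour

# Steady state: balance lam * P_up = mu * P_down with P_up + P_down = 1,
# which gives the long-run availability directly.
p_up = mu / (lam + mu)          # availability
mttf = 1 / lam                  # mean time to failure
mttr = 1 / mu                   # mean time to repair
mtbf = mttf + mttr              # mean time between failures

print(round(p_up, 6))           # 0.999001 -> roughly "three nines"
```

The same balance-equation approach generalizes: for a larger state diagram, the steady-state probabilities solve the linear system formed by the transition-rate matrix plus the normalization constraint.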

      Dec 8, 2009

      Out-of-Band (OOB) Splice


      What is the OOB Splice?

An OOB splice is an independent, separate TCP connection, made on the first connection between two peer Steelhead appliances, that is used to transfer version, licensing, and other OOB data between the peers. An OOB connection must exist between two peers for connections between these peers to be optimized. If the OOB splice dies, all optimized connections on the peer Steelhead appliances will be terminated.

      The OOB connection is a single connection existing between two Steelhead appliances regardless of the direction of flow. So if you open one or more connections in one direction, then initiate a connection from the other direction, there will still be only one connection for the OOB splice. This connection is made on the first connection between two peer Steelhead appliances using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any concern except in full transparency deployments.

      Case Study
      In the example below, the Client is trying to establish connection to Server-1:

Issue 1: After establishing the inner connection, the Client will try to establish an OOB connection to Server-1. It addresses it using the IP address reported by the Steelhead (SFE-1) in the probe response (10.2.0.2). Clearly, the connection to this address will fail, since 10.2.x.x addresses are invalid outside of the firewall (FW-2).

Resolution 1: In the above example, there is one address and port combination (IP:port) we know about: the connection's destination, Server-1. The client should be able to connect to Server-1. Therefore, the OOB splice creation code in sport can be changed to create a transparent OOB connection from the Client to Server-1 if the corresponding inner connection is transparent.

      How to Configure
There are three options to address the problem with establishing the OOB splice connection mentioned in Issue 1 above. In a default configuration, the out-of-band connection uses the IP addresses of the client-side Steelhead and server-side Steelhead. This is known as correct addressing and is the default behavior; it works for the majority of networks but fails in the network topology described above. The command below is the default setting in a Steelhead appliance's configuration.

      in-path peering oobtransparency mode none

In the network topology discussed in Issue 1, the default configuration does not work. There are two oobtransparency modes that may work in establishing the peer connections: destination and full. When destination mode is used, the OOB connection is addressed to the first server IP and port pair that passes through the Steelhead appliance (that of the server), while the source remains the client-side Steelhead IP and a port number chosen by the client-side Steelhead appliance. To change to this configuration use the following CLI command:

      in-path peering oobtransparency mode destination

In oobtransparency full mode, the source is the IP of the first client together with a port pre-configured on the client-side Steelhead appliance (port 708 by default). The destination IP and port are the same as in destination mode, i.e., that of the server. This is the recommended configuration when VLAN transparency is required. To change to this configuration use the following CLI command:

      in-path peering oobtransparency mode full

To change the default port used by the client-side Steelhead appliance when oobtransparency mode full is configured, use the following CLI command:

      in-path peering oobtransparency port

      It is important to note that these oobtransparency options are only used with full transparency. If the first inner-connection to a Steelhead was not transparent, the OOB will always use correct
      addressing.

      RIVERBED ANNOUNCES STEELHEAD MOBILE 3.0


      Mobile Solution Complements Broader Steelhead Appliance Deployment and Speeds Enterprise IT Infrastructure Performance; Provides Acceleration for Windows 7 and 64-bit Systems

SAN FRANCISCO – November 02, 2009 – Riverbed Technology (NASDAQ: RVBD), the IT infrastructure performance company for networks, applications and storage, today announced upcoming enhancements to its Mobile WAN optimization solution to address the productivity challenges global organizations face when managing remote and mobile workforces. Riverbed® Steelhead® Mobile increases employee productivity while on the road, working from home or connected wirelessly in the office by providing application performance improvements. With this release, Riverbed will provide acceleration for Windows 7 and 64-bit systems for mobile end users. In addition, organizations will be able to take advantage of improved flexibility and simplified management functionality to provide mobile workers with accelerated performance no matter where they are working throughout the world.

      "As companies focus on consolidating their data through private cloud initiatives, the distance between their employees and critical data is growing--employees are becoming more mobile, working from a variety of locations outside of the office. This means that as centralization of data and IT infrastructure continues, remote access is becoming more of a challenge," said Eric Wolford, senior vice president of marketing and business development at Riverbed. "With Steelhead Mobile 3.0, we are able to deliver to customers a solution that not only accelerates important Windows 7 and 64-bit applications while on the road, but also complements their broader Steelhead appliance deployment."

      "Riverbed's leadership in the WAN optimization market can be attributed to product innovation and focus on creating a comprehensive WAN optimization solution. The company has maintained a focus on customer priorities and ever-changing IT requirements," said Cindy Borovick, research vice president at IDC. "Enterprise IT departments are being pressed to improve IT efficiency and employee productivity with a reduced IT budget. As more companies move forward with IT consolidation projects to cut costs, performance for end users is a concern. Mobile WAN optimization can help overcome this challenge by improving the performance of critical enterprise applications for remote and mobile workers."

      Steelhead Mobile 3.0 introduces enhancements that allow organizations to provide their mobile workers with better access and performance, eliminating slow performance as a barrier to mobility.

      Accelerate Windows 7 and 64-bit Systems for Mobile Users
      Riverbed continues to deepen its optimization of Windows and other Microsoft applications, such as SharePoint, Office, Server and CRM, so that enterprises can provide accelerated access to and improved performance of Microsoft applications to remote and mobile workers. With Steelhead Mobile 3.0, Riverbed provides acceleration benefits to Windows 7 and 64-bit systems.

      Organizations will have the ability to support Microsoft's catalog of modern operating systems and advanced platforms. Through Riverbed's comprehensive solution, companies will have the flexibility to upgrade operating systems or migrate to platforms while obtaining consistent application performance.

      Web Applications – Up to 60X Faster
      Enterprises are utilizing HTTP and HTTPS for everything from e-commerce to mission-critical applications. They are the underlying protocols for all Web-based applications used to communicate internally with employees and externally with partners and customers. By optimizing these protocols, users can dramatically reduce the amount of data that they need to send over the WAN, while at the same time streamlining the chatty behavior of transport and application protocols. With Mobile 3.0, as with the Steelhead appliances, these benefits are extended further to the application layer for HTTP and HTTPS users to make Web applications even faster while maintaining the preferred enterprise trust model. The common architecture of the Steelhead appliance and Steelhead Mobile gives organizations a single comprehensive solution to increase the performance of key Web-based applications.

      By employing URL learning, page parsing, embedded object pre-fetching and metadata acceleration modes, Steelhead Mobile further reduces the chattiness and delays that plague enterprise Web-based applications. With these additional tools, business-critical applications used today such as SharePoint, intranet portals and Web-based document management systems, as well as Web-enabled ERP and CRM applications like SAP NetWeaver, JD Edwards and Siebel, all receive application acceleration of up to 60x.

      Branch Warming – Immediate Acceleration Regardless of Location
      Riverbed has extended its technological leadership by improving integration with the Steelhead appliance, allowing workers to take advantage of even more of the benefits of the Riverbed Optimization System (RiOS®) while working remotely.

      With Steelhead Mobile 3.0, Riverbed introduces Branch Warming, which allows mobile and branch office users to share optimized data and experience even greater overall acceleration. By sharing the data references between the data stores of the Steelhead Mobile client and the branch office Steelhead appliance, mobile workers not only take advantage of all of the optimization benefits of the Steelhead appliance but are also able to contribute data references from their data store to help improve performance for the entire branch office, enabling "warm" performance regardless of location.

      Dancker, Sellew & Douglas has approximately 40 users that have benefited from the acceleration capabilities of Steelhead Mobile for the past two years. "With Mobile 3.0 we are experiencing the same stellar acceleration that our team has grown accustomed to – for example, I have a user working in a remote office in East Syracuse that is experiencing 67% data reduction and 3X performance gains," said Michael Vassallo, Senior Network Administrator. "We've found that the new Branch Warming feature helps our users switch seamlessly from working wirelessly to connecting back to a branch office with a Steelhead appliance. It greatly improves efficiency."

      Steelhead Mobile 3.0 is expected to be generally available on December 2, 2009.


      Forward Looking Statements
      This press release contains forward-looking statements, including statements relating to the expected demand for Riverbed's products and services, statements regarding performance results of Riverbed solutions, including Steelhead Mobile 3.0, and statements relating to Riverbed’s ability to meet the needs of distributed organizations. These forward-looking statements involve risks and uncertainties, as well as assumptions that, if they do not fully materialize or prove incorrect, could cause our results to differ materially from those expressed or implied by such forward-looking statements. The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our distribution partners; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; shortages or price fluctuations in our supply chain; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission. More information about these and other risks that may impact Riverbed’s business are set forth in our Form 10-Q filed with the SEC on October 30, 2009. All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements. 
Any future product, feature or related specification that may be referenced in this release are for information purposes only and are not commitments to deliver any technology or enhancement. Riverbed reserves the right to modify future product plans at any time.

      About Riverbed
Riverbed Technology is the IT infrastructure performance company. The Riverbed family of wide area network (WAN) optimization solutions liberates businesses from common IT constraints by increasing application performance, enabling consolidation, and providing enterprise-wide network and application visibility – all while eliminating the need to increase bandwidth, storage or servers. Thousands of companies with distributed operations use Riverbed to make their IT infrastructure faster, less expensive and more responsive. Additional information about Riverbed (NASDAQ: RVBD) is available at www.riverbed.com.

Riverbed Technology, Riverbed, Steelhead, RiOS, Interceptor, Think Fast, the Riverbed logo, Mazu, Profiler and Cascade are trademarks or registered trademarks of Riverbed Technology. All other trademarks used or mentioned herein belong to their respective owners.

      MEDIA CONTACT
      Kristalle Ward
      Riverbed Technology
      415-247-8140
      Kristalle.Ward@riverbed.com

      INVESTOR RELATIONS CONTACT
      Renee Lyall
      Riverbed Technology
      415-247-6353
      Renee.Lyall@riverbed.com

      Dec 7, 2009

      Riverbed Cascade Gateways vs Cascade Profiler vs Cascade Sensor

Cascade Gateways collects network flow data already existing in an organization's network, provides intelligent de-duplication while retaining information on where each flow was recorded, and sends this condensed data to the Cascade Profiler.

Cascade Profiler complements this information with layer 7 application and response-time data retrieved from a Cascade Sensor deployed in the datacenter. These records are then further enhanced with user identification provided by Active Directory, switch port information, QoS, and SNMP data. The result is a complete view of a business application flow from the back-end server to the user's desktop.

Cascade also provides an extensive set of integrations with management systems typically deployed in an IT environment, to further streamline workflows and provide value across multiple operations teams.

      http://www.riverbed.com/images/product_screenshots/diagram_implementation_cascade_lrg.gif

      Dec 2, 2009

      EIGRP: Packet from ourselves ignored

While I was teaching ICND2 this week, a student suddenly asked me about a log message he saw while testing debug eigrp:

      02:36:26: EIGRP: Packet from ourselves ignored

I couldn't answer it on the spot. During a break I searched online and finally found the explanation in CCIE Practical Studies: Security (CCIE Self-Study) (thanks, Google Books).

The cause: when a loopback interface on the router is enabled for EIGRP, the router also sends hello packets out of that loopback interface and then receives them itself. EIGRP recognizes that the hello was sent by the router itself, so it ignores the packet rather than trying to form a neighbor relationship.

The workaround: if you hit this situation and don't want to see the message, add the passive-interface command so the EIGRP router stops sending hello packets out of the loopback interface.
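A minimal sketch of that workaround; the EIGRP AS number, network statement, and Loopback0 are assumptions for illustration:

```
router eigrp 100
 network 10.0.0.0
 ! Suppress hellos on the loopback so the router stops receiving
 ! its own hello packets there; the interface is still advertised.
 passive-interface Loopback0
```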

      Nov 25, 2009

      BGP Best Path Criteria

      Updated from Cisco 360 Workshop 1 Vol.1

      1. Highest weight(default=0)
      2. Highest local preference(default=100)
      3. Locally originated(Next hop:0.0.0.0, weight=32768)
      4. Shortest AS path length
      5. Lowest origin code(IGP < EGP < incomplete)
      6. Lowest MED(default=0)
      7. EBGP over IBGP
      8. If internal, prefer path with lowest IGP metric to next hop
      9. If external, consider multipath (NEW!)
      10. If external, prefer old one
      11. Lowest router ID or originator ID
      12. Minimum cluster list length (NEW!)
      13. Lowest neighbor address
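As an illustration only (not an actual BGP implementation), the first few tie-breakers above can be expressed as a single sort key; the dictionary attribute names are assumptions:

```python
# Hypothetical sketch of the first BGP best-path tie-breakers as a sort key.
# Each tuple element corresponds to one criterion, checked in order.
def best_path(paths):
    return min(paths, key=lambda p: (
        -p["weight"],       # 1. highest weight wins
        -p["local_pref"],   # 2. highest local preference wins
        len(p["as_path"]),  # 4. shortest AS path wins
        p["origin"],        # 5. lowest origin code (IGP=0 < EGP=1 < incomplete=2)
        p["med"],           # 6. lowest MED wins
    ))

a = {"weight": 0, "local_pref": 100, "as_path": [65001, 65002], "origin": 0, "med": 0}
b = {"weight": 0, "local_pref": 100, "as_path": [65001], "origin": 0, "med": 0}
print(best_path([a, b]) is b)  # b wins on shorter AS path
```

Note that `min` with a tuple key naturally expresses "compare criteria in order, stop at the first difference", which is exactly how the real selection process walks the list.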

      Nov 23, 2009

      Understanding BGP TTL Security - Packet Life

      Understanding BGP TTL Security - Packet Life

      By default, IOS sends BGP messages to EBGP neighbors with an IP time-to-live (TTL) of 1. (This can be adjusted with ebgp-multihop attached to the desired neighbor or peer group under BGP configuration.) Sending BGP messages with a TTL of one requires that the peer be directly connected, or the packets will expire in transit. Likewise, a BGP router will only accept incoming BGP messages with a TTL of 1 (or whatever value is specified by ebgp-multihop), which can help mitigate spoofing attacks.

However, there is an inherent vulnerability to this approach: it is trivial for a remote attacker to adjust the TTL of sent packets so that they appear to be originating from a directly connected peer.

      ttl-security1.png

      By spoofing legitimate-looking packets toward a BGP router at high volume, a denial of service (DoS) attack may be accomplished.

      A very simple solution to this, as discussed in RFC 3682, is to invert the direction in which the TTL is counted. The maximum value of the 8-bit TTL field in an IP packet is 255; instead of accepting only packets with a TTL set to 1, we can accept only packets with a TTL of 255 to ensure the originator really is exactly one hop away. This is accomplished on IOS with the TTL security feature, by appending ttl-security hops to the BGP peer statement.

      ttl-security2.png

      Only BGP messages with an IP TTL greater than or equal to 255 minus the specified hop count will be accepted. TTL security and EBGP multihop are mutually exclusive; ebgp-multihop is no longer needed when TTL security is in use.

      Examples

The following example sets the expected incoming TTL value for a directly connected eBGP peer. The hop-count argument is set to 2, configuring BGP to accept only IP packets with a TTL value in the header that is equal to or greater than 253. If the 10.1.1.1 neighbor is more than 2 hops away, the peering session will not be accepted.

      neighbor 10.1.1.1 ttl-security hops 2 
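The acceptance rule above reduces to simple arithmetic; here is a tiny illustrative helper (not IOS code) showing the check for `ttl-security hops 2`:

```python
# With "ttl-security hops N", incoming BGP packets are accepted
# only if their remaining TTL is >= 255 - N.
def ttl_accepted(configured_hops, received_ttl):
    return received_ttl >= 255 - configured_hops

print(ttl_accepted(2, 254))  # True: peer started at 255, one hop away
print(ttl_accepted(2, 252))  # False: packet traversed more than 2 hops
```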

      Nov 12, 2009

      What is Multicast Designated Router (DR) ?

Tonight, with nothing better to do, I checked on my e-mule downloads and found an old multicast course PDF (slides plus notes). I was genuinely tempted to print the whole thing out; the content is detailed and excellent. Nobody teaches this background anymore, so I suspect that even students who pass CCNP/CCIP understand multicast only superficially.

In it I happened upon a term that also appears in the BSCI coursebook: Designated Router/Querier. More importantly, the passage in the Cisco BSCI textbook is wrong: it writes lowest IP where it should say highest IP... (I keep telling myself to get used to it, but after three years of teaching I still haven't.)

      • Designated Router (DR)
      – For multi-access networks, a Designated Router (DR) is elected. In PIM Sparse mode networks, the DR is responsible for sending Joins to the RP for members on the multi-access network and for sending Registers to the RP for sources on the multi-access network. For Dense mode, the DR has no meaning. The exception to this is when IGMPv1 is in use. In this case, the DR also functions as the IGMP Querier for the Multi-Access network.

      • Designated Router (DR) Election
      – To elect the DR, each PIM node on a multi-access network examines the received PIM Hello messages from its neighbors and compares the IP Address of its interface with the IP Address of its PIM Neighbors. The PIM Neighbor with the highest IP Address is elected the DR.

      – If no PIM Hellos have been received from the elected DR after some period (configurable), the DR Election mechanism is run again to elect a new DR.
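The election rule above can be sketched with a hypothetical helper (illustrative only, not a PIM implementation):

```python
import ipaddress

# PIM DR election on a multi-access segment: the neighbor with the
# numerically highest interface IP address becomes the DR.
def elect_dr(neighbor_ips):
    return max(neighbor_ips, key=ipaddress.IPv4Address)

# Note 10.1.1.10 beats 10.1.1.2: addresses compare numerically,
# not as strings.
print(elect_dr(["10.1.1.1", "10.1.1.10", "10.1.1.2"]))  # 10.1.1.10
```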

      Nov 11, 2009

      Configure RSVP Agents in Cisco IOS Software

      CIPT2 Vol. 1 Page 3-59 shows how to configure a Cisco IOS router to enable RSVP-agent functionality.

      !
      sccp local FastEthernet 0/0
      sccp ccm 10.1.1.1 identifier 1 version 6.0
      sccp
      !
      sccp ccm group 1
      associate ccm 1 priority 1
      associate profile 1 register HQ-1_MTP
      !
      dspfarm profile 1 mtp
      codec pass-through
      rsvp
      maximum sessions software 20
      associate application SCCP

      !
      interface Serial0/1
      description IP-WAN
      ip address 10.1.4.101 255.255.255.0
      duplex auto
      speed auto
      ip rsvp bandwidth 40

      Implementing Cisco Unified Mobility

Today I recorded my second failed Cisco written exam since 2000 (the first was DCN; this time, CIPT2). Today's voice written exams demand a lot of rote memorization of concepts and procedures. One question asked me to arrange all the commands in order (the content exactly matched a slide in the courseware), but I rarely get a chance to do labs, so I wasn't fluent enough to score. Another was a concept question asking me to pick out the steps for Mobile Connect and Mobile Voice Access, which again really tests hands-on lab skill, yet this part is not clearly explained in the courseware. So here is an outline of the steps from the CIPT2 lab for quick reference:

      Mobile Connect
      Task 1: Add the Mobility Softkey to IP Phones
      a. Configure a Softkey Template with the Mobility Softkey
      b. Assign the Softkey Template to the IP Phone

      Task 2: Associate an End User Account with the IP Phone and Enable the Use of Mobility
      a. Configure an End User for Device Mobility
      b. Configure the Office Phone to be Owned by the End User

      Task 3: Configure Remote Destination Profiles and Remote Destinations
      a. Configure a Remote Destination Profile
      b. Configure a Remote Destination

      Mobile Voice Access
      Task 4: Enable Cisco Unified Mobile Voice Access
      a. Activate the Cisco Unified Mobile Voice Access Service
      b. Configure Cisco Unified Mobility Service Parameters
      c. Configure End Users to Be Allowed to Use Cisco Unified Mobile Voice Access

      Task 5: Configure Cisco Unified Mobility Media Resources

      Task 6: Configure the Cisco IOS Gateway for Cisco Unified Mobility
      a. Configure the H.323 Gateway for the IVR Application
      b. Configure a POTS Dial-Peer for the Mobile Voice Access Number
      c. Configure a VoIP Dial-Peer to the Mobile Voice Access Media Resource

      Nov 10, 2009

      New Video Certifications Track and Cisco 360 Learning Program for CCIE Voice

      Cisco Announces New Video Certifications Track and Cisco 360 Learning Program for CCIE Voice

      In response to sharply increasing market demand for advanced video and collaboration solutions, Cisco is pleased to announce the release of two forthcoming Video specialist certifications and the new Cisco 360 Learning Program for CCIE® Voice.

      The Cisco TelePresence Installation and Cisco TelePresence Solutions Specialist certifications offer IT professionals and students a career opportunity to integrate, operate and manage Cisco TelePresence systems and solutions. Both certifications will be made available in early 2010.

      The Cisco 360 Learning Program for CCIE Voice provides experienced network engineers with an effective, job-relevant, and proven program to build expert-level skills and to prepare for the rigorous Cisco CCIE Voice lab exam. The Cisco 360 Learning Program for CCIE Voice will be made available on December 7, 2010.

      For more information access the Cisco Learning Network.

      Nov 5, 2009

      Understanding Denial-of-Service Attacks

      You may have heard of denial-of-service attacks launched against websites,
      but you can also be a victim of these attacks. Denial-of-service attacks can
      be difficult to distinguish from common network activity, but there are some
      indications that an attack is in progress.

      What is a denial-of-service (DoS) attack?

      In a denial-of-service (DoS) attack, an attacker attempts to prevent
      legitimate users from accessing information or services. By targeting your
      computer and its network connection, or the computers and network of the
      sites you are trying to use, an attacker may be able to prevent you from
      accessing email, websites, online accounts (banking, etc.), or other
      services that rely on the affected computer.

      The most common and obvious type of DoS attack occurs when an attacker
      "floods" a network with information. When you type a URL for a particular
      website into your browser, you are sending a request to that site's computer
      server to view the page. The server can only process a certain number of
      requests at once, so if an attacker overloads the server with requests, it
      can't process your request. This is a "denial of service" because you can't
      access that site.

      An attacker can use spam email messages to launch a similar attack on your
      email account. Whether you have an email account supplied by your employer
      or one available through a free service such as Yahoo or Hotmail, you are
      assigned a specific quota, which limits the amount of data you can have in
      your account at any given time. By sending many, or large, email messages to
      the account, an attacker can consume your quota, preventing you from
      receiving legitimate messages.

      What is a distributed denial-of-service (DDoS) attack?

      In a distributed denial-of-service (DDoS) attack, an attacker may use your
      computer to attack another computer. By taking advantage of security
      vulnerabilities or weaknesses, an attacker could take control of your
      computer. He or she could then force your computer to send huge amounts of
      data to a website or send spam to particular email addresses. The attack is
      "distributed" because the attacker is using multiple computers, including
      yours, to launch the denial-of-service attack.

      How do you avoid being part of the problem?

      Unfortunately, there are no effective ways to prevent being the victim of a
      DoS or DDoS attack, but there are steps you can take to reduce the
      likelihood that an attacker will use your computer to attack other
      computers:
      * Install and maintain anti-virus software (see Understanding Anti-Virus
      Software for more information).
      * Install a firewall, and configure it to restrict traffic coming into and
      leaving your computer (see Understanding Firewalls for more
      information).
      * Follow good security practices for distributing your email address (see
      Reducing Spam for more information). Applying email filters may help you
      manage unwanted traffic.

      How do you know if an attack is happening?

      Not all disruptions to service are the result of a denial-of-service attack.
      There may be technical problems with a particular network, or system
      administrators may be performing maintenance. However, the following
      symptoms could indicate a DoS or DDoS attack:
      * unusually slow network performance (opening files or accessing websites)
      * unavailability of a particular website
      * inability to access any website
      * dramatic increase in the amount of spam you receive in your account

      What do you do if you think you are experiencing an attack?

      Even if you do correctly identify a DoS or DDoS attack, it is unlikely that
      you will be able to determine the actual target or source of the attack.
      Contact the appropriate technical professionals for assistance.
      * If you notice that you cannot access your own files or reach any
      external websites from your work computer, contact your network
      administrators. This may indicate that your computer or your
      organization's network is being attacked.
      * If you are having a similar experience on your home computer, consider
      contacting your internet service provider (ISP). If there is a problem,
      the ISP might be able to advise you of an appropriate course of action.
      _________________________________________________________________

      Author: Mindi McDowell
      _________________________________________________________________

      Produced 2004 by US-CERT, a government organization.

      Note: This tip was previously published and is being re-distributed to increase awareness.

      Terms of use

      http://www.us-cert.gov/legal.html

      This document can also be found at

      http://www.us-cert.gov/cas/tips/ST04-015.html

      For instructions on subscribing to or unsubscribing from this mailing list, visit
      http://www.us-cert.gov/cas/signup.html.

      Nov 3, 2009

      DISA (Direct Inward System Access)

      The DISA, Direct Inward System Access, application allows someone from outside the telephone switch (PBX) to obtain an "internal" system dialtone and to place calls from it as if they were placing a call from within the switch.

      DISA plays a dialtone. The user enters their numeric passcode, followed by the pound sign (#). If the passcode is correct, the user is then given system dialtone on which a call may be placed.

      Nov 2, 2009

      My First Riverbed Certification - RCSP


By my count, less than a week after Riverbed notified me that my certificate had shipped, the international courier delivered it today. This is my first Riverbed certificate (I hope I won't need a second), though I have only taken one Riverbed course so far, so honestly my grasp of the Riverbed product line still feels shaky. I hope to finish the remaining Riverbed courses soon to strengthen my understanding of the products and of the feasible solutions for different architectures.

That said, the certificate carries no serial or certification number, so frankly it would be quite easy to forge...


      Oct 22, 2009

      Connect to E911 via PRI or CAMA trunk?

      There are two types of circuits an enterprise can use to route 911 calls to the proper PSAPs (public safety answering points) and deliver the 10-digit caller ID: ISDN PRIs or Centralized Automatic Message Accounting (CAMA) trunks.

      "ISDN is the wea port of choice from our perspective," says Guy Clinch, Avaya's government solutions director and a member of the National Emergency Number Association's PBX/multi-line telephone system technical subcommittee, because you can fit more phone lines in a PRI and assign each of those numbers to represent a separate ERL. In short, you can map your location and send more granular location information to the PSAP.

But as many as 85% of enterprises choose instead to retrofit their IP PBXs using legacy analog CAMA trunks for entry into the public safety system, Clinch says. It's cheaper, because a CAMA trunk is equivalent to one phone number, so they get charged only once.

      The downside: CAMA trunks cannot deliver a custom caller ID like PRIs can, notes Mark Fletcher, chair of the multi-line telephone system...

      Oct 21, 2009

      Cisco Monitor Director

Cisco Monitor Director is a centralized, proactive solutions-management tool that lets Cisco channel partners offer multicustomer, outsourced 24-hour network management services. It communicates with the Cisco Monitor Manager residing on a customer's premises to provide comprehensive, real-time monitoring, alerting and reporting to help troubleshoot and fix issues remotely. Combining these management tools helps Cisco channel partners become trusted advisers to their customers.

      The Smart Phone Control Protocol (SPCP)

The Smart Phone Control Protocol, or SPCP, is an addition to the Skinny Call Control Protocol (SCCP) that is only supported on the CP-500 phones and the SPA525 phone.

      79XX phones register with their SCCP call control using a proprietary handshake that consists of a series of messages. During the registration phase a number of things happen:

      1. Phone (Station) sends a Registration Request

      2. Request is acknowledged by the Call Control (CME/UC500)

      3. Capabilities are advertised

4. Button template, Softkeys, Line status, Display message, Speed dials, Time and date are requested by the phone and sent by CME/UC500

When the phone is a 500 series phone, two additional messages precede this handshake: an SPCP Token Request followed by an SPCP Token Acknowledgement (if successful) or a Token Rejection (if unsuccessful). A Token ACK can only be sent by a UC500, while an ISR will always send a Token Reject. This prevents the phone from registering to any call control other than a UC500.

      Oct 15, 2009

PTZ (Pan/Tilt/Zoom) Cameras

      2008/08/27-杜念魯

A PTZ camera is not really very different from an ordinary surveillance camera; PTZ simply means the camera's lens can pan left and right, tilt up and down, and zoom in and out.

With a PTZ camera, the shooting angle, coverage and image clarity can be changed at any time, giving better surveillance results than a traditional camera limited to a single motion. Moreover, the spatial relationships among multiple PTZ cameras make it easier to achieve intelligent surveillance. PTZ cameras have therefore become the product most users in today's surveillance market prefer.

      Sep 23, 2009

To Counter Taiwan Mobile, Far EasTone Reportedly Plans to Buy CNS

• Commercial Times, 2009-09-23
• [Chu Han-lun / Taipei]

To counter Taiwan Mobile's move into cable TV, Far EasTone Telecommunications of the Far Eastern Group has reportedly been in secret talks with MBK Partners, the well-known Korean private-equity fund, to buy cable operator China Network Systems (CNS). An authoritative source in foreign-investment circles says Far EasTone approached MBK several months ago about the purchase. Last week Taiwan Mobile moved first and bought Kbro's Eastern cable systems, sharpening Far EasTone's resolve to defend its market position; talks with MBK have entered an intensive stage and a deal may be announced soon.

The telecom showdown between the Far Eastern Group and the Fubon Group will thus reach a new milestone, with each side taking over cable TV systems.

Taiwan Mobile earlier paid NT$56.8 billion for the Eastern systems, currently first in market share. For Far EasTone's purchase of all of MBK's CNS, the price under negotiation, based on CNS's 1.2 million subscribers at the going rate of about NT$50,000 per subscriber, could reach NT$60 billion.

Sources say Far EasTone had been positioning for this for some time, and Taiwan Mobile's takeover of Kbro served as a powerful catalyst, pushing the Far Eastern Group to shift the deal from quiet preparation into high gear; contacts have included Far EasTone chairman Douglas Hsu and MBK's Taiwan representative Kung Kuo-chuan.

CNS was once the leading cable TV system operator. Although it has since been overtaken by Eastern and ranks second, its 20% market share still trails Eastern's 22% closely. If number-two telecom Far EasTone succeeds in taking CNS, it can keep number-one Taiwan Mobile, now holding the Eastern systems, from leveraging a telecom-cable alliance to widen the gap between them.

On financing, sources say it has not been decided whether the deal, valued at up to NT$60 billion, will be all cash or half cash and half stock swap. An all-cash deal would require Far EasTone to raise about NT$35 billion externally; a half-cash, half-stock structure would greatly reduce the amount to be raised. Assuming part of CNS's debt to offset a portion of the price is also under consideration.

MBK paid US$1.435 billion, about NT$47 billion, two and a half years ago for a 60% stake in CNS's 11 cable systems. If the deal closes at up to NT$60 billion, MBK will have netted NT$13 billion in less than three years.

CNS currently has three shareholder blocs. The largest is MBK, the counterparty in these talks; global media mogul Rupert Murdoch and the Jeffrey Koo family of the Chinatrust Group each hold 20%. Its shareholder structure is more complex than Eastern's, but since MBK holds a majority stake, the Far Eastern Group would take control of CNS once the deal closes.

      Sep 20, 2009

      Wide area file services (WAFS)

      Wide area file services (WAFS) products allow remote office users to access and share files globally at LAN speeds over the WAN. Distributed enterprises that deploy WAFS solutions are able to consolidate storage to corporate datacenters, eliminating the need to back up and manage data that previously resided in their remote offices. WAFS uses techniques such as CIFS and MAPI protocol optimization, data compression, and sometimes storing recurrent data patterns in a local cache.


WAFS is a subset of WAN optimization, which also caches SSL intranet and ASP applications and e-learning multimedia traffic to accelerate a greater percentage of WAN traffic.

      Sep 17, 2009

      Advanced TCP Implementation(HS-TCP vs S-TCP vs BIC-TCP)

Since I started working with WAN accelerators, I have increasingly realized how superficial my understanding of TCP was. It turns out there are many improved TCP implementations that make TCP's transfer performance far better.

The following is excerpted from the Cisco Press book Application Acceleration and WAN Optimization Fundamentals. Since these protocols can be considered the references that every WAN-accelerator vendor draws on to remedy flaws in the original TCP design, I have organized and compared them here; I hope you find it helpful!

      • HS-TCP(High Speed TCP)
High-Speed TCP is an advanced TCP implementation that was developed primarily to address bandwidth scalability. HS-TCP uses an adaptive cwnd increase that is based on the current cwnd value of the connection. When the cwnd value is large, HS-TCP uses a larger cwnd increase when a segment is successfully acknowledged. In effect, this helps HS-TCP find the available bandwidth more quickly, which leads to higher levels of throughput on large networks much sooner.

      HS-TCP also uses an adaptive cwnd decrease based on the current cwnd value. When the cwnd value for a connection is large, HS-TCP uses a very small decrease to the connection's cwnd value when loss of a segment is detected. In this way, HS-TCP allows a connection to remain at very high levels of throughput even in the presence of packet loss but can also lead to longer stabilization of TCP throughput when other, non-HS-TCP connections are contending for available network capacity. The aggressive cwnd handling of HS-TCP can lead to a lack of fairness when non-HS-TCP flows are competing for available network bandwidth. Over time, non-HS-TCP flows can stabilize with HS-TCP flows, but this period of time may be extended due to the aggressive behavior of HS-TCP.

      • S-TCP(Scalable TCP)
Scalable TCP (S-TCP) is similar to HS-TCP in that it uses an adaptive increase to cwnd. S-TCP increases cwnd by (cwnd × 0.01) when growing the congestion window, which means the increment is large when cwnd is large and small when cwnd is small.

Rather than use an adaptive decrease in cwnd, S-TCP decreases cwnd by 12.5% (1/8) upon encountering the loss of a segment. In this way, S-TCP is more TCP-friendly than HS-TCP in high-bandwidth environments. Like HS-TCP, S-TCP is not fair among flows where an RTT disparity exists, due to its overly aggressive cwnd handling.

      • BIC-TCP(Binary Increase Congestion TCP)
Binary Increase Congestion TCP (BIC-TCP) is an advanced TCP stack that uses a more adaptive increase than HS-TCP and S-TCP, which vary their cwnd increment directly based on the value of cwnd. BIC-TCP uses connection loss history to adjust the behavior of congestion avoidance and provide fairness.

BIC-TCP's congestion avoidance algorithm uses two search modes - linear search and binary search - as compared to the single search mode (linear, or linear relative to cwnd) provided by standard TCP, HS-TCP, and S-TCP. These two search modes allow BIC-TCP to adequately maintain bandwidth scalability and fairness while also avoiding the additional packet loss caused by excessive cwnd aggressiveness.
1. Linear search: Uses the difference between the current cwnd and the cwnd value prior to the loss event to determine the rate of linear search.
2. Binary search: Used as congestion avoidance approaches the cwnd value that was in effect prior to the loss event. This allows BIC-TCP to mitigate additional loss events caused by the connection exceeding available network capacity after a packet loss.
The linear search provides aggressive handling to ensure a rapid return to previous levels of throughput, while the binary search not only helps to minimize additional loss events, but also improves fairness in environments with RTT disparity (that is, where two nodes exchanging data are closer together than two other nodes exchanging data), in that it allows TCP throughput to converge across connections much more fairly and quickly.
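The S-TCP arithmetic described above lends itself to a toy sketch (illustrative only; a real stack operates on segments and interacts with the rest of the congestion-control machinery):

```python
# Scalable TCP's cwnd adjustments: grow by 1% of the current cwnd per
# acknowledged segment, shrink by 12.5% (1/8) on a detected loss.
def stcp_on_ack(cwnd):
    return cwnd + cwnd * 0.01

def stcp_on_loss(cwnd):
    return cwnd * (1 - 0.125)

cwnd = 1000.0
cwnd = stcp_on_ack(cwnd)   # 1010.0
cwnd = stcp_on_loss(cwnd)  # 883.75
print(cwnd)
```

Because both steps are proportional to cwnd, large windows grow and shrink in large absolute steps, which is exactly the bandwidth-scalability property (and the fairness concern) discussed above.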


      Sep 16, 2009

      Cisco Networkers 2009 - CCIE Lunch Invitation


After a long wait, the invitation finally arrived - the CCIE Lunch Invitation.

Word is that every Cisco Networkers event includes a gathering of CCIEs from around the world, usually hosted by a senior Cisco executive, so you can imagine it is quite a special occasion. (More senior CCIEs may also remember the CCIE Club events once hosted by Cisco Taiwan; sadly there was no follow-up, and the CCIE Club quietly vanished from the face of the earth, leaving each member only a Cisco-logo commemorative watch.)

In a little over a week I will set off on this pilgrimage, full of anticipation and excitement (just going on a long trip is enough to get me worked up).

Taiwan Mobile to Buy Kbro and Take in Eastern, Becoming the Cable TV Leader

I have long hoped to see Taiwan's network operators eventually competing with and balancing each other on a fair, just, open and equal footing, and at last there is a glimmer of light.

Taiwan Mobile is expected to buy Kbro, the second-largest cable TV system operator, and may convene its board as early as today (the 16th) to discuss the deal, leaning toward a cash transaction. In one stroke it would take in 1.1 million cable TV subscribers and Eastern Television's ten channels, becoming Taiwan's largest cable TV system and channel operator, with the acquisition price reaching as high as NT$60 billion.

As of press time, spokespeople for Taiwan Mobile and Kbro could not be reached. Sources say Taiwan Mobile has long intended to expand its cable TV territory. It was earlier interested in Macquarie-run Taiwan Broadband Communications, but after evaluation settled on Kbro, which is deeply entrenched in Taipei City, as a way into the area with the highest consumer spending power and the hardest cabling.

In addition, Kbro controls Eastern Television's ten channels, including the news, YoYo (children's) and drama channels, which is highly attractive to Fubon Financial chairman Daniel Tsai and Taiwan Mobile chairman Richard Tsai, both keen to move into digital content.

The two sides had, however, long been stuck on price. Taiwan Mobile intended to swap the 21% treasury shares it holds for Kbro shares, and a deal was expected late last year, but Taiwan Mobile's share price slipped at the time and has only recently recovered. In the ongoing negotiations the parties still lean toward an all-cash acquisition, or a half-stock, half-cash transaction.

Based on Kbro's roughly 1.1 million subscribers, the systems are worth over NT$50 billion; adding Carlyle's roughly 64% stake in Eastern Television, at about NT$5-6 billion, the acquisition price could reach NT$60 billion. Taiwan Mobile's board is expected to discuss the matter today; its shares closed yesterday at about NT$51.9, down NT$0.1.

Per-subscriber prices in earlier cable TV acquisitions were pushed ever higher, at one point reaching NT$47,000 per subscriber. Since the financial crisis, even with the market recovering, new highs are unlikely; the rumored price is NT$40,000 to NT$50,000 per subscriber.

Notably, Taiwan Mobile has about 600,000 cable TV subscribers, fourth in the market, while Kbro has 1.1 million, the second largest; combined they reach 1.7 million, making them the cable TV market leader. However, the government still caps any cable operator at one-third of total subscribers, so how the two sides structure the deal through investment vehicles remains to be seen.

Taiwan Mobile currently has more than six million mobile subscribers and also runs the momo shopping channel and a parenting channel. Through comprehensive operations spanning mobile telephony, cable TV and channel content, it hopes to expand its market territory.

The current telecom market leader is Chunghwa Telecom, also Taiwan Mobile's biggest rival. Chunghwa has recently invested heavily in its video-on-demand service (MOD), reaching 700,000 subscribers. With competition fierce, Taiwan Mobile intends to use its broader footprint to reach into users' living rooms and take on Chunghwa across the board.

[2009/09/16 Economic Daily News] @ http://udn.com/

      Cisco 360 Learning Program for CCIE R&S: Cisco Lab Safe Promotion

      Together, the Cisco 360 Learning Program for CCIE Routing and Switching and the Cisco Lab Safe promotion allow qualifying learners who do not pass their first Cisco CCIE® R&S lab exam attempt to retake the CCIE lab exam at no additional cost (a $1,400 US value).

      To be eligible for this promotion, individuals must meet these requirements:

      • Obtain their instructor’s recommendation
      • Complete the Cisco 360 Learning Program for CCIE R&S Essentials Package or a CIERS instructor-led workshop
      • Score 80 percent or better on at least one of the performance assessments that are listed below

      Only select Cisco Authorized Learning Partners offer the Cisco 360 Learning Program curriculum and Cisco Lab Safe promotion, which combined provide you with added assurance in the quality and value of your training investment.


      CCIELabSafe.jpg