
Showing posts with the label Riverbed

ip wccp redirect exclude in

The ip wccp redirect exclude in command should be used on interfaces facing WAAS devices when outbound redirection is configured on other interfaces of the device. Let's say you have a simple configuration where the router has three interfaces: one LAN facing, one WAN facing, and one used for the WAAS device:

!
interface FastEthernet0/0
 description ** LAN Interface **
 ip address 10.10.10.1 255.255.255.0
 duplex auto
 speed auto
!
interface FastEthernet0/1
 description ** WAAS Interface **
 ip address 11.11.11.1 255.255.255.248
 duplex auto
 speed auto
!
interface FastEthernet1/0
 description ** WAN Interface **
 ip address 10.88.81.99 255.255.255.248
 duplex auto
 speed auto
!

You have two choices for how to apply WCCP here: Configure inbound redirection on the LAN (FastEthernet0/0) and WAN (FastEthernet1/0) interfaces. C...
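The first choice uses inbound redirection as listed; the second, truncated choice is where this command matters. Below is a hedged sketch of an outbound-redirection configuration, assuming WCCP service groups 61 and 62 (the groups commonly used with WAAS; they are not taken from the excerpt above):

```
! Hypothetical sketch, not from the original post: outbound redirection
! on the LAN and WAN interfaces, with the WAAS-facing interface excluded
! so traffic returning from the WAAS device is not re-intercepted.
ip wccp 61
ip wccp 62
!
interface FastEthernet0/0
 description ** LAN Interface **
 ip wccp 62 redirect out
!
interface FastEthernet0/1
 description ** WAAS Interface **
 ip wccp redirect exclude in
!
interface FastEthernet1/0
 description ** WAN Interface **
 ip wccp 61 redirect out
!
```

Without the exclude-in statement on FastEthernet0/1, packets coming back from the WAAS device would hit the outbound redirection on the other interfaces again and loop.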

Hierarchical Packet Fair Queueing (H-PFQ) vs Hierarchical Fair Service Curve (H-FSC).

While most of the previous research has focused on providing Quality of Service (QoS) on a per-session basis, there is a growing need to also support hierarchical link-sharing, or QoS guarantees for traffic aggregates (such as those belonging to the same organization, service provider, or application family). Supporting QoS for both single sessions and traffic aggregates is difficult, as it requires the network to meet multiple QoS requirements at different granularities simultaneously. This problem is exacerbated by the fact that there are no formal models that specify all the requirements. We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC)....

EMC Data Domain

Data Domain provides customers with a hardware appliance for disk-based backup data storage and recovery in a disk-staging backup architecture. Built on serial ATA (SATA) disk technology and running the DD OS operating system, it is not just a low-cost, high-performance disk array with RAID 6: its Capacity Optimized Storage greatly reduces the storage space actual data requires, and its Data Invulnerability Architecture provides unprecedented multi-layered data protection. Going a step further, Capacity Optimized Storage is also applied to data replication when building off-site disaster-recovery architectures, so customers no longer need to worry about the drawbacks of traditional data-protection designs. These three distinctive capabilities bring the per-GB storage cost close to that of automated tape backup equipment, while giving customers a more complete mechanism and architecture than traditional data protection. Simply put, it is a product designed to meet the special requirements of purpose-built backup and recovery equipment. Capacity Optimized Storage - breaks backup data apart and stores only the unique data segments, greatly reducing space requirements and cost - compression ratios can effectively reach 20:1

EMC Avamar

EMC Avamar is backup software with built-in deduplication, aimed at remote-office backup environments. Through processing in the client-side agent, it removes redundancy from backup data, greatly reducing the network bandwidth consumed by backup transfers and the amount of storage media used. Platform support is very broad: agents are available for Windows, Linux, Solaris, HP-UX, AIX and the other major Unix platforms, Mac OS X, and VMware environments, and applications such as DB2, Exchange, SQL Server, and Oracle are also supported, so it suits most enterprise IT environments.

Centralized backup management
Avamar originated as Axion before Avamar was acquired by EMC. It is a typical centrally managed backup product with a client-server architecture. The Avamar server-side software must be installed on Red Hat Enterprise Linux AS/ES 3.0 on the IA32 platform, while agent support on the client side is very broad. Once installed, an administrator can, from the Avamar server, trigger backup jobs on systems running the agent, sending the specified data over the network to storage controlled by the Avamar server. Besides server-initiated backups, users of the front-end systems can start backups themselves, and administrators can log in to the Avamar server from other computers through a browser to perform tasks.

Since Avamar is essentially backup software, apart from deduplication its features resemble those of other backup products (job monitoring, reporting, storage-pool management, and so on), and operation is quite straightforward. To simplify deployment and maintenance, EMC can ship appliance servers preloaded with Avamar, and the vendor has validated hardware compatibility with several off-the-shelf servers from IBM, HP, and Dell.

Excellent deduplication efficiency
Elimination of redundant data is Avamar's biggest distinguishing feature; block-level comparison effectively removes the duplicate portions at the data level. Because the comparison is performed by the agent, the data sent over the network has already been deduplicated, greatly reducing bandwidth requirements. The agent does not only compare data on the front-end host: it also compares the front-end data's fingerprints against the fingerprints of data already stored on the server, so the comparison scope is the entire Avamar storage area, which is what is called "global compression."

Avamar demonstrated impressive data-reduction ability, one of the best results in this round of tests, which may be related to its adjustable data segmentation: the system can automatically segment and compare data using a "window" of 1-64 KB to adapt to different data types. (Text by Zhang Ming-De)...
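The adjustable 1-64 KB segmentation and "global compression" described above can be sketched as content-defined chunking plus a global fingerprint index. This is an illustrative toy under assumed parameters (the rolling hash, 48-byte window, and cut mask are inventions of this sketch, not Avamar internals):

```python
import hashlib
import random

WIN = 48
POW = pow(257, WIN, 1 << 32)   # factor for removing the byte leaving the window

def chunk(data, min_size=1024, max_size=65536, mask=0x0FFF):
    """Content-defined chunking: a Rabin-style rolling hash over the last
    48 bytes declares a cut when its low bits match `mask`, with chunk
    sizes clamped to the 1-64 KB range mentioned above."""
    chunks, start, h = [], 0, 0
    for i in range(len(data)):
        h = (h * 257 + data[i]) & 0xFFFFFFFF
        if i >= WIN:
            h = (h - data[i - WIN] * POW) & 0xFFFFFFFF   # slide the window
        size = i - start + 1
        if size >= max_size or (size >= min_size and (h & mask) == mask):
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

class DedupStore:
    """Global fingerprint index: a chunk is stored only if its SHA-1 has
    never been seen before, regardless of which client sent it."""
    def __init__(self):
        self.index = {}      # fingerprint -> chunk bytes
        self.logical = 0     # bytes presented for backup
    def backup(self, data):
        for c in chunk(data):
            self.logical += len(c)
            self.index.setdefault(hashlib.sha1(c).digest(), c)
    @property
    def stored(self):
        return sum(len(c) for c in self.index.values())

random.seed(0)
base = bytes(random.randrange(256) for _ in range(200_000))
store = DedupStore()
store.backup(base)                                          # first full backup
store.backup(base)                                          # unchanged repeat
store.backup(base[:100_000] + b"X" * 64 + base[100_000:])   # small edit
print(f"logical {store.logical} B, stored {store.stored} B, "
      f"ratio {store.logical / store.stored:.1f}:1")
```

Because cut points depend on content rather than offsets, the 64-byte insertion only invalidates the chunk containing it; the chunks after it realign and deduplicate, which is why the repeated backups cost almost nothing.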

Riverbed Steelhead QoS Class Queue Methods

Optionally, select one of the following queue methods for the class from the drop-down list:

SFQ - Shared Fair Queueing (SFQ) is the default queue for all classes. It determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to the others, preventing bursty flows from starving other flows within the QoS class.

FIFO - Transmits all flows in the order that they are received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic, and can potentially disrupt network control and signaling messages.

MX-TCP - Has very different use cases than the other queue parameters. MX-TCP also has secondary effects that ...
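The difference between FIFO tail drop and SFQ-style dropping from within the queue can be seen in a toy simulation (queue length, flow names, and packet counts are invented for illustration; this is not RiOS code):

```python
from collections import deque, defaultdict

QUEUE_LEN = 8

def fifo(arrivals):
    """FIFO tail drop: packets are kept strictly in arrival order; once
    the queue is full, later arrivals are lost regardless of flow."""
    q = deque()
    for flow in arrivals:
        if len(q) < QUEUE_LEN:
            q.append(flow)
    return q

def sfq(arrivals):
    """SFQ-style sketch: per-flow subqueues; when the class queue
    overflows, a packet is dropped from within the queue, taken from
    the currently longest (burstiest) flow."""
    flows = defaultdict(deque)
    for flow in arrivals:
        flows[flow].append(flow)
        if sum(len(q) for q in flows.values()) > QUEUE_LEN:
            longest = max(flows, key=lambda f: len(flows[f]))
            flows[longest].popleft()          # shed from the dominant flow
    return flows

# Flow A bursts 10 packets, then interactive flow B sends 2.
arrivals = ["A"] * 10 + ["B"] * 2
print("FIFO kept:", list(fifo(arrivals)))                       # B is starved
print("SFQ kept:", {f: len(q) for f, q in sfq(arrivals).items()})  # B survives
```

With FIFO the burst fills the whole queue before B arrives, so B loses everything; with SFQ the drops land on the bursty flow and B keeps its fair share.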

Riverbed Steelhead QoS Classification for the FTP Data Channel

When configuring QoS classification for FTP, the QoS rules differ depending on whether the FTP data channel is using active or passive FTP. Active versus passive FTP determines whether the FTP client or the FTP server selects the port used for the data channel, which has implications for QoS classification. Active FTP Classification With active FTP, the FTP client logs in and issues the PORT command, informing the server which port it must use to connect to the client for the FTP data channel. Next, the FTP server initiates the connection toward the client. From a TCP perspective, the server and the client swap roles: the FTP server becomes the client because it sends the SYN packet, and the FTP client becomes the server because it receives the SYN packet. Although not defined in the RFC, most FTP servers use source port 20 for the active FTP data channel. For active FTP, configure...
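The classification rule the passage implies for active FTP can be sketched as follows. The dictionary shape, port values, and class names are invented for illustration (this is not Steelhead rule syntax); the only fact carried over is that most servers use source port 20 for the active data channel:

```python
def classify_qos(pkt):
    """Hypothetical classifier: the server initiates the active FTP data
    channel, so in that direction it is the TCP 'client' but (by
    convention, not by RFC) still sends from source port 20."""
    if pkt["proto"] == "tcp" and pkt["src_port"] == 21:
        return "ftp-control"
    if pkt["proto"] == "tcp" and pkt["src_port"] == 20:
        return "ftp-data-active"     # server-to-client data channel
    return "default"

print(classify_qos({"proto": "tcp", "src_port": 20, "dst_port": 51044}))
print(classify_qos({"proto": "tcp", "src_port": 53120, "dst_port": 8080}))
```

For passive FTP the server port is negotiated dynamically, so a fixed source-port match like this one no longer works, which is why the rules differ between the two modes.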

Riverbed Steelhead Adaptive Data Streamlining Modes

Default
This setting is enabled by default and works for most implementations. The default setting:
- Provides the most data reduction.
- Reduces random disk seeks and improves disk throughput by discarding very small data margin segments that are no longer necessary. This Margin Segment Elimination (MSE) process provides network-based disk defragmentation.
- Writes large page clusters.
- Monitors the disk write I/O response time to provide more throughput.

SDR-Adaptive
Specify this setting to include the default settings and also:
- Balances writes and reads.
- Monitors both read and write disk I/O response and, based on statistical trends, can employ a blend of disk-based and non-disk-based data reduction techniques to enable sustained throughput during periods of high disk-intensive workloads.

Important: Use caution with this setting, particularly when you are optimizing CIFS or NFS with prepopulation. Please contact Riverbed Technical Support for...

Riverbed Steelhead In-Path Rule - Neural Framing Mode

Optionally, if you have selected Auto-Discover or Fixed Target, you can select a neural framing mode for the in-path rule. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer. You can specify the following neural framing settings:

Never - Never use the Nagle algorithm. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used.

Always - Always use the Nagle algorithm. All data is passed to the codec, which attempts to coalesce consume calls ...
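The trade-off between the Never and Always settings can be sketched as a toy buffer (the threshold and write sizes are invented; this is not how RiOS implements neural framing):

```python
class Framer:
    """Toy sketch of the two extremes described above: 'never' encodes
    every application write immediately, while 'always' (Nagle-style)
    holds data hoping to coalesce small writes into larger flushes."""
    def __init__(self, mode, threshold=1460):
        self.mode, self.threshold = mode, threshold
        self.buf = b""
        self.flushes = []          # record of data handed to the codec
    def write(self, data):
        self.buf += data
        if self.mode == "never" or len(self.buf) >= self.threshold:
            self.flush()
    def flush(self):
        if self.buf:
            self.flushes.append(self.buf)
            self.buf = b""

never, always = Framer("never"), Framer("always")
for _ in range(10):                # ten small application writes
    never.write(b"x" * 200)
    always.write(b"x" * 200)
always.flush()                     # e.g. a timer fires or the stream ends
print(len(never.flushes), "flushes vs", len(always.flushes))
```

"Never" minimizes the time data sits in the buffer; "Always" maximizes the amount of data per flush, which gives SDR larger framing boundaries to work with.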

TCP Vegas

The TCP sender uses RTT measurements to probe the queue length on the path between sender and receiver, and adjusts the congestion window (cwnd) accordingly. The main modifications are three:

- Slow Start: cwnd doubles only about every 2 RTTs.
- Congestion Avoidance: Vegas computes Diff by comparing the expected rate with the actual sending rate, and keeps Diff between alpha and beta; if Diff < alpha, the sending rate is increased, and conversely, if Diff > beta, the sending rate is decreased.
- Retransmission: by observing RTT values, Vegas judges whether a packet timeout has already occurred.

Source: http://admin.csie.ntust.edu.tw/IEET/syllabus/course/962_CS5021701_106_5pyq5YiG5oiQ57i+6auY5L2OMS5wZGY=.pdf
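The Congestion Avoidance rule above can be written down directly. The alpha/beta values of 2 and 4 packets are common Vegas defaults assumed here, and the RTTs in the usage lines are illustrative:

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """One TCP Vegas congestion-avoidance step as described above.
    expected = cwnd / base_rtt  (rate if no queueing),
    actual   = cwnd / rtt       (measured rate);
    Diff = (expected - actual) * base_rtt estimates the number of this
    flow's packets sitting in network queues."""
    expected = cwnd / base_rtt
    actual = cwnd / rtt
    diff = (expected - actual) * base_rtt
    if diff < alpha:
        return cwnd + 1        # path underused: increase the rate
    if diff > beta:
        return cwnd - 1        # queue building up: decrease the rate
    return cwnd                # inside the target band: hold steady

print(vegas_update(cwnd=20, base_rtt=0.100, rtt=0.101))  # barely queued -> 21
print(vegas_update(cwnd=20, base_rtt=0.100, rtt=0.140))  # queueing -> 19
```

Because the signal is RTT inflation rather than packet loss, Vegas backs off before queues overflow, which is exactly the queue-length probing the excerpt describes.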

When Are ICMP Redirects Sent?

How ICMP Redirect Messages Work ICMP redirect messages are used by routers to notify hosts on the data link that a better route is available for a particular destination. For example, the two routers R1 and R2 are connected to the same Ethernet segment as Host H. The default gateway for Host H is configured to use router R1. Host H sends a packet to router R1 to reach the destination on remote branch office Host 10.1.1.1. Router R1, after it consults its routing table, finds that the next hop to reach Host 10.1.1.1 is router R2. Now router R1 must forward the packet out the same Ethernet interface on which it was received. Router R1 forwards the packet to router R2 and also sends an ICMP redirect message to Host H. This informs the host that the best route to reach Host 10.1.1.1 is by way of router R2. Host H then forwards all the subsequent packets destined for Host 10.1.1.1 to router R2.
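The decision R1 makes above reduces to one check: does the packet leave via the same interface it arrived on? A simplified sketch (real routers also check that redirects are enabled and that the source host shares a subnet with the better next hop; the names here are illustrative):

```python
def forward(packet, routes, egress_iface):
    """Sketch of the router behaviour described above: forward the packet,
    and send an ICMP redirect when the egress interface for the next hop
    equals the interface the packet arrived on."""
    next_hop = routes[packet["dst"]]
    actions = ["forward to " + next_hop]
    if egress_iface[next_hop] == packet["in_iface"]:
        # Packet exits the interface it entered: the host has a better
        # first hop on its own segment, so tell it.
        actions.append("ICMP redirect: use " + next_hop)
    return actions

routes = {"10.1.1.1": "R2"}            # R1's routing table entry
egress_iface = {"R2": "eth0"}          # R2 sits on the shared Ethernet
pkt = {"dst": "10.1.1.1", "in_iface": "eth0"}   # from Host H on R1's LAN port
print(forward(pkt, routes, egress_iface))
```

After the redirect, Host H installs R2 as the next hop for 10.1.1.1 and R1 drops out of that path entirely.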

Markov Model

What is a Markov Model? Markov models are some of the most powerful tools available to engineers and scientists for analyzing complex systems. This analysis yields results for both the time-dependent evolution of the system and the steady state of the system. For example, in Reliability Engineering, the operation of the system may be represented by a state diagram, which represents the states and rates of a dynamic system. This diagram consists of nodes (representing a possible state of the system, which is determined by the states of the individual components and sub-components) connected by arrows (representing the rate at which the system operation transitions from one state to another). Transitions may be determined by a variety of possible events, for example the failure or repair of an individual component. A state-to-state transition is characterized by a probability distribution. Under reasonable assumptions, the system operation may be analyzed usin...
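Both results the passage mentions (the time-dependent evolution and the steady state) can be shown on the smallest reliability example: a two-state up/down model. The failure and repair rates below are assumed for illustration, not taken from the text:

```python
# Two-state availability model: the "up" state fails at rate lam and the
# "down" state is repaired at rate mu (both per hour, values assumed).
# The chain's steady state gives long-run availability P(up) = mu/(lam+mu).
lam = 0.001    # failure rate, 1/hour
mu = 0.1       # repair rate, 1/hour

# Time-dependent evolution: step the state-probability vector forward
# with a small discrete time step dt until it settles.
dt = 0.1
p_up, p_down = 1.0, 0.0                        # system starts in "up"
for _ in range(200_000):                       # 20,000 simulated hours
    p_up, p_down = (p_up * (1 - lam * dt) + p_down * mu * dt,
                    p_up * lam * dt + p_down * (1 - mu * dt))

analytic = mu / (lam + mu)
print(f"time-dependent P(up) -> {p_up:.6f}; analytic steady state {analytic:.6f}")
```

The iteration is the transient analysis; the closed form mu/(lam+mu) is the steady-state solution of the balance equation lam * P(up) = mu * P(down), and the two agree once the transient dies out.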

Out-of-Band (OOB) Splice

What is the OOB Splice? An OOB splice is an independent, separate TCP connection made on the first connection between two peer Steelhead appliances, used to transfer version, licensing, and other OOB data between the peers. An OOB connection must exist between two peers for connections between these peers to be optimized. If the OOB splice dies, all optimized connections on the peer Steelhead appliances will be terminated. The OOB connection is a single connection existing between two Steelhead appliances regardless of the direction of flow. So if you open one or more connections in one direction, then initiate a connection from the other direction, there will still be only one connection for the OOB splice. This connection is made on the first connection between two peer Steelhead appliances using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any concern except in full transparency deployments. Case Study In the example below, the Client...

RIVERBED ANNOUNCES STEELHEAD MOBILE 3.0

RIVERBED ANNOUNCES STEELHEAD MOBILE 3.0 Mobile Solution Complements Broader Steelhead Appliance Deployment and Speeds Enterprise IT Infrastructure Performance; Provides Acceleration for Windows 7 and 64-bit Systems SAN FRANCISCO – November 02, 2009 – Riverbed Technology (NASDAQ: RVBD), the IT infrastructure performance company for networks, applications and storage, today announced upcoming enhancements to its mobile WAN optimization solution to address the productivity challenges global organizations face when managing remote and mobile workforces. Riverbed® Steelhead® Mobile increases employee productivity while on the road, working from home or connected wirelessly in the office by providing application performance improvements. With this release, Riverbed will provide acceleration for Windows 7 and 64-bit systems for mobile end users. In addition, organizations will be able to take advantage of improved flexibility and simplified management functionality to provide mobile work...

Riverbed Cascade Gateways vs Cascade Profiler vs Cascade Sensor

Cascade Gateways collect network flow data already existing in an organization's network, provide intelligent de-duplication while retaining information on where each flow was recorded, and send this condensed data to the Cascade Profiler. Cascade Profiler complements this information with layer 7 application and response time data retrieved from a Cascade Sensor deployed in the datacenter. These records are then further enhanced with user identification information provided by active directories, switch port information, QoS, and SNMP data. The result is a complete view of a business application flow from the back-end server to the user's desktop. Cascade also provides an extensive set of integrations with management systems typically deployed in an IT environment, to further streamline workflows and provide value across multiple operations teams.

My First Riverbed Certification - RCSP

Counting the days, it has been less than a week since Riverbed notified me that my certificate had shipped, and the international courier arrived today. This is my first Riverbed certificate (and hopefully I won't need a second attempt). That said, I have only taken one Riverbed course so far, so honestly I still feel shaky about my grasp of the Riverbed product line. I hope to finish the remaining Riverbed courses soon, to strengthen my understanding of the products and of the various feasible solutions for different architectures. To be honest, though, this certificate carries no serial number or certification ID at all, so it would actually be quite easy to forge...

Wide area file services (WAFS)

Wide area file services (WAFS) products allow remote office users to access and share files globally at LAN speeds over the WAN. Distributed enterprises that deploy WAFS solutions are able to consolidate storage to corporate datacenters, eliminating the need to back up and manage data that previously resided in their remote offices. WAFS uses techniques such as CIFS and MAPI protocol optimization, data compression, and sometimes storing recurrent data patterns in a local cache. WAFS is a subset of WAN optimization, which also caches SSL intranet and ASP applications and e-learning multimedia traffic, to accelerate a greater percentage of WAN traffic.

Advanced TCP Implementation(HS-TCP vs S-TCP vs BIC-TCP)

Since I started working with WAN accelerators, I have increasingly felt that my understanding of TCP only scratched the surface. It turns out there are many refinements of TCP that make its transfer performance far better. The content below is excerpted from chapters of the Cisco Press book Application Acceleration and WAN Optimization Fundamentals. These protocols can be considered reference standards that the various WAN-accelerator vendors draw on to remedy shortcomings in the original TCP design, so I have organized and compared them here; I hope this helps!

HS-TCP (High Speed TCP) High-Speed TCP is an advanced TCP implementation that was developed primarily to address bandwidth scalability. HS-TCP uses an adaptive cwnd increase that is based on the current cwnd value of the connection. When the cwnd value is large, HS-TCP uses a larger cwnd increase when a segment is successfully acknowledged. In effect, this helps HS-TCP find the available bandwidth more quickly, which leads to higher levels of throughput on large networks much sooner. HS-TCP also uses an adaptive cwnd decrease based on the current cwnd value. When the cwnd value for a connection is large, HS-TCP uses a very small decrease to the connection's cwnd value when loss of a segment is detected. In this way, HS-TCP allows a connection...
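The adaptive increase/decrease behaviour described for HS-TCP can be sketched as follows. Note the hedge: RFC 3649 defines the real a(w)/b(w) response function as a lookup table; the formulas below only mimic its trend and are not the actual values:

```python
import math

def hs_params(cwnd, low_window=38):
    """Toy approximation of HS-TCP's adaptive parameters: below low_window
    it behaves like standard TCP (increase 1 segment, halve on loss);
    above it, the per-RTT increase a(w) grows with cwnd and the loss
    decrease b(w) shrinks. Constants here are illustrative only."""
    if cwnd <= low_window:
        return 1, 0.5
    a = max(1, int(math.log2(cwnd)))                          # larger increase
    b = max(0.1, 0.5 - 0.05 * math.log2(cwnd / low_window))   # gentler decrease
    return a, b

def on_ack(cwnd):
    a, _ = hs_params(cwnd)
    return cwnd + a / cwnd      # a(w) segments per RTT, spread across ACKs

def on_loss(cwnd):
    _, b = hs_params(cwnd)
    return cwnd * (1 - b)       # small backoff when cwnd is large

print(hs_params(30))      # small cwnd: standard TCP behaviour
print(hs_params(10000))   # large cwnd: faster growth, smaller backoff
```

The key property, visible in the two printed cases, is that a large-cwnd connection both probes for bandwidth faster and gives back far less of its window on a single loss than standard TCP would.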

TCP window scale option

After reading articles about the TCP window size for so long, I finally understand how the maximum TCP window size (2^16 = 0-65535 bytes) can be exceeded. I had long been puzzled by the length of the TCP header: the Window Size field is only 16 bits, so how can a window larger than 65535 bytes be recorded? Oddly enough, even though the theory is simple, I could never find a simple article explaining why. Drawing on several related articles, I will try to explain it here in plain language. Let's first look at the TCP header: it is 20 bytes long and includes the 16-bit Window Size field. Because the original TCP window size cannot exceed 65535 bytes, IETF RFC 1323 later defined the TCP window scale option, which carries a shift count (at most 14) in the TCP options field to extend the window size. The maximum TCP window size can therefore reach 2^(16+14) = 1 GB (1,073,741,824 bytes). The following is excerpted from the Wikipedia article "TCP window scale option": The TCP window scale option is an option to increase the TCP receive window size above its maximum value of 65,535 bytes. This TCP option, along with several others, is defined in IETF RFC 1323, which deals with Long Fat Networks (LFNs). In fact, the throughput of a communication is limited by two windows: the congestion window and the receive window. The first one tries ...
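The arithmetic above can be checked directly: the advertised 16-bit window is left-shifted by the scale negotiated in the SYN exchange, and the shift count is capped at 14.

```python
def effective_window(window_field, shift_count):
    """RFC 1323 window scaling: the 16-bit advertised window is
    left-shifted by the negotiated scale; the shift count is capped
    at 14, so the window approaches (but cannot exceed) 2**30 bytes."""
    assert 0 <= window_field <= 0xFFFF and 0 <= shift_count <= 14
    return window_field << shift_count

print(effective_window(0xFFFF, 0))    # 65535: the classic unscaled maximum
print(effective_window(0xFFFF, 14))   # 1073725440 bytes, just under 2**30
print(2 ** (16 + 14))                 # 1073741824: the 1 GB figure cited above
```

Strictly speaking the achievable maximum is 65535 << 14 = 1,073,725,440 bytes, slightly below the round 2^30 figure, because the base field tops out at 65535 rather than 65536.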

RiOS 5.5 SSL Enhancements

With 5.0, we have SSL auto-discovery so that administrators can whitelist or blacklist peers very easily: the peers are automatically discovered upon the first SSL connection and appear in the self-signed peer gray list. You simply mark them as trusted. The connections are not optimized until after you move the peers to the trusted whitelist. Both the client-side and server-side Steelhead appliances must use RiOS 5.0 or later.

- SSL certificates and private keys copied to the server-side Steelhead appliance (no certificate faking in branch offices)
- Auto-discovery of SSL Steelhead peers with gray-list capability
- Automatic optimization of SSL traffic
- Support for certificate domain wildcards

Riverbed WAN Visibility Modes

The topology is as follows:

Client <-> Client Steelhead (CSH) <-> Server Steelhead (SSH) <-> Server

Correct Addressing
This is how our product works today and is the safest and most scalable mode. Traffic on the WAN is between the Steelhead appliances on port 7800. We have thousands of customers using Steelhead appliances and we believe this will continue to be the most used mode going forward.
This is Riverbed's default mode. The Client-to-CSH and Server-to-SSH segments use the client's and server's source IPs and ports, while the CSH-to-SSH segment uses the CSH's and SSH's own source IPs and ports (with destination port 7800). TCP option 0x4c (76) is used.

Correct Addressing & Port Visibility
Traffic on the WAN is still between the Steelhead appliances, but we preserve the port information.
In this mode, the port between CSH and SSH is changed to the client-to-server destination port; everything else is the same as Correct Addressing. (Supported in RiOS 5.0 and later.) TCP option 0x4c (76) is used.

Full IP & Port Transparency
Traffic on the WAN preserves the IP addresses and port numbers of the end stations.
In this mode, the inner connection between CSH and SSH entirely uses the client's and server's source IP...