Dec 16, 2009

Hierarchical Packet Fair Queueing (H-PFQ) vs Hierarchical Fair Service Curve (H-FSC).

While most previous research has focused on providing Quality of Service (QoS) on a per-session basis, there is a growing need to also support hierarchical link-sharing, or QoS guarantees for traffic aggregates (such as those belonging to the same organization, service provider, or application family). Supporting QoS for both single sessions and traffic aggregates is difficult because it requires the network to meet multiple QoS requirements at different granularities simultaneously. The problem is exacerbated by the fact that there have been no formal models specifying all of these requirements.

We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC). H-PFQ is the first algorithm to simultaneously support all three of real-time, adaptive best-effort, and link-sharing services. H-FSC allows more flexible resource management than H-PFQ by decoupling the delay and bandwidth allocation. From a conceptual point of view, H-FSC is the first algorithm that goes beyond Fair Queueing and still satisfies all the important requirements of integrated services networks.

H-FSC, along with a packet classifier and a graphical user interface program, has been implemented in both NetBSD 1.2D and FreeBSD 2.2.6. This implementation supports IETF Integrated Services (intserv) service models and RSVP signaling. It also supports most standard CAIRN testbed network hardware and is currently under testing on CAIRN.
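As a rough illustration of how H-FSC decouples delay from bandwidth, here is a minimal sketch of the two-piece linear service curves it is built on. The class names and numbers are invented for the example and are not taken from our implementation.

# A two-piece linear service curve: a steep first segment buys low delay,
# the second segment sets the long-term bandwidth share. Values below are
# illustrative only.

class ServiceCurve:
    """S(t) = m1*t for t <= d, then S(d) + m2*(t - d) afterwards."""
    def __init__(self, m1, d, m2):
        self.m1 = m1   # initial slope (controls delay), bits per ms
        self.d = d     # inflection point, ms
        self.m2 = m2   # long-term slope (controls bandwidth share)

    def service(self, t):
        """Minimum cumulative service (bits) guaranteed by time t (ms)."""
        if t <= self.d:
            return self.m1 * t
        return self.m1 * self.d + self.m2 * (t - self.d)

# A low-latency class: steep first segment, modest sustained rate.
voice = ServiceCurve(m1=8000, d=10, m2=1000)
# A bulk class: no latency boost, higher sustained rate.
bulk = ServiceCurve(m1=0, d=0, m2=4000)

print(voice.service(5), bulk.service(5))      # early on, voice is ahead
print(voice.service(100), bulk.service(100))  # long term, bulk gets more

With pure Fair Queueing a class can only get low delay by being given a high rate; the two slopes above are what let H-FSC grant one without the other.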

EMC Data Domain

Data Domain provides customers with a hardware appliance for disk-based backup data storage and recovery within a disk-staging backup architecture. It is built on serial ATA (SATA) disk technology and runs the DD OS operating system, which makes it more than just a low-cost, high-performance disk array with RAID 6: its Capacity Optimized Storage technology greatly reduces the storage space the actual data requires, and its Data Invulnerability Architecture provides unprecedented multi-layered data protection. Capacity Optimized Storage is further leveraged for data replication when building a remote disaster-recovery architecture, freeing customers from the drawbacks of traditional data protection designs. These three distinctive capabilities bring the per-GB storage cost close to that of automated tape backup equipment, while offering a more complete protection mechanism and architecture than traditional data protection. Simply put, it is a product designed to meet the special requirements of dedicated backup and recovery equipment.


Capacity Optimized Storage
- Breaks backup data into segments
- Stores only unique data segments, greatly reducing space requirements and cost
- Compression ratios can effectively reach 20:1
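As a rough sketch of what capacity-optimized storage is doing, the following Python toy splits a backup stream into segments, stores each unique segment once, and reports the effective ratio. The fixed 4 KB segment size and single-dictionary index are assumptions for illustration, not Data Domain's actual design.

import hashlib

def dedupe_store(data, seg_size=4096):
    store = {}      # fingerprint -> segment bytes (only unique segments)
    recipe = []     # ordered fingerprints to rebuild the original stream
    for i in range(0, len(data), seg_size):
        seg = data[i:i + seg_size]
        fp = hashlib.sha1(seg).hexdigest()
        if fp not in store:
            store[fp] = seg
        recipe.append(fp)
    stored = sum(len(s) for s in store.values())
    return store, recipe, len(data) / stored if stored else 0

# Repetitive backup data (e.g. repeated full backups) dedupes very well:
data = b"same old file contents" * 20000
_, _, ratio = dedupe_store(data)
print(f"effective ratio {ratio:.1f}:1")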

EMC Avamar



EMC Avamar is backup software with built-in data deduplication, aimed at remote-office backup environments. Through processing in a front-end agent, it removes the redundancy from backup data, greatly reducing the network bandwidth consumed by backup transfers and saving storage media.

Platform support is very broad: agents are available for Windows, Linux, the major Unix platforms (Solaris, HP-UX, AIX), Mac OS X, and VMware environments, and applications such as DB2, Exchange, SQL Server, and Oracle are also supported. It can fit most enterprise IT environments.

Centralized backup management
Avamar descends from Axion, the product Avamar sold before being acquired by EMC. It is a typical centrally managed backup suite with a client-server architecture. The Avamar server software must be installed on Red Hat Enterprise Linux AS/ES 3.0 on the IA32 platform, while client agent support is very broad. Once installed, an administrator can, from the Avamar server, trigger backup jobs on agent-equipped front-end systems, sending the specified data over the network to storage devices controlled by the Avamar server. Besides server-initiated backups, users on the front-end systems can start backups themselves, and administrators can log in to the Avamar server from other computers through a browser to carry out any of these tasks.

Since Avamar is essentially backup software, apart from deduplication its features resemble those of ordinary backup products: a job status monitoring interface, reports, storage pool management, and so on. Operation is quite straightforward. To simplify deployment and maintenance, EMC can ship application servers preloaded with Avamar, and the vendor has validated hardware compatibility with several off-the-shelf servers from IBM, HP, and Dell.

Excellent deduplication efficiency
Removing redundant data is Avamar's defining feature; block-level comparison effectively eliminates the duplicated portions in the underlying data. Because the comparison is performed by the agent, the data placed on the network has already been deduplicated, sharply reducing bandwidth requirements. Moreover, the agent does not merely compare data on the front-end host: it compares fingerprints of the front-end data against fingerprints of data already stored on the server, so the comparison spans Avamar's entire storage area. This is what is called "global compression."

Avamar demonstrated impressive data reduction in testing and was one of the best performers in this round-up, perhaps thanks to its adjustable data segmentation: when analyzing files, the system can automatically segment and compare data using a 1-64 KB "window," adapting to different data types. (Text by 張明德)
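Here is a toy sketch of the "global compression" idea described above, assuming a simple fingerprint index on the server. The protocol and names are invented for illustration, and the variable 1-64 KB segmentation is simplified to fixed-size segments.

import hashlib

class DedupServer:
    def __init__(self):
        self.index = {}                      # global fingerprint index

    def has(self, fp):
        return fp in self.index

    def put(self, fp, segment):
        self.index[fp] = segment

def agent_backup(server, data, seg_size=16384):
    """Client-side agent: fingerprint segments, ship only unknown ones."""
    sent = 0
    for i in range(0, len(data), seg_size):
        seg = data[i:i + seg_size]
        fp = hashlib.sha1(seg).hexdigest()
        if not server.has(fp):               # compared against ALL stored data
            server.put(fp, seg)
            sent += len(seg)
    return sent

server = DedupServer()
first = agent_backup(server, b"payroll db " * 100000)
second = agent_backup(server, b"payroll db " * 100000)  # another client, same data
print(first, second)   # the second backup sends almost nothing over the wire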

Dec 15, 2009

Riverbed Steelhead QoS Class Queue Methods

Optionally, select one of the following queue methods for the class from the drop-down list:
  • SFQ. Shared Fair Queueing (SFQ) is the default queue for all classes. It determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to the others, preventing bursty flows from starving other flows within the QoS class (see the sketch after this list).
  • FIFO. Transmits all flows in the order that they are received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic, and can potentially delay network control and signaling messages as well.
  • MX-TCP. Has very different use cases than the other queue methods. MX-TCP also has secondary effects that you need to understand before configuring:
  1. When optimized traffic is mapped into a QoS class with the MX-TCP queuing parameter, the TCP congestion control mechanism for that traffic is altered on the Steelhead appliance. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the minimum guaranteed bandwidth configured on the QoS class.
  2. You can use MX-TCP to achieve high-throughput rates even when the physical medium carrying the traffic has high loss rates. For example, MX-TCP is commonly used for ensuring high throughput on satellite connections where a lower-layer loss-recovery technique is not in use.
  3. Another usage of MX-TCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates.
Important: Use caution when specifying MX-TCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, and does not decrease in the presence of network congestion. The Steelhead appliance always tries to transmit traffic at the specified rate. If no QoS mechanism (either parent classes on the Steelhead appliance, or another QoS mechanism in the WAN or WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MX-TCP not backing off to fairly share bandwidth.
When MX-TCP is configured as the queue parameter for a QoS class, the following parameters for that class are also affected:
  • Link share weight. The link share weight parameter has no effect on a QoS class configured with MX-TCP.
  • Upper limit. The upper limit parameter has no effect on a QoS class configured with MX-TCP.
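To make the SFQ drop policy above concrete, here is a toy model in Python of round-robin dropping among the flows present in a class queue. It illustrates the policy only and is not Riverbed's implementation.

from collections import OrderedDict, deque

class SfqClassQueue:
    def __init__(self, queue_length):
        self.queue_length = queue_length   # configured per-class limit
        self.flows = OrderedDict()         # flow_id -> deque of packets
        self.size = 0
        self._rr = 0                       # round-robin drop pointer

    def enqueue(self, flow_id, packet):
        self.flows.setdefault(flow_id, deque()).append(packet)
        self.size += 1
        while self.size > self.queue_length:
            self._drop_one()

    def _drop_one(self):
        # Rotate across the flows that currently have packets queued.
        ids = [f for f, q in self.flows.items() if q]
        victim = ids[self._rr % len(ids)]
        self.flows[victim].popleft()       # drop from within the queue
        self._rr += 1
        self.size -= 1

q = SfqClassQueue(queue_length=6)
for n in range(8):
    q.enqueue("bursty", f"b{n}")      # one flow bursts past the limit
q.enqueue("steady", "s0")             # a second flow still gets its share
print({f: list(p) for f, p in q.flows.items()})

Because drops rotate over the flows present in the class, the bursty flow absorbs the losses its own burst causes, rather than pushing out the steady flow's packets.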

Riverbed Steelhead QoS Classification for the FTP Data Channel



When configuring QoS classification for FTP, the QoS rules differ depending on whether the FTP data channel uses active or passive FTP. Active versus passive FTP determines whether the FTP client or the FTP server selects the port used for the data channel, which has implications for QoS classification.

Active FTP Classification
With active FTP, the FTP client logs in and issues the PORT command, informing the server which port it must use to connect to the client for the FTP data channel. Next, the FTP server initiates the connection towards the client. From a TCP perspective, the server and the client swap roles: The FTP server becomes the client because it sends the SYN packet, and the FTP client becomes the server because it receives the SYN packet.
Although not defined in the RFC, most FTP servers use source port 20 for the active FTP data channel. For active FTP, configure a QoS rule on the server-side Steelhead appliance to match source port 20. On the client-side Steelhead appliance, configure a QoS rule to match destination port 20.


Passive FTP Classification
With passive FTP, the FTP client initiates both connections to the server. First, it requests passive mode by issuing the PASV command after logging in. Next, it requests a port number for use with the data channel from the FTP server. The server agrees to this mode, selects a random port number, and returns it to the client. Once the client has this information, it initiates a new TCP connection for the data channel to the server-assigned port. Unlike active FTP, there is no role swapping and the FTP client initiates the SYN packet for the data channel.
It is important to note that the FTP client receives a random port number from the FTP server. Because the FTP server cannot return a consistent port number to use with the FTP data channel, RiOS does not support QoS Classification for passive FTP in versions earlier than RiOS v4.1.8, v5.0.6, or v5.5.1. Newer RiOS releases support passive FTP and the QoS Classification configuration for passive FTP is the same as active FTP.
When configuring QoS Classification for passive FTP, port 20 on both the server-side and client-side Steelhead appliances simply denotes the port number being used by the data channel for passive FTP, as opposed to the literal meaning of source or destination port 20.
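Below is a toy classifier in Python capturing the active FTP port logic described above: match source port 20 on the server-side appliance and destination port 20 on the client-side appliance. The function and field names are illustrative and are not Riverbed configuration syntax.

def classify_ftp_data(side, src_port, dst_port):
    """Return True if a packet belongs to the active FTP data channel."""
    if side == "server":          # server-side Steelhead: server sends
        return src_port == 20     # the data channel FROM port 20
    if side == "client":          # client-side Steelhead: outbound packets
        return dst_port == 20     # head TOWARD the server's port 20
    return False

# The server opens the data channel from port 20 toward the client's port:
print(classify_ftp_data("server", src_port=20, dst_port=51234))   # True
print(classify_ftp_data("client", src_port=51234, dst_port=20))   # True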

Riverbed Steelhead Adaptive Data Streamlining Modes

  • Default. This setting is enabled by default and works for most implementations. The default setting:
  1. Provides the most data reduction.
  2. Reduces random disk seeks and improves disk throughput by discarding very small data margin segments that are no longer necessary. This Margin Segment Elimination (MSE) process provides network-based disk defragmentation.
  3. Writes large page clusters.
  4. Monitors the disk write I/O response time to provide more throughput.
  • SDR-Adaptive. Specify this setting to include the default settings and also:
  1. Balance writes and reads.
  2. Monitor both read and write disk I/O response and, based on statistical trends, employ a blend of disk-based and non-disk-based data reduction techniques to enable sustained throughput during periods of disk-intensive workloads.
Important: Use caution with the SDR-Adaptive setting, particularly when you are optimizing CIFS or NFS with prepopulation. Contact Riverbed Technical Support for more information.
  • SDR-M. Performs data reduction entirely in memory, which prevents the Steelhead appliance from reading and writing to and from the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. SDR-M is most efficient when used between two identical high-end Steelhead appliance models; for example, 6050 - 6050. When used between two different Steelhead appliance models, the smaller model limits the performance.
Important: You cannot use peer data store synchronization with SDR-M.

Riverbed Steelhead In-Path Rule - Neural Framing Mode

Optionally, if you have selected Auto-Discover or Fixed Target, you can select a neural framing mode for the in-path rule. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer. You can specify the following neural framing settings:
  • Never. Never use the Nagle algorithm. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used.
  • Always. Always use the Nagle algorithm. All data is passed to the codec, which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but are not used.
  • TCP Hints. This is the default setting, which is based on TCP hints. If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but are not used.
  • Dynamic. Dynamically adjust the Nagle parameters. In this option, the system discerns the optimal algorithm for a particular type of traffic and switches to the best algorithm based on changes in traffic characteristics.
For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance.
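As a rough sketch of the default TCP Hints policy, the following Python toy coalesces payload for better fingerprinting but flushes immediately when a packet carries the TCP PUSH flag or is a partial frame. The MSS threshold and names are assumptions for illustration, not the RiOS implementation.

MSS = 1460   # assumed full-frame payload size for the example

class TcpHintsFramer:
    def __init__(self, encoder):
        self.buffer = bytearray()
        self.encoder = encoder            # callable that encodes a chunk

    def on_packet(self, payload, push_flag):
        self.buffer.extend(payload)
        # PUSH flag or a partial frame hints at an application boundary.
        if push_flag or len(payload) < MSS:
            self.flush()

    def flush(self):
        if self.buffer:
            self.encoder(bytes(self.buffer))
            self.buffer.clear()

framer = TcpHintsFramer(encoder=lambda chunk: print("encode", len(chunk), "bytes"))
framer.on_packet(b"x" * 1460, push_flag=False)   # full frame: keep buffering
framer.on_packet(b"x" * 900, push_flag=True)     # PSH: flush 2360 bytes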
To configure neural framing for an FTP data channel, define an in-path rule with the destination port 20 and set its optimization policy. To configure neural framing for a MAPI data channel, define an in-path rule with the destination port 7830 and set its optimization policy.

Dec 14, 2009

TCP Vegas

The TCP sender uses the RTT to estimate the queue length between the sender and the receiver, and adjusts the congestion window accordingly. The main modifications are in three areas:
1. Slow Start: cwnd doubles only about once every two RTTs.
2. Congestion Avoidance: Vegas computes Diff by comparing the expected rate with the actual sending rate, and constrains Diff to lie between alpha and beta. If Diff < alpha, it increases the sending rate; conversely, if Diff > beta, it decreases the sending rate.
3. It uses observed RTT values to judge whether a packet has timed out.

Source: http://admin.csie.ntust.edu.tw/IEET/syllabus/course/962_CS5021701_106_5pyq5YiG5oiQ57i+6auY5L2OMS5wZGY=.pdf
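The Vegas congestion-avoidance arithmetic can be sketched as follows, assuming the usual definitions Expected = cwnd/BaseRTT and Actual = cwnd/RTT, with alpha and beta expressed in packets (commonly 1 and 3).

def vegas_update(cwnd, base_rtt, rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt              # rate if nothing were queued
    actual = cwnd / rtt                     # rate actually measured this RTT
    diff = (expected - actual) * base_rtt   # estimated packets sitting in queues
    if diff < alpha:
        return cwnd + 1                     # path underused: speed up
    if diff > beta:
        return cwnd - 1                     # queue building: slow down
    return cwnd                             # within [alpha, beta]: hold steady

print(vegas_update(cwnd=10, base_rtt=0.100, rtt=0.105))  # little queueing: grows to 11
print(vegas_update(cwnd=10, base_rtt=0.100, rtt=0.150))  # queue building: shrinks to 9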

Dec 13, 2009

When Are ICMP Redirects Sent?

How ICMP Redirect Messages Work

ICMP redirect messages are used by routers to notify the hosts on the data link that a better route is available for a particular destination.
For example, suppose two routers, R1 and R2, are connected to the same Ethernet segment as Host H. The default gateway for Host H is configured to use router R1. Host H sends a packet to router R1 to reach the destination Host 10.1.1.1 at the remote branch office.
Router R1, after consulting its routing table, finds that the next hop to reach Host 10.1.1.1 is router R2.
Now router R1 must forward the packet out the same Ethernet interface on which it was received. Router R1 forwards the packet to router R2 and also sends an ICMP redirect message to Host H.
This informs the host that the best route to reach Host 10.1.1.1 is by way of router R2.
Host H then forwards all subsequent packets destined for Host 10.1.1.1 to router R2.
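The redirect condition in this example boils down to a simple rule: send a redirect when the packet must be forwarded back out the interface it arrived on. The Python below is a toy illustration of that rule; real routers also check subnets, redirect configuration, and packet type.

def forward(routing_table, dst_ip, in_iface):
    """Look up the next hop; flag a redirect if egress == ingress interface."""
    next_hop, out_iface = routing_table[dst_ip]
    send_redirect = (out_iface == in_iface)   # sender has a better first hop
    return next_hop, send_redirect

# R1's table: 10.1.1.1 is reached via R2, out the same Ethernet as Host H.
table = {"10.1.1.1": ("R2", "eth0")}
print(forward(table, "10.1.1.1", in_iface="eth0"))  # ('R2', True)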