
Hierarchical Packet Fair Queueing (H-PFQ) vs Hierarchical Fair Service Curve (H-FSC).

While most previous research has focused on providing Quality of Service (QoS) on a per-session basis, there is a growing need to also support hierarchical link-sharing, that is, QoS guarantees for traffic aggregates (such as those belonging to the same organization, service provider, or application family). Supporting QoS for both single sessions and traffic aggregates is difficult because the network must meet multiple QoS requirements at different granularities simultaneously. The problem is exacerbated by the fact that there are no formal models specifying all of these requirements. We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC).
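To make the link-sharing goal concrete, here is a minimal, illustrative sketch (not the H-PFQ or H-FSC algorithms themselves) of a two-level hierarchy in which each class has a guaranteed rate and any excess link capacity is split among siblings in proportion to those guarantees; the class names and numbers are hypothetical.

```python
# Illustrative sketch only: a highly simplified view of hierarchical link-sharing,
# not the actual H-PFQ or H-FSC algorithms. Class names, rates, and the
# proportional-excess rule are assumptions made for illustration.

class ShareClass:
    def __init__(self, name, guaranteed, children=None):
        self.name = name
        self.guaranteed = guaranteed      # guaranteed rate in Mbit/s
        self.children = children or []

def distribute(node, capacity):
    """Give each child its guarantee, then split any excess in proportion
    to the guarantees (one common link-sharing policy)."""
    total_guaranteed = sum(c.guaranteed for c in node.children)
    excess = max(capacity - total_guaranteed, 0)
    alloc = {}
    for c in node.children:
        share = c.guaranteed + excess * (c.guaranteed / total_guaranteed)
        alloc[c.name] = share
        if c.children:
            alloc.update(distribute(c, share))
    return alloc

# A 100 Mbit/s link shared by two organizations, each with real-time and
# best-effort children (hypothetical numbers).
link = ShareClass("link", 100, [
    ShareClass("orgA", 60, [ShareClass("orgA.realtime", 20),
                            ShareClass("orgA.besteffort", 20)]),
    ShareClass("orgB", 30, [ShareClass("orgB.realtime", 10),
                            ShareClass("orgB.besteffort", 10)]),
])

print(distribute(link, 100))
```

The actual H-PFQ and H-FSC algorithms go well beyond a proportional split like this; in particular, they also aim to meet per-session real-time (delay) guarantees while the hierarchy shares the link.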

EMC Data Domain

Data Domain provides customers with a hardware appliance for disk-based backup and restore within a Disk Staging backup architecture. It is built on serial ATA (SATA) disk technology and ships with the DD OS operating system, so it is more than a low-cost, high-performance disk array with RAID 6: its Capacity Optimized Storage technology greatly reduces the space needed to store the actual data, and its Data Invulnerability Architecture provides an unprecedented level of multi-layered data protection. Capacity Optimized Storage is further applied to data replication when building an off-site disaster-recovery architecture, freeing customers from the shortcomings of traditional data-protection designs. These three distinctive capabilities bring the storage cost per GB close to that of automated tape backup equipment, while offering a more complete protection mechanism and architecture than traditional data protection. Simply put, it is a product designed to meet the special requirements of dedicated backup and restore appliances. Capacity Optimized Storage: backup data is broken apart and only the unique data segments are stored, greatly reducing the required space and cost; the compression ratio can effectively reach 20:1.
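The short sketch below illustrates the general idea behind capacity-optimized (deduplicating) storage described above: the backup stream is cut into segments, each segment is fingerprinted, and only previously unseen segments are stored. The fixed 4 KB segment size and SHA-1 fingerprints are assumptions for illustration, not a description of DD OS.

```python
# A minimal sketch of deduplicating backup storage: store only unique segments.
# Fixed 4 KB segments and SHA-1 fingerprints are illustrative assumptions.
import hashlib

store = {}                      # fingerprint -> segment data (the "unique" pool)

def backup(stream: bytes, segment_size: int = 4096):
    refs = []                   # the backup becomes a list of fingerprints
    for i in range(0, len(stream), segment_size):
        seg = stream[i:i + segment_size]
        fp = hashlib.sha1(seg).hexdigest()
        if fp not in store:     # only new, unique segments consume space
            store[fp] = seg
        refs.append(fp)
    return refs

# Two nearly identical "backups" mostly reuse the same stored segments.
day1 = backup(b"A" * 40960)
day2 = backup(b"A" * 36864 + b"B" * 4096)
print(len(store), "unique segments stored for", len(day1) + len(day2), "references")
```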

EMC Avamar

EMC Avamar is backup software with built-in data deduplication, aimed at remote-office backup environments. Through processing performed by agents on the front-end systems, it removes redundancy from the backup data, greatly reducing the network bandwidth consumed by backup transfers and the amount of storage media used. Platform support is very broad: agents are available for Windows, Linux, Solaris, HP-UX, AIX and the other major Unix platforms, Mac OS X, and VMware environments, and applications such as DB2, Exchange, SQL Server, and Oracle are also supported, so the product fits most enterprise IT environments.

Centralized backup management. Avamar originated as Axion before Avamar was acquired by EMC, and it is a typical centrally managed backup package with a client-server architecture. The Avamar server-side software must be installed on Red Hat Enterprise Linux AS/ES 3.0 on the IA32 platform, while client agent support is very broad. Once installation is complete, an administrator can, from the Avamar server, start backup jobs on the agent-equipped front-end systems, sending the specified data over the network to storage devices controlled by the Avamar server. Besides server-initiated backups, users on the front-end systems can also start backups themselves, and administrators can log in to the Avamar server from other computers through a browser to carry out the various management tasks. Because Avamar is essentially backup software, apart from deduplication its features are similar to those of ordinary backup products, such as job status monitoring, reporting, and storage pool management, and operation is fairly straightforward. To simplify deployment and maintenance, EMC can ship application servers preloaded with Avamar, and the vendor has also validated hardware compatibility with several off-the-shelf servers from IBM, HP, and Dell.

Excellent deduplication efficiency. Elimination of redundant data is Avamar's biggest distinguishing feature; block-level comparison effectively removes the duplicate portions underlying the data. Because the comparison is performed by the agent, the data sent onto the network has already been deduplicated, which greatly reduces the bandwidth required. Moreover, the agent does not only compare data on the front-end host: it compares the fingerprints of the front-end data against the fingerprints of data already stored on the server, so the comparison spans the entire Avamar storage area. This is what is called "global compression". Avamar showed impressive data reduction in testing and was one of the best-performing products in this round of tests, which may be related to its adjustable data segmentation: when analyzing files, the system can automatically segment and compare data using a "window" of 1 to 64 KB, adapting to different data types. Text by 張明德
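As a rough illustration of the variable-size segmentation and global fingerprint comparison described above, the sketch below chunks data with a toy rolling hash so boundaries follow the content, then sends only chunks whose fingerprints the server has not already seen. The hash, the boundary mask, and the 1-64 KB limits are illustrative assumptions, not Avamar's actual algorithm.

```python
# Content-defined chunking plus "global" fingerprint lookup, as a toy sketch.
import hashlib

MIN_CHUNK, MAX_CHUNK = 1 * 1024, 64 * 1024
BOUNDARY_MASK = 0x1FFF          # on average one boundary every ~8 KB

def chunk(data: bytes):
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF   # toy rolling hash
        length = i - start + 1
        at_boundary = (rolling & BOUNDARY_MASK) == 0 and length >= MIN_CHUNK
        if at_boundary or length >= MAX_CHUNK:
            chunks.append(data[start:i + 1])
            start, rolling = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def backup(data: bytes, server_fingerprints: set):
    """Return only the chunks the server does not already have."""
    new_chunks = []
    for c in chunk(data):
        fp = hashlib.sha1(c).hexdigest()
        if fp not in server_fingerprints:
            server_fingerprints.add(fp)
            new_chunks.append(c)
    return new_chunks

server = set()
first = backup(b"hello world " * 10000, server)
second = backup(b"hello world " * 10000 + b"appended bytes", server)
print(len(first), "chunks sent on day 1,", len(second), "on day 2")
```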

Riverbed Steelhead QoS Class Queue Methods

Optionally, select one of the following queue methods for the class from the drop-down list:

SFQ. Shared Fair Queueing (SFQ) is the default queue for all classes. It determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to the others, preventing bursty flows from starving other flows within the QoS class.

FIFO. Transmits all flows in the order that they are received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic, and potentially network control and signaling messages as well.

MX-TCP. Has very different use cases than the other queue parameters. MX-TCP also has secondary effects that you need to understand before configuring: When optim
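As a rough illustration of the SFQ behavior described above, the sketch below keeps one queue per flow and, when the class's outbound queue exceeds its configured length, drops packets from the flows in round-robin order, so a quiet flow is not starved by a bursty one. The data structures and drop details are assumptions, not Riverbed's implementation.

```python
# Toy per-flow queueing with round-robin drops when the class queue overflows.
from collections import defaultdict, deque
from itertools import cycle

class FairClassQueue:
    def __init__(self, queue_length):
        self.queue_length = queue_length
        self.flows = defaultdict(deque)          # flow id -> its queued packets

    def total(self):
        return sum(len(q) for q in self.flows.values())

    def enqueue(self, flow_id, packet):
        self.flows[flow_id].append(packet)
        # Over the limit: drop one packet per flow, round-robin, until we fit.
        rr = cycle(list(self.flows))
        while self.total() > self.queue_length:
            victim = next(rr)
            if self.flows[victim]:
                self.flows[victim].popleft()

q = FairClassQueue(queue_length=4)
for i in range(6):
    q.enqueue("bursty", f"b{i}")      # one flow bursts
q.enqueue("interactive", "i0")        # the quiet flow still keeps its packet
print({f: list(p) for f, p in q.flows.items()})
```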

Riverbed Steelhead QoS Classification for the FTP Data Channel

When configuring QoS classification for FTP, the QoS rules differ depending on whether the FTP data channel uses active or passive FTP. Active versus passive FTP determines whether the FTP client or the FTP server selects the port used for the data channel, which has implications for QoS classification.

Active FTP Classification. With active FTP, the FTP client logs in and issues the PORT command, informing the server which port it must use to connect back to the client for the FTP data channel. Next, the FTP server initiates the connection toward the client. From a TCP perspective, the server and the client swap roles: the FTP server becomes the client because it sends the SYN packet, and the FTP client becomes the server because it receives the SYN packet. Although not defined in the RFC, most FTP servers use source port 20 for the active FTP data channel. For active FTP, configure a QoS rule on the server-side Steelhea
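The toy classifier below illustrates the point about active FTP made above: because the server initiates the data connection, typically from source port 20, a rule can match on source port rather than destination port. The rule table and classify() helper are hypothetical and are not Steelhead configuration syntax.

```python
# Hypothetical QoS rule matching for FTP traffic: the active data channel is
# recognized by its source port (20), the control channel by destination port 21.

RULES = [
    # (description,             match field, port, assigned QoS class)
    ("FTP control channel",     "dst_port",  21,   "interactive"),
    ("Active FTP data channel", "src_port",  20,   "bulk-transfer"),
]

def classify(packet):
    for desc, field, port, qos_class in RULES:
        if packet.get(field) == port:
            return qos_class, desc
    return "default", "no rule matched"

# Server -> client SYN for the active data channel (roles swapped, as above).
syn_from_server = {"src_port": 20, "dst_port": 52311}
print(classify(syn_from_server))
```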

Riverbed Steelhead Adaptive Data Streamlining Modes

Default. This setting is enabled by default and works for most implementations. The default setting provides the most data reduction; reduces random disk seeks and improves disk throughput by discarding very small data margin segments that are no longer necessary (this Margin Segment Elimination, or MSE, process provides network-based disk defragmentation); writes large page clusters; and monitors the disk write I/O response time to provide more throughput.

SDR-Adaptive. Specify this setting to include the default settings and also balance writes and reads. The system monitors both read and write disk I/O response and, based on statistical trends, can employ a blend of disk-based and non-disk-based data reduction techniques to enable sustained throughput during periods of highly disk-intensive workloads. Important: use caution with this setting, particularly when you are optimizing CIFS or NFS with prepopulation. Contact Riverbed Technical Support for more information.

SDR-M. Performs data reduction
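The fragment below is only a rough sketch of the Margin Segment Elimination idea mentioned under the Default setting: very small leftover segments are skipped rather than written, reducing random disk seeks at a small cost in data reduction. The threshold and data model are invented for illustration.

```python
# Toy illustration of "discard very small margin segments before writing".
MARGIN_THRESHOLD = 256   # hypothetical minimum useful segment size, in bytes

def segments_to_write(segments):
    """Keep only segments worth a disk write; tiny margins are discarded."""
    kept = [s for s in segments if len(s) >= MARGIN_THRESHOLD]
    discarded = len(segments) - len(kept)
    return kept, discarded

segs = [b"x" * 4096, b"y" * 100, b"z" * 8192, b"w" * 32]
kept, discarded = segments_to_write(segs)
print(f"writing {len(kept)} segments, discarded {discarded} tiny margin segments")
```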

Riverbed Steelhead In-Path Rule - Neural Framing Mode

Optionally, if you have selected Auto-Discover or Fixed Target, you can select a neural framing mode for the in-path rule. Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of time that data sits idle in the buffer. You can specify the following neural framing settings: Never. Never use the Nagle algorithm. All data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used. Always. Always use the Nagle algorithm. All data is passed to the codec, which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A t
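The toy model below contrasts the two extremes described above: "Never" flushes the buffer on every write, while "Always" coalesces small writes Nagle-style until a size threshold or timer is reached. The threshold and timer values are made up; this is not the appliance's actual heuristic.

```python
# Toy framing buffer contrasting immediate flush ("never") with Nagle-like
# coalescing ("always"). Threshold and delay values are illustrative only.
import time

class FramingBuffer:
    def __init__(self, mode, threshold=1460, max_delay=0.2):
        self.mode, self.threshold, self.max_delay = mode, threshold, max_delay
        self.buf, self.first_write = bytearray(), None

    def write(self, data: bytes):
        self.buf += data
        self.first_write = self.first_write or time.monotonic()
        if self.mode == "never":                       # encode immediately
            return self.flush()
        if self.mode == "always":                      # Nagle-like coalescing
            full = len(self.buf) >= self.threshold
            timed_out = time.monotonic() - self.first_write >= self.max_delay
            if full or timed_out:
                return self.flush()
        return None

    def flush(self):
        out, self.buf, self.first_write = bytes(self.buf), bytearray(), None
        return out

b = FramingBuffer("always")
print(b.write(b"a" * 100))        # None: held back, waiting to coalesce
print(len(b.write(b"a" * 1400)))  # 1500: threshold reached, flushed as one frame
```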

TCP Vegas

The TCP sender uses the RTT to estimate the length of the queue along the path from sender to receiver and adjusts the congestion window accordingly. The main modifications are three: Slow Start: cwnd doubles only about every two RTTs. Congestion Avoidance: Vegas computes a value Diff by comparing the expected rate with the actual sending rate and keeps Diff between alpha and beta; if Diff < alpha, the sending rate is increased, and conversely, if Diff > beta, the sending rate is decreased. Loss detection: the sender also observes the RTT values to judge whether a packet timeout has occurred. Source: http://admin.csie.ntust.edu.tw/IEET/syllabus/course/962_CS5021701_106_5pyq5YiG5oiQ57i+6auY5L2OMS5wZGY=.pdf
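A minimal sketch of the Vegas congestion-avoidance rule described above: Diff is computed from the expected and actual rates and compared against alpha and beta to grow, hold, or shrink cwnd. The alpha/beta values and units below are the commonly cited ones, used purely for illustration.

```python
# Simplified TCP Vegas congestion-avoidance update (illustrative values).
ALPHA, BETA = 2, 4   # thresholds in "extra packets queued in the network"

def vegas_update(cwnd, base_rtt, current_rtt):
    expected = cwnd / base_rtt          # rate if there were no queueing
    actual = cwnd / current_rtt         # measured rate this RTT
    diff = (expected - actual) * base_rtt
    if diff < ALPHA:        # path under-used: speed up
        return cwnd + 1
    if diff > BETA:         # queue building up: slow down
        return cwnd - 1
    return cwnd             # within [alpha, beta]: hold steady

cwnd = 20
for rtt in (0.100, 0.105, 0.140, 0.180):   # BaseRTT = 100 ms, RTT creeping up
    cwnd = vegas_update(cwnd, 0.100, rtt)
    print(f"RTT={rtt*1000:.0f} ms -> cwnd={cwnd}")
```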

When Are ICMP Redirects Sent?

How ICMP Redirect Messages Work. ICMP redirect messages are used by routers to notify hosts on the data link that a better route is available for a particular destination. For example, suppose two routers, R1 and R2, are connected to the same Ethernet segment as Host H, and Host H's default gateway is configured to use router R1. Host H sends a packet to router R1 to reach the destination 10.1.1.1, a host at a remote branch office. Router R1, after consulting its routing table, finds that the next hop to reach 10.1.1.1 is router R2, so it must forward the packet out the same Ethernet interface on which it was received. Router R1 forwards the packet to router R2 and also sends an ICMP redirect message to Host H, informing the host that the best route to 10.1.1.1 is by way of router R2. Host H then forwards all subsequent packets destined for 10.1.1.1 to router R2.
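A simplified sketch of the decision described above: the router sends an ICMP redirect when it must forward a packet back out the interface it arrived on and the better next hop sits on the sender's own subnet. The addresses, routing table, and helper names are hypothetical, and real routers apply additional conditions.

```python
# Toy model of the ICMP redirect decision at router R1.
import ipaddress

ROUTES = {ipaddress.ip_network("10.1.1.0/24"): ("192.168.1.2", "eth0")}  # via R2
LAN = ipaddress.ip_network("192.168.1.0/24")                             # shared Ethernet

def forward(src_ip, dst_ip, in_iface):
    for net, (next_hop, out_iface) in ROUTES.items():
        if ipaddress.ip_address(dst_ip) in net:
            send_redirect = (in_iface == out_iface and
                             ipaddress.ip_address(src_ip) in LAN and
                             ipaddress.ip_address(next_hop) in LAN)
            return next_hop, send_redirect
    return None, False

# Host H (192.168.1.10) sends via R1; R1's best next hop R2 is on the same LAN.
next_hop, redirect = forward("192.168.1.10", "10.1.1.1", "eth0")
print(f"forward to {next_hop}, send ICMP redirect: {redirect}")
```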