Jul 24, 2009

Riverbed Steelhead Password Recovery

To reset the password, you must have access to the serial console or a monitor, and you must be able to watch the entire boot process in order to perform these steps:

1. Start or reboot the appliance.

2. Once you see the "Press any key to continue" message, press a key.

3. Immediately press E.
You should see a GNU GRUB menu.

For a Steelhead upgraded to 4.0 from 2.0 or 3.0, the menu prompts you to select the Riverbed Steelhead, diagnostics, or a restore/recovery image. Select Riverbed Steelhead and skip to Step 5.

For a Steelhead manufactured with 4.0 (that has not had previous versions), the menu prompts you to select the disk image to use. Continue with Step 4.

For software versions prior to 4.0, the menu displays root and kernel parameters. Skip to Step 6.

4. Press the down-arrow (v) or up-arrow (^) key to select the disk image to boot.

5. Press E.
Another GRUB menu appears, with options similar to these:
------------------
0: root (hd0,1)
1: kernel /vmlinuz ro root=/dev/sda5 console=tty0 console=ttyS0,9600n8
-----------------


6. Press the down-arrow (v) or up-arrow (^) key to select the kernel boot parameters entry.

7. Press E to edit the kernel boot parameters.
You should see a partially filled-in line of text.

8. Append " single fastboot" to the end of this line. Note that the space before 'single' is very important. (And do not enter the quotes.)

9. The line of text will contain TWO "console=" entries. Delete the one containing "tty0" (unless you are using a keyboard/monitor on the Steelhead, in which case delete the one containing "ttyS0"). See the example after these steps.
TIP: Use the arrow keys to access the entire command line.

10. Press Enter.

11. Press the B key to continue booting.

The system starts.

12. Once at the command prompt, type "/sbin/resetpw.sh" and press Enter.

The password will be blank.

13. Type "reboot" and press Enter to reboot the appliance.
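
For reference, here is what the edits in Steps 8 and 9 look like, assuming the kernel line shown in Step 5 and a serial console session (so the "tty0" entry is the one removed):
------------------
Before: kernel /vmlinuz ro root=/dev/sda5 console=tty0 console=ttyS0,9600n8
After:  kernel /vmlinuz ro root=/dev/sda5 console=ttyS0,9600n8 single fastboot
------------------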

Riverbed gets Websense

Riverbed’s Saldich hopes the firm’s tie-up with Websense will set its solution apart as the market leader in comprehensiveness and flexibility.

Network infrastructure outfit Riverbed has joined forces with internet security firm Websense to offer web security with its WAN optimisation solutions.

Riverbed’s Services Platform (RSP), a virtualised data services platform integrated into its Steelhead appliance, will now come equipped with Websense Web Security. Both vendors have also stated that they plan to extend the relationship by adding Websense’s Hosted Web Security to Riverbed’s product range, in an attempt to provide the most “comprehensive, flexible and up-to-date” security solutions on the market.
The manufacturer points out that by installing Websense Web Security software on Riverbed’s Steelhead appliances, organisations can consolidate WAN and Web security deployments to further secure and boost their IT infrastructure.

“Through our virtualised RSP, enterprises can take advantage of core network services for branch offices, like best-of-breed Websense Web Security, and obtain extended value in the Riverbed Steelhead appliances,” proclaimed Alan Saldich, vice president of product marketing at Riverbed.

“Enterprises are executing on initiatives to reconcile the competing demands of IT consolidation and the growing distributed workforce. The RSP provides enterprises with a platform to address this challenge while cutting costs,” added Saldich.
Building on what it describes as a seamless, “serverless” deployment of the Websense solution at the branch level, end users also have the option of centralised reporting and policy management with the Websense V10000 secure Web gateway appliance in the central data centre.

“As we move forward with the next phase of the relationship, customers will have the choice of appliance, software or SaaS deployment options,” explained Ray Kruck, senior director of business development at Websense.

“The Websense and Riverbed joint offering is a powerful security solution architecture that protects essential information within Web-based applications and optimises its access and delivery across the distributed enterprise,” he concluded.

Jul 23, 2009

Tabular Data Stream (TDS)

From Wikipedia, the free encyclopedia

Tabular Data Stream (TDS) is an application layer protocol used to transfer data between a database server and a client. It was initially designed and developed by Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and was later adopted by Microsoft for Microsoft SQL Server.

Background

During the early development of Sybase SQL Server, the developers at Sybase realized that there was no commonly accepted application-level protocol to transfer data between a database server and its client. To encourage the use of their products, Sybase came up with a solution: a flexible pair of libraries, netlib and db-lib, to implement standard SQL, plus a further library, blk, to implement "Bulk Copy". While netlib's job is to ferry data between the two computers through the underlying network protocol, db-lib provides an API to the client program and communicates with the server via netlib. db-lib sends the server a structured stream of bytes meant for tables of data, hence a Tabular Data Stream. blk, like db-lib, provides an API to client programs and communicates with the server via netlib; unlike SQL, it uses a proprietary but much faster protocol for loading data into a database table.

In 1990, Sybase entered into a technology sharing agreement with Microsoft which resulted in Microsoft marketing its own SQL Server — Microsoft SQL Server — based on Sybase's code. Microsoft kept the db-lib API and added ODBC. (Microsoft has since added additional APIs.) At about the same time, Sybase introduced a more powerful "successor" to db-lib, called ct-lib, and called the pair Open Client.

The TDS protocol comes in several varieties, most of which had not been openly documented because they were considered to be proprietary technology. The exception was TDS 5.0, used exclusively by Sybase, for which documentation is available from Sybase. This state changed when Microsoft published the TDS specification, probably due to the Open Specification Promise.

Opportunistic Locking (OpLocks)

In the SMB protocol, Opportunistic Locking (also referred to as OpLocks) is a file locking mechanism designed to improve performance by controlling caching of files on the client. Unlike traditional locks, OpLocks are not used to provide mutual exclusion; rather, their main goal is to provide synchronization for caching.

Locking types
In the SMB protocol there are 3 types of Opportunistic Locks:
Batch Locks
Batch OpLocks were originally created to support a particular behavior of MS-DOS batch file execution, in which the file is opened and closed many times in a short period. This is an obvious performance problem. To solve it, a client may ask for a Batch type OpLock. In this case, the client delays sending the close request, and if a subsequent open request arrives, the two requests cancel each other out.
Exclusive Locks
When a client opens a file hosted on an SMB server that is not opened by any other process (or other clients), the client receives an Exclusive OpLock from the server. This means that the client may now assume that it is the only process with access to this particular file, and the client may now cache all changes to the file before committing them to the server. This is an obvious performance boost, since fewer round trips are required in order to read and write to the file. If another client/process tries to open the same file, the server sends a message to the client (called a break or revocation) which invalidates the exclusive lock previously given to the client. The client then flushes all changes to the file.
Level 2 OpLocks
If a file is opened by a third party while an Exclusive OpLock is held by a client, the client has to relinquish its exclusive OpLock to allow the other client's write/read access. A client may then receive a "Level 2 OpLock" from the server. A Level 2 OpLock allows the caching of read requests but excludes write caching.
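
To make the break behaviour concrete, here is a small, self-contained simulation. It is not a real SMB implementation, and the class and method names are invented for illustration; it only models a server granting an Exclusive OpLock to the first opener and breaking it down to Level 2 when a second client opens the same file.

class OplockServer:
    EXCLUSIVE = "exclusive"   # read and write caching allowed
    LEVEL2 = "level2"         # read caching only

    def __init__(self):
        self.locks = {}       # path -> list of (client, level)

    def open_file(self, client, path):
        holders = self.locks.setdefault(path, [])
        if not holders:
            # First opener: grant an Exclusive OpLock.
            holders.append((client, self.EXCLUSIVE))
            return self.EXCLUSIVE
        # Another client already has the file open: break any Exclusive OpLock
        # so its holder flushes cached changes, then everyone drops to Level 2.
        for i, (holder, level) in enumerate(holders):
            if level == self.EXCLUSIVE:
                holder.on_break(path)
                holders[i] = (holder, self.LEVEL2)
        holders.append((client, self.LEVEL2))
        return self.LEVEL2


class Client:
    def __init__(self, name):
        self.name = name

    def on_break(self, path):
        print(f"{self.name}: oplock break on {path}, flushing cached writes")


server = OplockServer()
a, b = Client("A"), Client("B")
print(server.open_file(a, "//server/share/report.doc"))   # -> exclusive
print(server.open_file(b, "//server/share/report.doc"))   # break sent to A -> level2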

Bandwidth-Delay Product (BDP)

From Wikipedia, the free encyclopedia

In data communications, bandwidth-delay product refers to the product of a data link's capacity (in bits per second) and its end-to-end delay (in seconds). The result, an amount of data measured in bits (or bytes), is equivalent to the maximum amount of data on the network circuit at any given time, i.e. data that has been transmitted but not yet received. Sometimes it is calculated as the data link's capacity times its round-trip time.[1]

Obviously, the bandwidth-delay product is higher for faster circuits with long-delay links such as GEO satellite connections. The product is particularly important for protocols such as TCP that guarantee reliable delivery, as it describes the amount of yet-unacknowledged data that the sender has to duplicate in a buffer memory in case the client requires it to re-transmit a garbled or lost packet.[2]

A network with a large bandwidth-delay product is commonly known as a long fat network (shortened to LFN and often pronounced "elephant"). As defined in RFC 1072, a network is considered an LFN if its bandwidth-delay product is significantly larger than 10^5 bits (~12 kB).

Examples

  • Customer on a DSL link, 1 Mbit/s, 200 ms one-way delay: 200 kbit = 25 kB
  • High-speed terrestrial network: 100 Mbit/s, 100 ms: 10 Mbit = 1.25 MB
  • Server on a long-distance 1 Gbit/s link, average one-way delay 300 ms = 300 Mbit = 37.5 MB total required for buffering
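
These figures can be verified with a quick Python sketch of BDP = capacity x delay; the link parameters are simply the ones listed above.

examples = [
    ("DSL link",                  1e6, 0.200),   # 1 Mbit/s, 200 ms one-way delay
    ("High-speed terrestrial",  100e6, 0.100),   # 100 Mbit/s, 100 ms
    ("Long-distance 1 Gbit/s",    1e9, 0.300),   # 1 Gbit/s, 300 ms one-way delay
]

for name, capacity_bps, delay_s in examples:
    bdp_bits = capacity_bps * delay_s
    print(f"{name}: {bdp_bits / 1e6:g} Mbit = {bdp_bits / 8 / 1e6:g} MB")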

    TCP performance depends on several factors. The two most important are the link bandwidth (the rate at which packets can be transmitted over the network) and the round-trip time, or RTT (the delay between sending a packet and receiving the response from the other end). These two values determine what is called the Bandwidth Delay Product (BDP).

    Given the link bandwidth and the RTT, you can calculate the BDP, but what does it mean? The BDP gives a simple way to calculate the theoretically optimal TCP socket buffer size (which holds the data queued for transmission as well as the data waiting to be received by the application). If the buffer is too small, the TCP window cannot open fully, which limits performance. If the buffer is too large, precious memory is wasted. If you set the buffer size just right, you can fully utilize the available bandwidth.

    Here is an example:
      BDP = link_bandwidth * RTT
    If an application communicates over a 100 Mbps LAN and the RTT is 50 ms, the BDP is:
      100 Mbps * 0.050 sec / 8 = 0.625 MB = 625 KB
    Note: dividing by 8 converts bits into the bytes used by the connection. So we can set the TCP window to the BDP, i.e. 625 KB. However, the default TCP window size on Linux 2.6 is 110 KB, which limits the connection's throughput to 2.2 MBps, calculated as follows:
      throughput = window_size / RTT
      110 KB / 0.050 = 2.2 MBps
    Using the window size calculated above instead, the throughput is 12.5 MBps:
      625 KB / 0.050 = 12.5 MBps
    The difference is indeed large and gives the socket much greater throughput. Now you know how to calculate the optimal buffer size for your sockets.
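
The same arithmetic can be checked with a short Python sketch; the figures below are exactly the ones from the example above.

link_bps = 100e6   # 100 Mbps LAN
rtt_s    = 0.050   # 50 ms round-trip time

bdp_bytes = link_bps * rtt_s / 8          # divide by 8 to convert bits to bytes
print(f"BDP = {bdp_bytes / 1e3:.0f} KB")  # -> 625 KB

# throughput = window_size / RTT
for window_kb in (110, 625):              # Linux 2.6 default window vs. BDP-sized window
    throughput_mbps = window_kb * 1e3 / rtt_s / 1e6
    print(f"window {window_kb} KB -> {throughput_mbps:.1f} MBps")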

Jul 21, 2009

SFQ vs FIFO vs MX-TCP Queuing in Riverbed Steelhead

Queuing is a method for prioritizing traffic. The following queuing mechanisms are supported on the Riverbed Steelhead appliance:

  • SFQ (Stochastic Fairness Queuing) SFQ is the default queue for all classes. SFQ services all flows in a round-robin fashion, reducing the latency for competing flows. SFQ ensures that each flow has fair access to network resources and prevents a bursty flow from consuming more than its fair share of output bandwidth.
  • FIFO Transmits all flows in the order that they are received (first in, first out). Bursty sources can cause long delays in delivering time-sensitive application traffic, and potentially in delivering network control and signaling messages.
  • MX-TCP MX-TCP, which stands for "Maximum Speed TCP", is an optional acceleration mode that allows Steelhead appliances to achieve maximum throughput in environments where it is a challenge to fill the pipe. It optimizes high-loss links where regular TCP would cause underutilization: with MX-TCP, the TCP congestion control algorithm is removed on the inner connections, which allows the link to be saturated much faster and eliminates the possibility of underutilizing it. Any class defined on the Steelhead appliance can be MX-TCP enabled; Link Share Weight and Upper BW do not apply to MX-TCP. Suitable environments include long fat networks, where the pipe is big but the delay or latency limits throughput, and WAN links that experience high packet loss. In simple terms, MX-TCP is TCP without the congestion control bottlenecks. Without congestion control, the optimized traffic does not have the "friendliness" of normal TCP, where flows behave well toward other traffic; MX-TCP simply blasts traffic through the link as fast as it can go. To mitigate this unfriendly behavior, MX-TCP is recommended for point-to-point, non-shared links, and it requires that you use the Steelhead appliance's QoS facility to control the MX-TCP traffic between the Steelhead appliances. You dial in how much bandwidth you want to use, and MX-TCP uses it!
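
To illustrate why regular TCP underutilizes a lossy long fat network, the sketch below applies the well-known Mathis approximation for a single congestion-controlled TCP flow, throughput ~ MSS / (RTT * sqrt(loss)). The link figures are illustrative assumptions, not Riverbed data.

from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    # Approximate steady-state rate of one congestion-controlled TCP flow.
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

link_bps  = 45e6     # 45 Mbit/s WAN link (illustrative)
rtt_s     = 0.200    # 200 ms round trip
loss_rate = 0.01     # 1% packet loss
mss_bytes = 1460

tcp_bps = mathis_throughput_bps(mss_bytes, rtt_s, loss_rate)
print(f"Standard TCP: ~{tcp_bps / 1e6:.2f} Mbit/s "
      f"({100 * tcp_bps / link_bps:.1f}% of the {link_bps / 1e6:.0f} Mbit/s link)")
# MX-TCP removes this constraint and instead uses whatever bandwidth the
# Steelhead QoS class allocates to it (e.g. the full link on a dedicated
# point-to-point circuit).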

Flat QoS vs Hierarchical QoS (H-QoS)

Flat QoS:

In flat QoS, all classes are created at the same level. When all classes are on the same level, the types of QoS policies that can be represented are limited. For example, suppose you want to create a hub with two classes representing remote offices in San Francisco and New York, and then apply a QoS policy to segregate traffic flow within the San Francisco office. Because you cannot define a subclass within the San Francisco spoke, the traffic flow is difficult to segregate.


H-QoS:

H-QoS provides a way to create a hierarchical QoS structure that supports parent and child classes. You can use a parent and child structure to segregate traffic for remote offices based on flow source or destination. This is a way to effectively manage and support remote sites with different bandwidth characteristics.

For example, a QoS hierarchy can represent a hub site that uses parent classes for the two spoke offices of San Francisco and New York. For each parent class, you can define child subclasses. The child subclasses can then use both rate shaping and prioritization to regulate the interaction of traffic at the site.
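
To make the parent/child structure concrete, here is a toy sketch of such a class tree in Python. This is not Riverbed configuration syntax; the class names, bandwidths and priorities are invented for illustration.

class QosClass:
    def __init__(self, name, bandwidth_kbps, priority=None):
        self.name = name
        self.bandwidth_kbps = bandwidth_kbps
        self.priority = priority
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

root = QosClass("WAN", 10_000)                        # 10 Mbit/s hub uplink
sf = root.add_child(QosClass("San Francisco", 6_000)) # parent class per remote site
ny = root.add_child(QosClass("New York",      4_000))
sf.add_child(QosClass("VoIP", 1_000, priority="realtime"))  # child classes shape and
sf.add_child(QosClass("CIFS", 5_000, priority="normal"))    # prioritize within the site
ny.add_child(QosClass("VoIP", 1_000, priority="realtime"))
ny.add_child(QosClass("HTTP", 3_000, priority="low"))

def show(cls, depth=0):
    print("  " * depth + f"{cls.name}: {cls.bandwidth_kbps} kbit/s"
          + (f" ({cls.priority})" if cls.priority else ""))
    for child in cls.children:
        show(child, depth + 1)

show(root)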

When to Use Flat versus H-QoS
Generally speaking, you create a flat QoS structure when you have a simple QoS schema or need to use link share weights.

You create an H-QoS structure when you:
  • have a complex QoS schema and need to simplify its management. The Management Console provides a view of the QoS class tree and its associated rules table for easy administration.
  • have remote sites that require a different mix of guaranteed bandwidth or latency for their applications.
  • want to share excess bandwidth between class groupings (remote sites).
  • have virtual in-path deployments; in this case H-QoS is required.