By Subhasish Gupta

Today's datacentre environment encompasses everything from small in-house network servers and data storage to large, purpose-built facilities providing outsourced data management and storage. In all cases the key requirement remains the fast and reliable transfer of data, making the most efficient use of the network. Ethernet has become ever more ubiquitous in the datacentre, thanks to its wide deployment and performance that now matches more traditional options. With an Ethernet network already in place, the ability to leverage this existing infrastructure for server-storage data transfer adds value and simplifies management by converging data transfer onto a single physical network.
The iSCSI (Internet Small Computer Systems Interface) protocol enables SCSI packets to traverse an IP Ethernet network, facilitating data transfer between application servers and centralised storage. Using the existing network in this way consolidates infrastructure and overcomes distance limitations, presenting disk drives in the SAN as if they were local to the physical server hardware.

iSCSI provides a number of benefits:
– It lowers storage costs by leveraging the existing network.
– It uses familiar Ethernet and IP networking standards and protocols, simplifying management and troubleshooting.
– It is an industry standard, and is widely supported.
– It removes distance limitations, enabling remote backup where required.
– It provides scalable performance, as Ethernet networks are available at 10Gbps and beyond, with 40Gbps Ethernet becoming commonplace in the datacentre.

With its many benefits, iSCSI has become instrumental in simplifying the deployment of datacentre networks and their storage. Using virtualised servers in conjunction with an iSCSI SAN allows resources to be allocated based on the requirements of different applications at any given time. This in turn requires Ethernet switches that can support high-speed iSCSI throughput, and prioritise and protect this critical data on the network. Choosing the right network equipment is paramount to implementing a superior, high-performing datacentre solution.

While many switch vendors tout packet buffers as the be-all and end-all of iSCSI performance, there is always the chance that the buffers will fill, and when this happens packets will be dropped. iSCSI uses the Transmission Control Protocol (TCP) for transmission across the network, and TCP does a number of things when a packet is dropped. The first is that the dropped packets never arrive at their destination and must be retransmitted, adding latency while this happens; this is inconvenient, but not the end of the world. The second is that the TCP sliding window closes, which lowers the effective throughput of the connection; this is a much bigger deal, as it has a significant impact on performance. While the window will open again, if packets are dropped on a semi-regular basis the effective bandwidth through the network will be continuously limited.

A far better solution is to use flow control to pause transmission at the sender, which ensures packets are not lost and results in far better performance and exceptional support for iSCSI traffic. Hardware-based flow control maximises throughput and minimises latency. The key to maximising iSCSI throughput is to never drop packets, that is, to prevent any link oversubscription. Achieving this requires highly responsive flow control, whereby the Ethernet switch can very carefully control the rate of data delivery from the SAN. Optimising flow control across a range of switches improves its responsiveness and accuracy, and ensures optimum performance for iSCSI traffic: the switch guides the storage units to send at a consistently high data rate that never quite oversubscribes the switch's ports, eliminating any burstiness in iSCSI SAN-to-server traffic.
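To put a rough figure on the cost of dropped packets, the sketch below applies the widely cited Mathis approximation for loss-limited TCP throughput (rate ≈ MSS/RTT × C/√p) to a single iSCSI flow. The 10Gbps link speed, 1460-byte segment size, 0.5ms round-trip time and the constant C are illustrative assumptions, not figures from this article or any specific product.

```python
# Illustrative sketch only: assumed example values, not vendor or article figures.
# Mathis et al. approximation for loss-limited TCP throughput:
#   rate <= (MSS / RTT) * C / sqrt(p), with C ~= 1.22
from math import sqrt

LINK_RATE_BPS = 10e9      # assumed 10Gbps Ethernet link
MSS_BYTES = 1460          # typical TCP maximum segment size
RTT_SECONDS = 0.0005      # assumed 0.5ms round-trip time within the datacentre
C = 1.22                  # constant from the approximation

def loss_limited_throughput_bps(loss_probability: float) -> float:
    """Rough upper bound on one TCP flow's throughput at a given loss rate."""
    if loss_probability <= 0:
        return LINK_RATE_BPS                 # no drops: limited by line rate only
    rate = (MSS_BYTES * 8 / RTT_SECONDS) * C / sqrt(loss_probability)
    return min(rate, LINK_RATE_BPS)          # cannot exceed the link speed

for p in (0.0, 1e-5, 1e-4, 1e-3, 1e-2):
    gbps = loss_limited_throughput_bps(p) / 1e9
    print(f"loss rate {p:.5f}  ->  about {gbps:5.2f} Gbps")
```

Under these assumptions, a loss rate of just one packet in ten thousand caps a single flow below 3Gbps on a 10Gbps link, and one in a thousand caps it below 1Gbps, which illustrates why pausing the sender with flow control is preferable to letting buffers overflow and drop traffic.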
Along with optimal performance and delivery of storage traffic, a number of other factors are also important in a storage environment. A storage network is of little use if it is not available, so availability and redundancy matter just as much as performance for iSCSI traffic. There are several options for ensuring availability within a network.

The first is to use a chassis switch with N+1 redundancy, which ensures that every component within the switch is duplicated and that there is no single point of failure. Links between the network, servers and storage are likewise duplicated, which provides the added benefit of increasing the bandwidth between nodes under most circumstances.

The second option is to extend the virtualisation concept to the network and deploy virtual chassis stacking, which allows multiple switches to appear and act as a single virtual chassis. Used in conjunction with aggregated links to servers and upstream switches, virtual chassis stacking eliminates any single point of failure, while the full power of the network is always utilised.

Extending this concept further, it is important that the power supplies within the switches (which have been deployed in a redundant fashion) are connected to separate power rails within the rack, and that these in turn are connected to separate Uninterruptible Power Supplies (UPSs) and separate power circuits – ideally using different phases.

Beyond the resilience of the devices themselves, the network topology must also be resilient. This may mean using link aggregation, including cross-stack or cross-chassis link aggregation, or alternatively a highly resilient ring protection technology. Ethernet Protection Switched Ring (EPSR) delivers a highly resilient transport that can be used throughout a datacentre, or even between datacentres, and provides restoration in under 50 milliseconds in the event of a failure.
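To put the value of this duplication in rough numbers, the sketch below estimates availability when two independent paths are deployed in parallel. The 99.9% per-path figure is an assumed, illustrative value, not a measurement of any particular switch or link.

```python
# Illustrative sketch only: the 99.9% per-path availability is an assumed figure.
def parallel_availability(paths):
    """Availability when any one of several independent redundant paths suffices."""
    unavailability = 1.0
    for a in paths:
        unavailability *= (1.0 - a)   # service is down only if every path is down
    return 1.0 - unavailability

def downtime_minutes_per_year(availability):
    return (1.0 - availability) * 365 * 24 * 60

single_path = 0.999                   # assumed 99.9% availability for one switch/link path
redundant_pair = parallel_availability([single_path, single_path])

for label, a in (("single path", single_path), ("redundant pair", redundant_pair)):
    print(f"{label:14s}: {a:.6%} available, "
          f"about {downtime_minutes_per_year(a):.1f} minutes of downtime per year")
```

With these assumed figures, a single path implies roughly nine hours of downtime a year, while the redundant pair implies around half a minute. The model ignores failures that affect both paths at once, which is exactly why separate power rails, UPSs and power phases are recommended above.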
Finally, in a converged network, where a single physical network is deployed to support all applications, bandwidth provisioning and traffic prioritisation allow switches to intelligently manage any congestion that may occur, ensuring fast, reliable delivery of iSCSI storage traffic even where it must contend with bandwidth used by other applications.

The convergence of storage and network architectures in datacentres has brought many challenges to the equipment used in storage environments. When selecting a networking vendor to work with for your storage environment, and to ensure you have a future-proof and scalable network, consider the features and performance the vendor can provide, and make sure these meet or exceed your expectations and requirements.

(The author is country manager – India & SAARC at Allied Telesis.)