Centralising storage at the corporate datacentre makes all the sense in the world, particularly for information that must be consistently available to employees at headquarters and in branch offices, and sometimes to executives on the road. Replicating key data between multiple corporate datacentres so that important information is never lost is also sensible. So is periodically transmitting live backups to an offsite disaster recovery facility.
Such continuous movement of large volumes of data across the network would have been impractical just a few years ago. Now, it’s practically a necessity. For example, an effective disaster recovery strategy requires that all data be stored redundantly at multiple physical locations. In the event of a disaster that destroys data stored at one location, there will always be a second, redundant copy of the lost data at a different physical location. Such a strategy ensures that a single cataclysmic event, such as a hurricane, flood, or terrorist attack, will never destroy all of a corporation’s data.
Traditional techniques of running nightly backups to tapes or other physical media and then trucking those tapes off to a secure location are starting to look positively quaint. Data volumes have grown to the point where backups can’t necessarily be accomplished in a nightly backup window. In any case, many businesses can’t tolerate losing even a few hours of corporate data or risk the possibility of tapes being lost in transit. That makes live incremental backups throughout the day a much better alternative. However, this transformation of how we manage data storage comes with its own complications. In particular, if not managed properly, highly distributed network storage can result in an excessive strain on wide area network (WAN) connections. That translates into poor performance for file transfer and replication protocols, as well as applications that depend on them. Meanwhile, a WAN circuit that is overloaded because a backup job is running may not provide the performance required for other applications to work properly.
With the advent of cloud storage services for backup and recovery, efficient use of network capacity becomes even more important because these services rely on the Internet, meaning they are even more subject to congestion and latency than applications on the corporate WAN.
The good news is that there are solutions to these challenges. WAN optimisation technology can squeeze more capacity out of whatever network bandwidth is available. Often, this technology is delivered in the form of an appliance – a network device preconfigured to run the software that will perform these optimisation tasks. In highly virtualised and cloud computing environments, it can also be delivered as a software-only virtual appliance that can be loaded into a virtual machine like any other application.
While these deployment options are largely a matter of preference, some of the more high-end appliances do have advantages like encryption co-processors for accelerating SSL encryption beyond what is possible with software alone.
The basic techniques of WAN optimisation include compression, data deduplication, and protocol-specific optimisations.
Like many enterprise applications, network file systems like the Windows Common Internet File System (CIFS) grew up around local area networks (LANs). In other words, they were designed to transmit files around the building or across the campus, not around the world. As a result, they are often unnecessarily “chatty,” meaning that they expect to carry on a rapid-fire conversation between client and server whenever they transmit a block of data. That is fine when each transmission and acknowledgement takes milliseconds on a LAN, but it becomes a problem when each of these exchanges is stretched over a longer connection and the delays start to add up.
One way around this is to deploy an optimisation appliance at either end of the WAN connection, which can transparently streamline these file transfer conversations and minimise the effect of network delays. The best of these devices can eliminate in the range of 65 to 98 per cent of network round trips.
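To see why protocol chattiness matters, consider a toy model (the numbers below are illustrative assumptions, not measurements from any particular product): if a file transfer requires one acknowledgement per 64 KB block, the time spent simply waiting on those round trips scales with the link's latency, not its bandwidth.

```python
# Illustrative model: for a "chatty" protocol, total wait time is
# dominated by the number of client-server round trips.
def transfer_wait_s(round_trips: int, rtt_ms: float) -> float:
    """Time spent waiting on acknowledgements alone, in seconds."""
    return round_trips * rtt_ms / 1000.0

# A 10 MB file moved in 64 KB blocks, one acknowledgement per block:
round_trips = (10 * 1024 * 1024) // (64 * 1024)  # 160 round trips

lan = transfer_wait_s(round_trips, rtt_ms=0.5)    # campus LAN
wan = transfer_wait_s(round_trips, rtt_ms=150.0)  # intercontinental WAN

print(f"LAN wait: {lan:.2f} s, WAN wait: {wan:.2f} s")
# If an optimisation appliance eliminates 90% of the round trips:
print(f"Optimised WAN wait: {transfer_wait_s(round_trips // 10, 150.0):.2f} s")
```

The same 160 acknowledgements that cost under a tenth of a second on the LAN cost 24 seconds over a 150 ms WAN link, which is why eliminating round trips, rather than adding raw bandwidth, is the lever that matters here.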
Look for products that support a wide range of network file system and replication protocols, including those associated with specific network storage products. For example, users of EMC’s Symmetrix V-MAX and DMX storage systems will want support for SRDF/A. This is EMC’s protocol for asynchronous file replication, often used for replication between datacentres or to disaster recovery facilities.
Along with protocol streamlining, WAN optimisation devices dramatically reduce the amount of data that must be transmitted, which both speeds transmission and lessens the load on the network. The bandwidth required to transmit a file can often be reduced by 60 to 95 per cent. This is achieved with a combination of compression and data deduplication techniques. For example, suppose you are backing up a directory of word processing documents that includes multiple drafts of the same press release. Your WAN appliance can identify when a block of data is identical to one that it has already transmitted. In those cases, instead of transmitting the data itself, it sends a reference that the appliance on the other end of the connection can use to retrieve that data from its cache. So if each draft of a document is 95 per cent the same as the last, with WAN optimisation you only need to stream the fraction that has changed across the network.
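The press-release example above can be sketched in a few lines of Python. This is a minimal illustration of the idea, assuming fixed-size 4 KB blocks and SHA-256 fingerprints; commercial appliances typically use more sophisticated variable-size, content-defined chunking, and the draft contents here are hypothetical.

```python
import hashlib

def dedup_stream(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks. Emit literal bytes the first
    time a block is seen; emit a short hash reference thereafter, which
    the appliance at the far end resolves from its local cache."""
    seen = set()
    out = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            out.append(("ref", digest[:8]))  # 8-byte reference, not 4 KB
        else:
            seen.add(digest)
            out.append(("data", block))
    return out

# Two hypothetical drafts that share an identical 8 KB section:
draft1 = b"A" * 8192 + b"X" * 4096
draft2 = b"A" * 8192 + b"Y" * 4096

sent = dedup_stream(draft1 + draft2)
literal = sum(len(payload) for kind, payload in sent if kind == "data")
print(f"{len(draft1 + draft2)} bytes in, {literal} literal bytes sent")
```

Only the blocks unique to each draft cross the wire in full; the shared section travels once and is thereafter represented by a reference a few bytes long.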
Further, even different documents might all contain a section that is always the same, which would only have to be transmitted once for many documents. At the same time, the data that is not duplicated can still be compacted – often by a factor of 100 – using standard file compression algorithms like Lempel-Ziv (LZ).
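As a rough illustration of LZ-family compression, Python's standard zlib module (whose DEFLATE algorithm combines LZ77 matching with Huffman coding) shrinks repetitive text such as shared boilerplate dramatically. The sample text below is invented for the example:

```python
import zlib

# Highly repetitive input, like boilerplate repeated across documents:
boilerplate = b"This press release contains forward-looking statements. " * 200

compressed = zlib.compress(boilerplate, level=9)
ratio = len(boilerplate) / len(compressed)
print(f"{len(boilerplate)} bytes -> {len(compressed)} bytes (~{ratio:.0f}x)")

# The compression is lossless: the original is recovered exactly.
assert zlib.decompress(compressed) == boilerplate
```

Real document collections will not compress this well, but the mechanism is the same: repeated sequences are replaced by short back-references to earlier occurrences.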
In combination, these protocol and deduplication optimisations often allow a single file to be copied across the WAN more than 250 times faster than would be possible without optimisation. An entire directory of files can be copied nearly 20 times faster, while lowering the bandwidth required by up to 98 per cent.
The payoff from these improvements comes every time a branch office employee downloads a procedures manual from the corporate datacentre, or a factory scheduling application retrieves the latest sales demand projections from headquarters. If a critical enterprise server crashes and must be restored from backups stored offsite, WAN optimisation will pay off in spades by allowing you to retrieve that backup data many times faster – and get back in business.
All these things are possible, for those who plan their network and storage strategies properly.
The author is Marketing Evangelist, APAC & Japan, Riverbed Technology.
Updated Date: Sep 19, 2011 16:11:40 IST