Datacentres today face growing demand for the latest servers, additional storage capacity and improved availability of mission-critical applications. Managing a diverse set of operating systems, storage arrays, virtual environments, databases and applications, whether that diversity stems from organic growth or from corporate mergers, has become overly complicated and extremely costly. Non-standardised Network-Critical Physical Infrastructure (NCPI) in datacentres is a growing concern for CIOs. Standardising infrastructure software can help reduce this complexity.
Rationale for Standardisation
According to experts, standardisation can increase operational efficiency by providing comprehensive visibility and centralised management of applications, servers and storage across multiple hosts, and by improving storage utilisation across heterogeneous storage arrays. Standardising on a single infrastructure layer frees a company's data from underlying technology changes, providing the flexibility to migrate data dynamically to different tiers of storage and to perform seamless data migrations across different server architectures. Additionally, customers are not forced to discard previously acquired technology; they can use existing technology assets without losing agility.
Says Sriram Iyer, director, Datacentre Management Group, Symantec, “Interfaces and commands for native operating system tools differ from one file system or volume manager to the next. System and storage administrators are often siloed into one specific platform, or need training to become expert in each. In addition, servers are managed individually; updating one parameter or monitoring a file system’s utilisation requires logging into each server. With a standardised layer, by contrast, an administrator can, from a single interface, identify and migrate data volumes from one storage array to another for all attached servers. The cost of training and maintaining the skill sets to optimally manage disparate operating system-specific solutions is therefore greatly reduced.”
Standardisation Reduces Costs
Rajesh Dhar, country manager, HP, says, “Standardisation brings costs down in various ways. The closer enterprises can be to standards, the more they can deliver at lower cost. With standards-based software, computing and datacentres, it becomes easier for CIOs and CTOs to manage IT, because skilled people to work on these systems are more readily available. They won’t need to depend on the vendor for changes or upgrades; they can have people in-house to maintain the same.”
In datacentres today, blades are driving standardisation, as storage is increasingly consolidated onto blade-based systems. For structured, standards-driven IT management, planning is critical: CIOs should plan at least three years ahead, since technology refreshes roughly every 18 months. In hardware, bladed architecture will drive standardisation in the future; in software, CIOs have to buy software written to standards to save costs in the long run, feels Dhar.
IT Needs to Deliver
When it comes to datacentre infrastructure software, experts say there are four key capabilities that IT needs to deliver to applications to ensure that they run efficiently and are highly available: Data Protection, Storage Management, Server Management and Application Performance Management.
Data Protection ensures that data is always protected and can be recovered when required. Storage Management manages storage resources, ensuring that data goes to the right place at the right time and is never lost. Server Management means managing server resources so that performance expectations are met and servers hosting critical applications are never down. Application Performance Management ensures that the performance of applications in the datacentre meets predefined expectations.
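As a rough illustration of the last of these, Application Performance Management, the minimal sketch below checks a single application's response time against a predefined expectation. It is not drawn from any vendor's product; the health-check URL and SLA threshold are hypothetical.

```python
import time
import urllib.request

# Hypothetical SLA threshold and health-check endpoint; real APM tools
# expose far richer policies than a single response-time figure.
RESPONSE_TIME_SLA_SECONDS = 2.0
APP_HEALTH_URL = "http://app.example.internal/health"

def check_application_performance(url: str, sla_seconds: float) -> bool:
    """Measure one request's response time and compare it with the SLA."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=sla_seconds * 5) as response:
            response.read()
    except OSError:
        # If the application is unreachable, that is an availability
        # (Server Management) problem rather than a performance problem.
        print(f"{url}: unreachable")
        return False
    elapsed = time.monotonic() - start
    within_sla = elapsed <= sla_seconds
    print(f"{url}: {elapsed:.2f}s ({'within SLA' if within_sla else 'SLA breached'})")
    return within_sla

if __name__ == "__main__":
    check_application_performance(APP_HEALTH_URL, RESPONSE_TIME_SLA_SECONDS)
```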
Sriram observes, “There are a host of point products or solutions in each of these areas. However, the benefits accrued by standardising on point solutions are short term. The fundamental objective of reducing complexity through standardisation is defeated, as complexity resurfaces in the long run.”
Failure to Adopt Standardisation
Says Amod Ranade, business development manager, InfraStruXure Systems for American Power Conversion (APC), “Standardisation is all-pervasive but we hardly notice it. From driving a car to replacing a battery, its influence is at work behind the scenes to make things more convenient, predictable, affordable, understandable and safe.
“Despite standardisation’s successful record of streamlining businesses, Network-Critical Physical Infrastructure (NCPI) has missed the turn. Failure to adopt modular standardisation as a design strategy for NCPI is costly on all fronts: unnecessary expense, avoidable downtime and lost business opportunity. Standardisation and its close relative, modularity, create wide-ranging benefits in NCPI that streamline and simplify every process from initial planning to daily operation, with significant positive effects on all three major components of NCPI business value: availability, agility and total cost of ownership.
“Standardising NCPI introduces two simple but powerful characteristics: modular building-block architecture and increased human learning. The clincher for modular standardisation is its multi-faceted, point-by-point contribution to NCPI business value.”
Orphaned Terabytes of Storage
Orphaned storage is also referred to as unassigned or unclaimed storage. Storage that has been provisioned by the array but not recognised by the server is called unassigned storage. Storage that is claimed by the server but not visible to the application is called unclaimed storage.
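Conceptually, identifying orphaned storage is a matter of comparing inventories from these layers. The sketch below, using hypothetical LUN identifiers, shows the set arithmetic; in practice the inventories would come from array, operating system and application tools.

```python
# Hypothetical inventories of LUN identifiers gathered from three layers.
array_provisioned = {"lun01", "lun02", "lun03", "lun04", "lun05"}  # from the array
server_visible    = {"lun01", "lun02", "lun03", "lun04"}           # from the OS
application_used  = {"lun01", "lun02"}                             # from the apps

# Unassigned: provisioned by the array but not recognised by any server.
unassigned = array_provisioned - server_visible

# Unclaimed: claimed by the server but not visible to any application.
unclaimed = server_visible - application_used

print("Unassigned (array only):", sorted(unassigned))   # lun05
print("Unclaimed (server only):", sorted(unclaimed))    # lun03, lun04
print("Orphaned LUNs in total:", len(unassigned) + len(unclaimed))
```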
Reclaiming this storage helps in two ways. First, it improves storage utilisation by putting to use storage that was lying idle. Second, it can postpone storage purchases. Both help reduce costs. In addition, because of the limitations of native operating system volume managers, storage is heavily over-provisioned: since downtime is often required to grow existing volumes and file systems, administrators allocate based on initial requirements plus estimated growth over a period of time, along with a fudge factor.
Sriram says, “Unused storage reclaimed from existing file systems can be reallocated to new or growing applications, eliminating the need for new storage purchases. Typically, to reclaim this unused storage, applications would need to be shut down while the volumes are backed up, the original volumes deleted and recreated at the appropriate size, and the data restored.”
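To get a rough sense of how much capacity such reclamation can recover, the sketch below compares each file system's allocated size with its actual usage plus a safety margin. The figures and the 20 per cent headroom policy are illustrative assumptions, not taken from any vendor's sizing rules.

```python
# Hypothetical file systems: allocated capacity versus actual usage, in GB.
file_systems = {
    "/oracle/data": {"allocated": 500, "used": 180},
    "/app/logs":    {"allocated": 200, "used": 40},
    "/home":        {"allocated": 100, "used": 85},
}

HEADROOM = 0.20  # assumed policy: keep 20% free space above current usage

total_reclaimable = 0
for mount, fs in file_systems.items():
    target = fs["used"] * (1 + HEADROOM)           # shrink down to used + headroom
    reclaimable = max(0, fs["allocated"] - target)  # never a negative figure
    total_reclaimable += reclaimable
    print(f"{mount}: allocated {fs['allocated']} GB, "
          f"target {target:.0f} GB, reclaimable {reclaimable:.0f} GB")

print(f"Total reclaimable: {total_reclaimable:.0f} GB")
```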
Storage Migration
One of the biggest reasons for planned application downtime is migrating data to new storage arrays. Due to technology upgrades or leases expiring, companies typically replace storage arrays every 3-5 years. Since array capacities currently run to 100+ terabytes, many servers and applications are affected by migrating this data. Vendor-specific tools generally allow this data to be moved only to the same vendor’s hardware within the same product family. Often professional services are required to identify the dependencies and perform the data movement. This is a costly process, which requires applications to be taken offline.
“We have Storage Foundation, which helps migrate an existing application’s entire data set to a new storage array without bringing the application down, and it is a much easier process. By mirroring volumes across arrays, or by dragging and dropping a volume in the graphical user interface, data can be transferred dynamically. This also provides the flexibility to move data to a different vendor’s storage array or to a lower-cost storage device,” adds Sriram.
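In outline, mirror-based online migration attaches a mirror on the target array, waits for it to synchronise and then detaches the original copy, all while the application keeps running. The toy model below illustrates that sequence in plain Python with hypothetical volume and array names; it does not reproduce Storage Foundation's actual commands.

```python
import time

class Volume:
    """Toy model of a logical volume whose data can live on one or more arrays."""
    def __init__(self, name, source_array):
        self.name = name
        self.plexes = {source_array: "synced"}   # array name -> mirror state

    def attach_mirror(self, target_array):
        # The application keeps running: writes go to all plexes from now on.
        self.plexes[target_array] = "syncing"
        print(f"{self.name}: attached mirror on {target_array}, syncing...")

    def wait_for_sync(self, target_array):
        time.sleep(0.1)  # stand-in for the background copy of existing data
        self.plexes[target_array] = "synced"
        print(f"{self.name}: mirror on {target_array} is in sync")

    def detach_plex(self, source_array):
        # Once the new array holds a full copy, the original can be released.
        del self.plexes[source_array]
        print(f"{self.name}: detached original copy on {source_array}")

# Hypothetical names for the old and new arrays.
vol = Volume("oradata_vol", source_array="old_array_A")
vol.attach_mirror("new_array_B")
vol.wait_for_sync("new_array_B")
vol.detach_plex("old_array_A")
print("Migration complete; data now lives on:", list(vol.plexes))
```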
Standardisation also means complying with industry-wide specifications and codes established by standards bodies such as ISO and IEEE, which are profoundly significant to the industry. Standardisation’s historical record of economic success speaks for itself and needs no further analysis.