Near-exponential data growth is changing the rules for organisations everywhere. Server and desktop virtualisation, databases, Web 2.0 applications and high performance computing (HPC) are all contributing to new data management challenges, even as these applications change the way that organisations use data to fulfil their missions and impact their bottom lines. Streaming and transactional data in particular are driving new requirements for storage infrastructure. HPC systems also have stringent requirements for high I/O bandwidth and low latency.
IDC estimates that the total amount of digital information created, captured, and replicated will grow at a rate of 58 percent per year, reaching 1,610 EB by 2011. Web 2.0 applications are growing at a tremendous rate and require highly scalable and affordable storage.
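For a sense of scale, a 58 percent compound annual growth rate implies that data volumes multiply roughly tenfold every five years. The short back-of-the-envelope calculation below simply compounds the quoted rate and works backwards from the 1,610 EB figure; the implied base-year volume is an inference from these numbers, not an IDC statistic.

    # Back-of-the-envelope compounding of the quoted IDC growth rate.
    rate = 0.58                       # 58 percent per year
    factor = (1 + rate) ** 5          # growth over five years: roughly 9.8x
    print(f"Five-year growth factor: {factor:.1f}x")

    # Working backwards from 1,610 EB in 2011 at the same rate (an assumption),
    # the implied volume five years earlier is about 164 EB.
    print(f"Implied volume five years earlier: {1610 / factor:.0f} EB")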
As storage capacities have grown, traditional means of deploying and managing storage have also become outdated. Most of today’s storage systems require highly trained administrators to effectively manage the environment. This adds time and cost to deployment of new storage systems and increases the ongoing cost of managing the environment.
Complicating matters, these challenges are occurring in the context of a highly competitive global marketplace, where getting data to market quickly and at predictable cost can make all the difference. Like computational infrastructure, data storage infrastructure must be agile in order to scale for unpredictable spikes in workloads and changing business strategies. With volatile global economic conditions, no organisation can afford to ignore costs, even as it plans for future growth and deploys storage infrastructure that can continue to perform at peak levels.
Customers can no longer tolerate the high costs of proprietary storage or massive licensing fees, and are looking for new ways to address their growing storage requirements and the challenges of managing their storage environments. Today’s IT environments require storage solutions that can offer:
• Greater simplicity and ease-of-use
• Real-time diagnostics and tuning
• Massive scalability
• Better storage economics
It is absolutely essential to follow appropriate storage management practices in order to tame the growing bulk of enterprise data. After all, it is the intelligence drawn from data that an organisation thrives on. Over the last few years, storage management has evolved from a server-centric model to a network-centric model, and this shift has naturally given rise to new concepts as well as new difficulties and challenges.
Major Storage Issues Faced by Users in India
According to Gartner, the main storage issues faced by users are a lack of storage management tools, growing storage capacity and performance demands, a lack of reliable backup/recovery solutions, growing availability demands, and the low priority given to storage management within overall IT management.
Statistics from recent client inquiries show that these issues are likely to remain key concerns in 2009. This indicates that storage users are yet to come to terms with the explosion of data within their organisations, and are looking for ways to better manage their storage infrastructure and to meet the capacity demands of various departments without going overboard on the budget.
Storage management challenges
These are some of the challenges faced by enterprises that want to handle their storage resources optimally:
- The biggest challenge is managing the explosion of data growth. An enterprise needs not only adequate hardware, but also the right management strategy to handle such volumes.
- Storage interoperability issues keep cropping up because not all vendors’ products interoperate, even though they claim to do so. The products are sometimes proprietary in nature, and enterprises may have legacy systems and multiple operating systems that are difficult to integrate. Also, newer standards like iSCSI, InfiniBand, and Bluefin are still in various stages of maturity, even though earlier standards like Fibre Channel have not been adopted very widely.
- Managing a distributed architecture is difficult but vital, as the same information needs to be accessed by different users running different applications.
- Storage must be managed remotely and with the least possible human intervention.
- Most enterprises have restricted IT budgets, which naturally affects storage hardware and software procurement decisions; IT heads are doubly cautious about spending money.
- Qualified technical personnel for storage systems are not easy to come by, and this scarcity adds to overheads.
- Demand may be unpredictable, owing to events such as sudden growth spurts, holiday rushes, and catastrophes.
Unfortunately, most of today’s storage solutions remain proprietary, complex, and expensive, with appliance vendors seeking proprietary lock-in and lucrative software licensing. In this demanding environment, special-purpose appliances have hit hard limits on performance and scalability. Power, cooling and real estate have become real constraints, and energy costs are rising while IT budgets remain static, driving many organisations to re-think the way they deploy both computational and storage infrastructure.
In the current scenario, considering the constraints that businesses operate under, the need of the hour is a paradigm shift away from traditional proprietary storage controllers. Open storage is leading this revolution, with storage appliances that save money, cut energy consumption, improve performance and simplify storage growth.
What is open storage?
Open storage systems combine industry-standard hardware with open-source software, and are supported by a community of thousands who are passionate about creating better storage solutions. This combination helps spur innovation and drives better storage economics. Developers can leverage volume servers and disk drives, as well as an open storage software stack, to speed storage innovation.
Unlike traditional proprietary storage solutions, open storage solutions offer freedom of choice at every level of the storage system stack. In proprietary systems, the storage software is closed and typically costs around three times as much as comparable open software. Customers must generally purchase a storage controller and controller software from the storage vendor, and then also pay for individual features, usually through capacity-based software licences. Finally, customers must buy the disks enclosed within the storage system; in most cases, these are commodity disks marked up to as much as five times their original cost.
By contrast, in open storage architectures, both the operating system and the application software are available as open-source software. The hardware is also industry-standard, so customers can leverage an industry-standard server in place of an expensive, proprietary disk controller. ZFS, which is included in the Solaris Operating System (OS) for no additional licensing fee, provides data services such as RAID, error correction, and system management, thus insulating applications from the underlying hardware. Such services have traditionally been tied to specific hardware devices and have been available only when bundled with an expensive controller.
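As a rough illustration of these data services, the sketch below shows how a RAID-Z pool might be created and verified from a small administration script. It assumes a Solaris or OpenSolaris host with ZFS installed and three spare disks; the pool name "tank" and the device names are placeholders, and the script simply wraps the standard zpool commands.

    import subprocess

    def zpool(*args):
        """Run a zpool administration command and echo it for the log."""
        cmd = ("zpool",) + args
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a RAID-Z pool named "tank" across three placeholder disks.
    # Parity and end-to-end checksumming come from ZFS itself, with no
    # hardware RAID controller or separately licensed software feature.
    zpool("create", "tank", "raidz", "c1t1d0", "c1t2d0", "c1t3d0")

    # Scrub the pool so ZFS verifies checksums and repairs silent errors.
    zpool("scrub", "tank")

    # Report the health of the pool and its devices.
    zpool("status", "tank")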
In an open architecture, customers can select the best hardware and software components to meet their requirements. For example, a customer who needs network file services can use an open storage filer built from a standard x86 server, disk drives, and OpenSolaris technology at a fraction of the cost of a proprietary NAS appliance. In a closed system, by contrast, all the components must come from the vendor: customers are locked into buying disk drives, controllers, and proprietary software features from a single supplier at premium prices, and typically cannot add their own drives or software to improve functionality or reduce cost. For more than 20 years, storage system vendors have used more and more standard components in their products, but have not passed the savings along to their customers, because the products have remained closed and proprietary. Most storage vendors use standard CPUs, memory, and disk drives, yet closed, proprietary storage systems can cost up to five times the market price of standard components such as disk drives.
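To make the filer example concrete, the sketch below extends the earlier pool sketch. Still assuming an OpenSolaris host and the placeholder pool "tank", it creates a filesystem, enables compression, and shares it over NFS, with no separate filer software or per-feature licence involved.

    import subprocess

    # Create a filesystem for shared project data inside the placeholder pool.
    subprocess.run(["zfs", "create", "tank/projects"], check=True)

    # Optional: enable transparent compression to stretch raw capacity.
    subprocess.run(["zfs", "set", "compression=on", "tank/projects"], check=True)

    # Publish the filesystem over NFS; the built-in NFS server picks up the
    # share, with no capacity-based licence to pay for the feature.
    subprocess.run(["zfs", "set", "sharenfs=on", "tank/projects"], check=True)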
A new market for open storage solutions is developing in response to today’s storage requirements. With IDC estimating the total storage market (hardware, software, and services) at approximately $90 billion in 2011, it is predicted that by then, open storage will represent more than 20 percent of the external disk segment, or approximately $5 billion of the $24 billion external disk market.
Need for a new storage architecture
There are several new business opportunities that require vast amounts of inexpensive storage, and these opportunities cannot be realised with today’s traditional storage architectures. Google and Amazon probably could not exist in their current forms had they not built their own storage infrastructures on open storage principles; traditional architectures built from proprietary products were simply too expensive and inflexible to deliver the scale and economics demanded by their online business models.
The rapid growth of new digital data demands new storage architectures that offer more flexibility and radically different storage economics. Web 2.0 applications are growing at a tremendous rate and require scalable and affordable storage. Industry-standard hardware, open-source software, and community development trends also continue to grow, and they are key enablers for building a new, open storage architecture.
Additionally, there are many market segments and storage trends that are fast growing and can benefit greatly from a new, open storage architecture. Eco-responsible IT efforts can leverage open storage’s lower energy consumption and its economic and consolidation advantages. HPC environments are almost exclusively built from open-source software and already utilise open storage architectures to efficiently manage vast storage pools and meet their high I/O bandwidth and low latency needs. Virtualised server environments can also leverage the flexibility and consolidation advantages of open storage.
As current market trends and segments illustrate, the need for an open storage architecture is clear and growing, and open storage, spanning hardware, software, and services, will represent a significant part of the total storage pie in the next few years.
Jagannath is GM-Marketing with Sun Microsystems India.