Defining maturity can be a tricky proposition, especially where technology is concerned. It is never quite clear when, how, or by what parameters a piece of software can be deemed mature. If longevity is the yardstick, then Linux is one of the more mature technologies there is. Yet despite being around for over 15 years and gaining immense fame, it still hasn’t been trusted with mission-critical business application workloads.
For such an old and well-known enterprise platform, the deployment figures aren’t flattering. The majority of organisations deploying Linux still shy away from putting critical workloads in Linux environments, continuing instead to run non-critical functions such as web applications, file servers, print servers and other custom-developed in-house applications.
This situation, however, is beginning to change. If Gartner’s prediction is anything to go by, by the end of 2009, Linux-based mission-critical IT data centre deployments will account for more than $2.2 billion of the $11 billion in revenue from servers shipped.
The primary reason for this shift is that Linux operating systems, and open source-based software in general, have reached critical mass in the marketplace. The increasing interest of industry heavyweights such as Microsoft and Oracle has given Linux a validity that was missing in its early years. This in turn is encouraging user organisations to look at Linux more seriously and deploy it in crucial enterprise environments.
It’s Not A Question of Maturity
This is great news for Linux and its proponents, who have had to deal with the maturity question for the longest time. "Is Linux mature enough for my mission-critical applications?" has been the question most frequently asked by the user community. Valid as it might have been a few years ago, according to George J. Weiss, VP & distinguished analyst at Gartner, this is actually the wrong question to ask.
He says that the real issue is no longer a technology issue with the kernel, but one predicated on several factors including organisational requirements, preparedness and best practices. Weiss says that IT architects need to rethink the concept of mission-critical Linux, as it is one part kernel and nine parts ecosystem and best practices.
According to him, "Linux can be considered mission-critical under conditions of good best practices, a proven management ecosystem, additional technologies for availability and recoverability, critical knowledge and skills in managing the life cycle of platforms and applications, and staying within the maturity envelope of Linux technical and architectural capabilities."
What You Should Do
In a recently published note, Gartner put forth a few recommendations for user organisations to consider. They are:
Organisations shouldn’t embark on mission-critical deployments before gaining 12-24 months of experience with Linux in less-stressful environments, such as simpler web servers and network infrastructure functions.
They need to manage application life cycles, ensure that Linux-specific skills are on par with Unix skills, and employ sound business continuity strategies consistent with meeting stringent service levels.
Businesses must also use well-defined processes for asset and change management, compliance, configuration management, availability, updates and security across the application life cycle.
Users must also plan for about 20% of deployments to be mission-critical before making Linux a strategic enterprise operating system.
Treat Linux subscription contracts (for example, terms and conditions, and service-level agreements) as equally important as the technology itself.
Make system vendor service and support contracts the preferred choice as deployments increase in complexity and heterogeneity.
Complexity Will Change Price Equations
As business-critical applications are deployed in Linux-based environments, several changes can be expected due to the highly complex nature of such undertakings. For instance, Linux distributors and their platform partners can charge higher fees for supporting mission-critical applications. Also, provisions such as high availability, disaster recovery, virtualisation and system management, which form a crucial part of any deployment strategy, could raise costs substantially compared with earlier, simpler infrastructure deployments.
In essence, the favourable total-cost-of-ownership comparisons will diminish as complexity rises. How will this affect customers for whom the lure of lower costs has been one of the charms of Linux?
Weiss argues that while the lure of cost reductions was the initial driver, especially compared with running applications on more expensive Unix boxes, the market has evolved to the point where IT planners are making fundamental architectural changes to their infrastructure.
"They now see the economies of running their applications and databases on clustered x86 nodes and migrating them from more expensive cabinets and frames. So there will still be economies derived, provided that the deployments are well planned around best practices," he concluded.
Updated Date: Jan 31, 2017 01:14:31 IST