By Santhosh D’Souza

In a world where everything is getting connected, data will be highly distributed, from residing on sensors and in decentralized storage locations to being stored in backend datacenters. Consistent and appropriate policies need to be applied to data based on its location, type, age and sensitivity, among other things. As a result, backup, an already challenging process, is now reaching a breaking point.

Data backup has always been a critical element of efficient IT management, and it is bound to stay that way. From the days of big open-reel tape drives and massive tape libraries to the Virtual Tape Library (VTL), recovery time and recovery point have remained important IT objectives: how long will it take to recover, and to what point in time can data be brought back? Answers to these questions have become a key element of corporate data governance and protection. As workloads generating new business value replaced older workloads, the analysis of backup and recovery objectives has become the cornerstone of collaboration between IT and the business.

Over the last 30 years, backup and recovery have evolved. They started off as highly labour-intensive activities that not only required huge floor space but were also error prone. Today, when companies recognize corporate data as a strategic asset, it is essential for businesses to protect these assets against a wide range of risks and threats. Data protection has become a high-priority objective for organizations. In turn, these organizations are now identifying, deploying and efficiently managing their data backup and recovery infrastructures like never before.

While the core of the data center is certainly important, much of that core is now moving to the virtualized environment, presenting new challenges to backup. In fact, a recent report by Gartner states that by 2018, 40% of organizations will augment or change their current backup applications; by 2016, 20% of organizations will abandon traditional backup/recovery in favor of newer techniques; and by 2019, there will be a 50% increase in the number of large enterprises eliminating tape backup for operational recovery. Since creating a virtual server is far easier and less costly than provisioning a physical one, virtual machines can appear on the network and start running production applications without the backup manager ever knowing about them.

If virtual storage and efficient data management are implemented properly, the result is a well-orchestrated and intelligent IT infrastructure. While flexible IT has many advantages, a crucial one is its ability to increase automation and efficiency. Tasks such as recovery, backup and test development take less time, use less effort and cost less money, delivering business savings that are invaluable.

The backup process has continued to evolve, taking what is often considered a costly, mundane but essential data protection insurance policy and driving more efficiency, speed and protection capability into it. Efficiencies and protections such as zero-capacity copies, deduplication, compression and encryption are now expected to be the norm, providing flexibility and choice. For organizations where cost savings are king, there are tools available that calculate the savings delivered by new infrastructure.
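As a rough illustration of the kind of arithmetic such tools perform, the short Python sketch below estimates how much physical capacity a backup set might need once deduplication and compression are applied. The function name and all figures are hypothetical and chosen only for the example; they are not taken from any specific vendor tool.

# Illustrative sketch only: a back-of-the-envelope estimate of backup storage
# savings from deduplication and compression. All ratios and sizes below are
# hypothetical placeholders, not measurements from any particular product.

def estimate_backup_savings(logical_tb, dedupe_ratio, compression_ratio):
    """Return (physical_tb, percent_saved) for a given logical backup size.

    Ratios are expressed as N:1 reduction factors; for example,
    dedupe_ratio=4.0 means 4 TB of logical data stores in 1 TB.
    """
    physical_tb = logical_tb / (dedupe_ratio * compression_ratio)
    percent_saved = (1 - physical_tb / logical_tb) * 100
    return physical_tb, percent_saved

if __name__ == "__main__":
    # Hypothetical workload: 100 TB of logical backup data with 4:1 dedupe
    # and 1.5:1 compression.
    physical, saved = estimate_backup_savings(100, 4.0, 1.5)
    print(f"Physical capacity needed: {physical:.1f} TB ({saved:.0f}% saved)")

With these assumed ratios, 100 TB of logical backup data would occupy roughly 17 TB of physical capacity, a saving of about 83%; real-world results depend entirely on the workload and the data-reduction technology in use.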
This changes the economics of IT, as budget can shift away from maintenance and towards future-ready IT services that help businesses accelerate.

(The author is Director, Systems Engineering at NetApp Marketing & Services Pvt Ltd India.)