Symantec Corp. has announced an add-on solution for its Cluster File System that enables customers to run Big Data analytics on their existing infrastructure by making it highly available and manageable. Apache Hadoop offers customers significant value by helping analyse data for business insights that drive revenue; however, many existing data solutions lack the data management capabilities and built-in resilience needed to overcome the cost and complexity of growing storage and server sprawl. Developed in close collaboration with Hortonworks, the new Symantec Enterprise Solution for Hadoop provides a scalable, resilient data management solution for handling Big Data workloads, helping make Apache Hadoop ready for enterprise deployment.
Symantec’s Cluster File System is a proven enterprise solution for Big Data workloads. With Symantec Enterprise Solution for Hadoop, organisations can leverage their existing infrastructure to scale up to 16 PB of structured and unstructured data. Companies can avoid overprovisioning both storage and compute capacity, run analytics wherever the data sits (eliminating expensive data moves), and make Hadoop highly available without a potential single point of failure or performance bottleneck.
IT administrators have spent considerable time and resources consolidating their data centres and reducing their footprint through virtualisation and cloud computing. Big Data analytics should take advantage of this consolidation of storage and compute resources rather than undo it. Symantec Enterprise Solution for Hadoop enables customers to run Hadoop while minimising investment in a parallel infrastructure – greatly reducing the storage footprint and, with it, cost and complexity.
The first step in making a Hadoop infrastructure work is funnelling data into it for analysis. By integrating existing storage assets into the Hadoop processing framework, organisations can avoid time-consuming and costly data movement. Symantec Enterprise Solution for Hadoop lets administrators leave data where it resides and run analytics on it directly, without having to extract, transform and load it into a separate cluster – avoiding expensive and painful data migrations.
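To make the in-place approach concrete, here is a minimal sketch of a standard Hadoop MapReduce job (a word count) pointed directly at a shared cluster file system mount instead of at data copied into HDFS. The mount point file:///mnt/cfs and the input/output paths are assumptions for illustration only; this shows the general Hadoop technique, not Symantec’s internal integration.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class InPlaceWordCount {

    // Standard word-count mapper: emits (word, 1) for every token in the input.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the job at the shared POSIX namespace instead of HDFS, so the
        // data is analysed where it already lives; every node sees the same
        // files through the cluster file system mount. "/mnt/cfs" is an
        // assumed mount point used purely for illustration.
        conf.set("fs.defaultFS", "file:///");

        Job job = Job.getInstance(conf, "in-place wordcount");
        job.setJarByClass(InPlaceWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // No distcp or ETL copy into a separate Hadoop cluster is needed.
        FileInputFormat.addInputPath(job, new Path("file:///mnt/cfs/data/logs"));
        FileOutputFormat.setOutputPath(job, new Path("file:///mnt/cfs/analytics/out"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The design point is simply that Hadoop’s input formats read from any FileSystem URI, so a job can consume data in place on a shared mount rather than requiring a copy into a dedicated HDFS cluster first.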
In an Apache Hadoop environment, data is distributed across nodes, but a single metadata server (the NameNode) knows where the data lives – a potential performance bottleneck and single point of failure that can lead to application downtime. Symantec Enterprise Solution for Hadoop provides file system high availability for the metadata server and ensures that analytics applications continue to run as long as at least one node in the cluster is working.
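As a purely illustrative sketch of why removing the single metadata server matters, the snippet below probes a list of candidate metadata endpoints and succeeds as long as any one node answers. The host names and port are hypothetical, and this is not Symantec’s API; it only demonstrates the availability property described above: with one metadata server the list has a single entry whose loss stalls everything, whereas with per-node metadata service any surviving node can respond.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

public class MetadataFailoverProbe {
    // Hypothetical endpoints: with a classic single metadata server this list
    // would hold one entry; with metadata service on every node, any
    // surviving node can answer, so the loop below succeeds as long as at
    // least one node in the cluster is up.
    private static final List<String> METADATA_NODES = List.of(
            "node1.example.com", "node2.example.com", "node3.example.com");
    private static final int METADATA_PORT = 8020; // hypothetical port

    // Returns the first node that accepts a connection, or throws if the
    // whole cluster is unreachable.
    public static String firstReachableNode() throws IOException {
        for (String host : METADATA_NODES) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, METADATA_PORT), 2000);
                return host; // this node can serve metadata requests
            } catch (IOException unreachable) {
                // fall through and try the next node
            }
        }
        throw new IOException("no metadata server reachable: cluster is down");
    }

    public static void main(String[] args) throws IOException {
        System.out.println("metadata service available on " + firstReachableNode());
    }
}
```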
“Customers can’t afford to let the challenges of implementing Big Data translate into management challenges within the infrastructure they’ve worked so hard to build,” said Don Angspatt, Vice President of Product Management, Storage and Availability Management Group, Symantec Corp. “Our Enterprise Solution for Hadoop helps connect Hadoop’s business analytics to the existing storage environment while addressing key challenges of server sprawl and high availability for critical applications. It’s now entirely possible to get the Big Data solution you want from the infrastructure you’ve got.”


