Open source software provider Red Hat announced on Wednesday its big data-focused strategy, which includes developing new solutions for enterprises to run their big data analytics workloads.
The move comes a couple of weeks after Red Hat released version 1.1 of its platform-as-a-service offering, OpenShift Enterprise.
Red Hat said it will contribute its Red Hat Storage Hadoop plug-in to the Apache Hadoop open community, transforming Red Hat Storage into a fully supported, Hadoop-compatible file system for big data environments.
Red Hat is also building a network of ecosystem and enterprise integration partners to deliver comprehensive big data solutions to enterprise customers.
This is another example of Red Hat’s strategic commitment to big data customers and its continuing efforts to provide them with enterprise solutions through community-driven innovation.
Red Hat says its big data infrastructure and application platforms are “ideally suited for enterprises leveraging the open hybrid cloud environment.”
The open source developer is also working with the open cloud community to support big data customers through projects such as OpenStack and OpenShift Origin.
Red Hat’s big data solution strategy is focused on extending its product portfolio to deliver enhanced enterprise-class infrastructure solutions and application platforms, and partnering with leading big data analytics vendors and integrators.
Red Hat Enterprise Linux continues to be the leading platform for big data deployments, providing distributed architectures and features that address critical big data needs.
The solution enables enterprises to develop, integrate, and secure big data applications reliably, and to scale easily to keep up with the pace at which data is generated, analyzed, and transferred.
Red Hat Storage is built on the Red Hat Enterprise Linux operating system and the GlusterFS distributed file system, and can be used to pool inexpensive commodity servers into a cost-effective, scalable, and reliable storage solution for big data.
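As a rough sketch of what pooling commodity servers looks like in practice with GlusterFS, an administrator forms a trusted pool of servers and then builds a volume from storage "bricks" on each one. The hostnames, brick paths, and volume name below are hypothetical, and the exact commands may vary by GlusterFS version:

```shell
# From server1: probe the other servers to form a trusted storage pool
# (server2 and server3 are hypothetical hostnames)
gluster peer probe server2
gluster peer probe server3

# Create a replicated volume "bigdata" from one brick per server
gluster volume create bigdata replica 3 \
    server1:/export/brick1 \
    server2:/export/brick1 \
    server3:/export/brick1

# Start the volume, then mount it from a client as one namespace
gluster volume start bigdata
mount -t glusterfs server1:/bigdata /mnt/bigdata
```

Because replication happens at the volume layer, capacity and redundancy can be grown by adding more commodity servers and bricks rather than buying specialized storage hardware.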
As previously mentioned, the Hadoop plug-in for Red Hat Storage will be available to the Hadoop community later this year.
Currently in technology preview, the Red Hat Storage Apache Hadoop plug-in provides a new storage option for enterprise Hadoop deployments that delivers enterprise storage features while maintaining the API compatibility and local data access the Hadoop community expects.
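To illustrate what API compatibility means here: Hadoop selects its underlying file system through configuration, so a Hadoop-compatible store can be swapped in by pointing core-site.xml at the plug-in's classes while MapReduce jobs keep using the standard FileSystem API. The scheme and class names below follow the general pattern for Hadoop file system plug-ins and are assumptions for illustration, not confirmed values from Red Hat:

```xml
<!-- core-site.xml (illustrative): select the file system implementation -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hypothetical URI scheme and host for the Red Hat Storage plug-in -->
    <value>glusterfs://server1:9000</value>
  </property>
  <property>
    <!-- hypothetical plug-in class implementing org.apache.hadoop.fs.FileSystem -->
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
</configuration>
```

With a configuration along these lines, existing Hadoop jobs need no code changes to run against the alternative store, which is the compatibility the plug-in is designed to preserve.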
Red Hat Enterprise Virtualization 3.1 is integrated with Red Hat Storage, enabling virtual machines to access the secure, shared storage pool managed by Red Hat Storage.
The integration of these platforms furthers Red Hat’s open hybrid cloud vision of an integrated and converged Red Hat Storage and Red Hat Enterprise Virtualization node that serves both compute and storage resources.
Red Hat JBoss Middleware provides enterprises with technologies for creating and integrating big data-driven applications that are able to interact with new and emerging technologies like Hadoop or MongoDB.
The solutions can quickly load large volumes and varied kinds of data into Hadoop using high-speed messaging technologies, simplify working with MongoDB through Hibernate OGM, process large volumes of data rapidly, and access Hadoop alongside traditional data sources with the JBoss Enterprise Data Services Platform, helping enterprises identify opportunities and threats.
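As an illustration of how Hibernate OGM simplifies working with MongoDB, a JPA persistence unit can name Hibernate OGM as its provider and MongoDB as the backing datastore, so application code keeps using standard JPA entities and queries. The property keys below reflect early Hibernate OGM releases and should be checked against the version in use; the unit and database names are hypothetical:

```xml
<!-- persistence.xml (illustrative): JPA persistence unit backed by MongoDB -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="bigdata-pu" transaction-type="JTA">
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <properties>
      <!-- hypothetical datastore settings; keys vary by OGM version -->
      <property name="hibernate.ogm.datastore.provider" value="mongodb"/>
      <property name="hibernate.ogm.datastore.database" value="analytics"/>
      <property name="hibernate.ogm.datastore.host" value="localhost"/>
    </properties>
  </persistence-unit>
</persistence>
```

The design point is that the NoSQL store is selected in configuration rather than in code, so entities mapped with familiar JPA annotations persist to MongoDB without hand-written driver calls.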
“With today’s announcement, Red Hat demonstrates its strong commitment to continue to provide enterprise infrastructure and platforms to effectively run big data applications today and in the growing open hybrid cloud environment,” said Ranga Rangachari, vice president and general manager of storage at Red Hat. “With true enterprise-class offerings, Red Hat leverages the power of the open source community to give our big data customers a choice in technology, deployment environments, and partners.”
Talk back: Are you currently offering your customers big data solutions? Are you considering using any of Red Hat’s big data solutions? Let us know in a comment.