Hybrid Clouds Are the New IT Ecosystem: Here’s Why Storage Systems Must Adapt

by Larry Freeman, NetApp


Enterprise storage systems have evolved through four distinct stages. First, there were monolithic storage arrays. These gave way to modular SAN/NAS systems and distributed storage networks. Next, advanced software techniques led to storage virtualization, with storage volumes abstracted from the underlying physical devices. Finally, in the current era, cloud storage services have emerged that combine virtualization with unprecedented economies of scale.

In each case, the transitions were driven by changes in compute platforms, which in turn required a different type of storage architecture.

Hybrid Clouds Are the New Ecosystem
Today, economic considerations are driving a transition towards hybrid cloud services, and storage architectures must once more evolve to meet the needs of the IT ecosystem. The idea driving hybrid cloud adoption is clear—leverage the public cloud whenever practical to gain new levels of elasticity and reduce IT costs. The impact is equally clear, as applications and data are diverted into the public cloud, and the amount of infrastructure required within on-premises data centers is reduced over time.

Figure 1 – A hybrid cloud ecosystem with interconnected end points

On the compute side, moving virtual machines (VMs) and their associated applications between data center servers and cloud-hosted servers within a hybrid cloud is well understood and efficient. However, importing and exporting application data is not as seamless. Data movement is a significant problem for a hybrid cloud: while cloud compute services are designed to be device- and location-independent, data is almost always housed somewhere on permanent storage, and it is not easily moved.

Insufficient data mobility between data centers and public cloud providers can be a significant barrier to hybrid cloud adoption. In a recent CIO survey conducted by IDG Research Services, 78% of enterprise IT organizations viewed the ability to manage data across multiple clouds as critical or very important—but only 29% of these organizations viewed their ability to do so as either excellent or good.

Without a common framework for data services, hybrid cloud success will remain elusive. What’s needed is a way to securely manage, share and move data among different clouds. Imagine a hybrid cloud where data management capabilities are consistent, connected and combined—in essence, a fabric that joins on-premises clouds with public clouds.

A Data Fabric Eliminates Cloud Silos
To realize the vision of a data fabric, a method must exist to seamlessly control and manage data between on-premises storage arrays and the many storage endpoints within a hybrid cloud. Fundamentally, a data fabric is a way to manage data, both on-premises and within the cloud, using a common structure and architecture. A data fabric provides efficient data transport, software-defined management and a consistent data format, allowing data to move more easily among clouds.
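The three properties above (efficient transport, software-defined management, a consistent data format) can be illustrated with a minimal sketch. This is a hypothetical model, not a real NetApp API: the `Endpoint` and `DataFabric` names and methods are invented for illustration. The key idea is that because every endpoint stores volumes in one common format, moving data between clouds reduces to a copy, with no conversion step.

```python
# Hypothetical sketch of a data fabric abstraction. Every endpoint
# (on-premises array or cloud tier) exposes the same operations on
# volumes held in one consistent format, so moving data between
# clouds is a single replicate-and-retire step, not a format
# conversion. All class and method names are illustrative.

class Endpoint:
    """A storage endpoint in the fabric (on-prem array or cloud tier)."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}          # volume name -> data in the common format

    def write(self, volume, data):
        self.volumes[volume] = data

    def read(self, volume):
        return self.volumes[volume]

class DataFabric:
    """Software-defined management layer that connects endpoints."""
    def __init__(self, *endpoints):
        self.endpoints = {ep.name: ep for ep in endpoints}

    def move(self, volume, src, dst):
        source, dest = self.endpoints[src], self.endpoints[dst]
        dest.write(volume, source.read(volume))   # replicate to destination
        del source.volumes[volume]                # then retire the source copy

# Usage: move an application's data from the data center to a cloud region.
fabric = DataFabric(Endpoint("on-prem"), Endpoint("cloud-east"))
fabric.endpoints["on-prem"].write("app-data", b"...records...")
fabric.move("app-data", "on-prem", "cloud-east")
```

Because both endpoints speak the same format, the application's data is usable at the destination the moment the copy completes, which is what makes the "applications and data move together" scenario in the next section possible.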

Figure 3 – Isolated cloud silos vs. an interconnected data fabric

With data portability enabled via a connected data fabric, application servers and application data can move together. Here are a few of the benefits:

  • Economic and data governance flexibility. When using a data fabric to design new applications in the cloud, you can simply remove server instances (and data) from the cloud if an application project fails. Conversely, if an application takes off, you can easily move it (and its data) to another, more secure, environment.
  • Better utilization of resources. Mature applications often take up data center space, power and the resources of skilled IT staff. A data fabric enables you to selectively move applications to a public cloud infrastructure and focus internal IT resources and mindshare on the applications that deserve attention.
  • Cloud-based disaster recovery. One of the most exciting capabilities of a data fabric is the ability to enable multi-site disaster recovery (DR). SAN-to-SAN replication between data fabric endpoints provides cost-effective, cloud-based DR options with hot site capabilities and very short recovery times.
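The disaster-recovery benefit in particular can be sketched in a few lines. The model below is hypothetical (the `Site`, `replicate`, and `failover` names are invented, and real replication is incremental and scheduled, not a one-shot copy), but it shows why fabric-based DR yields short recovery times: the data is already sitting at the cloud endpoint, so failover is a promotion, not a restore.

```python
# Hypothetical sketch of cloud-based DR over fabric replication: the
# primary site replicates its volumes to a cloud endpoint while
# healthy, and a failover simply promotes the cloud replica to
# active. Names are illustrative, not a real replication protocol.

class Site:
    """A fabric endpoint acting as a primary or DR site."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}
        self.active = False

def replicate(primary, dr_site):
    """Copy every volume from the primary to the DR endpoint."""
    dr_site.volumes.update(primary.volumes)

def failover(dr_site):
    """Promote the replica: the data is already in place, so recovery is fast."""
    dr_site.active = True
    return sorted(dr_site.volumes)

primary = Site("datacenter")
cloud_dr = Site("cloud-dr")
primary.volumes = {"db": b"orders", "logs": b"audit"}

replicate(primary, cloud_dr)        # scheduled replication while healthy
recovered = failover(cloud_dr)      # primary lost: promote the replica
print(recovered)                    # ['db', 'logs']
```

In a real deployment the replication step would run continuously or on a schedule, so the recovery point is bounded by the replication interval rather than by how long a restore from backup would take.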

Storage Systems Must Evolve—Again
Each prior transition in the storage industry was triggered by a new IT ecosystem that delivered new business capabilities and reduced costs. The cloud represents another major shift, and it requires a storage infrastructure that can keep pace with a new environment.

The hybrid cloud model—combining on-premises capabilities with resources and services from various cloud providers—is poised to become the dominant model in enterprise IT. From a storage perspective, a data fabric will best enable IT organizations to take advantage of this model. As the model matures, a data fabric ecosystem will become critical for providing a consistent framework for data movement throughout the many and varied end points within a hybrid cloud.

To find out more about the NetApp vision of a Data Fabric and how it can help your organization realize the full potential of the cloud, read the latest edition of our Tech OnTap Newsletter.

No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo and Go further, faster, are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.