The Cloud’s Dirty Little Secret: Lack of Data Portability

by Larry Freeman, NetApp


Enterprise organizations that deploy cloud-based applications could be in for a big surprise if they decide to change cloud service providers. The lack of data interchange standards and the sluggish speed of bulk data transfers over TCP/IP networks make data portability between clouds difficult at best—and unworkable at worst.

While there are several motivations for changing providers—higher-than-anticipated costs, dissatisfaction with too-frequent outages, or even a notice that your cloud provider is shutting down its service—the bottom line is this: If you're considering placing application data into a cloud service, you would be well advised to have an exit strategy, because most service providers have zero interest in making it easy for you to switch.

What will you do when it’s time to switch providers?
In a traditional data center, it’s often possible to sit tight while planning for a change; for example, staying with an existing data storage system as budget and logistics issues are worked out, rather than upgrading to a newer model on short notice. With public clouds, however, circumstances may force you to move quickly and without the luxury of extensive planning.

It has also become common to hear about companies shifting their workloads out of a public cloud service to save money. In a recent IDC survey of AWS users, a majority (52%) had already moved their cloud assets to an in-house system or another cloud provider, with operations cost cited most frequently as the primary reason.1 In this scenario (and others like it), moving business-critical data, whether to an alternate service or back to on-premises infrastructure, is disruptive to operations.

Is data portability an elusive goal?
A major barrier to cloud-to-cloud data migration is the relatively slow transfer rate of the TCP/IP networks used to move data in the cloud. The average worldwide fixed broadband download speed is approximately 20 megabits per second, according to Cisco. At that rate, moving just 1 terabyte of data from one cloud to another would take roughly six days once protocol overhead is factored in, a time frame that is wholly unacceptable for most enterprises.
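The arithmetic is easy to check. The sketch below is a back-of-the-envelope estimate only; the 75% effective-throughput factor is an assumption introduced here to account for TCP/IP and protocol overhead, not a figure from the article.

```python
# Back-of-the-envelope estimate of a bulk cloud-to-cloud transfer.
DATA_SIZE_BYTES = 1 * 10**12       # 1 TB (decimal terabyte)
LINK_RATE_BPS = 20 * 10**6         # 20 megabits per second (cited average)
EFFECTIVE_THROUGHPUT = 0.75        # assumed fraction left after protocol overhead

raw_seconds = (DATA_SIZE_BYTES * 8) / LINK_RATE_BPS
effective_seconds = raw_seconds / EFFECTIVE_THROUGHPUT

print(f"Raw transfer time:      {raw_seconds / 86400:.1f} days")        # ~4.6 days
print(f"With protocol overhead: {effective_seconds / 86400:.1f} days")  # ~6.2 days
```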

In addition, data governance best practices require that enterprises maintain adequate control over any data housed in public clouds—just as they must for on-premises data. Because a hybrid cloud requires an ability to replicate and migrate data from one cloud provider to another, it is critical to retain control of the data at all times—in essence, keeping ownership of the data.

True cloud data portability requires a common data transfer protocol that is fast, efficient and widely available across multiple end points in a hybrid cloud. Broad support of such a protocol enables management tools to invoke common API system calls when migrating data between otherwise disparate cloud platforms.
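To make that idea concrete, the sketch below outlines what such a common interface might look like. It is a hypothetical illustration only; the names (CloudEndpoint, read_blocks, write_blocks, migrate) are invented for this example and do not correspond to any particular vendor API.

```python
from abc import ABC, abstractmethod
from typing import Iterator, List


class CloudEndpoint(ABC):
    """Hypothetical common interface that every participating endpoint
    (public cloud, service provider, or on-premises system) would
    implement, so one set of management calls can drive replication
    between otherwise disparate platforms."""

    @abstractmethod
    def list_volumes(self) -> List[str]:
        """Return identifiers for the volumes this endpoint exposes."""

    @abstractmethod
    def read_blocks(self, volume: str) -> Iterator[bytes]:
        """Stream a volume's contents as fixed-size blocks."""

    @abstractmethod
    def write_blocks(self, volume: str, blocks: Iterator[bytes]) -> None:
        """Write a stream of blocks into a volume on this endpoint."""


def migrate(source: CloudEndpoint, target: CloudEndpoint, volume: str) -> None:
    """Move one volume between endpoints using only the common API,
    regardless of which cloud platform sits behind either side."""
    target.write_blocks(volume, source.read_blocks(volume))
```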

The fabric-connected cloud
Data Fabric is a vision for delivering uniform services across mixed cloud environments. The multi-cloud capabilities of a data fabric provide enterprise IT organizations with a choice of cloud environments for running applications. This flexibility enables a broader range of services to meet application needs and business requirements. Assets can be protected and access can be maintained if a particular cloud is compromised, and vendor lock-in can be avoided—all while managing data in a consistent and seamless way.

Figure 1 - Data Fabric Interconnected Cloud

A NetApp-powered data fabric enables fabric-connected clouds in three ways:

  • A common API framework enables data migration and replication between cloud provider endpoints
  • Space-efficient replication uses block-level deduplication, compression, and virtual cloning to reduce cloud-to-cloud data transfer times by 50% or more (illustrated in the sketch after this list)
  • Applications can be connected or redirected in seconds across cloud services such as AWS, Azure, and SoftLayer
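The space-efficiency point in the second bullet can be illustrated with a simplified sketch. This is not NetApp's replication engine; it is a minimal example, assuming a fixed 4 KB block size and SHA-256 content hashes, of why sending only blocks the destination has not already seen shrinks the amount of data that crosses the wire.

```python
import hashlib
from typing import Iterable, Iterator, Set

BLOCK_SIZE = 4096  # assumed fixed block size, for illustration only


def unique_blocks(blocks: Iterable[bytes], on_target: Set[str]) -> Iterator[bytes]:
    """Yield only blocks whose content hash the destination does not
    already hold, so duplicate data never crosses the wire. Production
    replication engines layer compression, cloning, and change tracking
    on top of this basic idea."""
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in on_target:
            on_target.add(digest)
            yield block


# Example: a repetitive dataset in which most blocks are duplicates.
source_blocks = [b"A" * BLOCK_SIZE] * 8 + [b"B" * BLOCK_SIZE] * 2
transferred = list(unique_blocks(source_blocks, on_target=set()))
print(f"Blocks in source: {len(source_blocks)}, blocks sent: {len(transferred)}")
# Prints: Blocks in source: 10, blocks sent: 2
```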

Imagine a cloud environment in which all of the data management capabilities are consistent and connected into a coherent, integrated and compatible system—in essence, a single virtual cloud with a unified data structure. That is precisely NetApp’s implementation of the data fabric.

With a unified set of data services spanning multiple clouds, it is no longer necessary to keep applications locked in siloed cloud environments; rather, applications become portable as they (and their data) seamlessly move between on-premises and off-premises clouds.

1. IDC, Amazon Web Services IaaS (Storage): Leading Use Cases and Deployment Challenges, June 2015, Doc #256807.


No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo and Go further, faster, are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Click here for a full listing of NetApp trademarks. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.
