Feeling the data migration pain

The explosion in data is making the migration of data from legacy hardware and software to newly installed products increasingly painful. Find out here what users and vendors can do to combat the problem.

Dr. Geoff Barrall
CTO, BlueArc Corporation
Dr. Barrall is the CTO, executive vice president and co-founder of BlueArc Corporation and the principal architect of its core technology, the SiliconServer Architecture. Prior to joining BlueArc, Dr. Barrall founded four other ventures, including one of the first Fast Ethernet companies and a successful UK consultancy business. Through the consultancy, he was involved in introducing innovative networking products, including Packeteer and NetScout, into UK markets. Dr. Barrall received his PhD in Cybernetics from the University of Reading in 1993.

Analysts' technology forecasts can vary from the extremely conservative to the bullishly optimistic, depending on their interpretation of customer feedback and industry expectations. However, one area they all agree on is that demand for storage continues to expand at a rapid pace. The resulting proliferation of servers and storage infrastructure, and the complexity it brings, creates additional problems, one of the most dramatic being data migration.

With a typical product refresh every 18 to 24 months, it's not uncommon for a customer's storage infrastructure to include two, three or more product lines from a single vendor. This leaves the customer with a choice: maintain multiple servers with vastly differing performance and scalability levels, or move data from the older systems to the newest model.

Only a few years ago, migrating hundreds of gigabytes in an afternoon or overnight wasn't a serious impediment to progress. In today's environment, however, with capacities breaking into the tens and hundreds of terabytes, migrating that data can add up to several days or even weeks of downtime.
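A rough back-of-envelope calculation puts this in perspective. The Python sketch below estimates raw copy time at an assumed sustained throughput of 50 MB/s; that rate is purely hypothetical, and real migrations vary widely with network, protocol and file-size mix.

```python
# Rough migration-time estimate. The 50 MB/s sustained copy rate is an
# assumption for illustration; real throughput depends on the environment.

def migration_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to copy capacity_tb terabytes at throughput_mb_s MB/s."""
    total_mb = capacity_tb * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
    return total_mb / throughput_mb_s / 3600  # seconds -> hours

for capacity_tb in (0.5, 10, 100):
    hours = migration_hours(capacity_tb, throughput_mb_s=50)
    print(f"{capacity_tb:6.1f} TB -> {hours:7.1f} hours ({hours / 24:4.1f} days)")
```

At that rate, half a terabyte fits comfortably in an overnight window, but 100 TB works out to roughly three weeks of raw copy time, before verification, cut-over and any re-copies are counted.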

So what does this mean for customers' storage strategies? When purchasing storage, customers need to consider not only today's needs, but also tomorrow's. How are they going to deal with the unceasing explosion of data, its management and its eventual migration? Can they afford costly forklift upgrades every 18 to 24 months? Or risk littering their data centers with outdated systems that are probably incapable of meeting today's requirements?

Let's take a look at the reality storage customers often face today. Through my many conversations with customers, and in our own company's analysis of fulfilled orders, we are seeing a doubling of real storage shipped every year. Coupled with correspondingly decreasing costs, these exponentially growing demands would be fine if storage technologies were able to keep pace. However, the maximum performance and scalability of most of today's storage technologies are expanding at a much slower rate.

As platform technologies lag behind in overall system performance and scalability, the typical system administrator is left with only two choices: accept a progressive decrease in overall system performance, or accept an increase in the number of servers. And while some storage vendors may claim the ability to scale capacity to meet customer requirements, it is uncommon to see these systems populated to their listed peak capacity. After a certain point, they simply run out of horsepower, whether at 40% of capacity or at 80%.
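The widening gap compounds quickly. The sketch below projects it using the yearly doubling of demand observed above, against a hypothetical 40% annual improvement in what a single server can deliver; the second figure is an assumption chosen only to illustrate the shape of the problem.

```python
import math

# Demand doubles yearly (per the shipment data above); the 40%/yr gain in
# per-server capability is a hypothetical figure for illustration only.

demand = 1.0       # normalized storage/performance demand, year 0
per_server = 1.0   # normalized capability of a single server, year 0

for year in range(6):
    servers = math.ceil(demand / per_server)
    print(f"year {year}: demand {demand:4.1f}x, "
          f"per-server capability {per_server:4.2f}x, "
          f"servers needed: {servers}")
    demand *= 2.0
    per_server *= 1.4
```

Under these assumptions the server count roughly doubles every two to three years, and every added server is another box to manage and, eventually, another migration to perform.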

Rather than stringing together point products, the logical answer is for customers to opt for storage solutions that can handle the massive data growth rates forecast for this year and beyond, and that provide the performance necessary to meet their requirements. Storage purchases that do not address this inevitability doom customers to future management headaches and downtime that will negatively impact their business.

In turn, storage vendors need to provide solutions that scale to customers' performance and capacity needs, both for today's environments and for tomorrow's. Vendors must also offer seamless transitions as technology improvements become available. Together, these changes would reduce or eliminate the pain of data migration and server proliferation that can cripple even the most robust and redundant infrastructures.

To combat this problem, some vendors have introduced solutions built around a global file system, which allows the data within a single file system to be accessed by multiple servers. While this approach offers higher levels of scalability, it does not address the multiple servers needed to deliver the required bandwidth and performance. Each server still requires individual management and maintenance, introducing a multitude of point products that further complicate the solution. Point products, whether strung together in a global file system or isolated as storage islands in a customer's infrastructure, guarantee the eventual need to migrate data to a new system or systems.

Analysts, vendors and customers all agree that storage will continue to grow. The time has come to prepare for the consequences of today's actions.


This was first published in March 2004
