Look for the latest data migration automation before you start any heavy lifting.
IT environments are growing in complexity as companies transform their physical infrastructures to accommodate today's rapidly changing business needs. The goal is to create highly available and dynamic platforms to support service-oriented architectures (SOA), composite applications and virtualization services. As part of that process, server, switch and storage technologies are regularly upgraded to handle increased demands. And those upgrades come with data migrations. Data may move from one storage array to another; direct-attached storage (DAS) may move to networked storage environments; or you might have to ship all of your data to a new offsite data center.
Why is data migration such a dreaded task? Because it's still a very manual process. Consider the following:
Small-scale migration: An application needs additional storage, but currently resides on a limited array. The application's data needs to be migrated to an array with more available storage. This is typically a three-week process, with multiple maintenance windows.
Large-scale migration: Multiple storage arrays, or an entire data center, must be migrated. Typical engagements involve petabytes of data, and could include hundreds or thousands of servers. You'll most likely have consultants crawling all over your data center, and the project could last six to nine months (or longer).
There are numerous problems associated with manual data collection, analysis and planning, including a greater chance of recording and propagating errors, longer project timeframes, and possibly more unplanned downtime or lengthy cleanups. The risk is compounded in highly dynamic environments, where any change can render a configuration obsolete. That can force some servers to be dropped from the scheduled migration; once dropped, those servers typically aren't picked up again until the end of the project, which means the old storage infrastructure must remain operational in the meantime. Completing migrations in a timely manner is therefore important. That's especially true if the old equipment is leased, as additional charges usually apply for every extra day it remains onsite.
Reliance on manual collection, configuration and migration results in longer projects with greater risks and higher costs. Leveraging technology to automate the process is therefore required to handle major data migration projects, as well as day-to-day provisioning activities.
A better way to migrate data
There are products that can change your data migration experience. Several companies--IBM (Softek), Incipient, Informatica and SANpulse Technologies, for example--have recognized the current shortcomings associated with manual data migrations and developed products in response. IBM purchased Softek for its host-based data mobility software that can be leveraged in open systems and mainframe environments and, depending on the need, can be combined with its global services for a complete solution. Incipient and Informatica recently released software solutions to aid in data migrations. The software can be leveraged internally for smaller data migrations or by a professional services firm for larger engagements. SANpulse provides an end-to-end service with an agnostic approach and, at the time of this writing, is being leveraged by EMC.
In general, leveraging an application from one of these vendors has the potential to simplify data migrations. For example, a New York City financial services company had a data migration project consisting of approximately 300TB of storage and more than 600 hosts. It chose to use SANpulse because of its automation capabilities.
Four months later, the migration project was complete, all of the data and hosts were migrated, and there were no additional stragglers. The firm estimated the project would have taken at least six months using its old methods and would have required additional resources. According to users at that company, the discovery process alone would previously have taken six to eight weeks and an army of support people. By leveraging SANpulse's SANlogics technology platform, this phase took only two engineers and three to four weeks. The same firm is evaluating software from Incipient to accelerate storage provisioning times by using its software to perform smaller array migrations.
Four crucial stages
Stage 1: Discovery and analysis. During this phase, a comprehensive inventory of all SAN components is required; it's important to determine the relationship between the hosts and their SAN logical units, logical volumes and physical back-end devices. Once the inventory has been established, all of the hardware and software components need to be checked for compatibility.
This process offers ample opportunity for human error, and automation is an obvious alternative. One of the greatest challenges is collecting and correlating information obtained from multiple sources. Ensuring this data is accurate is essential when analyzing it to determine whether any remediation, such as firmware upgrades or software revision changes, is required.
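To illustrate the correlation and compatibility-checking problem, here is a minimal Python sketch. Everything in it is an invented assumption for illustration: the two discovery sources, the record fields and the minimum firmware level would all come from your actual tools and compatibility matrix, not from any vendor's real data model.

```python
# Illustrative sketch: correlate host records from two discovery sources
# (a SAN scan and host-side agents) and flag HBA firmware below a minimum
# supported level. Field names and versions are assumptions, not real APIs.

MIN_HBA_FIRMWARE = (2, 1, 0)  # assumed floor from a compatibility matrix

def parse_version(text):
    """Turn a dotted version string like '2.0.3' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def correlate(san_scan, host_agents):
    """Merge per-host records keyed by hostname; list hosts needing work."""
    merged = {}
    for record in san_scan:
        merged[record["host"]] = dict(record)
    for record in host_agents:
        merged.setdefault(record["host"], {}).update(record)
    remediation = [
        host for host, info in merged.items()
        if parse_version(info.get("hba_firmware", "0")) < MIN_HBA_FIRMWARE
    ]
    return merged, sorted(remediation)
```

The point of the sketch is the merge step: once every source keys its records the same way, stale or conflicting entries surface mechanically instead of in a spreadsheet review.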
Stage 2: Planning. Now a detailed data migration plan can be formed. The plan should include a list of servers scheduled for migration, a timetable outlining the schedule, a configuration of the targeted storage devices, and a list of the upgrades necessary to bring any hardware or software up to acceptable levels. Automation at this stage enables greater flexibility later, and will allow you to add or remove servers at the last minute by requiring only a simple verification step prior to the migration. Don't shortchange this step; you want to be able to monitor your migration against timetables and checklists.
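The add-or-remove flexibility described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the plan structure, the capacity check, and the idea of verification as a single gating pass are all assumptions, not a description of any shipping product.

```python
# Illustrative sketch: a migration plan that tolerates last-minute server
# additions and removals, gated by a lightweight verification pass.
# Class and field names are invented for illustration.

class MigrationPlan:
    def __init__(self, target_array):
        self.target_array = target_array
        self.servers = {}  # server name -> storage needed (GB)

    def add_server(self, name, required_gb):
        self.servers[name] = required_gb

    def remove_server(self, name):
        self.servers.pop(name, None)

    def verify(self, free_gb_on_target):
        """Return the servers that fit target capacity, in plan order."""
        scheduled, remaining = [], free_gb_on_target
        for name, need in self.servers.items():
            if need <= remaining:
                scheduled.append(name)
                remaining -= need
        return scheduled
```

Because verification is cheap to rerun, a last-minute change costs one check rather than a replanning cycle, which is the flexibility the planning stage is after.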
Stage 3: Migration. Data from the old system is mapped to a new system, usually leveraging some type of migration technology. These products come from storage vendors or third parties, and may be array-, host- or network-based appliances or intelligent switches. The most important feature of this technology is the ability to conduct "online" migrations; optimally, all data migrations would be accomplished while the systems remain available. The migration automation technology should also let you reverse the migration process in the event of a software or infrastructure failure. After the migration has been completed, servers and hosts need to be remapped to the new volumes, and servers need to be rebooted so applications know where the data now resides. Automating the scripts required to establish these new paths greatly accelerates the switchover and reduces human error.
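A minimal Python sketch of the remapping step, under stated assumptions: the command strings are an invented placeholder syntax, not any real array or switch CLI, and the old-to-new LUN map is assumed to come out of the planning stage. The useful idea is that generating the rollback commands alongside the forward commands is what makes the cutover reversible.

```python
# Illustrative sketch: generate cutover commands from an old-to-new LUN map,
# and build the inverse command list at the same time so the migration can
# be reversed on failure. Command syntax is invented for illustration.

def build_remap(lun_map):
    """Return (forward_commands, rollback_commands) for a cutover.

    lun_map maps each host name to an (old_lun, new_lun) pair.
    Rollback commands are emitted in reverse order of the forward pass.
    """
    forward, rollback = [], []
    for host, (old_lun, new_lun) in lun_map.items():
        forward.append(f"map {host} {new_lun}; unmap {host} {old_lun}")
        rollback.append(f"map {host} {old_lun}; unmap {host} {new_lun}")
    return forward, list(reversed(rollback))
```

Generated scripts of this kind are also auditable before the maintenance window opens, which is where most of the error reduction comes from.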
Stage 4: Cleanup. The post-migration cleanup usually consists of decommissioning retired equipment and deleting the old network paths to ensure that the retired equipment is no longer visible to servers and hosts. This phase can also be used to pick up servers that were dropped during the migration, and to resolve any remediation issues that came up during the move. Lengthy cleanup times can result in additional maintenance or outage windows. If implemented correctly, automation technology should enable cleanup to be done on a weekly basis instead of a costlier, bulk cleanup at the end.
The real question is this: With so much attention being paid to the next generation of data centers, have you considered that there might be a better way to migrate data? If you're planning a technology refresh, take a close look at the data migration proposal. Challenge your incumbent storage vendor, storage software vendor or third-party consulting firm to deliver a better solution. Ask them if they've leveraged recent advances in data migration solutions to reduce the time, risk and cost for your project. Don't be content with the old, painful and costly methods. It could mean substantial savings, accelerated timeframes and a much simpler chapter in your IT history.