All data is not created equal. Data importance continually varies according to when and where it is created and needed. Yet because of staff and legacy technology limitations, many IT departments treat all data the same. The result is higher storage costs, performance drags and, most important, limits on executive analysis and decision-making.
The answer to the problem is non-disruptive data mobility.
The need for data mobility has increased not only because of the information explosion but also because of trends transforming IT. These trends include server and storage consolidation, upgrade difficulties associated with lease rollovers, the purchase of new equipment such as arrays, and the rising number of SANs (storage area networks), reflecting the need for bandwidth and high availability.
Limits on storage functionality
IT departments are familiar with the complexity resulting from the unending fluidity of application requirements and storage capabilities. This complexity increases the likelihood of hidden management concerns and additional costs, such as having to keep unnecessary data on expensive storage devices. Increasing storage capacity has always been a difficult task. It can create manageability issues or may entail the costly deployment of larger servers. Complexity also creates unintended consequences and complicates planning. It limits storage tuning, such as routine data transfers for performance. Enforcing SLAs (service level agreements) and ensuring accurate input for bill-backs also becomes harder.
Additionally, while storage manufacturers have made phenomenal strides in reducing cost-per-megabyte and access times, disk management and related capabilities have lagged behind. Operating systems cannot recognize the LVMs (logical volume managers) created by other operating systems. Applications compete for storage at the expense of other applications. Application requirements and storage capacity rarely mesh. Disks are either too small or too large for the proposed location, or capacity exists outside the application's reach. Inevitable crashes impact applications and users.
Routine management becomes harder with greater risks. Data migration required for consolidation or equipment replacement requires complex planning. The migration effort is often stressful, awkward and fault-prone, commonly causing downtime, performance lags and user disruption. An IT manager's worst nightmare is irrecoverable errors during routine load balancing or hardware upgrades.
Advantages of path management
What can eliminate these common data migration difficulties and allow IT executives to concentrate on more strategic activities? In the short term, the answer is path management. In the long run, however, IT executives need to keep their eyes on true data mobility.
Based on a logical view of storage rather than the traditional physical view, path management masks the complexity of heterogeneous storage environments by performing several functions. In addition to managing blocks of logically contiguous data that appear as disk drives to most systems, path management provides high availability by managing disk storage using volumes, volume sets and classes. System data is separated from such user data as databases and files. Advanced path management can deliver disk usage analysis and dynamically reconfigure disk storage, even while the data remains available to users.
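To make the logical view concrete, here is a minimal sketch, with invented class and function names rather than any real product's API, of how a volume manager can present one contiguous logical volume backed by extents scattered across several physical disks:

```python
# Illustrative only: a logical volume maps contiguous logical blocks onto
# extents on physical disks, so hosts see "a disk" regardless of layout.

class PhysicalDisk:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks          # total capacity in blocks
        self.free = blocks            # unallocated blocks remaining

class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []             # list of (disk, start, length)

    def size(self):
        return sum(length for _, _, length in self.extents)

def allocate(volume, disks, blocks_needed):
    """Back a logical volume with whatever free space the pool offers."""
    for disk in disks:
        if blocks_needed == 0:
            break
        take = min(disk.free, blocks_needed)
        if take:
            # next free region starts where prior allocations ended
            volume.extents.append((disk, disk.blocks - disk.free, take))
            disk.free -= take
            blocks_needed -= take
    if blocks_needed:
        raise RuntimeError("insufficient capacity in pool")
    return volume
```

The point of the sketch is that a volume can be larger than any single disk: two 100-block disks can back a 150-block volume, which is exactly the decoupling of logical from physical storage the article describes.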
Path management software, which can include volume management, failover, load balancing and provisioning, plays a vital role in storage consolidation and supporting data center growth. It delivers higher levels of data availability and protection in addition to fewer IT maintenance requirements. No longer is it necessary to move files around to effectively utilize new storage; instead, new volumes can be assigned on the fly -- even during business hours. Storage consolidation through path management also makes it easier to keep customer and other data current and consistent. Logical storage can also be tuned to match specific application requirements, such as database copying for data mining. By making data movement and consolidation virtually transparent to users, applications and host processors, path management is extremely useful in such areas as disaster recovery, serverless backups, non-disruptive application testing and data consolidation projects.
While path management is important in heterogeneous host environments, it is particularly advantageous in establishing and maintaining SANs, especially those that incorporate Windows NT. When a Windows NT system boots, it automatically attaches to every storage volume it sees, even if it is assigned to another system. Path management helps ensure that Windows NT plays well with other systems by not violating storage assignments.
Path management also eases such continuous IT concerns as scalability and security. While adding boxes can quickly lead to unmanageability, path management enables growth and consolidation of storage across multiple spindles; uptime is increased, especially during physical transfers. If any performance or integrity issues are detected during transfers, path management software can immediately roll back to the original configuration without impacting data or applications. The process takes a fraction of the time other methods can take. Additionally, failing drives can be replaced without user disruption.
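The rollback behavior described above can be sketched as follows. This is a toy model, not real migration software: the original copy stays authoritative while data is copied to the new location, and the mapping switches over only after verification succeeds, so any failure leaves the original configuration untouched:

```python
# Hedged sketch of copy-then-switch migration with rollback on failure.
# volume_map maps a volume name to its backing block store (a dict here).

def migrate(volume_map, name, old, new, verify):
    """Copy blocks from old to new; switch the mapping only if verify passes."""
    try:
        for block, data in old.items():
            new[block] = data
        if not verify(old, new):
            raise RuntimeError("verification failed")
    except Exception:
        new.clear()                   # discard the partial copy
        return False                  # original mapping remains in place
    volume_map[name] = new            # switch-over happens only at the end
    return True
```

Because the switch-over is the final step, a failed or interrupted transfer never leaves applications pointing at incomplete data, which is the property that makes such migrations non-disruptive.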
Path management supports enhanced security, since specific volumes can be assigned to each host, eliminating the risk of unauthorized access to data "owned" by other hosts.
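A minimal sketch of that assignment model, using invented names rather than any vendor's interface, shows why a host (such as a booting Windows NT system) cannot attach to volumes it does not own:

```python
# Illustrative volume-to-host assignment ("masking"): each host sees only
# the volumes explicitly assigned to it.

class Assignments:
    def __init__(self):
        self._owner = {}              # volume name -> owning host

    def assign(self, volume, host):
        # reassignment to a different host is refused, not silently allowed
        if self._owner.get(volume, host) != host:
            raise PermissionError(
                f"{volume} already owned by {self._owner[volume]}")
        self._owner[volume] = host

    def visible_to(self, host):
        """The only volumes this host is permitted to attach to."""
        return sorted(v for v, h in self._owner.items() if h == host)
```

In this model a host's view of storage is computed from the assignment table, so even a system that tries to attach to everything it sees is limited to its own volumes.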
Future of data mobility
While path management is essential today, data mobility is vital for the data center of the future. Data mobility not only masks and automates the complexities of data migration -- regardless of platform or legacy technology -- but also delivers business value by enabling policy-based data movement. It matches the vision of many organizations to disaggregate servers and storage and interconnect them via multiple interoperating, non-stop fabrics.
Organizations today are seeking to move away from application dependency to data independence. One implication of this move is that storage must become application transparent, or accessible by any application according to hierarchical requirements. Storage must also be redeployed in real time according to changing application requirements within companies and across industries. As a result, data mobility is required to insulate applications from both storage location and data movement details.
Data mobility serves as a foundation for lifecycle data management. Currently, a lot of data is stored on media that does not reflect data aging, latency or accessibility requirements, which raises costs and diminishes agility. Although data centers pay homage to hierarchical storage management (HSM), most HSM activities languish because of the extensive planning, coordination with backup programs and ongoing maintenance required. Data mobility ensures that data is automatically deployed to the lowest-cost storage class that still meets business needs for data immediacy, such as content-addressed storage.
Data mobility that balances storage, continuity and I/O requirements will be both business and IT policy-based. Explicit rules can prioritize users or resources, based on constraints or events. For example, data can automatically be moved to long-term storage once it has passed thresholds for either transparent access or recovery. Additionally, when access times or other benchmarks are not met, quality-of-service (QoS) requirements can automatically drive the allocation of capacity rather than the device-level allocation common today. These capabilities enable IT managers to respond only to exceptional issues, such as when recovery times are exceeded, instead of arranging routine data transfers.
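The rule-driven placement described above can be sketched in a few lines. The tier names and thresholds here are invented for illustration; the point is that explicit, ordered policies over data attributes decide placement, turning tiering into a background routine:

```python
# Toy policy engine: ordered (predicate, tier) rules examine data
# attributes and return a storage class. Names/thresholds are examples.

POLICIES = [
    (lambda meta: meta["days_since_access"] > 365, "archive"),
    (lambda meta: meta["days_since_access"] > 30,  "capacity"),
    (lambda meta: True,                            "performance"),
]

def place(meta):
    """Return the storage class chosen by the first matching policy rule."""
    for predicate, tier in POLICIES:
        if predicate(meta):
            return tier
```

Because the rules are data, not code baked into an application, administrators can change thresholds or add tiers without touching the applications whose data is being moved.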
IT departments will become more efficient with data mobility. Hardware and application upgrades are non-disruptive, and more storage can be managed with fewer resources. Path management also becomes much easier; it both masks the heterogeneity that characterizes SAN environments and increases uptime by facilitating multi-pathed connectivity between applications and storage. Reporting and analysis become simpler, enabling easier compliance with SLAs.
For too long, storage has been viewed in terms of devices. While that was appropriate to the needs and technologies of an earlier era, today it raises costs, complicates management and adds risk to upgrades and availability requirements. Business requirements now make it imperative to view storage in terms of a process that balances data inequality and business requirements, especially in SAN environments. Thanks to its ability to shape storage utilization according to enterprise demands without affecting users, path management represents an important step toward employing storage as a process. Ultimately, advances in data mobility will extend that process, not only automatically meeting user and application demands but also reflecting the changing nature of data throughout its lifecycle. With data mobility, storage management becomes a policy-based background routine instead of a risky, time-consuming balancing act.
About the author: Chris Gahagan is Senior Vice President of Storage Infrastructure Software
at EMC Corp., Hopkinton, Mass.
This was first published in April 2003