reports in with his predictions for 2004. At the top of his list is something he calls "programmable storage," and he explains how current technologies will converge to make that happen. Chris also predicts that we'll finally see some benefits from SNIA's standardization efforts, with virtualization at the fabric level and SMI-S technology making storage management simpler.
Also, take a moment to go back in time and look at Chris' 2003 predictions to see how they played out.
As an example, I found a few companies that actually migrated the entire core of their SAN to an iSCSI solution. Using intelligent iSCSI gateways as the core fabric switches between the HBAs on the hosts and their Fibre Channel-based storage arrays, these companies leveraged their existing hardware and IP experience to save costs.
These new, faster intelligent switches will be able to "crack" open the FC packets and examine their contents, which enables the creation of content-based routing policies.
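To make the idea concrete, here is a minimal sketch of what a content-based routing policy might look like. The field names, pool names, and rules are illustrative assumptions, not any real switch's API; a production switch would apply logic like this in hardware against fields extracted from the cracked-open packet.

```python
# Hypothetical content-based routing policy, as an intelligent switch
# might apply after inspecting the contents of an FC packet.
# All field names and pool names below are illustrative assumptions.

def route_packet(packet: dict) -> str:
    """Pick a destination storage pool based on packet contents."""
    if packet.get("app") == "oltp_database":
        return "high_performance_pool"   # latency-sensitive traffic
    if packet.get("operation") == "backup":
        return "low_cost_pool"           # bulk sequential writes
    return "default_pool"                # everything else

# Example: a backup write gets steered to cheaper storage.
print(route_packet({"app": "backup_agent", "operation": "backup"}))
# low_cost_pool
```

The point is that once the switch can see inside the packet, routing decisions can be driven by what the data is, not just where it is addressed.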
Packet/block level virtualization
Forward-thinking IT departments do not just want to provide low-cost storage to their users. They want to provide "pools" of storage with different cost structures and different service levels assigned to them -- the "right" storage, based on the requirements of each application. Wouldn't it be cool if all your data magically migrated itself to different classes of storage based on its age, frequency of access, or even regulatory needs? Virtualization allows for the creation of pools of storage from multiple vendors, using combinations of high-performance storage for production applications and lower-cost pools of storage (ATA disks, tape or similar gear) for backup and data retention. Packet- and block-level virtualization, in conjunction with intelligent fabrics, will eliminate the headaches of data migration, reduce reliance on proprietary storage-based data replication solutions, and allow the creation of business policies that can be enforced at the individual block or packet level of the data stream. Block-level virtualization will also enable "thin provisioning," where every server thinks it has a huge pool of storage available to it but is only allocated what it actually uses. Thin provisioning will eliminate the need to manage storage growth at the server level.
We will see virtualization solutions in the form of both hardware and software, and they will be implemented at all three layers of the SAN: the host level, the fabric level and the storage level.
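The thin provisioning idea above can be sketched in a few lines. This is an illustrative model under stated assumptions, not any vendor's implementation: the server sees a large virtual volume, but physical blocks are allocated only when first written.

```python
# Minimal thin provisioning sketch: the virtual size is what the server
# "sees"; physical capacity is consumed only on first write to a block.
# Class and method names are illustrative assumptions.

class ThinVolume:
    def __init__(self, virtual_size_gb: int, block_size_gb: int = 1):
        self.virtual_size_gb = virtual_size_gb  # advertised to the server
        self.block_size_gb = block_size_gb
        self.allocated = {}                     # block index -> data

    def write(self, block: int, data: bytes) -> None:
        if block * self.block_size_gb >= self.virtual_size_gb:
            raise ValueError("write beyond virtual size")
        self.allocated[block] = data            # allocate on first touch

    @property
    def used_gb(self) -> int:
        """Physical capacity actually consumed."""
        return len(self.allocated) * self.block_size_gb

vol = ThinVolume(virtual_size_gb=1000)  # server thinks it has 1 TB
vol.write(0, b"db header")
vol.write(7, b"log segment")
print(vol.virtual_size_gb, vol.used_gb)
# 1000 2
```

The gap between the advertised 1000 GB and the 2 GB actually consumed is exactly what lets administrators stop pre-allocating for growth at each server.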
Object model data management
Everything needs to be managed from the perspective of the application.
Everyone wants the ability to monitor the data stream and create a policy that automatically eliminates stale, useless, and non-essential data. They also want to capture information about the content of all that data so they can make better business decisions about where the data should be stored. In order to store information more efficiently, these are the questions that need to be asked:
- What data is frequently accessed?
- What data is stale, and is a good candidate to be moved to archive type storage?
- Are there hidden gold nuggets of customer information that can be gleaned from the content of my files to help drive new business?
- Which data types should be stored on expensive high-end storage, and which on tape or cheap disks?
- What are the performance requirements for each data type?
- When was everything last backed up?
- Do I need to actually back everything up or can I get rid of a lot of stale data?
- Which data is mandated to be kept by regulatory requirements?
- What are the properties of specific storage arrays?
- What methods are available within a particular storage array?
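A policy engine answering the first two questions above -- which data is hot, and which is stale enough to archive -- might look something like the following sketch. The tier names and age thresholds are assumptions chosen for illustration; a real policy would be tuned to the business and regulatory requirements discussed here.

```python
# Hypothetical age-based classification: map a file's last access time
# to a storage tier. Thresholds and tier names are illustrative.

from datetime import datetime, timedelta

def classify(last_access: datetime, now: datetime) -> str:
    """Return the storage tier a file belongs on, given its age."""
    age = now - last_access
    if age < timedelta(days=30):
        return "high_end_disk"   # hot data stays on fast storage
    if age < timedelta(days=365):
        return "cheap_disk"      # warm data moves to ATA-class disk
    return "tape_archive"        # stale data is archived

now = datetime(2004, 1, 15)
print(classify(datetime(2004, 1, 10), now))  # high_end_disk
print(classify(datetime(2002, 6, 1), now))   # tape_archive
```

Run periodically against file metadata, a classifier like this is what would let data "magically migrate itself" to the right class of storage.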