The classic scenario where enterprise storage technology filters down to small- and medium-sized companies is being turned on its ear, and some of the coolest developments are happening in mid-market systems.
If you can get past the clamor of the recent cloud buzz and other “new” storage technologies, you might notice some big changes in the mainstream storage market. And in a reversal of recent history, the change is filtering up from small- and medium-sized business (SMB) and small- and medium-sized enterprise (SME) customers, rather than following the usual pattern of new technologies chasing the enterprise first. As a result, mid-sized companies now have an enterprise-sized menu of data storage choices, many of which are poised to change business capabilities in big ways.
One example is the kind of scalability that makes storage more adaptable and cost-effective than ever before, which we call “adaptable capacity and performance.” This scale-out approach to storage allows companies to add more capacity or performance independently and as needed.
But adaptable capacity and performance is about more than just reacting to new demands. It can also make planning for and acquiring storage much easier, paving the way for a sustainable approach that does away with cyclical re-planning and replacement. Scalability offers the promise of pay-as-you-grow storage: you can start small with an infrastructure built for current needs and add storage as you need it. When new technology becomes available, “new-tech” storage nodes can be added to the existing system and the older ones phased out over time. And it’s all done nondisruptively.
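To make the pay-as-you-grow idea concrete, here's a minimal sketch of a scale-out pool where nodes join and retire over time. The `StoragePool` class, node names and capacities are all hypothetical, purely for illustration; real systems would also drain and rebalance data behind the scenes.

```python
# Hypothetical model of a pay-as-you-grow scale-out pool: nodes are
# added as needs grow, and aging nodes are retired nondisruptively.
class StoragePool:
    def __init__(self):
        self.nodes = {}          # node name -> usable capacity in TB

    def add_node(self, name, capacity_tb):
        """Join a new node; pool capacity grows immediately."""
        self.nodes[name] = capacity_tb

    def retire_node(self, name):
        """Phase out an older node (a real system drains its data first)."""
        self.nodes.pop(name, None)

    @property
    def capacity_tb(self):
        return sum(self.nodes.values())

pool = StoragePool()
pool.add_node("gen1-a", 10)      # start small, sized for current needs
pool.add_node("gen1-b", 10)
pool.add_node("gen2-a", 40)      # newer, denser node joins later
pool.retire_node("gen1-a")       # older node phased out over time
print(pool.capacity_tb)          # -> 50
```

The point of the model is that capacity planning becomes incremental: each purchase decision is a single node, not a forklift replacement of the whole array.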
A few block storage vendors have stood out in this area for some time; Dell EqualLogic and HP LeftHand are certainly leaders. They differentiated themselves by taking advantage of standardized hardware and focusing attention on software innovation for things like automated pooling, tiering and load balancing.
That combination of standard hardware and software innovation has evolved into “intelligent infrastructure integration,” which spans multiple levels. The most basic is hardware integration with the infrastructure: instead of dedicated storage locations, standard hardware and Ethernet connectivity let storage be installed and scaled anywhere, or even spread across different places for increased availability. The next is workload integration, in which sophisticated block management allows storage systems to peer into the data they store. In a server virtualization environment, this has let these systems connect storage features like optimization, protection and replication to virtual servers without kludgy workarounds.
These next-generation vendors have rapidly harnessed every new virtual infrastructure integration point from hypervisor vendors, including Citrix’s StorageLink and VMware’s vStorage APIs, such as VAAI. But the idea of intelligent infrastructure integration also includes physical integration. Industry standard hardware results in a form factor that looks and acts much like a server, which means adding storage in the infrastructure no longer requires dedicated storage racks or aisles.
A new crop of storage startups is also stirring things up. Scale Computing, for example, offers a highly scalable commodity-based clustered storage system. Scale Computing’s storage is scale-out and deeply integrated, and makes use of an intelligent block layout engine to create a single clustered storage system from many nodes, with RAID 10-like features built in and automatic rebalancing and mirroring of data as new nodes are added. Its systems are multiprotocol, supporting both block and file storage (iSCSI, CIFS and NFS).
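Scale Computing's actual block layout engine isn't public, but the automatic rebalancing such clustered systems perform can be illustrated generically. The sketch below uses rendezvous (highest-random-weight) hashing, a common placement technique, to show why adding a node moves only a fraction of blocks rather than reshuffling everything; the node and block names are invented for the example.

```python
import hashlib

# Generic illustration (not Scale Computing's actual engine) of automatic
# rebalancing in a clustered block store: with rendezvous hashing, each
# block is owned by the node scoring highest for it, so adding a fourth
# node relocates only the blocks that the new node now "wins".
def owner(block_id, nodes):
    """Map a block to the node with the highest hash score."""
    def score(node):
        return hashlib.md5(f"{node}:{block_id}".encode()).hexdigest()
    return max(nodes, key=score)

blocks = [f"block-{i}" for i in range(1000)]
before = {b: owner(b, ["node1", "node2", "node3"]) for b in blocks}
after = {b: owner(b, ["node1", "node2", "node3", "node4"]) for b in blocks}

moved = sum(1 for b in blocks if before[b] != after[b])
# Roughly a quarter of the blocks migrate, all of them to the new
# node; the remaining blocks stay exactly where they were.
print(moved)
```

Mirroring works on the same principle: the layout engine picks the top two scoring nodes per block, so a node failure or addition triggers targeted copies rather than a full rebuild.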
Storage innovation isn’t confined to the SMB/SME space. Companies like 3PAR, EMC Isilon, IBM, NetApp and Pillar are evidence that innovation still lives in the enterprise world. NetApp’s WAFL, for example, intelligently lays out data on disk and across different components of the storage subsystem, enabling ongoing storage optimization and deeper integration of storage capabilities.
The point is that it takes more than slick storage hardware to deliver intelligent data storage. For the coming private and public cloud architectures, intelligent storage will be a mandatory part of any solution that hopes to scale, stay efficient, remain highly available and deliver complex sets of functionality like granular partitioning, federation and multi-tenancy.
About the author:
Jeff Boles is a senior analyst at Taneja Group. He can be reached at firstname.lastname@example.org.