Storage is arguably the stickiest infrastructure in the data center, and users will always buy more of it. There's no dominant leader hogging a huge share of the market, and there's still measurable product differentiation. 3PAR doesn't compete with EqualLogic, and both are very different from Isilon. But even when you compare head-to-head competitors like 3PAR and EMC Corp.'s DMX, there are still major differences. 3PAR is far easier to use and has better capacity optimization technology, while EMC DMX scales to higher levels of performance, supports mainframes and has an interoperability matrix that would break your foot if it fell on it.
As we enter 2011, it's important to leave behind the heady enthusiasm of the major events in storage and get back to reality. I talk to IT professionals all the time, and their No. 1 priority isn't scale-out architecture or any other newfangled technology. Their job is making sure their storage systems work all the time without blowing up, slowing down or losing data. It turns out that even in this day and age, not all storage systems meet those fundamental standards all the time.
It's also important to understand that cache-coherent clusters and scale-out architectures aren't panaceas. Like any other technology, they come with their own set of pros and cons. All cache-coherent architectures carry trade-offs that invariably impact performance and management. If you're implementing a scale-out storage solution for performance, make sure it will work for your I/O workloads, because it isn't a black-and-white proposition. For example, Isilon is great for large files and streaming data, but not nearly as good for smaller, transaction-oriented I/O. Sharing a "brain" across many nodes creates overhead because those nodes must communicate constantly, and the more transactions that occur, the slower the system will respond. That's not a knock on Isilon by any means, but it's important to understand what it's great at and what it's not so great at.
One of the most important trends affecting storage is the unbridled growth of file data. In a recent report, IDC predicted that file data will eclipse all other data types several times over in the next few years. I agree with this based on what I'm seeing in the field. I'm working with companies that literally have petabytes of file storage, and new files continue to surface like the BP oil spill. That puts NetApp in the driver's seat and leaves Dell Inc., Hewlett-Packard Co., Hitachi Data Systems, IBM and Oracle Corp. at a major disadvantage. EMC has much more of a fighting chance with Isilon in its portfolio, but that raises the question: Is Isilon worth $2.2 billion? If you believe the lion's share of all networked storage capacity will be file, you bet it is.
How we deal with all that growth is an unavoidable issue. Throwing more storage at the problem isn't sustainable. That's why storage optimization is going to play an increasing role in how we manage storage in 2011. Tried-and-true technologies such as thin provisioning need to be implemented to a greater degree. Data compression and data deduplication will find their way into primary storage systems. And perhaps the most compelling "new" capability is automated storage tiering. EMC and Hitachi Data Systems have announced it, and 3PAR released its version in 2010. However, Compellent is the only storage system vendor with years of experience and thousands of customers behind this technology, so it's no surprise Dell scooped up this new cool kid on the block (pun intended). Automated tiering, if done efficiently and reliably, can significantly change the economics of storage.
I also predict that because storage is now "cool," so are the people that write about it . . . or maybe that's pushing it.
BIO: Tony Asaro is senior analyst and founder of Voices of IT.