This article can also be found in the Premium Editorial Download "Storage magazine: Inside the new Symmetrix DMX model offerings."
But first, what was the big holdup? Alas, the various factions took six months longer than they should have to finalize the iSCSI standard. They still smoked the Fibre Channel (FC) standards efforts any way you slice it.
Here's where we are in networked storage: FC block storage networks kick butt and rule the data center. So what's the next step? I'm glad you asked.
The next step is to look in the mirror and face facts, dude. You nailed the data center and that's cool, but have you noticed the mess outside the glass room? You know what I'm talking about--it's those 2500 tier two, three, or four servers that each have 100GB of disk in them that cause all the real management stress.
Maybe you only paid $3,800 for each of those machines that run SQL Server, Exchange and Notes, and act as CIFS/NFS servers doling out office files. But I hate to be the one to tell you this--that world has 250TB of data in it. I bet your utilization of that storage is about 20%. And I bet that while the value of the data center data is obvious, and may be way higher than that "other" data, the other data is probably still pretty important. Important enough to manage, keep up and keep organized. Oh, by the way, it's also the fastest growing data you have.
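The back-of-the-envelope math here is worth spelling out. A minimal sketch--the server count, disk size and 20% utilization figure are this column's estimates, not measurements:

```python
# Capacity math for the tier two/three/four server sprawl described above.
servers = 2500            # boxes outside the glass room
disk_per_server_gb = 100  # internal disk in each
utilization = 0.20        # estimated fraction actually used

total_tb = servers * disk_per_server_gb / 1000  # decimal TB
used_tb = total_tb * utilization

print(f"Raw capacity: {total_tb:.0f}TB")    # 250TB
print(f"Actually used: {used_tb:.0f}TB")    # 50TB -- ~200TB sits idle
```

In other words, four-fifths of that 250TB is stranded inside individual servers where no one else can use it--which is exactly the waste centralized storage attacks.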
Here's the iSCSI pitch--if I give you a free driver--and Microsoft will--you already have all the connection costs covered because you already plug into Ethernet. Then the only thing you need to build your "poor man's SAN" is some centralized storage. Poof, you just eliminated a huge chunk of your administrative/operational costs. You get all the benefits of the data center SAN for more of your data, and make your server administration easier because you remove the internal disk. Utilization, availability and the rest all go way up.
People have been pooh-poohing iSCSI adoption because "we need to wait for TCP/IP offload engines" or performance will be too slow. Horse hockey. Who cares about performance in that environment? The stuff is fast enough 90% of the time, and when it ain't, use FC or a TCP offload device.
No way can you tell me staying the same is a better alternative. The only question remaining is how the storage device makers respond. They'll have to deliver cheap, really easy-to-manage storage for that 250TB if they expect you to justify the upfront move. It shouldn't require anywhere near the skill level and cost of the data center storage admins--networking admins should be able to handle the management end.
I see an opportunity here for array people to change the way they do things and take what could be a market that is two to 10 times bigger than the data center market from a raw capacity perspective. It's no longer a question of if, but when. The first major vendor to get out there and evangelize the solution has the opportunity to rewrite the rules and set the market on its ear.
This was first published in February 2003