Fusion-io Inc. came out of stealth last March with a storage hardware product consisting of a series of flash chips arranged in a RAID configuration on a standard server PCIe card.
Since then, the company has partnered on product development with major server vendors Hewlett-Packard (HP) Co. and IBM Corp. Dell Inc. hasn't officially engaged with the company, but founder Michael Dell recently revealed that he has personally invested in Fusion-io. The emerging vendor made headlines most recently when it hired Apple co-founder Steve Wozniak as its chief scientist.
Last September, Fusion-io rolled out a variation of its ioDrive product called ioSAN, which incorporates 10 Gigabit Ethernet network interfaces into the ioDrive card.
Fusion-io today disclosed that it has signed an OEM agreement with HP for the HP StorageWorks IO Accelerator, a NAND flash-based storage adapter based on Fusion-io's ioMemory card. The product is designed for HP BladeSystem c-Class x86 servers and can accommodate two or three Fusion-io cards, each with a capacity of 320 GB. HP and Fusion-io claim the system can deliver more than 100,000 IOPS, with 800 MBps read and 600 MBps write throughput.
David Flynn, Fusion-io's chief technology officer, sat down with SearchStorage last week to talk about how he sees solid-state storage replacing storage-area networks (SANs).
SearchStorage: There's a lot of talk in the solid-state storage world about how to provision solid-state storage so it's most cost-effective. How do you decide what data to move onto an ioDrive?
Flynn: On the order of 90% of the world's relational databases are less than 1 TB in size. What we've found with most of our customers is that it's just too easy and cost-effective not to put the entire database on the silicon.
It seems counterintuitive. There has been a whole lot of focus on 'How do you separate the fast from the slow?' The thing is, if you look at the storage-area network business, folks who buy 15K rpm Fibre Channel-attached SANs are almost always using only a fraction of the capacity they get for the stated purpose. If you buy a 10 TB SAN of shallow, very high-speed drives [for a 1 TB database], you're getting an effective utilization of one-tenth.
That has various implications. One is that the effective cost per gigabyte is really 10 times what [users] think of when they buy the SAN. Now if you talk to these guys they'll tell you 'Oh, I'm using the whole thing.' But if you tease into it, you'll find out that they're using the other 9 TB for backup or archival, not for the live data set. You could use a whole lot cheaper storage for backup.
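Flynn's utilization argument reduces to simple arithmetic. The sketch below works through it; the purchase price is a hypothetical illustration (the interview quotes no prices), while the 10 TB array and 1 TB live database come from his example:

```python
# Effective cost per GB of a SAN when only part of its capacity
# holds the live data set. The purchase price is an illustrative
# assumption; the capacities come from Flynn's example.
san_capacity_gb = 10_000   # 10 TB SAN of 15K rpm FC drives
live_data_gb = 1_000       # the 1 TB database actually being served
san_cost = 50_000.0        # hypothetical purchase price

nominal_cost_per_gb = san_cost / san_capacity_gb
# Only the live data set benefits from the fast spindles,
# so the effective cost is spread over 1 TB, not 10 TB.
effective_cost_per_gb = san_cost / live_data_gb

print(nominal_cost_per_gb)    # 5.0
print(effective_cost_per_gb)  # 50.0 -- 10x the nominal figure
```

With one-tenth utilization, the effective cost per gigabyte is ten times the sticker figure, which is the multiplier Flynn cites.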
NAND flash actually has higher capacity per cubic inch than even the deepest disk drives. Fifteen-thousand-rpm drives only go to 146 GB because they have to have wide track spacing. We can collapse a multiterabyte storage array into a single server. And the effective benefit of doing so gets huge, because each one of these [ioDrives] adds its own incremental performance. We can aggregate up past 1 million I/Os per second inside a system and 8-plus gigabytes per second of throughput. That's unattainable with an externally attached storage array, regardless of whether it has SSDs [solid-state disks] in it.
SearchStorage: That 1 million IOPS was part of Project Quicksilver with IBM, right?
Flynn: No, actually, that's in-house right now at our labs. We have a roughly 5U HP DL785 doing a million I/Os from one box. [Project] Quicksilver was multiple servers from IBM.
Flynn: This is just benchmarking -- SPC [Storage Performance Council]-type testing or Iometer, whatever test you want to run. But the type of actual workload people would use this for is data mart or data warehousing.
SearchStorage: Do you have any customers right now using this in production?
Flynn: One case is in the online transaction processing [OLTP] space, where the database is a couple hundred gigabytes for all their customers. They were able to put four cards inside the primary database server and run them mirrored, and put two cards in the secondary database server and run them striped. So they end up with triple redundancy, in that they have mirrors and a whole secondary server. They went from a shared-storage strategy to a shared-nothing strategy whereby the database is replicated from one server to the other without using shared storage. It turns out to be more reliable that way because they can tolerate a fault in the server. Before, they didn't have failover for the server.
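The layout Flynn describes can be sketched by counting usable capacity and data copies. This is a simplified model, assuming 320 GB per card (the capacity cited for the HP OEM product) and a simple two-way mirror on the primary:

```python
CARD_GB = 320  # per-card capacity cited for the HP IO Accelerator

def mirrored(cards: int) -> tuple[int, int]:
    """Two-way mirror: half the raw capacity usable, two copies of data."""
    return cards * CARD_GB // 2, 2

def striped(cards: int) -> tuple[int, int]:
    """Stripe: all raw capacity usable, a single copy of data."""
    return cards * CARD_GB, 1

primary_usable, primary_copies = mirrored(4)    # four cards, mirrored
secondary_usable, secondary_copies = striped(2) # two cards, striped

# Database-level replication from primary to secondary adds the
# third copy -- the "triple redundancy" Flynn mentions.
total_copies = primary_copies + secondary_copies

print(primary_usable, secondary_usable)  # 640 640
print(total_copies)                      # 3
```

Note that both servers end up with the same usable capacity (640 GB), comfortably above the "couple hundred gigabytes" database in the example, and the third copy comes from replication rather than shared storage.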
SearchStorage: Do you encounter a lot of sticker shock because it's such a small thing and, even if you can argue it replaces the SAN, its absolute cost is still high?
Flynn: That customer calculated that in 2007 they lost as much as 15% of potential Web business to timeouts and the slow response time of their array. They were looking at buying two additional NetApp arrays, on top of their existing two, to get a 30% improvement. Instead, for about one-third the cost, they went from eight servers to four, removed the two NetApp filers they already had, didn't buy the other two, didn't have to expand their colocation space, and got 10 times the performance.
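Using only the figures quoted in the interview, the price-performance claim works out as follows (treating the planned NetApp expansion as the cost baseline):

```python
# Figures quoted in the interview; the baseline is the planned
# NetApp expansion. Its cost is taken as 3 units and the
# Fusion-io option as 1 unit ("about one-third the cost").
baseline_cost_units = 3.0
fusionio_cost_units = 1.0
perf_multiple = 10                     # "10 times the performance"
servers_before, servers_after = 8, 4   # "eight servers to four"

# 10x the performance at 1/3 the cost => 30x performance per dollar.
price_performance_gain = perf_multiple * baseline_cost_units / fusionio_cost_units
server_reduction = 1 - servers_after / servers_before

print(price_performance_gain)  # 30.0
print(server_reduction)        # 0.5 -- half the servers
```

The 30x figure is derived from the quoted numbers, not claimed in the interview itself; it also ignores the recovered 15% of lost Web business, which would only improve the case.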
For an extended deep dive into the Fusion-io technology with Flynn, please visit our Storage Soup blog at IT Knowledge Exchange.