
Jon Toigo's top tips on IOPS performance in the data center

Jon Toigo suggests IOPS performance boosts through the use of SSD technology, automated storage tiering and virtualization components.

Available data storage technologies like solid-state drives (SSDs), automated storage tiering and virtualization allow for a number of different architectures to exist in today’s data centers, so storage issues are often addressed with a combination of components. With this in mind, assistant site editor Ian Crowley interviewed Jon Toigo, CEO and managing principal at Toigo Partners International and chairman of the Data Management Institute, to find out which configurations can yield a boost in IOPS performance, energy efficiency and application performance, as well as why IT managers ought to be thinking about these factors more in the future.

Toigo explains the roots of the storage industry’s focus on capacity-based solutions, providing a firm understanding of throughput performance and power consumption limitations. He also elaborates on how SSD caching can play a major role in improving these metrics, providing examples of today’s most pressing energy-consumption challenges, and makes suggestions as to how we can address throughput performance issues without an expensive hardware upgrade.

You recently wrote a piece on performance and capacity in the data center for Storage magazine. In it, you mentioned that storage vendors tend to focus on capacity rather than IOPS performance. Why is this?

Toigo: In the past, as a friend of mine used to put it, we bought storage by the pound, sort of like ground beef at a butcher shop. Mainly, companies were concerned about capacity; they didn’t want to run out of space, so it was a capacity play. In fact, I wrote an article in Scientific American probably five years ago where I interviewed the VPs of business for Seagate, Western Digital and the other drive manufacturers at the time, and they said the industry is rewarded when it increases capacity on a drive. They don’t see as much action when they improve things like the performance of the disk drive itself -- they don’t see as big a return on that investment -- so it’s been a capacity play. Usually, when somebody wants to brag about storage, they talk about how they increased capacity year over year to deal with burgeoning data that seems to be endless and as predictable as death and taxes. So we went looking for capacity, and the industry always provided it.

Can you describe what role flash plays in increasing throughput performance?

Toigo: Sure. I’m not a huge flash advocate. I think flash is subject to oversell, like all the other technologies in the storage business. People seem to forget that we’re spoofing to keep flash viable, because flash itself runs into a lot of errors and problems with memory wear. Vendors have thrown arcane rating schemes at it, so when you buy a 1 GB flash module, it actually has 4 GB of memory on it; they’re swapping out memory through an interesting ratings scheme. In other words, the technology isn’t fully baked, and I think it’s too expensive. But used selectively -- not as a write target, but in an augmenting role with disk, where code written in the disk array swaps hot data into the flash device -- it’s a very effective use of flash memory. It optimizes the performance of disk, lets you get rid of short-stroked drives, and basically creates a complementary infrastructure where two different storage modalities (flash and disk) both contribute significantly to the overall throughput of the device. I think that’s very smart technology and a very good way to go.
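The hot-data promotion Toigo describes -- array code noticing frequently read blocks and copying them into a small flash tier -- can be sketched in a few lines. This is a toy illustration, not any vendor’s implementation; the class name, slot counts and promotion threshold are all invented for the example:

```python
from collections import Counter

class HybridTier:
    """Toy sketch of sub-LUN tiering: hot blocks get promoted to a
    small, fast 'flash' cache sitting in front of a larger 'disk'
    store. All names and thresholds here are illustrative."""

    def __init__(self, flash_slots=2, promote_after=3):
        self.disk = {}             # block_id -> data (slow, large tier)
        self.flash = {}            # block_id -> data (fast, small tier)
        self.flash_slots = flash_slots
        self.promote_after = promote_after
        self.heat = Counter()      # per-block read counts

    def write(self, block_id, data):
        # Writes land on disk; any flash copy is kept in sync.
        self.disk[block_id] = data
        if block_id in self.flash:
            self.flash[block_id] = data

    def read(self, block_id):
        self.heat[block_id] += 1
        if block_id in self.flash:           # fast path: served from flash
            return self.flash[block_id]
        data = self.disk[block_id]           # slow path: served from disk
        if self.heat[block_id] >= self.promote_after:
            self._promote(block_id, data)    # block is now "hot"
        return data

    def _promote(self, block_id, data):
        if len(self.flash) >= self.flash_slots:
            # Evict the coldest resident block to make room.
            coldest = min(self.flash, key=lambda b: self.heat[b])
            del self.flash[coldest]
        self.flash[block_id] = data
```

The key property is the one Toigo highlights: flash is never the primary write target, only a read accelerator for data the disk tier has identified as hot.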

Unfortunately, most of the flash guys have dreams of avarice where they think they’re going to replace all the disk that’s out there. Although, that may happen if the industry keeps jacking up the price of disk drives and blaming it on the Thailand floods. The bottom line is that flash is not cost-competitive in my book right now. There’s also the wear that occurs in flash. Some of my larger clients (credit card companies, for example) carry out a million transactions per second, and they say, 'We’d be changing out these flash devices more frequently than people change their underwear if we were using them as write devices.' That’s pretty expensive when you’re talking $10,000 or something for a flash unit. So I’d say stick with standard DRAM or one of these complementary schemes where you’re just getting the best characteristics of flash to complement the best characteristics of disk. You’ll dramatically reduce the amount of wattage you’ll have to purchase to deliver the IOPS your most demanding applications require.

So there are still some areas of concern for flash technology. What options exist to combine it with different components to improve I/O performance?

Toigo: I’d turn to the X-IO guys and give them a good interview one of these days, because they’ve got a good story. They’re doing basically bricks of storage with a fixed capacity, and the option to add flash components provides that sub-LUN tiering functionality. The other approach you can take here is to virtualize storage. Virtualization of storage is basically another way to gain a huge IOPS bump -- some will say up to 400% faster -- off of any storage you already have, because when data comes into the virtualization server, it services those I/O requests off of DRAM on the server. It tells the application the write has been received and it can process the next transaction, when in fact the write has been queued in memory and is waiting its turn to be written to the back-end storage, which could be just about anything. I like that model as well, and I think it’s sort of the everyman’s version of the hardware solution I was describing before. That’s what I do here in my office: I’ve got 11 TB of storage all virtualized with about 40 GB of RAM in the server, and it’s faster than any rig on the planet right now. IOPS is sort of an externalized measurement of throughput, and what I’m most concerned about here is the gating factor introduced by energy costs and energy availability.
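The write-back behavior Toigo describes -- acknowledging the write as soon as it is queued in the virtualization server’s RAM, then flushing to the back end later -- can be sketched as follows. This is a minimal illustration under invented names; real products also deal with crash safety, ordering guarantees and cache protection, none of which appear here:

```python
from collections import deque

class WriteBackCache:
    """Toy sketch of write-back caching in a storage virtualization
    layer: writes are acknowledged once queued in RAM and drained to
    an arbitrary back-end store later. Illustrative only."""

    def __init__(self, backend):
        self.backend = backend     # any dict-like back-end store
        self.pending = deque()     # (key, value) writes queued in RAM

    def write(self, key, value):
        self.pending.append((key, value))
        return "ack"               # application proceeds immediately

    def read(self, key):
        # Check queued writes first (newest wins) so reads see the
        # latest data even before it reaches the back end.
        for k, v in reversed(self.pending):
            if k == key:
                return v
        return self.backend.get(key)

    def flush(self):
        # Drain queued writes to back-end storage in arrival order.
        while self.pending:
            k, v = self.pending.popleft()
            self.backend[k] = v
```

The IOPS bump comes from the `write` path: the application sees DRAM latency, while the slower back-end write happens asynchronously during `flush`.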

Please listen to the complete podcast on boosting IOPS performance with Jon Toigo.
