Why does virtualization cause the I/O bottleneck problem?
You're basically running an emulation within VMware -- a virtualized hardware controller for storage. That emulated controller stands between your applications and the actual physical hardware underneath, and that creates an I/O bottleneck. So you have this software emulation of a hardware controller sending and receiving I/O from the storage complex, and it gets in the way.
Unfortunately, a lot of people are being conned right now -- and I use the word con deliberately -- into believing that if you throw flash or some other high-speed storage at the problem, it's going to go away. That's a brute-force attack. The I/O bottleneck is rooted in the virtualization software stack. It's not a problem with I/O itself or with the network configuration; it's none of those things. It's not a question of the speeds and feeds of the equipment down below. You can brute-force it and make it appear to run faster by committing all your outbound writes to memory first. The memory signals, "OK, we got it," and you move on to your next operation, while the data gets written to the back-end storage later, in its own time. You can do that; that's caching, and it's been done for years.
It's also known as spoofing. It's the way the NetApp Filer -- for those people familiar with network-attached storage -- works. NetApp's back-end WAFL [Write Anywhere File Layout] system plus its RAID system aren't the world's greatest performers. So what you do is cache in front of it with what they call Flash Cache cards, which are memory: you acknowledge the write once it lands in that memory and let the actual back-end write occur later, as the system catches up. That's a way to spoof the I/O bottleneck issue, but it doesn't resolve the problem. And it's going to get worse; the more workloads you stack up inside a server, the worse that logjam is going to get.
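To make the spoofing mechanism concrete, here is a minimal write-back cache sketch in Python. It is purely illustrative -- not NetApp's or any vendor's actual design -- showing the pattern described above: a write is acknowledged the moment it lands in memory, and a background thread flushes it to the slow backing store later. The class and variable names are invented for this example.

```python
import queue
import threading
import time

class WriteBackCache:
    """Illustrative write-back (spoofing) cache: ack from memory, flush later."""

    def __init__(self, backing_store: dict):
        self.backing_store = backing_store   # stands in for the slow back-end storage
        self.dirty = queue.Queue()           # in-memory staging area for pending writes
        flusher = threading.Thread(target=self._flush_loop, daemon=True)
        flusher.start()

    def write(self, key, value):
        # Acknowledge immediately: at this point the data exists only in memory.
        self.dirty.put((key, value))
        return "ack"                         # the application moves on to its next I/O

    def _flush_loop(self):
        # Background flusher: drains pending writes to the back end in its own time.
        while True:
            key, value = self.dirty.get()
            time.sleep(0.01)                 # simulate the slow back-end write
            self.backing_store[key] = value  # the data becomes durable only here
            self.dirty.task_done()

store = {}
cache = WriteBackCache(store)
ack = cache.write("block42", b"data")        # returns instantly, before any disk I/O
cache.dirty.join()                           # wait until the flusher catches up
```

Note the gap between the "ack" and the moment the data actually reaches `store`: the application thinks the write is done long before it is durable. That gap is exactly why caching masks the bottleneck rather than removing it -- the back end still has to absorb every write eventually.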