By now, everyone knows about the rising popularity of hybrid flash and all-flash arrays, which can give performance a boost. However, there is a catch, and it's something storage pros know all about: the I/O blender effect. "The problem we now have is because in our hypervisor we created a data store, that data store holds some number of VMs [virtual machines]," said Howard Marks, chief scientist at DeepStorage.net. According to Marks, if you use block protocols like Fibre Channel or iSCSI to communicate with that storage, "the storage system has no knowledge of which I/Os come from which VMs."
In his Storage Decisions 2014 presentation, Marks discussed the I/O blender effect and what causes it. When multiple VMs communicate with a SAN-based system, each VM's sequential I/O stream is multiplexed with the others once it hits the storage array port. Because of this, said Marks, "the storage system doesn't see sequential I/O anymore, it just sees random I/O."
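The multiplexing Marks describes can be illustrated with a small sketch. The VM names, block addresses, and round-robin interleaving below are hypothetical stand-ins, not details from the presentation; the point is only that streams that are perfectly sequential per VM arrive at the array as one non-sequential stream.

```python
# Hypothetical illustration of the I/O blender effect: three VMs each
# issue sequential reads within their own region of a shared data store.

def vm_stream(start_lba, count):
    """Sequential block addresses, as a single VM's workload issues them."""
    return [start_lba + i for i in range(count)]

# Each VM is perfectly sequential in isolation (start addresses are made up).
vms = [vm_stream(0, 8), vm_stream(10_000, 8), vm_stream(50_000, 8)]

# All three streams funnel through one array port; simple round-robin
# interleaving stands in for whatever scheduling actually occurs.
blended = [lba for trio in zip(*vms) for lba in trio]

def is_sequential(stream):
    return all(b - a == 1 for a, b in zip(stream, stream[1:]))

print(all(is_sequential(v) for v in vms))  # True: each VM alone is sequential
print(is_sequential(blended))              # False: the array sees random I/O
```

Because the array cannot tell the interleaved requests apart, the adjacency within each VM's stream is invisible to it, which is exactly why block protocols leave it with "no knowledge of which I/Os come from which VMs."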
This causes many problems, including increased latency, degraded performance, and the failure of features like cache read-ahead to work correctly. The I/O blender effect can hurt performance so drastically, according to Marks, that you could end up with worse performance than you would get by dedicating an individual LUN to each workload.
"Whether you have a hybrid array or an all-solid-state system, the performance comes from a single pool. That makes things much more efficient but it also makes your applications completely subject to the I/O blender effect." Marks said.