With fast servers and high-performance flash storage, will the performance bottleneck just shift out to the network now?
It certainly can. We have been doing a lot of testing with SSDs in our lab, and we have begun to see interesting things happen when you put SSDs into a system, especially in large quantities. We have seen performance bottlenecks move away from storage, which is the first time I have ever seen that. In some cases the bottleneck moves to the network, and in others CPU [utilization] goes way up.
People need to ask, 'How is my network architected? Is 1 Gigabit Ethernet going to be enough?' Not really. I have been saying for a while now that SSDs and 10 Gigabit Ethernet, and SSDs and faster Fibre Channel speeds, go very well together.
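The point about 1 GbE can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the 500 MB/s per-SSD figure is an assumed round number for a SATA SSD, not a benchmark from the interview, and protocol overhead is ignored.

```python
# Rough check: can a 1 GbE link keep up with SSD throughput?
# Assumed figures are illustrative, not measured results.

def link_bandwidth_mbps(gigabits_per_sec: float) -> float:
    """Raw link bandwidth in MB/s, ignoring protocol overhead."""
    return gigabits_per_sec * 1000 / 8

def ssds_to_saturate(link_gbps: float, ssd_mbps: float) -> float:
    """How many SSDs (at ssd_mbps each) it takes to fill the link."""
    return link_bandwidth_mbps(link_gbps) / ssd_mbps

SSD_MBPS = 500  # assumed sequential throughput of one SATA SSD

print(ssds_to_saturate(1, SSD_MBPS))   # 1 GbE (~125 MB/s): a fraction of one SSD fills it
print(ssds_to_saturate(10, SSD_MBPS))  # 10 GbE (~1250 MB/s): takes about 2.5 SSDs
```

Under these assumptions, a single SSD can saturate a 1 GbE link several times over, while 10 GbE at least keeps pace with a couple of drives, which is why SSDs and 10 Gigabit Ethernet pair well.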
Some of our testing has shown that when you deploy SSDs, CPU utilization can go way up, to around 50% for a single VM. How many of those can you put in a box? You can get two. People are going to have to rethink both their networking and their CPU physical-to-virtual ratios because of SSDs.
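The physical-to-virtual ratio argument above can be reduced to simple division. In this sketch, the 50% per-VM figure comes from the interview; the optional CPU headroom reservation is an added assumption for illustration.

```python
# VM-per-host arithmetic: if SSD-backed I/O drives one VM's CPU use
# to ~50% of a host, only two such VMs fit per box.
# The 50% figure is from the interview; headroom is an assumption.

def vms_per_host(cpu_pct_per_vm: float, headroom_pct: float = 0.0) -> int:
    """Whole VMs that fit, after reserving headroom_pct of the host CPU."""
    usable = 100.0 - headroom_pct
    return int(usable // cpu_pct_per_vm)

print(vms_per_host(50))      # 2 VMs per host with no reserve
print(vms_per_host(50, 10))  # reserving 10% headroom leaves room for only 1
```

Compared with the much higher consolidation ratios typical of disk-backed hosts, this is the rethink Martin is pointing at: faster storage moves the limit to the CPU.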