- Scott Sinclair, Enterprise Strategy Group
The development of solid-state storage has led to a stunning transformation in enterprise IT. Organizations have particularly enjoyed the significant benefits of the low latencies engendered by flash technology.
ESG recently surveyed 373 IT decision makers responsible for their organizations' data storage infrastructures to investigate multiple aspects of the current state of the storage industry, including flash. When asked to identify the benefits achieved from deploying flash, the most commonly cited was the obvious one: improved application performance. The second, third and fourth most cited benefits -- greater storage utilization, reduced operating expenses and lower total cost of ownership -- further demonstrate the potential of flash to improve efficiency across the entire IT ecosystem.
Essentially, organizations saw improved performance with flash either because hard disk drives (HDDs) were holding back the potential of their data ecosystem, or because they had deployed so many HDDs that shifting to solid-state storage allowed the rest of the environment to become more efficient, requiring fewer components to deliver the same level of IOPS.
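The consolidation effect is easy to see with back-of-the-envelope arithmetic. The per-drive figures below are assumed round numbers for illustration, not vendor specifications:

```python
# Illustrative consolidation math. Per-drive IOPS figures are assumed,
# round-number estimates: a 15K RPM HDD delivers on the order of 200
# random IOPS, while an enterprise SSD can deliver around 100,000.
HDD_IOPS = 200        # assumed per-drive figure
SSD_IOPS = 100_000    # assumed per-drive figure

target_iops = 500_000  # hypothetical workload requirement

hdds_needed = -(-target_iops // HDD_IOPS)  # ceiling division
ssds_needed = -(-target_iops // SSD_IOPS)

print(f"HDDs needed: {hdds_needed}")  # 2500 spindles
print(f"SSDs needed: {ssds_needed}")  # 5 drives
```

Hitting the same IOPS target with thousands of spindles means racks of enclosures, controllers, cabling, power and cooling that a handful of SSDs simply doesn't need -- which is where the utilization and opex benefits come from.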
It would seem that spinning, mechanical drives are the performance bottleneck in a digital data ecosystem. So removing the HDD bottleneck and replacing it with flash ultimately improves the data center, right?
Well, yes and no.
Kicking the bottleneck down the road
Replacing slow spinning disks with faster flash storage affords ample benefits, as ESG's research shows, but it doesn't remove the bottleneck. It just moves it somewhere else.
A core concept in system or process design is that you can never truly remove a performance bottleneck. There will always be some component that holds back the potential of the rest of the system.
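That constraint can be stated simply: a system's throughput is capped by its slowest stage. The component capacities below are hypothetical figures chosen to illustrate the point:

```python
# A system's throughput is capped by its slowest stage: the bottleneck.
# Per-component throughput ceilings (requests/sec) are assumed values.
components = {
    "cpu": 80_000,
    "memory": 120_000,
    "network": 60_000,
    "storage_hdd": 5_000,
}

def bottleneck(caps):
    """Return the limiting component and the system-wide ceiling."""
    name = min(caps, key=caps.get)
    return name, caps[name]

print(bottleneck(components))  # ('storage_hdd', 5000)

# Swap the HDD tier for flash and the constraint does not vanish --
# it moves to the next-slowest component:
components["storage_hdd"] = 500_000
print(bottleneck(components))  # ('network', 60000)
```

Replacing the slowest component only promotes the second slowest, which is exactly what happens when flash displaces spinning disk.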
For some, especially those who were required to read Eliyahu M. Goldratt's books "The Goal" or "Critical Chain" at some point in their professional career, this seems obvious. Often, however, IT professionals overlook this fact when they think about data center design. Once you understand that a performance bottleneck will always be present, the next step is to make sure you know and can control where it resides.
This is the hidden problem with flash storage. For decades, mechanical media was such an obvious bottleneck that improving performance was simple: widen the bottleneck by adding spindles. Thanks to flash, however, storage may not be the bottleneck anymore.
So where is it? Great question. Is the bottleneck the processors, the memory, the application licenses, the protocols, the networking, the storage controllers that sit in front of the flash storage, or something else entirely?
The answer? It depends.
Locating the bottleneck isn't easy
Identifying and locating a performance bottleneck in a system, especially after the deployment of flash storage, is easier said than done. Juggling the separate management tools for every element and component in a data center makes identifying bottlenecks a complex process. Products that provide a more end-to-end view do exist, however.
Companies such as Aptare and PernixData (with its Architect software) leverage analytics to provide a more complete picture of the data path, delivering design insights and recommendations. In addition, storage-networking vendors such as Brocade offer tools that can help track performance over the storage network.
It is important to note, however, that the bottleneck may not remain static. It may shift over time as processors and memory advance and applications and firmware are updated. And the introduction of NVMe, for instance, will likely move it again. This added complexity is especially true in all-flash environments where the data ecosystem is apt to be all-electrical.
Despite this complexity, understanding the location of the bottleneck is critical, because the only way to increase system performance is to widen it. Adding precious resources and capital budget to components that are not the bottleneck does nothing for performance; it just wastes money.
Widening the bottleneck
So let's say you've found the bottleneck, now what?
The next step is to design an architecture that places the bottleneck where you want it to reside. In pretty much any system or process, the bottleneck should be the component that is the most expensive to expand: that way, your costliest resource runs at full utilization, and the expansion you are forced into is always the cheap one.
If adding a new application license is more expensive than adding memory, storage or networking, then the rest of the system should be designed so that each license is leveraged to its maximum extent. In other words, in this example, the application should be the bottleneck.
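A minimal sketch of that design rule, using hypothetical expansion costs and an assumed 20% headroom figure:

```python
# Sketch: deliberately place the bottleneck at the component that is
# costliest to expand. All cost and capacity figures are hypothetical.
expansion_cost = {  # $ per additional unit of throughput
    "app_license": 50.0,
    "memory": 2.0,
    "storage": 1.5,
    "network": 1.0,
}

# The designed bottleneck is the component costliest to grow, so every
# dollar already spent on it is fully utilized.
designed_bottleneck = max(expansion_cost, key=expansion_cost.get)
print(designed_bottleneck)  # app_license

# Provision everything else with headroom above the bottleneck's ceiling
# so an expensive license, not a cheap NIC, sets the pace.
license_ceiling = 10_000   # requests/sec the licenses can drive (assumed)
headroom_pct = 20          # assumed overprovisioning on cheap components
plan = {
    c: license_ceiling * (100 + headroom_pct) // 100
    for c in expansion_cost
    if c != designed_bottleneck
}
print(plan)  # every other tier sized to 12,000 requests/sec
```

The point is not the specific numbers but the ordering: size the cheap tiers so they never throttle the expensive one.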
When discussing the disruptive nature of flash, it is overly simplistic to say the performance bottleneck is removed when solid-state replaces HDD storage. It is more accurate to say that, with the addition of flash, the bottleneck has either been considerably widened or moved.
If you don't know which, you'd better find out -- and quickly.