
Beat the software bottleneck by improving storage performance

Find out what vendors are doing to address the storage software bottleneck problem to enable users to extract the full performance potential of the media in their data centers.

Storage systems no longer live up to their full potential. Proving this statement requires a little math.

If you add up the raw performance of each drive in an all-flash array and compare the total to the stated performance of that array, you will find a significant difference. The math suggests the all-flash system should be able to generate millions of IOPS, yet most can't deliver more than a few hundred thousand.
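To make the gap concrete, here is a minimal sketch in Go with hypothetical but representative numbers (no specific array is being quoted): 24 drives rated at 500,000 IOPS each should, on paper, produce 12 million IOPS, yet a typical array rating is a small fraction of that.

```go
package main

import "fmt"

func main() {
	// Hypothetical figures for illustration only: a 24-drive all-flash
	// array where each SSD is rated at roughly 500,000 random read
	// IOPS, but the array as a whole is rated at 300,000 IOPS.
	const (
		driveCount     = 24
		iopsPerDrive   = 500_000 // per-drive spec sheet number
		arrayRatedIOPS = 300_000 // stated performance of the whole array
	)

	rawIOPS := driveCount * iopsPerDrive
	fmt.Printf("Raw media potential: %d IOPS\n", rawIOPS) // 12,000,000
	fmt.Printf("Array delivers:      %d IOPS\n", arrayRatedIOPS)
	fmt.Printf("Software overhead leaves %.1f%% of raw performance unused\n",
		100*(1-float64(arrayRatedIOPS)/float64(rawIOPS)))
}
```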

Understanding the reason for the performance gap requires understanding storage systems architecture.

Most storage systems today consist of an ecosystem of CPUs, memory, internal and external networking, storage media and storage software. The storage software is what data storage managers interact with as they provision and protect the data stored on the media. Except for the software, each component in the storage ecosystem has become faster, with lower latency. For example, CPUs now have more cores, storage media is flash rather than hard disk and networks have more bandwidth.

The rise in the number of vendors in the software-defined storage category is what drives the discussion around isolating software from the rest of the storage ecosystem. Even vendors that provide turnkey storage products are primarily software developers nowadays, even if their software is only available with the hardware they sell. In fairness, their equipment may have some unique capabilities or configurations, but these vendors are still writing software code to manage the storage environment.

The challenge results from storage software remaining, for the most part, the same as it was a decade ago. Even the features that most storage software products and tools offer -- RAID, snapshots and replication -- remain largely unchanged.

Overcoming the software bottleneck -- and improving storage performance -- is a critical hurdle for storage vendors. If they can't clear it, it won't make sense for customers to adopt faster memory technologies. Let's explore what vendors have done to try to solve this problem, as it has become increasingly evident that storage software prevents users from extracting the full performance potential of the storage media within a storage system.

Throw hardware at the problem

Most commonly, vendors have thrown more CPU and memory at the issue. But more powerful CPUs mean a much more expensive bill of materials, and they don't necessarily fix the problem.

CPU manufacturers no longer improve performance by increasing the raw speed of each core. Instead, they increase the number of cores contained within the CPU. But taking advantage of a multicore processor with eight or more cores requires sophisticated software multithreading and, in most cases, rewriting code from the ground up. To avoid this time-consuming process, storage vendors try to isolate specific functions to individual cores, which helps performance somewhat but doesn't effectively use the cores in parallel, as the sketch below illustrates.
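Here is a minimal Go sketch of the two patterns (purely illustrative; real storage stacks are far more involved). The first function funnels every request through a single goroutine, roughly what isolating a function to one core looks like; the second fans requests out across all available cores.

```go
package main

import (
	"runtime"
	"sync"
)

// request stands in for a storage I/O operation.
type request struct{ block int }

func handle(r request) { _ = r.block * 2 } // placeholder for real work

// singleThreaded mimics isolating a function to one core: every
// request is processed sequentially, so the other cores sit idle.
func singleThreaded(reqs []request) {
	for _, r := range reqs {
		handle(r)
	}
}

// multiThreaded fans requests out across one worker per core --
// the kind of ground-up parallelism older storage code lacks.
func multiThreaded(reqs []request) {
	ch := make(chan request)
	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range ch {
				handle(r)
			}
		}()
	}
	for _, r := range reqs {
		ch <- r
	}
	close(ch)
	wg.Wait()
}

func main() {
	reqs := make([]request, 1_000_000)
	singleThreaded(reqs)
	multiThreaded(reqs)
}
```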

Another approach adds more memory to act as a cache between the CPU and flash storage. Adding more memory is, again, more expensive. There is also a physical limit to how much memory a storage system can support. Furthermore, memory is volatile by nature, so a power or system failure causes data loss. Working around volatile memory adds complexity and more cost.
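The volatility trade-off is easy to see in a toy example. The following Go sketch (hypothetical types and names, not any vendor's implementation) models a write-through cache: reads are served from volatile memory when possible, but every write lands on the persistent tier before it is acknowledged, so a power failure loses nothing. A write-back design would acknowledge writes from DRAM for more speed, which is exactly why it needs batteries or NVRAM to be safe.

```go
package main

import "fmt"

// store is a stand-in for persistent flash media.
type store map[string][]byte

// writeThroughCache keeps hot reads in volatile memory but pushes
// every write to the backing flash before acknowledging it.
type writeThroughCache struct {
	mem     map[string][]byte // volatile DRAM tier
	backing store             // persistent flash tier
}

func (c *writeThroughCache) Write(key string, val []byte) {
	c.backing[key] = val // persist first: survives power loss
	c.mem[key] = val     // then cache for fast reads
}

func (c *writeThroughCache) Read(key string) ([]byte, bool) {
	if v, ok := c.mem[key]; ok {
		return v, true // cache hit: served from DRAM
	}
	v, ok := c.backing[key] // cache miss: fall back to flash
	if ok {
		c.mem[key] = v
	}
	return v, ok
}

func main() {
	c := &writeThroughCache{mem: map[string][]byte{}, backing: store{}}
	c.Write("lba:42", []byte("payload"))
	v, _ := c.Read("lba:42")
	fmt.Println(string(v))
}
```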

Do we need to fix the storage software bottleneck?

The challenge of getting maximum performance from a storage system has led many vendors to shift their focus to factors such as simplicity of management or quality of service features. The basic stance here is that there is no need to improve performance because current systems are good enough.

Many data centers do not need more than 500,000 IOPS; in fact, most don't need more than 100,000 IOPS. The problem is that not tapping into the full raw performance of flash drives inflates media costs. If -- technically speaking -- four or five flash drives can deliver 500,000 or more IOPS, why should an organization be forced to buy 24 or more?

Compounding the situation is the increasing capacity of flash drives. Within a year, 16+ TB flash drives will be commonplace, with higher-capacity drives soon to follow. That means a 24-drive flash array with 16 TB drives will provide 384 TB of raw capacity, more than most data centers need. Yet vendors will still force data centers to buy that much capacity so their offerings can deliver acceptable performance.

The problem with high-density flash

High-density flash is potentially more of a problem for underperforming storage systems than high-performance flash. Several vendors are ready to ship 16+ TB drives this year, and others have announced 50+ TB drives set to ship next year.

The cost of these drives will be considerably less on a per-TB basis than today's flash drives, and they will perform almost identically. While data continues to grow, not all data centers will be petabyte (PB)-class within the next five years. But without a more efficient storage software model, vendors may force these data centers to buy 1.2 PB of capacity per shelf (50 TB x 24 drives).

Most composable storage offerings can slice high-capacity SSDs so clusters can use them more efficiently. "From scratch" storage software is even more compelling. If it can extract full performance from the drive, these tools may enable a midsize data center to address all of its storage needs with a 12-drive system that delivers more than 1 million IOPS and 600 TB of capacity or a six-drive system that delivers 500,000 IOPS and 300 TB of capacity.

There is also the reality that new workloads like analytics, AI, machine learning and deep learning require more performance than a typical array delivers. A need for 1 million or more IOPS is not uncommon in these markets. In addition, mainstream workloads will continue to scale and require more performance, so even databases and virtual environments will, at some point, benefit from a performance upgrade.

How to fix the software bottleneck

Fixing the software bottleneck and improving storage performance need to be top priorities for storage vendors, or new higher-speed media and networking will have meager adoption rates. Startup storage vendors, in particular, have identified these challenges, and several approaches to addressing them are coming to market.

Eliminate the concept of a storage system. Most workloads today operate as part of an environment, often a collection of servers acting as a cluster -- like a VMware cluster, Oracle Real Application Cluster or Hadoop cluster. The software that manages these clusters includes basic storage management capabilities like volume management and data protection. Performance improves because there is no network or advanced storage software to add latency.

Problems arise because each cluster operates as an island, which leads to inefficient use of storage resources. Storage management becomes a separate process within each cluster, and storage utilization is typically low because the software doesn't evenly distribute data and I/O load throughout the cluster.

Data protection requires copying data between nodes in the cluster, which increases capacity consumption significantly.
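A quick back-of-the-envelope calculation shows how severe that overhead can be. This Go snippet (illustrative figures only) assumes three-way replication, a common default in clustered storage, applied to the 384 TB raw array discussed earlier:

```go
package main

import "fmt"

func main() {
	// Illustrative only: with N-way replication across cluster nodes,
	// usable capacity is raw capacity divided by the replica count.
	rawTB := 384.0  // e.g., 24 drives x 16 TB
	replicas := 3.0 // common default in clustered storage
	fmt.Printf("Usable: %.0f TB of %.0f TB raw (%.0f%% overhead)\n",
		rawTB/replicas, rawTB, 100*(1-1/replicas))
}
```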

Use composable cross-cluster storage. A potential solution to the problem of islands of clustered storage is composable storage. Composable storage involves a shared resource pool of the components in the storage ecosystem. When a cluster needs storage resources, the composable software assigns a virtual storage system to that cluster.

For example, a storage manager may allocate 10 of 50 available flash drives and two CPUs to a VMware cluster, and then another 20 of the 50 to an analytics workload driven by Hadoop. The advantage of a composable architecture is that the drives are not internally captive to a specific server, so organizations can change the use case for the drives and even the servers as needed.
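A rough sketch of that allocation model, in Go (the pool type, cluster names and drive counts are invented for illustration, not any vendor's API):

```go
package main

import (
	"errors"
	"fmt"
)

// pool models a composable storage resource pool: drives are not
// captive to any server and can be carved into virtual systems.
type pool struct{ freeDrives int }

// compose assigns n drives to a named cluster, returning an error
// if the shared pool cannot satisfy the request.
func (p *pool) compose(cluster string, n int) error {
	if n > p.freeDrives {
		return errors.New("not enough free drives in pool")
	}
	p.freeDrives -= n
	fmt.Printf("assigned %d drives to %s (%d left in pool)\n",
		n, cluster, p.freeDrives)
	return nil
}

// release returns drives to the pool when a use case changes.
func (p *pool) release(n int) { p.freeDrives += n }

func main() {
	p := &pool{freeDrives: 50}
	p.compose("vmware-cluster", 10)   // virtual system for VMware
	p.compose("hadoop-analytics", 20) // virtual system for Hadoop
	p.release(10)                     // repurpose the VMware drives
	p.compose("oracle-rac", 15)
}
```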


Composable offerings remain dependent on the capabilities of the clustering software to provide storage features, which means storage is still managed individually within each cluster. It also means that the comprehensiveness of the features and the storage performance still depend mostly on how well the cluster software's storage functions work. Lastly, most composable offerings require a sophisticated network architecture such as NVMe-oF, which, while gaining in popularity, is still nowhere near commonplace in data centers.

Provide custom hardware. An alternative to eliminating the storage system is to provide it with more processing power specifically designed for storage.

Several storage manufacturers are developing custom field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) that help accelerate storage performance by offloading tasks to them. Some networking vendors have delivered network interface cards with processing power on the card, with the intent of having the storage software run on the card instead of counting on the core CPU.

It is unclear whether putting storage software on FPGAs or ASICs improves storage performance itself or merely frees the core CPU so the resident application can use it more fully. There is also a noticeable cost disadvantage to the additional hardware. In addition, moving to a dedicated FPGA or ASIC means a vendor no longer benefits from Intel's continued CPU development cycle.

Rewrite storage software from scratch. Most storage software is at least 10 years old, and even newer storage systems typically use open source libraries. Rewriting from scratch gives vendors the chance to rethink storage algorithms, such as how the software will provision, reference and protect storage and data. Starting over also enables a vendor to ensure its software is multithreaded from the ground up.

But starting over does not mean incompatibility. The storage software can use connectors to provide support for traditional block, file and object use cases, as the sketch below shows.
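A minimal Go sketch of the connector idea (the engine and connector types are hypothetical, invented purely for illustration): a single from-scratch core handles the actual reads and writes, while thin block, file and object layers map traditional protocols onto it.

```go
package main

import "fmt"

// engine is a stand-in for a from-scratch storage core that only
// knows how to read and write extents at full media speed.
type engine struct{ data map[uint64][]byte }

func (e *engine) put(id uint64, b []byte) { e.data[id] = b }
func (e *engine) get(id uint64) []byte    { return e.data[id] }

// connector is the thin compatibility layer: each traditional
// access method maps onto the same core engine.
type connector interface{ Name() string }

type blockConnector struct{ e *engine }  // e.g., iSCSI or NVMe-oF LUNs
type fileConnector struct{ e *engine }   // e.g., NFS or SMB shares
type objectConnector struct{ e *engine } // e.g., S3-style buckets

func (blockConnector) Name() string  { return "block" }
func (fileConnector) Name() string   { return "file" }
func (objectConnector) Name() string { return "object" }

func main() {
	core := &engine{data: map[uint64][]byte{}}
	core.put(1, []byte("hello"))
	for _, c := range []connector{
		blockConnector{core}, fileConnector{core}, objectConnector{core},
	} {
		fmt.Printf("%s connector shares the same engine: %s\n",
			c.Name(), core.get(1))
	}
}
```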

The challenge is the time it takes for the product to come to market. By taking advantage of existing software libraries, a vendor can deliver a new product in two or three years. But it won't be highly differentiated from existing offerings. A startup that takes the "from scratch" approach might take five years or more before its first version is ready to test. An established storage company might have to run its project in secret while continuing to sell its other products.

The value is that storage software would no longer be the bottleneck because it could extract full performance from each drive installed in the system while requiring fewer overall resources.


Improving storage performance by overcoming the software bottleneck is a challenge for vendors and data centers alike. If vendors can't fix the bottleneck, there is no need for networking vendors to build faster networks or for storage hardware vendors to develop faster media.

The alternatives mentioned here -- eliminating the concept of a storage system, composable cross-cluster storage, customizing hardware and rewriting storage software from scratch -- provide viable solutions to the problem, but each one has its challenges. IT professionals, as usual, will need to decide which trade-offs are the most palatable.
