Disaggregating network, compute and storage allocation demystified

Explore the ways disaggregation concepts and principles are being applied to create and allocate pools of compute and storage resources to serve applications on demand.

The time spent supporting aging IT architectures is at odds with the needs of today's fast-paced business environments. Granted, several newer technologies, such as converged and hyper-converged infrastructures, public and private clouds, and Hadoop- and Spark-based scale-out products and services, are easing the situation. It's not enough, however. We want to spend more time developing new applications and little to no time managing IT infrastructure.

Enter disaggregation. Separating an aggregate into its components isn't a new concept, but this approach is assuming greater importance. For IT, disaggregation means breaking a computer down to its core elements -- compute, memory, I/O, storage, cache, network fabric and so on -- to implement more cost-effective, agile infrastructures and more efficient storage allocation.

But why do this now after a decade of aggregating IT resources through concepts like hyper-convergence? Because scaling resources individually is often more cost-effective than scaling them in fixed combinations.

The idea of disaggregation is to create pools of individual resources using many computers and then allot appropriate combinations of resources -- memory, CPU, cache, network fabric and storage allocation -- on demand to serve individual applications. Done correctly, disaggregation enables standing up and breaking down infrastructures almost instantaneously, with resource utilization soaring past 80% and management costs decreasing with the resulting automation of storage allocation.
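
The mechanics are easier to picture in code. The short Python sketch below is purely illustrative (the pool sizes and the Composer, compose and decompose names are assumptions, not any vendor's API), but it captures the core idea: carve application-specific slices out of shared pools on demand, then hand them back when the workload is torn down.

```python
# Illustrative sketch only, not any vendor's API: carve application-specific
# combinations out of shared resource pools and return them when the workload
# is torn down.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """A shared pool of one resource type (CPU cores, GB of RAM, TB of disk)."""
    name: str
    capacity: float
    allocated: float = 0.0

    def allocate(self, amount: float) -> None:
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount

    def release(self, amount: float) -> None:
        self.allocated = max(0.0, self.allocated - amount)

    @property
    def utilization(self) -> float:
        return self.allocated / self.capacity


@dataclass
class Composer:
    """Combines slices of each pool on demand to serve one application."""
    pools: dict
    bindings: dict = field(default_factory=dict)

    def compose(self, app: str, **needs: float) -> None:
        for resource, amount in needs.items():
            self.pools[resource].allocate(amount)
        self.bindings[app] = needs

    def decompose(self, app: str) -> None:
        for resource, amount in self.bindings.pop(app).items():
            self.pools[resource].release(amount)


# Stand up an "infrastructure" for one application, then break it down again.
composer = Composer({
    "cpu_cores": ResourcePool("cpu_cores", 512),
    "memory_gb": ResourcePool("memory_gb", 4096),
    "storage_tb": ResourcePool("storage_tb", 200),
})
composer.compose("analytics", cpu_cores=64, memory_gb=512, storage_tb=40)
print({name: f"{pool.utilization:.0%}" for name, pool in composer.pools.items()})
composer.decompose("analytics")
```

Because each application draws only what it needs and returns it when done, utilization of the shared pools stays high.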

Let's explore five products to illustrate how disaggregation is being used and how its benefits accrue. Three are normally associated with storage: Nutanix, Pivot3 and Datrium. The remaining two -- DriveScale and Hewlett Packard Enterprise's Synergy -- apply disaggregation on a broader front.

Hyper-convergence and disaggregation

One of the key knocks against hyper-convergence has been the requirement to buy measured, predefined doses of compute and storage allocation in the form of a node, even when all you need is one or the other. Initial offerings from pioneers like Nutanix provided no choice. But in response to demand, Nutanix, Pivot3 and others now sell storage-heavy and compute-heavy models that mitigate, but don't eliminate, this issue.

Benefits of disaggregation:

  • improves resource utilization -- buy only the resources you need, in incremental steps, reducing Capex;
  • enables the creation of infrastructure at near-real-time speeds;
  • allows breakdown and reuse of resources at near-real-time speeds;
  • puts focus on application development and delivery, not infrastructure management;
  • makes data available faster and more frequently, facilitating decision-making on the business side; and
  • reduces cost of managing infrastructure, lowering Opex. 

Nutanix offers all its models with minimum disk purchases of zero -- no disk at all. Also, to meet the needs of a range of applications, you can build a cluster with a variety of nodes, which is a key aspect of hyper-convergence. This way the purchase of a node is appropriate for the application for which it will be used, but the entire cluster is managed as one entity.

Ideally, disaggregation demands the separation of compute and storage so you can purchase 100% of one and none of the other. But in a Nutanix environment, this only works for 100% compute, not 100% storage. Nutanix solves this problem by using minimal compute for a storage-heavy node. It's not pure disaggregation, but it gets the job done.

Pivot3 takes a different approach. Like Nutanix, it has a compute-only node with no storage and a storage node with minimal compute power. The storage node can be used in the classic hyper-converged sense, where the focus is on running virtual machines, and the presence of a hypervisor is assumed. But the node can also be used without requiring a hypervisor -- essentially, bare-metal support. Customers can run applications not yet virtualized in a physical infrastructure as in the past, while still taking advantage of storage pools created in hyper-converged clusters managed from a single console.

With its acquisition of NexGen Storage, a purveyor of external storage arrays with fine-grained quality of service, Pivot3 has made it possible to integrate an external, "disaggregated" NexGen storage array as part of the hyper-converged pool. Pivot3 also lets physical and virtual applications running off the cluster use storage resources from the cluster, presented as an iSCSI target. All applications, internal and external to the hyper-converged cluster, can use NexGen's policy-based quality of service capability.

Neither Nutanix nor Pivot3 subscribes to pure disaggregation. But both are trying to deliver its advantages to customers.

Disaggregation and Datrium

A storage-only product, Datrium expands the meaning of disaggregation by separating out many functions we normally associate with storage arrays. Simply put, Datrium implements snapshots, compression, deduplication, replication, encryption and more as software in servers in a scale-out fashion, but leaves certain elements, such as mirrored nonvolatile RAM and a simplified dual controller, in the storage array. This makes storage boxes more than JBOD, but simpler than traditional arrays. Datrium calls this the DVX Data Node.

Issues with large big data clusters

Because of the size of big data clusters for Hadoop, Spark and the like, they benefit more from disaggregation than hyper-converged clusters do. Here's why:

  • Lack of flexibility. PC servers come with a fixed ratio of compute to storage. Hadoop and Spark are designed to work most effectively on clusters that keep the same compute-to-storage ratio across the entire cluster. Unfortunately, application characteristics change over time, and even if the chosen ratio was correct on day one, it may not be on day 60. Regardless of whether compute or storage is under pressure, additional servers must be purchased with the same ratio, as the sketch after this list illustrates.
  • Cumbersome and expensive upgrades. Let's assume compute has become a problem, and Intel has a new, higher-performing processor. Even if there are no capacity issues, the entire cluster would need to be upgraded with new CPUs and disk drives to upgrade just the processor. Of course, you could install the old drives in new servers -- if you could even buy them without storage -- but this is time-consuming for large clusters. The reverse is true as well. If storage space is an issue and larger drives are available, you must replace the entire cluster with new servers.
  • Application silos. Because different applications require different compute-storage ratios, customers often end up building separate clusters for each application. This works fine for a time, but when resources become unbalanced, they can't be brought over from adjacent clusters for rebalancing.
  • Drives block airflow. Servers with compute and storage in the same box can only hold a limited number of drives in the chassis because disk drives block airflow to what needs it most: the CPU. Both compute and disk drive densities can be much higher per rack unit if they're separated. 
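
A rough bit of arithmetic makes the first point concrete. The per-server figures in this Python sketch are illustrative assumptions, not vendor specifications; the point is that when only storage demand grows, whole servers, and the compute inside them, come along for the ride.

```python
# Rough arithmetic sketch of the fixed-ratio problem: when only storage demand
# grows, you still buy whole servers and strand the compute that comes with them.
# The per-server figures below are illustrative assumptions, not vendor specs.
CORES_PER_SERVER = 16
TB_PER_SERVER = 48            # fixed compute-to-storage ratio baked into the chassis


def servers_needed(storage_tb: int) -> int:
    """Whole servers required to satisfy a storage requirement alone."""
    return -(-storage_tb // TB_PER_SERVER)   # ceiling division

day1_tb, day60_tb = 480, 960                 # storage need doubles; compute need doesn't
extra_servers = servers_needed(day60_tb) - servers_needed(day1_tb)
stranded_cores = extra_servers * CORES_PER_SERVER

print(f"Extra servers bought just for disk: {extra_servers}")
print(f"CPU cores purchased but not needed: {stranded_cores}")
```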

Datrium uses software-defined principles to implement these functions in servers that take full advantage of scale-out; you can add another server to improve deduplication or encryption performance, for example. At the same time, Datrium uses a smarter but simplified storage box and claims significant performance advantages over hyper-converged and other 100% software-defined products from vendors such as Hedvig and SwiftStack. Unlike traditional dual-controller storage arrays, Datrium's Data Node creates no silos and scales performance and capacity independently of each other. The servers can also run physical, virtual and container applications.

Datrium calls this approach "open convergence." It isn't pure disaggregation, but it's another example of how the concept can be implemented.

HPE Synergy disaggregation

Hewlett Packard Enterprise's Synergy is the first platform in the industry architected as what HPE calls composable infrastructure. It's the purest form of disaggregation available. Start with individual resource pools of compute, storage and network fabric, and combine them as needed under software control for each application. All three layers are administered as one through a single, high-level unified API used to compose, decompose, manage, update and scale infrastructure, which makes Synergy equally comfortable with virtualization, hybrid cloud and DevOps operational models.

Unlike traditional hyper-convergence, with its sweet spot in virtualization, Synergy covers all types of physical and virtual applications -- traditional, mobile and cloud-native. And while hyper-convergence brought compute and storage allocation together in a scale-out model, it has, for the most part, left the fabric alone. Composable infrastructure, designed from the ground up to deal with all of the above, fabric included, is disaggregation in its rawest form.

Synergy is made up of three basic elements:

Fluid resource pools. Fluid pools of compute, storage and fabric resources are created on demand for each application and its specific service-level needs. You can configure compute capacity for physical, virtual and container workloads and internal storage capacity as direct-attached file, object or block storage, as needed by applications. Given the important place 3PAR storage holds in HPE's portfolio, it can be configured as external direct-attached storage and made part of the fluid pool. Fabric can deal with multiple protocols, and its bandwidth is nondisruptively scalable and adjustable. Add more compute, storage and fabric resources at any time with no impact on operations. These resources automatically become part of the fluid resource pool and immediately available to applications. Scaling doesn't increase the management burden.

Software-defined intelligence. HPE built Synergy as hardware with built-in software-defined intelligence. Provisioning, scaling, deprovisioning and more are all done using templates, and infrastructures are composed and recomposed at near-real-time speeds. Think of Synergy as infrastructure as code, where mundane functions that once took hours or days now occur almost instantly. Compute, storage and fabric are provisioned together with the proper firmware, BIOS, drivers and OS images, without operator intervention. Because Synergy uses templates for these purposes, no knowledge of the underlying hardware and software internals is required. This makes the infrastructure easy to use, not only for IT but for DevOps and test-dev personnel as well. The differences between development, testing and production environments disappear; all use the same interface.
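
To make the template idea concrete, here is a rough sketch. The field names and the profiles_from_template helper are hypothetical, not Synergy's actual schema, but they show how one declarative template can stamp out many identical, fully specified profiles.

```python
# Hypothetical sketch of template-driven provisioning, in the spirit of
# "infrastructure as code." The template fields and helper are illustrative
# assumptions, not HPE Synergy's actual schema.
import copy

SERVER_PROFILE_TEMPLATE = {
    "firmware_baseline": "current",
    "bios_settings": {"power_profile": "max_performance"},
    "os_image": "rhel-golden",
    "compute": {"cores": 16, "memory_gb": 128},
    "storage": {"type": "block", "size_gb": 500},
    "fabric": {"network": "prod-net", "bandwidth_gbps": 10},
}


def profiles_from_template(template, count, name_prefix):
    """Stamp out many identical, fully specified profiles from one template."""
    profiles = []
    for i in range(count):
        profile = copy.deepcopy(template)
        profile["name"] = f"{name_prefix}-{i:03d}"
        profiles.append(profile)
    return profiles

# Compose a 12-node environment without touching firmware, BIOS or drivers by hand.
for p in profiles_from_template(SERVER_PROFILE_TEMPLATE, 12, "web-tier"):
    print(p["name"], p["os_image"])
```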

Unified API. Unlike traditional infrastructures, where each device has its own low-level API that must be configured device by device, Synergy has one high-level API across compute, storage and fabric. Granted, purveyors of converged infrastructures -- Dell EMC, IBM-Cisco, NetApp-Cisco and others -- are making efforts to simplify provisioning and management across compute, storage and fabric. But they do that through a masking layer because, under the covers, the elements remain distinct and separate.

Synergy's unified API has two distinctions: It's much higher-level than typical command-line interface commands, and it's designed to cut across compute, storage and fabric. It also provides a single interface to discover, search, inventory, configure, provision, update and diagnose infrastructure. And it's a single vehicle for integrating Synergy into other management platforms, including Microsoft System Center, Red Hat and VMware vCenter, as well as DevOps tools such as Chef, Docker, OpenStack, Puppet and Python.
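
As a sketch of what "one high-level API" means in practice (the endpoint, payload fields and token below are assumptions for illustration, not the documented Synergy or OneView interface), a single request can describe compute, storage and fabric together instead of three device-specific configurations:

```python
# Hypothetical example of a single, high-level composition request. The URL,
# payload fields and token are illustrative assumptions, not a documented API.
import requests

API = "https://composer.example.com/rest"
HEADERS = {"X-Auth-Token": "<session-token>", "Content-Type": "application/json"}

payload = {
    "templateName": "web-tier",                       # compose from a template
    "compute": {"modules": 4},                        # compute, storage and fabric
    "storage": {"volumes": [{"sizeGB": 500, "raid": "RAID5"}]},
    "fabric": {"networks": ["prod-net"]},
}

response = requests.post(f"{API}/compose", headers=HEADERS, json=payload, timeout=30)
response.raise_for_status()
print("Provisioning task started:", response.json().get("taskUri"))
```

The same kind of call could just as easily be issued from Chef, Puppet or a Python script, which is the point of exposing a single interface.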

HPE Synergy, the product

The first incarnation of HPE Synergy is the model 12000 Frame in a 10U format. You can configure several Frames in a ring and manage multiple rings, all from a single console. There are five elements of importance in a Frame -- Composer; Image Streamer; and the Composable Storage, Composable Compute and Composable Fabric modules -- that play together to create a fully disaggregated platform.

Composer. Composer is responsible for discovering, searching, inventorying, configuring, provisioning, updating and diagnosing infrastructure across compute, storage and fabric. It's a physical appliance based on HPE OneView and the unified API. Composer uses server profile templates, developed by IT or the business user, that automate the provisioning, updating and de-provisioning processes across computing resources. Many compute module profiles can be created from a single template.

Image Streamer. A physical appliance that fits inside the Frame, Image Streamer serves as a repository of bootable, golden images for a variety of applications, loadable on compute modules in a matter of minutes. It can be used instead of building a server through physical provisioning and installation of the OS, hypervisor, I/O drivers, application stacks and so on, reducing hours and days of error-laden process to an automated, error-free one that takes minutes.

Composable Storage, Compute and Fabric. Individual modules provide a variety of storage, compute and fabric resources tightly integrated with all other system elements. You add only those resources that are in short supply. The architecture is scale-out across all three elements, and you can build massive infrastructures that are managed as one. Even the fabric, the most static element of all in the past, is converted into code for programmatic provisioning and de-provisioning.

DriveScale disaggregation approach

DriveScale has applied compute and storage disaggregation to next-generation, big data analytics-driven applications like Cassandra, Hadoop, Kafka, MongoDB and Spark. These applications love scale-out architectures, usually built from many small commodity server nodes, each consisting of compute and direct-attached internal storage. Hundreds, often thousands, of nodes are required to solve big data problems, each working on a portion of the puzzle using the relevant data local to that node, then consolidating the results to deliver answers. Keeping data local to compute minimizes latency and east-west traffic.

This approach has taken over the big data world in the past five years. But as clusters have grown larger, sometimes to thousands or tens of thousands of nodes, several issues have surfaced (see "Issues with large big data clusters"): a lack of flexibility; cumbersome and expensive upgrades; application silos that appear because the same compute-to-storage ratio must be maintained for a given application; and limits on the number of drives a chassis can hold without choking the airflow required to keep CPUs cool.

DriveScale vs. Synergy

Both DriveScale and Hewlett Packard Enterprise leverage disaggregation as underlying technology to improve infrastructure utilization and flexibility, simplifying management while reducing cost. They do so quite differently, however. HPE Synergy is based on a hardware and software combination covering all three layers of the infrastructure -- compute, storage, fabric. It is designed to address all applications and deal with all types of data -- block, file and object. Its design maintains low latency between compute and storage, a must for transaction-oriented applications, for example. Because HPE developed the hardware and software, Synergy is a strategic buy.

DriveScale, on the other hand, aims squarely at the big data problem by dealing with massive clusters running NoSQL databases. Its SAS-to-Ethernet adapter introduces about 200 microseconds of latency. That's negligible when amortized over the 64 MB block sizes used in big data applications, but catastrophic in a transactional application using an 8 KB block size. DriveScale is therefore targeted at these big data applications and doesn't try to solve all infrastructure problems across the data center. It focuses on massive scalability, using commodity compute and storage and leaving fabric matters to vendors that specialize in it.
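
A quick back-of-the-envelope calculation shows why block size decides whether a fixed per-I/O penalty matters. The Python sketch below assumes a 200-microsecond penalty purely for illustration, not as a measured adapter figure; the takeaway is that an 8 KB workload pays that penalty thousands of times more often than a 64 MB workload moving the same amount of data.

```python
# Back-of-the-envelope math: a fixed per-I/O latency penalty amortized over
# large big data blocks versus small transactional blocks. The penalty value
# is an assumption for illustration, not a measured adapter figure.
def added_latency(dataset_bytes: int, block_bytes: int, per_io_penalty_s: float) -> float:
    """Total extra time spent on the per-I/O penalty when reading the dataset."""
    io_count = dataset_bytes // block_bytes
    return io_count * per_io_penalty_s

GIB = 1024 ** 3
PENALTY = 200e-6  # assumed 200-microsecond fixed cost per I/O

big_data = added_latency(10 * GIB, 64 * 1024 ** 2, PENALTY)  # 64 MB blocks
oltp = added_latency(10 * GIB, 8 * 1024, PENALTY)            # 8 KB blocks

print(f"64 MB blocks over 10 GiB: {big_data:.3f} s of added latency")
print(f"8 KB blocks over 10 GiB:  {oltp:.1f} s of added latency")
print(f"Penalty paid {64 * 1024 // 8:,}x more often with 8 KB blocks")
```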

DriveScale disaggregates compute from storage to nondisruptively allow the creation and changing of whatever compute-to-storage ratio an application requires at any point in time. It calls for buying compute and JBOD storage separately, from one vendor or many. (Server vendors offer compute-only commodity servers that are often much denser than standard commodity servers because of improved airflow.) This way, you can replace compute on a different schedule than disk drives -- generally every two to three years for compute and five years for drives. And because separate pools of compute and storage are available, you can allocate resources at will across different applications, driving resource utilization much higher than before.

DriveScale uses a SAS-to-Ethernet adapter that connects disk drives to a standard top-of-rack Ethernet switch, which in turn connects all compute elements in a cluster. The vendor's intellectual property is in the orchestration software, which programmatically provisions, manages and de-provisions resources to the right cluster. Its GUI is based on a RESTful API that can integrate upwards with customers' preferred management tools, including Chef, Puppet and more.
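
A minimal sketch of the binding idea follows; bind_drives, the drive labels and the node names are hypothetical and not DriveScale's software, but they show how drives from a shared JBOD pool can be attached to diskless compute nodes at whatever ratio an application needs, and reattached later when the ratio changes.

```python
# Illustrative sketch (not DriveScale's actual software): bind drives from a
# shared JBOD pool to diskless compute nodes until each node reaches the
# compute-to-storage ratio its application needs, and rebind them later.
def bind_drives(free_drives, nodes, drives_per_node):
    """Assign drives_per_node drives from the shared pool to each compute node."""
    if len(free_drives) < drives_per_node * len(nodes):
        raise RuntimeError("JBOD pool has too few free drives for this ratio")
    bindings = {}
    for node in nodes:
        bindings[node] = [free_drives.pop() for _ in range(drives_per_node)]
    return bindings

jbod_pool = [f"drive-{i:02d}" for i in range(24)]          # one 24-bay JBOD
hadoop_nodes = ["node-a", "node-b", "node-c", "node-d"]

# Day 1: the application wants 4 drives per node; on day 60 it may want 6 --
# the change is a rebind, not a forklift upgrade of the whole cluster.
cluster = bind_drives(jbod_pool, hadoop_nodes, drives_per_node=4)
print(cluster["node-a"], "| drives left in pool:", len(jbod_pool))
```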

Disaggregation deserves a look

In the purest sense, disaggregation implies the breaking of something into its component pieces, then creating independent pools of each resource that can be combined under programmatic control to deliver the right combination of resources to applications. Some storage and hyper-convergence vendors have applied disaggregation principles to make their offerings more cost-effective. But DriveScale and HPE Synergy use them in novel, paradigm-shifting ways.

HPE Synergy is the purest and the widest ranging example of disaggregation, exemplifying how the concept can be used to solve a large variety of problems. HPE views composable architectures as the next phase of evolution beyond hyper-convergence. Other vendors, such as DriveScale, use disaggregation principles to solve a specific problem.

While details of the architectures used by public cloud vendors, such as Amazon Web Services and Google Cloud Platform, aren't well known, disaggregation is likely the underlying principle. It may be early days for disaggregation, but creating separate resource pools, combining them at the right time and in the right mix, and then making them available to an application is an idea that's too powerful to ignore. It's time to add the term "disaggregation" to your vocabulary.
