How does management of a scale-out network-attached storage environment differ from managing a traditional NAS?
In traditional network-attached storage (NAS), you have a storage controller, usually called a head, and behind that head is a certain amount of hard disk capacity -- there may be two heads for redundancy. But the bottom line is that the processing power of those heads is fixed, even though the storage associated with them can continue to grow. If you need more processing power, you have to upgrade the heads; if you need more capacity, you may need to do a forklift upgrade or something similar.
Scale-out NAS, however, allows you to add nodes or heads as processing power is needed and to add more trays of storage as capacity is needed. The areas where this is really beneficial in a storage management sense are very large deployments -- hundreds of terabytes or petabytes of data -- and environments suffering from what's called "NAS sprawl." Putting data into a scale-out NAS environment can give you a single point of control, such as a single namespace, rather than having to spread it across many machines. You can manage capacity from a single console, which simplifies the environment.
It's also helpful when capacity utilization is approaching approximately 80% of a traditional NAS system's theoretical limit, because performance tends to degrade rapidly once you exceed that threshold. So it really is best for solving NAS sprawl and for unpredictable environments that are poorly served by traditional NAS. Other examples would be high-performance computing, parallel processing or simply dealing with high volumes of files where traditional NAS performance is not acceptable.
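As a rough illustration of that rule of thumb, the check is simple arithmetic: divide used capacity by total capacity and compare against the ~80% mark. The function name, threshold default and sample figures below are hypothetical, not part of any vendor's tooling:

```python
def needs_scale_out(used_tb: float, capacity_tb: float,
                    threshold: float = 0.80) -> bool:
    """Return True when utilization crosses the level at which
    traditional NAS performance tends to degrade (~80%)."""
    return used_tb / capacity_tb >= threshold

# A 500 TB filer holding 410 TB is at 82% utilization -- past the mark.
print(needs_scale_out(410, 500))  # True
# The same filer at 300 TB (60%) still has headroom.
print(needs_scale_out(300, 500))  # False
```

In practice the exact degradation point varies by file system and workload; the point is that monitoring utilization against a fixed headroom threshold is what tells you it's time to add capacity or move to a scale-out architecture.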
About the expert: Phil Goodwin is a storage consultant and frequent TechTarget contributor.
This was first published in February 2014