Published: 17 Feb 2009
| Traditional and not-so-traditional products and services earned honors this year, addressing such issues as data deduplication, solid-state storage and storage cloud services.
Storage managers didn't get much of a break in 2008 as the struggle to keep pace with growing capacity demands continued unabated, but there was still plenty of good news to hearten them. Scores of excellent products were rolled out in the past year, introducing new technologies or adding significant enhancements to tried-and-true storage technologies. As with most technological advances, "bigger" and "faster" were key themes; however, this year they were joined by "efficiency" and "cost savings," terms that provided even more encouragement to beleaguered storage pros who are quickly learning how to do more with less.
The editors of Storage magazine and SearchStorage.com, along with a panel of judges -- including users, consultants and analysts -- pored over this bumper crop of new and improved products and selected the 15 we feel stood out based on innovation, performance, ease of integration, functionality and value. Congratulations to all of this year's storage Product of the Year winners!
| Backup and disaster recovery software and services
GOLD: VMware Site Recovery Manager 1.0
It didn't take long for the server virtualization juggernaut to roar through data centers, forever altering the server landscape and even the meaning of the word "server." Led by VMware, virtualization applications have proven to be the most effective consolidation tool yet. But despite all of the benefits server virtualization brought, it's also responsible for causing a few headaches along the way. Just ask any storage administrator.
Virtualization has had a profound effect on the storage side of the shop, posing new challenges related to capacity allocation and data protection. Traditional methods can still back up virtualized servers for business continuity, but they lack the efficiency and resiliency that virtual environments require.
There's no shortage of vendors addressing this problem, but it has taken the company most responsible for the storage shakeup -- VMware Inc. -- to come up with some truly elegant solutions, like VMware Site Recovery Manager (SRM) 1.0.
VMware SRM ranked high in all of our evaluating categories and topped all backup and disaster recovery (DR) software and services for value. "Increasing use of VMware makes this software incredibly valuable to every organization," notes one judge. While other judges point out that implementing SRM takes some planning, all agree that the effort is worth it.
With VMware SRM you can automate much of the DR process. You use it to create a DR plan that defines how virtual servers will fail over based on the apps they host and their criticality (essentially documenting your DR process and creating a runbook). SRM then lets you test the automated failover scenarios without disrupting the production environment.
The automated recovery plans you build with SRM can be as sophisticated as necessary; you can control which servers are recovered first, the recovery sequence and which servers don't need to be recovered.
VMware SRM runs on a server at each site involved in the DR plan. It works hand in hand with replication apps from other vendors that tap into VMware "adapters" to integrate with SRM. EMC Corp.'s SRDF, FalconStor Software Inc.'s Continuous Data Protector and Hewlett-Packard (HP) Co.'s StorageWorks Continuous Access are just a few of the popular replication products that have been certified by VMware for SRM. VMware vCenter Server is required to run SRM.
Pricing for VMware Site Recovery Manager starts at a little more than $2,000 for a single-processor physical server and includes one year of support.
| Backup and disaster recovery software and services
SILVER: FalconStor Network Storage Server (NSS) 6.0
It's nearly impossible to overstate the popularity and importance of server virtualization today, and FalconStor Software Inc.'s Network Storage Server (NSS) 6.0 offers a variety of functionality designed to ease storage management in virtual server environments. NSS 6.0 is a software-based SAN product that acts as a storage virtualization and replication gateway to storage arrays from any vendor, provisioning virtual disks to VMware ESX.
NSS 6.0 is the first software-based storage virtualization and replication gateway to be certified by VMware as a storage virtualization device (SVD), and it enables VMware Site Recovery Manager capability on any VMware-certified storage hardware. NSS 6.0 also supports Microsoft Windows Server 2008 Failover Clustering and Hyper-V virtual server environments.
The product offers integrated support for Fibre Channel (FC), iSCSI and InfiniBand, and allows users to manage virtualized and non-virtualized storage from a single console. It uses thin provisioning to improve storage resource allocation and capacity management.
NSS enables VMware Site Recovery Manager with any VMware-certified storage array platform, and works with heterogeneous storage hardware at the primary data center and the disaster recovery site, which could translate into significant savings for many users. NSS 6.0's Application Snapshot Director for VMware complements VMware Site Recovery Manager to ensure that active application data can be replicated to a remote disaster recovery site with full transactional integrity. Finally, FalconStor's MicroScan Thin Storage Replication technology optimizes WAN utilization for more efficient replication and faster failback operations.
Our judges rank NSS 6.0 highly for its breadth of functionality, as well as for its innovation and performance. FalconStor NSS 6.0 is available as software only or as a turnkey appliance. Pricing starts at $2,000.
| Backup and disaster recovery software and services
BRONZE: Acronis Recovery for Microsoft Exchange
For most companies, email systems are communications lifelines linking them with their suppliers, customers and contractors; for many firms, doing business as usual means having fully functional, reliable email service.
Plenty of data protection vendors say they can protect and sustain this valuable corporate asset, but few do it as well and as easily as Acronis Recovery for Microsoft Exchange. Although not widely known in the enterprise realm, Acronis Inc. has established a worldwide presence, based largely on the success of its True Image line of bare-metal recovery apps. With Acronis Recovery for Microsoft Exchange, the company builds on that expertise and incorporates its trademark ease of use.
Acronis Recovery for Microsoft Exchange protects Exchange data stores at the brick or database level with near-continuous snapshots of mail transaction logs.
A management console lets you remotely manage all instances of the application running on your network. From the console, you can install agents, set options on Exchange servers, and launch recoveries for any Exchange server running the agent.
Restores are essentially point-and-click affairs, allowing recovery of entire mail databases, or individual mailboxes or messages. You can also easily pick the point from which you want to recover. One of Acronis Recovery for Microsoft Exchange's most impressive features is its ability to bring the email system back up and available to users within minutes while it's busy recovering the email data in the background. Acronis calls this dial-tone service, providing users with at least basic mail service during recovery, rather than making them wait until the operation completes.
A host of wizards makes deploying, configuring and using Acronis Recovery for Microsoft Exchange easy enough even for businesses strapped for IT resources. One judge calls it "a solid SMB backup product." We agree, especially with a price ($999 per server) that puts it within reach of most companies. But we suspect plenty of enterprises will give it a long, hard look, too.
| Backup hardware
For storage pros charged with backup, data growth remains the biggest challenge. Given the implications exploding capacity has on the cost of storage and energy, management best practices, staffing considerations and even data center floor space, users tell us that data deduplication has become a core value proposition when it comes to backup hardware products. But users also say data deduplication vendors will have to push the scalability and performance of their products to keep up with their data growth.
Data Domain Inc.'s data deduplication products have achieved the broadest acceptance, with the company boasting more than 1,800 customers to date. The DD690 continues the company's scaling approach, which follows the curve of the commodity processor market; the DD690 adds quad-core processors along with 10 Gigabit Ethernet connections. The company's Stream Informed Segment Layout (SISL) architecture means throughput performance isn't dependent on disk I/O availability, so the throughput of Data Domain systems increases with each Intel processor performance improvement, according to the company.
Thus the DD690 boosts the single-stream throughput of Data Domain's dedupe to 600 GB per hour. The DD690 can be configured as a DDX array (up to 16 DD690 controllers in 20U of rack space) to deliver up to 22 TB per hour of aggregate throughput and 28 petabytes (PB) of usable capacity. One of our judges calls it "an evolutionary advance for Data Domain."
The DD690's flexibility adds to its appeal. It supports both virtual tape library (VTL) and NAS interfaces, as well as multiple simultaneous protocols, including Ethernet and Fibre Channel. NAS for disk backup is gaining popularity, most notably among small- to medium-sized businesses (SMBs), one of the hottest storage markets in 2008. The NAS interface and enhanced performance also mean the DD690 can be used for nearline or secondary storage rather than strictly for backup; Data Domain's Retention Lock software also allows it to be used for compliance archiving.
In addition, Data Domain's system addresses some of the other key storage issues facing storage managers this year, including disaster recovery, wide-area networking (WAN), and storage for remote and branch offices. Data Domain offers distance replication software that takes advantage of the data reduction to reduce bandwidth consumption when sending data over a WAN, and the boosted capacity of the box means the fully configured DDX version can support up to 60 remote sites.
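Data Domain doesn't publish SISL's internals, but the core idea behind segment-level deduplication can be sketched in a few lines: carve the backup stream into segments, hash each one, and physically store only segments that haven't been seen before. The sketch below uses toy fixed-size segments; production systems like Data Domain's use variable-length segmentation and far more sophisticated indexing.

```python
import hashlib

def dedupe_store(stream: bytes, store: dict, segment_size: int = 8) -> list:
    """Split a byte stream into fixed-size segments, store each unique
    segment once (keyed by its hash), and return the list of keys --
    the 'recipe' needed to reconstruct the stream later."""
    recipe = []
    for i in range(0, len(stream), segment_size):
        seg = stream[i:i + segment_size]
        key = hashlib.sha256(seg).hexdigest()
        store.setdefault(key, seg)   # physically write only unseen segments
        recipe.append(key)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from its recipe."""
    return b"".join(store[k] for k in recipe)

store = {}
data = b"ABCDEFGH" * 3 + b"12345678"   # three duplicate segments, one unique
recipe = dedupe_store(data, store)
assert restore(recipe, store) == data
# Four segments are referenced, but only two are physically stored.
```

The payoff is exactly the ratio backup vendors advertise: repeated full backups reference mostly existing segments, so the physical store grows far more slowly than the logical data ingested.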
| Backup hardware
Reacting to the significantly amended Federal Rules of Civil Procedure (FRCP), many companies spent much of 2008 crafting their data-retention policies and deploying products to help them comply with the new guidelines for the legal discovery of electronic information. Other organizations deployed the same gear to archive inactive data and ease the impact of data growth on strained backup and management processes.
Because archived data is typically retained for long periods, its growth can become as big a problem as the one archiving is meant to solve. Maintaining data-retrieval performance and keeping ongoing ownership costs in check are critical as the archive repository grows. The ability to migrate data within the repository to take advantage of new technologies and economies of scale is also important. Toss in regulations like the Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA), and the situation is further complicated by the need for heightened security and data integrity.
Permabit Technology Corp.'s Enterprise Archive Data Center Series, introduced in early 2008, addresses those challenges with a combination of sub-file deduplication and compression, as well as a grid-based architecture called RAIN-EC. Dividing data among multiple hardware nodes in the grid yields several benefits: it boosts performance to up to 2 Gbps through parallel processing; keeps archive data available in the event of a node failure; allows for rolling migrations to new hardware over time; uses standard interfaces like CIFS, NFS and WebDAV that keep data accessible to multiple applications; and includes features such as data verification, replication, write once read many (WORM) and encryption for compliance.
The data integrity features and grid architecture also mean users can build the archive using space-efficient 1 TB drives, while avoiding lengthy RAID rebuilds in the event of a drive failure. All of that adds up to scalability -- up to 3 PB, to be exact. When asked about the product's appeal, one user of the Permabit archive simply repeated the IT pro's data-growth mantra: "Faster and more capacity."
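Permabit hasn't published the details of RAIN-EC, but the principle of surviving a node failure without a lengthy RAID rebuild can be illustrated with a toy single-parity stripe across nodes. The XOR scheme and fragment counts below are purely illustrative; the real erasure code tolerates more simultaneous failures.

```python
from functools import reduce

def stripe_with_parity(block: bytes, n_data: int = 4) -> list:
    """Split a block across n_data 'nodes' and add one XOR parity
    fragment, so the loss of any single node is recoverable.
    (Toy single-parity scheme, not Permabit's actual RAIN-EC code.)"""
    size = -(-len(block) // n_data)            # ceiling division
    frags = [block[i * size:(i + 1) * size].ljust(size, b"\0")
             for i in range(n_data)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]

def recover(frags: list, lost: int) -> bytes:
    """Rebuild the fragment at index `lost` by XORing all survivors."""
    survivors = [f for i, f in enumerate(frags) if i != lost]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

frags = stripe_with_parity(b"archive-object-payload!!", n_data=4)
lost = 2                       # simulate one node going offline
assert recover(frags, lost) == frags[lost]
```

Because each fragment lives on a different node, repairing a failed drive means regenerating only its fragments from the survivors, rather than rebuilding an entire 1 TB disk block by block.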
| Backup hardware
If 2008's theme was ever more data growth, some storage vendors responded to the challenge by offering greater efficiency and more flexibility in their products. The Quantum Corp. DXi7500 data deduplication device was the first to offer users a choice between "in-line" and "post-process" data deduplication approaches. While other vendors bicker over the relative merits of the two methods, Quantum lets users decide which one works best for them.
The DXi7500 also allows both approaches to be used simultaneously for different backup jobs. If users don't want to get so granular with policy settings, the product's "adaptive mode" can automatically adjust the data deduplication process based on the data ingest rate.
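As a rough illustration of what such an adaptive policy might look like, the sketch below switches between the two dedupe approaches based on ingest rate. The threshold and decision logic are hypothetical, not Quantum's actual algorithm.

```python
def choose_dedupe_mode(ingest_mb_per_s: float,
                       inline_capacity_mb_per_s: float = 400.0) -> str:
    """Illustrative 'adaptive mode' policy: dedupe in-line while the
    ingest rate is within what the dedupe engine can absorb in real
    time; otherwise land the backup to disk first and dedupe it
    post-process so the backup window isn't stretched."""
    if ingest_mb_per_s <= inline_capacity_mb_per_s:
        return "in-line"
    return "post-process"

assert choose_dedupe_mode(250.0) == "in-line"      # light backup job
assert choose_dedupe_mode(900.0) == "post-process" # heavy ingest burst
```

The trade-off the policy encodes is the same one vendors argue about: in-line dedupe never stores duplicate data but caps ingest speed, while post-process ingests at full disk speed and reclaims space afterward.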
The product gives users a choice when it comes to integrating physical tape into backup schemes, neutralizing another frequent bone of contention in the market for deduplicating virtual tape libraries (VTLs). Backup software certified with the device can initiate, track and control all writes to tape, or the DXi7500 can manage copies to tape with shadow tape creation. It can also write copies of backup files to a directly connected tape library, minimizing the overhead on the rest of the environment when creating tape copies.
The DXi7500 scales from 9 TB raw to 180 TB raw, and offers up to 4 TB per hour compressed throughput, according to Quantum. It can be used with the smaller models in the DXi product line to transmit backup data from remote sites to a central location. Replication is asynchronous, automated, encrypted and operates as a background process.
The influence of the DXi line extended well beyond Quantum in 2008. With vendors like Data Domain and Riverbed Technology Inc. already paying royalties for data deduplication to Quantum (based on a patent portfolio it bought with ADIC subsidiary Rocksoft in 2006), DXi was also picked up by EMC Corp. as the basis of a new data deduplication product line in May.
| Disk and disk subsystems
The strong point of BlueArc Corp.'s Titan unified NAS and iSCSI system can be summed up in one word: performance. Our judges consistently gave the Titan 3200 high marks for performance, and Titan customers often cite performance as the top reason for buying the BlueArc systems.
Bumping up performance was the main emphasis of the Titan 3200 that rolled out in March. The new system offers up to 200,000 IOPS and 20 Gbps of throughput, double the numbers for the Titan 2000. BlueArc's new hardware also supports file systems of 256 TB, 64 virtual storage servers and eight cluster nodes for a maximum capacity of 4 PB, up from 2 PB with the 2000 series.
But BlueArc did more than bump up performance with the upgrade. It added a new set of open application programming interfaces (APIs) that lets BlueArc partners and other developers write applications for Titan. These applications include information lifecycle management (ILM), data virtualization, and data retention and reduction.
The Titan 3200 supports solid state, Fibre Channel, SATA and WORM-protected storage. Like previous Titan systems, it's best suited for applications requiring high bandwidth and IOPS, such as high-performance computing and Web-based services. Hitachi Data Systems sells the Titan 3200 as its high-performance NAS 3000 platform.
Like previous Titans, the 3200 implements its file system in silicon on a field-programmable gate array (FPGA) rather than in software. One of our judges describes the Titan 3200 as the "fastest NAS in the market with its innovative FPGAs. The value is good because of the ability to consolidate multiple NAS systems, and scalability is impressive."
Starting price for the Titan 3200 is $100,000.
| Disk and disk subsystems
Intel Corp. planted its flag in the enterprise solid-state drive (SSD) market with the X25-E (for Extreme) device.
Intel's newest SSDs plug into 2.5-inch SATA drive sockets, and deliver up to 250 MBps sustained read, 170 MBps sustained write, 35,000 IOPS read and 3,300 IOPS write performance. The X25-E is available in 32 GB and 64 GB models. The 32 GB X25-E is capable of writing up to 4 PB (petabytes) of data over a three-year period (3.7 TB/day), and the 64 GB version can write up to 8 PB over that period.
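The endurance figure is easy to sanity-check: writing 4 PB over a three-year period works out to roughly 3.7 TB per day, matching Intel's stated rate.

```python
# Verify the 32 GB X25-E endurance claim: 4 PB written over three years.
PB_WRITTEN = 4                 # rated lifetime writes, 32 GB model
DAYS = 3 * 365                 # three-year period
tb_per_day = PB_WRITTEN * 1000 / DAYS   # decimal units: 1 PB = 1000 TB
assert round(tb_per_day, 1) == 3.7      # ~3.7 TB/day, as Intel quotes
```

Put another way, the drive is rated to rewrite its entire 32 GB capacity more than a hundred times a day for three years.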
What sets the X25-E apart from other early enterprise SSDs is the way it speeds the write process. Intel uses single-level cell (SLC) flash memory for its extreme drives, storing one bit per memory cell vs. two bits with the more common multi-level cell (MLC) flash memory. MLC drives have greater capacity, but SLC drives perform much faster writes. In the case of Intel's SSDs, its MLC versions write at up to 70 MBps vs. 170 MBps for the SLC drives. SLC SSDs are more expensive, as SSDs remain a premium play.
Verari Systems Inc. is shipping Intel X25-E SSDs in its HyDrive enterprise storage blade and BladeRack 2 X-Series blade servers. Sun Microsystems Inc. has also pledged support for the drives.
"Intel shook things up with this product," says one of our judges, a storage manager. He calls Intel "the first major player in the flash drive market that can really push the performance ceiling."
Flash will need strong performance to make a market impact; X25-E pricing begins at $695 for 32 GB.
| Disk and disk subsystems
By offering a SAS backplane and active-active controllers in a midrange storage system, Hitachi Data Systems is at least slightly ahead of its time.
Industry experts agree it's only a matter of time until SAS replaces Fibre Channel as the performance drive of choice in storage arrays, but Hitachi got the ball rolling among major vendors with the AMS 2000 family. The systems are available with SAS or SATA drives.
Hitachi also rolled out a new Dynamic Load Balancing Controller for the platform. The symmetric active-active controller doesn't require logical unit number (LUN) assignments to match preferred paths from server to controller, and can send I/Os to any host port without a performance penalty. The system also monitors controller utilization rates and rebalances the load between controllers. The AMS 2000 series supports disk spin down when no I/O activity is taking place to save power.
The 2000 family comes in three versions: the AMS 2100, AMS 2300 and AMS 2500. Hitachi claims an IOPS cache burst rate of 400,000 for the AMS 2100 and 2300, and 900,000 for the 2500. The sustained throughput ranges from 1,250 MBps for the 2100 to 2,400 MBps for the 2500. Capacity scales to 118 TB on the AMS 2100, and to 472 TB on the AMS 2500.
"This is proof that midtier storage is growing up, with true active-active controllers," says one of our judges, a SAN architect. "It's a real big gain in stability, performance and management for real-world applications."
Hitachi AMS 2100 pricing starts at approximately $31,500.
| Networking equipment
Riverbed Technology Inc.'s RiOS operating system impressed our judging panel with its ability to consolidate and simplify remote-location infrastructure.
RiOS reduces the hardware footprint in remote locations by virtualizing edge services, accelerating application performance and simplifying remote-location administration. The RiOS OS powers the company's Steelhead WAN acceleration appliances, mobile client and Riverbed Services Platform (RSP).
Among the new features introduced in RiOS 5.0 is the RSP data services platform, which enables virtualized edge services without deploying additional physical servers. Application acceleration features now eliminate the burdensome storage requirement needed to support widely used applications, including Web-based apps. And application-level protocol optimization for Microsoft Exchange 2007 adds to the OS's existing support for Exchange 2000 and 2003.
One judge remarks that Version 5.0 offers "major improvements for Steelhead customers."
The company claims its Steelhead appliances will improve simple file-sharing operations by a hundredfold, cut disaster recovery windows tenfold or more, and reduce WAN bandwidth utilization by 60% to 95%. The judges give RiOS 5.0 high marks for innovation, performance and functionality.
The Steelhead appliance models range in price from $3,495 to $129,995. Ten mobile client licenses cost $3,499, and can be purchased with the appliances or later. Riverbed estimates that organizations will require a license for every three to five mobile users, with the cost per user approximately $87.
| Networking equipment
The Brocade DCX Backbone network switching platform allows administrators to consolidate servers and SAN switches, while increasing application service levels and energy efficiency. One networking category judge calls it an "evolutionary upgrade" for the networking market.
Introduced in January 2008, the DCX Backbone provides 8 Gbps throughput, which allows broader server virtualization, data center consolidation, and energy and overhead cost savings.
Each DCX Backbone supports up to 384 ports. Each of its eight blade slots features 256 Gbps of throughput, which means the Backbone can deliver up to 3 Tbps of chassis bandwidth. Inter-chassis link ports can connect two DCX Backbones to deliver 6 Tbps of dual-chassis bandwidth. One judge calls it "crazy fast."
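The headline bandwidth number follows directly from the port count: 384 ports at 8 Gbps each comes to just over 3 Tbps, and linking two chassis doubles it.

```python
# Rough arithmetic behind Brocade's chassis bandwidth figures.
PORTS = 384                # maximum ports per DCX Backbone chassis
GBPS_PER_PORT = 8          # 8 Gbps Fibre Channel
chassis_tbps = PORTS * GBPS_PER_PORT / 1000
assert chassis_tbps == 3.072        # the "up to 3 Tbps" claim
assert 2 * chassis_tbps == 6.144    # two ICL-linked chassis: ~6 Tbps
```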
It received the highest performance marks in the networking category, with the judges commending it for functionality and ease of use.
Blade slots can be used to provide 10 Gbps FC connectivity, FCIP SAN extension and fabric-based applications; they can also support 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet (CEE) protocols, which simplifies LAN and SAN server connections.
The DCX Backbone uses less than 0.5 watts per Gbps, which the company says makes it 10 times more efficient than competing products.
The network infrastructure environment can be administered with the new Data Center Fabric Manager application, and it's compatible with Brocade's B-Series and M-Series components. Brocade says DCX Backbone pricing begins in the low six figures and varies by configuration, software licensing and support options.
| Networking equipment
Network Executive Software (NetEx) Inc.'s HyperIP 5.5 software enables faster WAN data replication and recovery by accelerating TCP applications using standard Internet connections. HyperIP provides an alternative to dedicated point-to-point data lines, and is available in a 1U-sized appliance using standard, off-the-shelf components.
The judges give it high marks for ease of integration and value. One judge says HyperIP is "slick data transport optimization that works, is very reasonably priced and does what it says."
HyperIP compresses block-level data at ratios of up to 15:1, which lets it boost WAN performance to as much as 800 Mbps of throughput for a single TCP/IP connection. According to the company, that's the highest performance of any optimization appliance on the market, and 25% to 100% faster than competing products.
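The arithmetic behind pairing compression with WAN acceleration is simple: if payload is compressed before it hits the wire, application-level throughput is the circuit rate multiplied by the achieved ratio. The calculation below is idealized, ignoring protocol overhead and data that doesn't compress.

```python
def effective_throughput_mbps(wire_rate_mbps: float,
                              compression_ratio: float) -> float:
    """Application-level throughput over a compressed WAN link: the
    circuit carries wire_rate_mbps of compressed bytes, which expand
    by compression_ratio at the far end. (Idealized model.)"""
    return wire_rate_mbps * compression_ratio

# e.g. a 100 Mbps circuit at HyperIP's best-case 15:1 ratio
assert effective_throughput_mbps(100, 15) == 1500
# uncompressible data (ratio 1:1) gets no benefit
assert effective_throughput_mbps(100, 1) == 100
```

In practice the achieved ratio varies widely with data type, which is why vendors quote compression as "up to" a ceiling rather than a guarantee.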
As an appliance, it enables data transport at wire speed and leverages the proprietary HyperIP TCP application acceleration technology to mitigate latency and packet loss, and protect against variable circuit quality conditions.
The current HyperIP 5.5 release extends support for multisite configurations, allowing large-scale remote data replication deployments. It supports more than 8,000 application TCP connections, enabling concurrent replication, migration and recovery processes. A command-line interface (CLI) provides single-response commands, and multiple-system images let users install new software image upgrades while the software is running.
HyperIP is easy to implement: simply attach it to a distance-separated Ethernet network segment so users can leverage existing Ethernet/IP WAN infrastructure. Traffic is directed to HyperIP via network, server, NAS device or storage array routing statements. HyperIP parameters and feature options can be modified through the Web-accessible user interface.
Pricing for HyperIP 5.5 starts at $6,000.
| Storage management tools
Virtualization was one of the hottest topics in storage last year, fueled in no small part by the popularity of virtual server technology. Now VMware Inc., the leading server virtualization vendor, has extended its reach to storage and captured top honors in the management category with its Storage VMotion software.
Users had been asking for a storage version of VMware's VMotion tool, which moves running virtual machines (VMs) from one physical server to another. With Storage VMotion, VMware adds the ability to migrate running virtual machine disk files within and across data storage systems with no downtime, ensuring continuous service availability and transaction integrity.
Storage VMotion "set the standard for virtual guest availability," according to one judge. "It increases application uptime and I/O flexibility -- the reason most [people] buy VMware today."
Storage VMotion works by moving a virtual machine's home directory (containing information on configuration, log files, etc.) to the new location before moving the virtual machine disk file. It creates a "child disk" for each virtual machine being migrated, and all disk writes are directed to that child disk once the migration starts. The parent, or original virtual disk, is then copied from the old storage device to the new one. The child disk is reunited and consolidated with the new parent disk, and the ESX host server is redirected to the new parent disk location.
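The sequence above can be sketched as a toy simulation. The classes, file names and block model below are purely illustrative, not VMware's implementation; the point is the ordering of the steps, which lets the VM keep running throughout.

```python
class Datastore:
    """Toy datastore: a named dict of file name -> list of blocks."""
    def __init__(self, name):
        self.name = name
        self.files = {}

class VM:
    """Toy running VM whose active disk lives on a datastore."""
    def __init__(self, name, datastore):
        self.name = name
        self.datastore = datastore
        datastore.files[name + ".vmdk"] = ["base-block"]   # parent disk
        self.active_disk = name + ".vmdk"

    def write(self, block):
        self.datastore.files[self.active_disk].append(block)

def storage_vmotion(vm, dst):
    src = vm.datastore
    # 1. Copy the VM's home directory (config, logs) to the destination.
    dst.files[vm.name + ".vmx"] = ["config"]
    # 2. Create a child disk; redirect all new writes to it.
    child = vm.name + "-child.vmdk"
    dst.files[child] = []
    vm.datastore, vm.active_disk = dst, child
    vm.write("write-during-migration")        # the VM never stops
    # 3. Copy the parent (original) disk from the old datastore.
    parent = vm.name + ".vmdk"
    dst.files[parent] = list(src.files[parent])
    # 4. Consolidate the child into the new parent, then retire the child.
    dst.files[parent] += dst.files.pop(child)
    vm.active_disk = parent
    # 5. The old copy can now be released.
    del src.files[parent]

src, dst = Datastore("old-array"), Datastore("new-array")
vm = VM("mail01", src)
vm.write("pre-migration-block")
storage_vmotion(vm, dst)
assert dst.files["mail01.vmdk"] == ["base-block", "pre-migration-block",
                                    "write-during-migration"]
assert "mail01.vmdk" not in src.files
```

The child-disk step is what makes the migration non-disruptive: writes issued while the bulk copy is in flight land on the child and are merged afterward, so no I/O is lost and no downtime window is needed.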
The product is most useful for VMware customers who want to minimize service disruption while performing maintenance or migrating to new storage systems or different classes of storage. It's also helpful for balancing or optimizing storage workloads and addressing performance bottlenecks. Storage VMotion supports both FC and iSCSI SANs, and works with any operating system and application that runs on the VMware Infrastructure.
Customers who buy the Enterprise Edition of VMware Infrastructure 3 -- the server virtualization platform that includes VMware ESX Server 3.5 -- get Storage VMotion as part of the package. List price is $5,750 per two processors. Both VMotion and Storage VMotion require VMware vCenter Server (licensed separately) to enable central management of the entire VMware virtual environment from a single console.
VMotion and Storage VMotion are available as part of a standalone package for purchasers of the less expensive Foundation and Standard versions of VMware Infrastructure. The standalone price is $3,495 per dual processor.
| Storage management tools
Data deduplication took center stage in 2008, as major storage vendors rolled out backup products, and corporate IT managers either tested the waters or clamored for more information about the red-hot technology.
NetApp Deduplication claimed runner-up honors in the storage management competition, differentiating itself from other large vendors with its ability to work with primary storage data in addition to backup and archival data.
NetApp announced in July 2008 that customers of its V-Series family of storage virtualization products could also use the deduplication technology to reduce redundant copies of data on disk arrays from other major vendors, including EMC Corp., Hewlett-Packard Co. and Hitachi Data Systems.
Judges awarded NetApp Deduplication technology the highest marks of any storage management entrant in the areas of "ease of integration into environment" and "ease of use and manageability." NetApp Deduplication is a simple command-based feature that's free to users of the company's Data Ontap operating system, which is part of all NetApp FAS Series and V-Series storage systems.
"Porting dedupe into the core OS is a winner," comments one judge.
NetApp Deduplication operates as a background process on FAS Series and V-Series systems, and the impact on read/write operations is minimal. Users can schedule deduplication for off-peak times, which can be especially important in minimizing performance hits with heavily used primary storage apps. They can also select the data sets that will produce the greatest benefit, leaving out sets such as encrypted data that won't deduplicate efficiently.
NetApp's Web site has a "deduplication calculator" for customers to compute the space savings they can expect to see.
| Storage management tools
Cloud storage generated a flurry of activity last year as a steady stream of products and services hit the market, so it's fitting that one of the promising new offerings, Nirvanix Inc.'s CloudNAS, rounds out the top products in storage management for 2008.
CloudNAS can transform any Linux or Windows server at a customer site into a virtual NAS gateway to the Nirvanix Storage Delivery Network's (SDN) encrypted offsite storage. The main distinguishing characteristic of the Nirvanix approach is the use of standard storage protocols to access the SDN online storage service.
Nirvanix CloudNAS mounts the SDN as a virtual drive that can be accessed via NFS, CIFS, FTP or as a virtual tape library (VTL) target, through supported archiving and backup applications. Other storage services, such as Amazon's Simple Storage Service (S3) and Rackspace Hosting Inc.'s Mosso, require development to an API.
Released in October 2008, Nirvanix CloudNAS hasn't had much chance to make an impact yet, but the judges rate the technology as highly innovative. One calls the Nirvanix approach a "great alternative for second-tier unstructured data" with "excellent multisite [disaster recovery] capability."
Once CloudNAS is installed on a server, an administrator can point existing applications and storage processes to it and set file, directory or access permissions. Users access the Nirvanix-mapped drive from their existing applications.
Behind the Nirvanix SDN is a clustered file system that includes all of Nirvanix's globally distributed storage nodes under a single namespace. Nirvanix says CloudNAS offers built-in data disaster recovery and automated policy-based data replication across up to three of the geographically dispersed storage nodes.
Data is transferred from CloudNAS via the Internet or via Nirvanix cross connect services, which provide a direct connection from the customer site to one of Nirvanix's storage nodes.
CloudNAS is available as a free download for customers that have a contract with Nirvanix for 1 TB or greater. Optional round-the-clock support is available at $200 per month per server.