The FXT Series nodes are available in two models, the FXT 2300 and FXT 2500. Each FXT Series node contains 64 GB of read-only DRAM and 1 GB of battery-backed NVRAM. The FXT 2300, list priced at $52,000, contains 1.2 TB of 15,000 rpm SAS drives. The FXT 2500, at $72,000, contains 3.5 TB of SAS disk.
The nodes can scale out under a global namespace. CEO Ron Bianchini said Avere has tested up to 25 nodes in a cluster internally and the largest cluster running at a beta testing site contains eight nodes, though there's no technical limitation on the number of nodes the cluster can support.
The clustered NAS system can be attached to third-party NFS NAS systems for archival and backup storage. "Any NFS Version 3 or above connecting over TCP is compatible," Bianchini said. Bianchini was CEO at Spinnaker Networks when NetApp bought the clustered NAS company in 2003.
Avere customers can set a data-retention schedule using a slider in the user interface to tell the FXT system how closely to synchronize the third-party SATA NFS device (which Avere calls "mass storage"). If the slider is set to zero, the FXT Series ensures that writes to the mass storage device have completed before acknowledging a write to the application. The slider can be pushed up to four hours, meaning mass storage can be up to four hours out of sync with the primary FXT Series.
Bianchini said two of eight beta sites are running with the retention policy set to zero. "The downside is that you don't get the same scale with writes as you do with reads" because the system has to wait for the SATA-based filer to respond before committing writes to primary storage, he said. "The environments using it this way aren't doing a lot of writes."
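The tradeoff Bianchini describes is the classic write-through vs. write-back distinction. A minimal sketch of that acknowledgment policy, assuming hypothetical names (`FxtWritePolicy`, `sync_delay_s`) that are illustrative rather than Avere's actual implementation:

```python
from dataclasses import dataclass, field
from time import monotonic

@dataclass
class FxtWritePolicy:
    """Hypothetical model of the slider-controlled sync policy."""
    sync_delay_s: float                            # slider value: 0 .. 14400 s (4 hours)
    pending: list = field(default_factory=list)    # dirty writes not yet flushed

    def write(self, block, mass_storage) -> str:
        if self.sync_delay_s == 0:
            # Write-through: flush to mass storage before acknowledging,
            # so write throughput is bounded by the SATA filer.
            mass_storage.write(block)
            return "ack-after-flush"
        # Write-back: acknowledge immediately; flush within the delay window.
        self.pending.append((monotonic(), block))
        return "ack-immediate"

    def flush_expired(self, mass_storage, now: float):
        # Push any pending write older than the allowed delay to mass storage.
        keep = []
        for t, block in self.pending:
            if now - t >= self.sync_delay_s:
                mass_storage.write(block)
            else:
                keep.append((t, block))
        self.pending = keep
```

With the slider at zero, every write pays the round trip to the slower filer, which is why the zero-delay beta sites are light on writes.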
FXT Series automates data placement on tiered storage
Bianchini said the Avere FXT Series' proprietary algorithms assess patterns in application requests for blocks of storage within files, including whether they call for sequential or random reads or writes, and then assign blocks to appropriate storage tiers for optimal performance. In Version 1.0 of the product, the primary tiers are DRAM for read-only access to "hot" blocks, NVRAM for random writes, and SAS drives for sequential reads and writes. The NVRAM tier buffers random writes bound for the SAS capacity so they reach disk faster. Avere plans to add flash for random read performance, but not in the first release.
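The v1.0 tier mapping described above can be sketched as a simple decision rule. This is an illustrative reading of the article's description, not Avere's proprietary algorithm:

```python
def assign_tier(op: str, pattern: str, hot: bool) -> str:
    """Map an observed block-access pattern to a v1.0 storage tier.

    op: "read" or "write"; pattern: "sequential" or "random";
    hot: whether the block is frequently accessed.
    """
    if op == "read" and hot:
        return "DRAM"      # hot blocks served read-only from DRAM
    if op == "write" and pattern == "random":
        return "NVRAM"     # NVRAM buffers random writes headed for SAS
    return "SAS"           # sequential reads and writes go to SAS disk
```

A future flash tier would presumably add a branch for hot random reads that overflow DRAM, per Avere's stated roadmap.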
Along with automatic data placement within each node, the cluster load balances across all nodes automatically according to application demand for data.
"If one block gets super hot on one of the nodes, another node will look at the other blocks in its fastest tier, and if one is a copy, it will throw out the older one and make a second copy of the hot block" to speed performance, Bianchini said. "As the data cools, it will back down to one copy as needed."
Avere's system is another approach to automating data placement across multiple tiers of storage, an emerging trend as storage systems mix traditional hard drives with solid-state drives (SSDs). The Data Progression feature of Compellent Technologies Inc.'s Storage Center SAN may be the closest to Avere's approach, though Compellent migrates data over much longer periods of time according to user-set policy rather than automatically on the fly.
"The industry has graduated to a more automated way of moving stuff around in storage systems, but as an industry we're still babes in the woods on that," said Arun Taneja, founder and consulting analyst at Hopkinton, Mass.-based Taneja Group. "We've taken steps in the right direction, but right now it's kind of piecemeal. [Avere's product] addresses the need to tightly integrate and design these systems from the ground up."
Avere has publicly referenced one customer, the Salk Institute in La Jolla, Calif., but spokespeople for the biological research organization were unavailable for comment as of press time. "At the end of the day, this is science, and if they can prove the science, they have a chance," said Steve Duplessie, founder and senior analyst at Milford, Mass.-based Enterprise Strategy Group. "They have to prove they can maintain the performance of Fibre Channel, but still can rip out the Fibre Channel infrastructure, which accounts for a large percentage of [overall system cost]. If that proves true, an idiot could sell this."
Noemi Greyzdorf, research manager, storage software at Framingham, Mass.-based IDC, said that if the system performs as Avere claims, it could be a boon in VMware environments, where hot spots can crop up unpredictably. "Dynamically allocating storage based on requirements for IOPS and throughput addresses a significant portion of the performance challenges associated with server virtualization environments," she said. "From what I have seen, I think it has legs to stand on if executed properly."