Users told SearchStorage.com they were pleased that BlueArc Corp. added capacity and performance to its Titan NAS head, but said some features they had been waiting for were still missing now that the announcement has come and gone.
The new Titan 2000 series comes in two versions: the Titan 2100 and Titan 2200. The 2100 is essentially a revamp of the first Titan, delivering 75,000 SPECsfs operations per second rather than the earlier version's 50,000, with the same capacity ceiling of 256 terabytes (TB). The 2200 is where most of the upgrades have been made: it supports up to 100,000 SPECsfs operations per second, twice the performance of the first-generation Titan, and scales in capacity up to 512 TB.
The startup, which analysts say has been gaining traction against Network Appliance Inc. (NetApp) in the NAS space, announced the latest version of its flagship product this week as part of its regularly scheduled roadmap, which calls for a major upgrade every 18 to 24 months.
"The reason I chose BlueArc in the first place was its scalability," said William Hiatt, national technology director for Advantage Sales and Marketing, who uses Titan in front of disk systems from Storage Technology Corp. at his main data center in San Diego.
Hiatt said he was pleased that the new Titan would add even more throughput and capacity, as Advantage had recently been formed through the merger of 30 different companies.
"We have 40 to 50 offices still waiting to be networked into our main data center," he said. He estimated each office had between 200 GB and 300 GB of data to add to his storage pool.
"The new Titan's throughput will help us consolidate all that data," he said.
Kelly Carpenter of Washington University in St. Louis was similarly impressed with the performance upgrades, which would provide the added boost in throughput needed to support new servers in his environment.
"We're buying blades and adding clients to our system fast -- we have 120 new server blades ready to come online right now," Carpenter said.
Still, both users said some of the improvements they'd asked for before the announcement haven't appeared yet.
According to Hiatt, he's still waiting for BlueArc to change the process of adding disk behind the head; right now, he said, when new disks are added, the data on existing disks is left in its original volumes and not restriped across the new capacity.
"Additional capacity is just slapped on the very end -- we can't get the benefit of more performance by adding more disk," he said.
Carpenter said the management of the Titan "could be nicer -- I would have suggested GUI improvements, things like that." He also said he wished BlueArc had improved its snapshot capabilities in this release.
"I understand NetApp has this patented, proprietary snapshot system and no one else can have the exact same thing because of that," he said. "But it would be nice to see snapshot functionality similar to NetApp's, particularly the SnapVault, which keeps a repository of snapshots on site."
Both users also said that support for global namespace and 10 Gigabit Ethernet (GigE), also included in the announcement, did not seem useful to them at the moment. Hiatt said he is running only one Titan and has no means or desire to cluster machines, while Carpenter pointed out that the 10 GigE support was only between Titan heads, not for connectivity to the network.
"They're up to 6 Gbps now [in connectivity to the network]," he said. "The full 10 would've been nice."
"We listen to our users very closely, and the things they want changed, we work on getting changed," said Louis Gray, senior manager of corporate marketing for BlueArc. According to Gray, support for 10 GigE to the network will be added later this year.
As for the data restriping issue brought up by Hiatt, "there may be a misunderstanding by the user on what storage pools and global namespace can offer on a single Titan," said Jon Affeld, director of product marketing for BlueArc. "They will allow them to separate out the new storage from the old storage, and have more control over how data is rebalanced."
"One of the reasons we don't do what [the customer] is asking for is that the time to restripe 60 to 70 TB creates a big workload. It's a time-consuming activity," said Jeff Hill, director of product management for BlueArc. "We feel he's much better off getting the system up and working faster."
"As with everything, there's a pro and a con," Hiatt said. "But we'd be willing to take the time to get better I/O for high-performance applications."