Isilon Systems Inc. announced the addition of an accelerator node to its line of clustered storage products this week, offering users the possibility of scaling performance independently of capacity in their storage clusters.
The new node, a 1U box that contains processors and memory but no disk, is plugged into a user's cluster like a regular node. Once the power and network cables are connected, the cluster automatically detects the new box just as it would a regular node and rebalances throughput loads across the cluster. The rest of the cluster gets the additional processing power for "free," since the accelerator node does not add more data to the system to be processed.
Users can add up to three accelerator nodes per regular Isilon box, and a maximum 250 terabyte (TB) cluster configuration could see up to 6 GBps of aggregate throughput in a single system, according to Isilon.
"We collect data for a month and then perform a large set of processor-intensive operations on it at once," said Dr. Parag Mallick, director of proteomics at the Cedars-Sinai Prostate Cancer Research Center. "Having the extra throughput without starving the cluster is very attractive."
Mallick's department stores large XML files of data taken from instruments called mass spectrometers; the files are essentially pictures of the different elements of blood -- "at about 30,000 megapixel resolution," Mallick said. The center's Isilon cluster regularly processes approximately 22 TB worth of these files in different ways, sometimes to evaluate whether a particular experiment was done properly, and sometimes to compare samples taken from patients over a 30-year span of clinical trials for different factors.
Some studies put even more of a strain on the lab's storage systems. "Let's say you want to ask a general question of what proteins are different in men and women," Mallick explained. "All of a sudden, you need to look at all of the data you've ever collected -- hundreds of terabytes."
Mallick said he has been using the Isilon system since attempts to use a SAN failed. So far, he said, it's worked well, but when he moves his operation to a new facility, with double the number of mass spectrometers demanding an estimated six times as many compute cycles as his current system, the accelerator node will probably come in handy.
Mallick said he has beta versions of the boxes in his lab for testing. "We're going to try and break it," he laughed. "That's what we do best." Among the things the lab will watch for while kicking the tires, Mallick said, is whether or not the addition of the accelerator node will really take processing load off the other cluster boxes.
"We're going to test how well it doles the general load out," he said. "We're going to really scrutinize if the load drops across the rest of the nodes."
Since April, Mallick said, he's experienced drive failures in his Isilon system twice. A spare drive in each of the boxes where a disk failed kept him from experiencing downtime, but he said he will be looking to see "if we have the same extent of drive failures -- if the accelerator improves the way the system does predictive caching."
Finally, Mallick said, he hopes to see Isilon add a few more capabilities to its technology -- especially in the area of backup and data migration.
"Right now they can't export data in volume to tape and have it parallelized," he said. "They'll do mirroring, but it'd be nice if they could provide another backup strategy."
He also said he hoped Isilon would give its boxes the ability to do predictive migration of data -- in other words, to move it to slower speed backup disk once it had aged a certain amount without being accessed. Currently, Mallick said, such migrations must be done by hand.
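The policy Mallick describes -- moving files to slower, cheaper disk once they have gone a set number of days without being accessed -- is simple to express in code. The sketch below is illustrative only (the function name and the 30-day threshold are assumptions, not an Isilon feature or API); it is roughly what an administrator doing this migration "by hand" with a script might run:

```python
import os
import shutil
import time

def migrate_aged_files(src_dir, dst_dir, max_age_days):
    """Move files in src_dir not accessed within max_age_days to dst_dir.

    Returns the list of file names that were moved.
    """
    cutoff = time.time() - max_age_days * 86400  # threshold in seconds
    os.makedirs(dst_dir, exist_ok=True)
    moved = []
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        # st_atime is the last-access timestamp; skip subdirectories.
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            shutil.move(path, os.path.join(dst_dir, name))
            moved.append(name)
    return moved

# Example: tier out anything untouched for 30 days.
# migrate_aged_files("/data/spectra", "/archive/spectra", max_age_days=30)
```

A real predictive-migration feature would presumably run continuously inside the storage system rather than as a cron-style sweep, but the access-age test is the core of the idea.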
According to John Brake, system administrator for Digital Dimension Studios, a special-effects house for film and TV located in Burbank, Calif., his shop of 30 2-D and 3-D digital effects artists hasn't come close to testing the throughput his Isilon system already has.
This, he said, is because the artists are still limited by the speed of their single-drive PC workstations, a bottleneck Brake said the studio is looking to eliminate. Meanwhile, by moving from DAS to a clustered system, Brake said, he's doubled productivity in the shop. What had been 4 TB of slowly accessed storage is now 8 TB, and it's growing every day.
Right now it's not as much of an issue, Brake admitted, but he said he would be looking at the accelerator nodes because the studio plans to replace the artists' workstations with more sophisticated RAID systems over the next few months. Furthermore, he said, the studio is trying to take on bigger projects, such as stereoscopic 3-D film effects, which it won't be able to branch into without faster workstations and even faster storage.
"With stereoscopic 3-D, our artists would be pulling 24 MB frames," Brake said. "That's where the accelerator node will come into play. But first, I need to see how fast I can get my system."
Clustered storage has been a hot item this week, with a number of companies making announcements in this space to coincide with the Supercomputing 2005 show in Seattle. Among them: Microsoft showed a new, and supposedly improved, version of its Windows Compute Cluster Server (CCS) 2003 software; OnStor Inc. unveiled the Bobcat 2280 NAS gateway, featuring a FastPath accelerator chip for higher performance in streaming media, content delivery and large-scale file storage applications; and Panasas Inc. added blades to its ActiveScale Storage Cluster.
Isilon, trying to get in on the game with Microsoft, announced interoperability with CCS 2003, as well as a live demonstration at the Supercomputing 2005 show. Isilon also announced that Asigra Inc.'s Televaulting backup and recovery software can be paired with its clusters.
"Traditionally, clusters have been Unix- and Linux-based," said Tony Asaro, senior analyst with the Enterprise Strategy Group. "This will be great for users who want to scale beyond the traditional server environment." This, of course, assumes CCS works this time around when it becomes available, currently projected for sometime early next year.