After confronting scalability problems with NFS, the Bioinformatics Core at the University of Alaska Fairbanks adopted an iSCSI storage system and has some lessons to share from its implementation.
In this interview, Shawn Houston, technical lead in the Biotechnology Computing Research Group, explains why iSCSI was chosen, what products his group is using for the iSCSI environment, what problems the group encountered, and what others considering such a move should know before implementation. You can listen to the podcast as an MP3 or read the transcript below.
SearchStorage.com: When did you implement your iSCSI storage system?
Houston: Well, it was about three years ago, almost four now. We were changing over from a small cluster to a much bigger one for our computational back end, and I was looking to save money but still maintain a fairly high-performance system.
SearchStorage.com: What did you have prior to implementing the iSCSI storage system?
Houston: Our original cluster had only NFS [Network File System]. It was an Apple Xserve cluster. It actually started out as a cluster of [Apple] G5 towers and grew into an Xserve cluster.
SearchStorage.com: How many nodes do you have, and what type of clients are they?
Houston: I think the last count was just under 40. All of our clients are open source implementations, either with Linux or Mac OS.
SearchStorage.com: What is the storage used for?
Houston: All of our storage is used for research purposes. We do have a few classes that are taught on the system, but pretty much the bulk is research.
SearchStorage.com: Do your users have any special requirements regarding availability?
Houston: Because of the kinds of code that we run here, I attempt to provide 100-day uptime to our users. And I pretty much always meet my uptime target. I had one researcher request an extended uptime and I kept the system up for almost 240 days for him.
SearchStorage.com: What made you change to iSCSI?
Houston: NFS tends to stop scaling around seven or eight nodes in a clustered system. Our tests showed that iSCSI tended to scale much higher than that. We would have preferred to have gone to a storage-area network [SAN] at that time, but price considerations drove us toward more of a commodity solution.
SearchStorage.com: What iSCSI products are you using?
Houston: We're using two different products at the moment. One is from Sanrad Inc. It's a SCSI-to-iSCSI bridge. And we also use Nexsan Technologies' products, which have iSCSI native to them.
SearchStorage.com: What features of the Sanrad and Nexsan solutions convinced you to select them over the other systems you looked at?
Houston: It was mostly on the Sanrad side. They gave me a fan-out of something in the thousands of nodes and an internal bandwidth that exceeded that of the back-end storage device.
SearchStorage.com: Was the transition from an NFS environment to an iSCSI storage system a big change for you?
Houston: It was actually a very, very big change, mostly going from NFS, which natively exports any file system over the network, to iSCSI, which presents raw block devices and therefore requires a global file system of some sort when multiple nodes share the same storage.
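As a concrete illustration of that block-level difference, the open-iscsi initiator tools on Linux discover and log into targets roughly like this (the portal address and target IQN below are hypothetical placeholders, not the university's actual configuration):

```shell
# Discover the targets a storage portal advertises
# (10.0.0.50 is a made-up portal address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.50

# Log into one of the discovered targets
# (the IQN below is a made-up example)
iscsiadm -m node -T iqn.2004-01.com.example:storage.lun1 \
    -p 10.0.0.50 --login

# The target now appears as a local block device (e.g., /dev/sdb);
# unlike an NFS export, it carries no shared file system by itself.
```

Each node sees a raw disk, which is exactly why a cluster-aware file system becomes necessary the moment more than one node mounts the same LUN.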
SearchStorage.com: What back-end storage system are you currently running?
Houston: Nexsan SATABeasts and SATABoys.
SearchStorage.com: And what's your total capacity?
Houston: I would have to say we have about 22 TB of installed disk, but just within our core we're only using about 11 TB of it.
SearchStorage.com: How did the implementation of the iSCSI storage system go? Were there any issues?
Houston: The biggest one was within the Linux operating system. iSCSI's gone through two major revisions, and at the time we started implementing, it wasn't generally built into most of the commercial Linux versions we were looking at.
SearchStorage.com: What type of issues did you have between the iSCSI system and your Linux operating systems?
Houston: We've had some stability issues going from the initial implementation of iSCSI and the older 2.4 kernels to the implementation in the 2.6 kernels.
SearchStorage.com: Were there any system changes that brought about the stability issues?
Houston: Going from Red Hat Enterprise Linux 3 to Red Hat Enterprise Linux 4 was a major change, and then going from Enterprise Linux 4 to Enterprise Linux 5 was another major change.
SearchStorage.com: Are there any ongoing operational issues you've experienced?
Houston: The biggest operational issue is the shut-down and startup of large complex systems, and I don't think that's so much of an iSCSI issue as it is a global file system [GFS] issue.
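To make that ordering problem concrete: on a RHEL 4/5-era GFS cluster node of the kind described here, a clean shutdown has to tear the layers down in roughly the reverse order they came up, and getting this sequence wrong across dozens of nodes is where most of the pain lives. A sketch, assuming that era's Red Hat cluster service names and a hypothetical mount point:

```shell
# Hypothetical per-node shutdown ordering (RHEL 4/5-era clustering):
umount /mnt/shared                  # 1. unmount the GFS file system first
service gfs stop                    # 2. stop the GFS service
service clvmd stop                  # 3. stop clustered LVM
service cman stop                   # 4. leave the cluster manager
iscsiadm -m node --logoutall=all    # 5. only now drop the iSCSI sessions
# Startup is the same list in reverse; scripting it per node is what
# makes bringing a large cluster up or down manageable.
```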
SearchStorage.com: What tips would you give IT administrators considering a move to an iSCSI storage system?
Houston: I would say the first tip is to check the fan-out on the back-end systems they're looking at. A lot of the systems, like the base Nexsan system, have a fan-out of eight, which is very limiting.
The second tip is to get past the iSCSI issue and start looking at the global file systems. If you're comfortable implementing global file systems, iSCSI's a no-brainer. If you're not comfortable with a global file system, start there first.
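For an administrator taking that advice and starting with global file systems today, the GFS2 workflow looks roughly like this (the cluster name, file-system name, device, and journal count below are all hypothetical):

```shell
# Create a GFS2 file system on a shared block device (e.g., an iSCSI LUN).
# "mycluster" and "shared01" are made-up names; -j 4 allocates journals
# for four nodes, one journal per node that will mount the file system.
mkfs.gfs2 -p lock_dlm -t mycluster:shared01 -j 4 /dev/sdb

# Mount it on each node once the cluster and locking services are up
mount -t gfs2 /dev/sdb /mnt/shared
```

The point of practicing this first is that the cluster membership and distributed locking pieces, not iSCSI itself, are where the complexity sits.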
SearchStorage.com: What do you tell people who ask if you're happy that you made the switch to an iSCSI storage system?
Houston: The only thing I usually tell people when they ask is that, year after year, when I benchmark the system, I see about 80% of the performance of [Fibre Channel] at a fraction of the cost. I've never been disappointed with it.
This was first published in February 2010