This blog post was about three months in the making.
First, a bit of background. Several posts ago, I predicted the death of SATA in favor of SAS, which is only marginally more expensive (compared with higher-end, cache-carrying SATA RAID controllers, not the dirt-cheap integrated SATA controllers) for admittedly smaller capacity but much higher speed.
After using SAS on some of the servers and blades at work, I came home to my SATA-based desktop computer and wept silently whenever I did anything disk-intensive, because it was soooooo much slower. I have SCSI for the OS in all my server equipment, but even those machines weren’t as peppy as the SAS stuff at work. Taking these two things into account, plus the fact that the games I like to play are all disk I/O intensive, then throwing in a bit of friendly rivalry for good measure, I decided to upgrade my desktop machine to use SAS storage.
I convinced the home finance committee (my wife) to approve the purchase of a few new components for my experimental SAS-based desktop. In a previous post I mentioned my buddy Karl, who has been persistently making fun of the small/low benchmark numbers of my desktop. I quipped that his larger/higher benchmark numbers were simply there to make up for deficiencies in other areas of his rig, and that he was overcompensating. Secretly, I was impressed and had to see what it felt like to hit the magic 200MB/sec throughput mark on my desktop. So I hit eBay, credit card in hand, in search of the components I needed for my SAS-based desktop monster.
I researched which card and which drives to purchase, and settled on a couple of 15k 72GB Seagate SAS drives and an LSI 8204ELP SAS/SATA array controller. I got everything relatively quickly and unboxed it all. It's difficult to put into words the anticipation I felt at that moment, so close to beating Karl's benchmarks...only to feel the crushing blow of disappointment when I took a look at my old motherboard, which, in my hasty competitive blur of eBaying, I had forgotten to check for the correct PCI Express slots.
My motherboard had two PCI-Express x1 slots (very short, relatively slow slots mainly used for audio and gigabit networking cards) and one PCI-Express x16 slot (a much faster, much longer slot mainly used for high-bandwidth boards like video cards). The LSI 8204ELP RAID card is a PCI-Express x4 device (quiz on this stuff in five minutes!). It wouldn't fit in an x1 slot, and my x16 slot was occupied by my video card. Topping Karl's benchmark would have to wait a little longer.
Fast-forward three months. More research, more waiting, and more eBay pouncing later, I bought a motherboard with more than one big, high-bandwidth PCI-E slot that can handle my LSI card.
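Side note: if you end up doing something like this yourself, it's worth confirming that the card actually negotiated the full x4 link once it's seated, since a slot can be wired with fewer lanes than its physical length suggests. Here's a rough sketch of how I'd check from the Linux side of a dual-boot box; it just scrapes `lspci -vv` (run it as root so the PCIe capability block is readable), and it's illustrative, not the exact procedure I used:

```python
#!/usr/bin/env python3
"""Sketch: report the negotiated PCI Express link width for each device.

Parses `lspci -vv` output and prints the LnkSta width for every device that
reports one, so you can confirm an x4 card really came up at x4.
"""
import re
import subprocess

output = subprocess.check_output(["lspci", "-vv"]).decode("utf-8", "replace")

device = None
for line in output.splitlines():
    if line and not line[0].isspace():
        device = line.strip()          # e.g. "03:00.0 RAID bus controller: LSI ..."
    elif device and "LnkSta:" in line:
        match = re.search(r"Width (x\d+)", line)
        if match:
            print("%-60s negotiated %s" % (device, match.group(1)))
```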
This is where the fun really begins.
After three days of flashing firmware, updating BIOSes and fiddling with cabling, I finally got the LSI card and associated drivers working properly and got an operating system loaded (x32 Vista). I ran a couple of benchmarks on this system. Success! Karl was going down! (The irony here is that I still didn't beat Karl's benchmarks. Not only that, but he's going for 300MB/sec; he's just waiting for his drives to come in.)
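For the curious: if you want a quick sanity check of your own sequential-read throughput (a rough stand-in for benchmarks like the ones captured below, not the actual tool I used), a throwaway script along these lines will spit out a ballpark MB/sec figure. Point it at a file bigger than your RAM so the OS cache doesn't flatter the result.

```python
#!/usr/bin/env python3
"""Quick-and-dirty sequential-read throughput check (an illustrative sketch,
not the benchmark tool behind the numbers in this post).

Reads an existing large file in 1MB chunks, unbuffered, and reports MB/sec.
Use a file larger than system RAM so the OS cache doesn't inflate the figure.
"""
import sys
import time

CHUNK = 1024 * 1024   # 1MB reads, roughly what a sequential benchmark issues

path = sys.argv[1]    # e.g. a big ISO sitting on the array you want to test
total_bytes = 0
start = time.time()
with open(path, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total_bytes += len(data)
elapsed = time.time() - start

print("%.0f MB in %.1f sec = %.1f MB/sec"
      % (total_bytes / 1e6, elapsed, total_bytes / 1e6 / elapsed))
```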
As you can see from the stark difference in the captures below, one is clearly a more smile-inducing experience for a storage geek than the others, but the bigger story is the single SAS drive vs. the single SATA drive.
In day-to-day activities like searching my email, installing an application or even playing a game (Command and Conquer 3 missions load in seconds instead of mind-numbing hours), things are peppier, as is to be expected. But the speed at which things happen is still surprising. Vista feels fast (yes, I said it); it feels better and more responsive, and over the last few days I've found myself in my Debian install less and less, believe it or not. I'm having an okay experience with Vista (no, I haven't installed SP1 yet...I'm waiting for the first service pack for it before I take the leap!). I wonder if Microsoft can convince LSI and Dell to build a commodity SAS chip on-board for them?
This experience on the desktop was certainly more involved than on a server.* One would think some of the lessons these vendors learned in the enterprise would have trickled down to the desktop by now. But I guess that’s asking too much.
It also tells me that, as modular and approachable as these desktop systems have become, the cutting edge is still no place for the uninitiated. That seems obvious, but when I think of 64-bit operating systems, I don't think cutting-edge: they've been out for three or four years now, 8GB of RAM costs less than $100 on the open market, and more than half of that amount would be entirely useless in a 32-bit environment (which can only address 4GB to begin with).
Was all my heartache, frustration and re-installation worth it? Heck yes...and then some! Until one of the disks in my RAID 0 set dies and I blog about what a crock the million-hour MTBF numbers are, I'll be the happiest storage geek this side of Seagate's skunk works. Scroll all the way down for some notes on Vista x64.
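For the record, here's the half-serious math behind that quip. Assuming independent failures, the usual exponential model, and an illustrative 1.4-million-hour per-drive figure (a number I'm plugging in for the sake of argument, not one off a datasheet), striping two drives roughly doubles the odds of losing the whole array in any given year:

```python
#!/usr/bin/env python3
"""Back-of-the-envelope RAID 0 failure math (illustrative numbers only).

Assumes independent drive failures and the exponential model, where the
annualized failure rate (AFR) = 1 - exp(-hours_per_year / MTBF).
"""
import math

MTBF_HOURS = 1.4e6            # assumed per-drive spec; plug in your drive's number
HOURS_PER_YEAR = 24 * 365
DRIVES = 2                    # two-drive RAID 0: lose either drive, lose the array

afr_single = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)
afr_stripe = 1 - (1 - afr_single) ** DRIVES   # array survives only if every drive does

print("Single drive AFR:     %.2f%%" % (100 * afr_single))
print("Two-drive RAID 0 AFR: %.2f%% (MTBF effectively halved to %.0f hours)"
      % (100 * afr_stripe, MTBF_HOURS / DRIVES))
```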
* I know this is a storage-oriented blog, but this process drove me so far up a wall I needed a parachute to get down. I feel the need to share the fine print, to hopefully help someone avoid the same devilish Catch-22s and gotchas of doing this with the 64-bit version of Windows Vista.
1) Windows Vista x64 works very well but has very limited driver support; storage devices are the one exception. Most storage vendors have great 64-bit drivers. Unfortunately, almost no one else does, not even Microsoft themselves (try using Groove on x64).
2) While Windows Easy Transfer is great, it will not let you transfer your files and settings from an x64 computer to an x32 computer. Instead, you have to virtualize your old system using VMware and run it in a VM on the system you're migrating to. One more note about virtualizing your old machine: if you're running 64-bit Windows and plan to pull the old hard drive, stick it in your new machine and build a VM from the raw disk, that won't work either.
3) If you decide to reuse your old hard drive in your new system, make sure the BIOS isn't set to boot from it, especially if you're moving from onboard controllers to add-in controllers. Most system BIOSes will put the onboard devices higher in the boot order by default.
4) Be mindful of how many add-in cards/controllers you have loading their own BIOSes. Apparently there's a limit to how many can be loaded (option ROM memory, maybe?), and once you've hit that limit you cannot enter the BIOS of the last card in the boot-up sequence. For example, the system board I have now has three storage controllers (two SATA and one EIDE) plus the LSI card I've added. When the EIDE BIOS is active, I cannot enter the BIOS of the LSI card (the last one to load) to configure an array. I have to disable the EIDE controller, configure my array, and then re-enable the EIDE. Why this still happens in today's systems is beyond me, but be wary; it happens in servers too. I've tackled a similar problem with four PCI-X Areca cards and a couple of EIDE cards in a production server.