
Data storage infrastructure starts with science-fiction inspiration

Find out what happens when you take a cue from an old science-fiction movie and build an 'interocitor' that becomes its own storage infrastructure.

Fans of classic science-fiction movies might appreciate the feelings that overwhelmed me this past holiday season when I confronted the challenge of building a fairly complex data storage infrastructure from an assortment of undocumented parts. There I was, a veritable Dr. Cal Meacham from the 1955 classic This Island Earth, working to create a storage platform no less complicated than your proverbial "interocitor," but without so much as a "Metalunan" catalog to document the component parts. I wasn't sure whether my creation would be useful for storing data, let alone performing some sort of "electron sorting" or other exotic workload mentioned in the movie. But I attacked the project with zeal, anyway.

The backstory

My test bench, a couple of DataCore SANsymphony servers joined in a failover cluster, had grown into a hodgepodge of external storage boxes connected via everything from USB to eSATA and Fibre Channel (FC). Every port on every installed StarTech.com eSATA card was maxed out, the drives virtualized by DataCore into storage pools split evenly between the two servers. Everything on one set of disks was replicated on the other. With my 2016 research projects ending, it was time to rethink the data storage infrastructure, make sense of this jumble and start fresh in the new year.

Around the holidays, a friend mentioned that his shop was retiring a bunch of Promise Technology arrays -- three, to be exact -- attached via iSCSI, FC and SAS. I could use them, he said, to consolidate the octopus of data storage infrastructure around each of my servers, and he would save them from the trash heap and deliver them to me for "upcycling" if I wanted. I did want them, of course, and a few days before Christmas, he appeared in my driveway with the gear in tow.

I should have known my life was about to change when he hastily offloaded his salvage and made a quick getaway. Each rig was heavy, apparently packed with 1 TB and 500 GB SATA drives of various manufacture, and it took me, my friend and a couple of my teenage daughter's male friends to heave them all into my office.

"I will call you next week to see how you are doing," my friend said hurriedly as he peeled away and down the road.

First reel: Setup phase

It was almost as though he expected a person of my limited intellectual prowess to fail the interocitor test. But, like Meacham in that golden-age sci-fi flick, I started with the closest thing I could find to a starting point and bought three sturdy equipment shelves, loading them with the array chassis: one with 12 bays, another with 12 and a third with 16. I never realized how much a few hundred terabytes weighed!

A visual inspection turned up no iSCSI chassis in the mix, just two FC and one SAS. Moreover, powering on each rack produced a cacophony worthy of an airplane hangar, much too noisy for my office.

So, after deciding how to rack the components, I looked into cleaning the fans on the power supplies, and then into providing enough power and network connectivity to let me put the whole thing in a storage room about 100 feet (and two walls) away from where I work. There was also the issue of connectivity between the racks and the two servers.

Second reel: Challenge phase

The plan had been to transfer the contents of the external eSATA, USB and iSCSI storage onto the bigger virtual capacity pools built using the new gear. To do this, I needed to connect the new arrays to the servers, format and pool them with DataCore, and copy the contents of each small storage box so I could retire them.
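Scripting the copy is worth the trouble, if only to verify every file before a box gets retired. Below is a minimal Python sketch of the kind of transfer-and-verify pass I had in mind; the drive letters are hypothetical stand-ins for the small boxes and a folder on the new DataCore-backed volume.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical drive letters: the small external boxes being retired
# and a destination folder on the new DataCore-backed volume.
SOURCES = [Path("E:/"), Path("F:/"), Path("G:/")]
DEST = Path("P:/migrated")

def sha256(path: Path) -> str:
    """Hash a file in chunks so large media files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for src in SOURCES:
    target = DEST / src.drive.rstrip(":")  # e.g., P:/migrated/E
    shutil.copytree(src, target, dirs_exist_ok=True)
    # Verify every copied file against its original before retiring the box.
    for f in src.rglob("*"):
        if f.is_file():
            assert sha256(f) == sha256(target / f.relative_to(src)), f
```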

That was where I encountered the first challenge: I had no free slots in my servers for additional host bus adapters (HBAs), whether FC or SAS. From what I could find on eBay, I needed a PCIe x16 slot for each HBA. My servers had two apiece, one occupied by a video card, the other by the two-port FC adapter used for failover clustering. The eSATA external port cards were using PCIe x1 slots, leaving only a couple of good old-fashioned 32-bit PCI slots. I could buy HBAs dirt-cheap from several vendors belonging to the Association of Service and Computer Dealers, or even on eBay, but they were no good to me if I had no slots to put them in.

To make a long story short, the biggest rig turned out to be an iSCSI unit that someone, for whatever reason, had retrofitted with an FC controller. I discovered this just after New Year's while chatting with a very helpful Promise Technology support guy, whom I imagined shaking his head as he asked: "Why don't you just buy the latest VTrak from Promise?"

Third reel: Final phase

It will take considerable testing to see whether the controller transplant will work. Either way, I am left with both a SAS and an FC rig. I may end up moving the FC controller from the big rig into the SAS kit, converting it to FC. That would let me connect each storage device to one of the two FC ports on the existing HBA in each server. Alternatively, I could buy a used Brocade FC switch, again on the cheap (sub-$80), from one of my sources in the secondary market and just cable everything to that.

In any case, my interocitor is up and running, and shortly, all of my data storage infrastructure will be virtualized and all those little four-drive arrays retired. Well, until the next time I need some elbow room.

The next step is to overlay the entire platform with StrongLINK from StrongBox Technologies and add an LTO-5 or better tape storage component running the Linear Tape File System. That way, rarely accessed older data can migrate automatically to tape.
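StrongLINK would handle that migration policy automatically, but the underlying idea is simple enough to sketch. Because LTFS presents the tape as an ordinary file system, migration is just a move between mount points. Here's a minimal Python sketch, with hypothetical paths for the disk pool and the tape mount, that sweeps anything untouched for a year onto tape:

```python
import shutil
import time
from pathlib import Path

POOL = Path("P:/migrated")   # hypothetical virtualized disk pool
TAPE = Path("T:/archive")    # hypothetical LTFS tape mount point
CUTOFF = time.time() - 365 * 24 * 3600  # untouched for a year

# LTFS makes the tape look like a normal file system, so "migrating"
# cold data is just a move from one mount point to another.
for f in POOL.rglob("*"):
    # Note: some systems don't track access time; st_mtime is a safer proxy.
    if f.is_file() and f.stat().st_atime < CUTOFF:
        dest = TAPE / f.relative_to(POOL)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest))
```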

I invested less than a couple hundred dollars and some sweat equity to build a good data storage infrastructure that I can scale over time. That's what I call a special Christmas. Cue the Metalunans.
