University rolls out EMC/Cisco tiered network

The University of Minnesota, an EMC and Cisco user, tiered not only its storage but also its network -- and built it on the cheap.

While many users are still coming to grips with tiering disks, the University of Minnesota has not only successfully tiered its storage but is in the process of attaching hosts to a tiered network as well.

According to Carl Follstad, manager of data management services for the University, until two years ago the University was working with the typical "silos" of direct-attached storage (DAS) and a crazy quilt of servers split up among departments. Follstad was brought in to oversee the storage architecture after a request for proposal (RFP) went out for the first of several storage upgrades.

Today, Follstad has a 300 terabyte (TB), three-tiered storage system split between his two core campuses on the east and west banks of the Mississippi River in Minneapolis and a third campus in St. Paul. Tier-1 consists of three EMC Symmetrix DMX 800 arrays for the University's Unix-based e-mail system, along with one DMX 2000 and one DMX 1000 for the rest of the University's most critical applications, including PeopleSoft for student registration and the library's electronic card catalog.

Attached to this storage via Fibre Channel (FC) are 25 Sun Solaris hosts in five "hubs" between the two main campuses. Each hub is one building, such as a library, where the density of hosts makes running Fibre Channel directly into the facility cost-effective.

Tier-2 and Tier-3 applications, meanwhile, are housed in six EMC Clariion arrays -- five of which are CX700s and one a CX500. Tier-2 is connected via FC through the hubs as well but consists of less-critical departmental data, also mainly on Solaris hosts, supporting other business functions, such as file services for departments or research. Tier-3 data is stored on ATA disks housed within the same boxes.

Originally, Follstad said, the Clariion arrays were bought to serve as Tier-1, but the Tier-1 data the University tried to put on them "pushed them too hard."

"It wasn't a matter of the technology being bad -- we're still using the arrays," he said. "But the [guaranteed] availability wasn't what we needed. With our Tier-1 data, we can afford at most one scheduled downtime a year. With the Clariions we were looking at [at least] one scheduled downtime every quarter according to our SLA [service-level agreement]."

All the tiers are connected with Cisco Systems Inc. switches -- six MDS 9509s, four MDS 9216s, eight 9120s and two 9140s, all told. The director-class 9509s and the 9216s make up the FC cores, while the 9120s and 9140s sit in the hubs on the main campuses.

Designing such a sprawling system was complex enough, Follstad said, but even tougher was "selling" it to his end users, each of whom must designate money from their departmental operating budget if they want to be connected to the University's central storage systems.

"Midsize departments kept telling me, 'all I need are basic disks,' " Follstad recalled. "It was impossible to sell them even Tier-2 services, even at cost."

Eventually, Follstad said, he went to his chief information officer and made a proposal: the University would subsidize part of the cost, and Follstad would go out and find some dirt-cheap disk. EMC, eager to remain his main vendor, beat out competition from 3Par Data Inc. and Nexsan Technologies, he said.

But there was another cost to consider: networking. Running FC to small departments in dozens of buildings, connecting hundreds of hosts, would have negated the savings Follstad had realized with the Tier-3 disk.

Eventually, he decided on a unique plan -- to use the 30,000-node Gigabit Ethernet network the University had spent years building out across its campuses to connect lower-priority hosts.

Of course, it wasn't without its challenges.

"The biggest issue was basically that hardly anyone's doing it, there wasn't terribly strong vendor support for it," he said, citing Novell, whose servers are mostly outmoded in industry but still popular in the higher education field.

"It took quite a while before Novell had a stable iSCSI stack," Follstad recalled.

There's also the fact that, given Follstad's position as a "salesman" of storage to his own users, setting up such a system is a gamble.

"I call it my 'Field of Dreams' business model," he said. "I build it, and hopefully, people come."

Personnel issues have kept him from attaching any hosts to the Tier-3 iSCSI SAN yet, but Follstad said he has already seen interest in NAS over the IP network, which he provides using two EMC Celerra NS704G systems. Ultimately, Follstad said, his goal is to attach 200 to 300 Solaris, Windows, Linux and Novell hosts to the iSCSI SAN for Tier-3 service within the next several months.
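As an illustration only -- none of this is taken from the University's actual configuration -- attaching a Solaris 10 host to an iSCSI target over an existing Ethernet network comes down to a few commands with the operating system's built-in software initiator. The target portal address below is hypothetical:

    # Point the initiator at a hypothetical target portal on the Tier-3 array
    # and enable SendTargets discovery of its LUNs.
    iscsiadm add discovery-address 10.10.3.50:3260
    iscsiadm modify discovery --sendtargets enable
    # Create device nodes for the newly discovered iSCSI LUNs.
    devfsadm -i iscsi
    # Confirm the target logged in; its LUNs now appear alongside local disks.
    iscsiadm list target

From there, the LUNs can be labeled and handed to a volume manager or file system just as if they were direct-attached disks -- which is what makes reusing the campus IP network attractive for Tier-3 hosts.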

A pricing coup as well

Follstad said contractual issues kept him from divulging on the record the exact amount he has spent on EMC storage over the last several years, but the figure is not what many might estimate given the number of high-end boxes and switches in his shop. "EMC was the price leader when we did our original RFPs," he said.

Of course, Follstad also divulged that he was in a unique position in his negotiations -- before taking the University job, he was a salesman for EMC.

His advice to other users on vendor negotiations is worth noting, however, given his experience on both sides of the table.

"Every vendor wants to bake in aspects of their relationship with the customer -- they want it not to be easy for a customer to replace their equipment," he said. "But it can work both ways."

Follstad suggested users look for ways to create a partnership with their vendor; he's a big proponent of beta testing. He also suggested that customers sending out an RFP mention possible future projects -- a vendor taking a long-term view might lower its pricing in the short term to get its foot in the door.

Any of those steps, Follstad stressed, requires customers to educate themselves.

"One thing I realized coming from EMC is that there's a frustration over customers calling something an RFP when it's really a request for information [RFI]," he said. "Customers have no business approaching a vendor [for an RFP] without a sense of the competitive landscape and basic functionality between products, but I still saw it many times -- customers who didn't understand what they wanted."
