What are the requirements for an initial SAN design? There are a number of initial questions. At the simplest level, you need to ask:
1) how many physical locations?
2) how many server ports at each location?
3) how many storage ports are needed at each location?
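Those three answers drive the port and switch budget. As a rough illustration of the arithmetic, here is a hedged sketch (the 16-port switch size and the two ports reserved for ISLs are assumptions for the example, not a recommendation):

```python
# Hypothetical sizing sketch: estimate edge switches needed at one location
# from the server and storage port counts. Switch size and ISL overhead
# are illustrative assumptions.

def switches_needed(server_ports, storage_ports, ports_per_switch=16, isl_ports=2):
    """Rough estimate of switches needed at one location."""
    usable = ports_per_switch - isl_ports  # reserve ports for inter-switch links
    total = server_ports + storage_ports
    return -(-total // usable)  # ceiling division

# Example: 40 server ports and 8 storage ports on 16-port switches
print(switches_needed(40, 8))  # -> 4 (48 ports / 14 usable per switch)
```

Repeating the calculation per physical location gives a first-cut bill of materials before any topology decisions are made.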
I have to implement a SAN at a site with no previous Fibre Channel infrastructure. What new technologies might be of interest to me (i.e. 10-Gigabit Fibre Channel, Infiniband, iSCSI)?
Working at a green field site, you have the happy situation of making a decision based on where the technology is now. In terms of storage networking, I think the real choice is between Fibre Channel and iSCSI. Most of what I hear is that Infiniband is still mostly confined to the server/cluster interconnect.
iSCSI: I think this will have a big year. With the Microsoft announcements (.NET) and support coming, I think by the middle of this year it will be a serious contender. There is now wire-speed iSCSI capability out there for a number of platforms and operating systems -- both HBAs and switches.
Fibre Channel: Don't worry too much about 10Gb, or even 2Gb. 10Gb is really pitched at ISLs rather than servers (though I suspect some arrays may also start to use it). As for 4Gb, until recently most of the talk about it was in back-end storage.
What is the best tool available to measure Fibre Channel performance in a switched environment?
Of course, most HBAs and disk arrays allow you to measure performance using their own tools. However, for the environment as a whole, the starting point is the switch itself.
A good switch vendor builds into their product the ability to collect useful data at the basic hardware level. Brocade, with their ASIC design, did a great job of this. With their 2Gb product, they can even look inside the packet. Similarly, Nishan has a layer three switch/router that is good at this. And I would expect any good core switch or director to have good capabilities.
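Whatever the vendor, those tools ultimately derive throughput from per-port hardware counters sampled over an interval. A minimal sketch of that arithmetic (the function and the 2Gb link-rate default are illustrative assumptions, not any vendor's API):

```python
# Illustrative sketch (not a real switch API): deriving port throughput and
# utilization from two samples of a switch port's byte counter, which is
# how most fabric monitoring works at the hardware level.

def port_utilization(bytes_t0, bytes_t1, interval_s, link_gbps=2.0):
    """Return (throughput in MB/s, fraction of line rate) from two samples."""
    delta = bytes_t1 - bytes_t0
    mb_per_s = delta / interval_s / 1e6
    # 8b/10b encoding on 1/2Gb FC: 10 line bits per data byte
    line_rate_mb_s = link_gbps * 1e9 / 10 / 1e6
    return mb_per_s, mb_per_s / line_rate_mb_s

# Example: 1.5 GB transferred during a 10-second sample on a 2Gb FC port
rate, util = port_utilization(0, 1_500_000_000, 10)
print(f"{rate:.0f} MB/s, {util:.0%} of line rate")  # -> 150 MB/s, 75% of line rate
```

Sampling every port this way is enough to spot hot ISLs and saturated storage ports before digging into frame-level analysis.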
Why are switches used between servers and storage in a SAN environment? How do you choose the best switch? What features should I look for when selecting a switch?
In the early days of storage consolidation, we simply had disk arrays and tape libraries with lots of ports. Although this provided some benefits, it did not give much flexibility in terms of choosing best-in-breed products.
By putting a network in between, you can choose the most appropriate servers for different applications and the most appropriate storage (which may come from more than one place), and easily allocate any storage to any server. Switched topologies also simply give better performance than shared topologies, like hubs.
How do I know if I should go with arbitrated loop or switched fabric? The SAN would be for a small department/workgroup type environment.
To be honest, with products like the Brocade 3200 and 3800 and the McDATA Sphereon 4500, switch prices are now down to the point that arbitrated loop is primarily used at the back end of storage arrays.
How do I best connect SANs over distance and how far can it go?
There are two parts to this: the physical bit and the interesting bit. Using optical, you can go reasonable distances, depending on where you are (certainly tens to hundreds of kilometers). Using IP, you can (in theory) go as far as you want: we have customers in Europe going 600km and a customer in the U.S. going 5,000km.
The difficulty is getting a FC SAN to work with these long hops, whether optical or IP. This is why we developed the iFCP protocol.
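The underlying problem is simply the speed of light. A back-of-envelope sketch (the ~200,000 km/s figure for light in fiber and the two-round-trips-per-write model are rough assumptions for illustration):

```python
# Back-of-envelope latency sketch: why long hops hurt FC performance.
# Light in optical fiber covers roughly 200 km per millisecond.

def round_trip_ms(distance_km, km_per_ms=200.0):
    """Speed-of-light round-trip time over a fiber link, in milliseconds."""
    return 2 * distance_km / km_per_ms

# A SCSI write typically needs at least two round trips (command and
# transfer-ready, then data and status), so each write pays the RTT twice.
rtt = round_trip_ms(5000)
print(rtt)       # -> 50.0 ms per round trip at 5,000 km
print(2 * rtt)   # -> 100.0 ms minimum per write, before any congestion
```

At those latencies, naive frame-by-frame flow control stalls the link, which is why distance extension protocols have to keep many frames in flight.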
What's the difference between hard zoning and soft zoning? When do I use one or the other?
Soft zoning is where the switch simply does not tell you what you do not need to know -- but, as with an unlisted phone number, if you dial it directly you can still get through.
Hard zoning is where the switch hardware enforces the zone, dropping frames even if your system goes mad and starts randomly sending them to possible PIDs (port IDs).
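The distinction can be made concrete with a small sketch. This is a conceptual model only (the names and functions are invented for illustration, not how any switch is implemented): soft zoning filters name-server discovery, while hard zoning also enforces the zone on every frame.

```python
# Conceptual model of soft vs. hard zoning (illustrative names, not a real
# switch implementation). host_a is zoned only to array_1.
ZONE = {"host_a": {"array_1"}}

def name_server_query(requester, all_devices):
    """Soft zoning: hide out-of-zone devices from discovery responses."""
    return [d for d in all_devices if d in ZONE.get(requester, set())]

def forward_frame(src, dst, hard_zoning):
    """Hard zoning: the switch hardware also checks the zone per frame."""
    allowed = dst in ZONE.get(src, set())
    if hard_zoning and not allowed:
        return "dropped"
    return "delivered"  # with soft zoning only, out-of-zone frames still pass

print(name_server_query("host_a", ["array_1", "array_2"]))    # -> ['array_1']
print(forward_frame("host_a", "array_2", hard_zoning=False))  # -> delivered
print(forward_frame("host_a", "array_2", hard_zoning=True))   # -> dropped
```

The last two lines are the "dial the number anyway" case: soft zoning lets the misbehaving frame through, hard zoning does not.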
I'll be working with no more than five switches but uptime and performance must be "guaranteed." How should I configure the SAN for best performance?
The two most likely designs would be full mesh and core-edge.
1) Full mesh: In this case, every switch has a connection to every other switch. Five switches is the sensible maximum for a full mesh design, so it's fine so long as you do not want to grow it any larger. Also, with a full mesh, you do want to localize traffic with the servers on the same switch as the storage they are accessing. One way to do this on a storage array is to have one port connected to each switch in the SAN.
2) Core-edge: Five switches is a nice number to start core-edge. Two switches at the core, three at the edge, then have each edge switch with an ISL (or two) to each core switch.
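The ISL counts behind these two designs are just combinatorics, and a quick sketch shows why five switches is the sensible ceiling for a full mesh (the functions are illustrative; switch port counts are left out):

```python
# ISL arithmetic for the two five-switch designs described above.

def full_mesh_isls(n_switches):
    """Every switch links to every other: n * (n - 1) / 2 ISLs total."""
    return n_switches * (n_switches - 1) // 2

def core_edge_isls(n_core, n_edge, isls_per_pair=1):
    """Each edge switch has one (or two) ISLs to each core switch."""
    return n_core * n_edge * isls_per_pair

print(full_mesh_isls(5))                      # -> 10 ISLs (4 ISL ports per switch)
print(core_edge_isls(2, 3))                   # -> 6 ISLs
print(core_edge_isls(2, 3, isls_per_pair=2))  # -> 12 ISLs
```

In the full mesh, every switch burns four ports just on ISLs, and the count grows quadratically; the core-edge design grows linearly as you add edge switches, which is why it scales further.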
Should I try to localize traffic by connecting storage and the servers accessing that storage to the same switch?
To be honest, though I know localization is something a lot of people talk about these days, it is really not that much of an issue. With core-edge SANs, the basic premise is to connect the servers to the small edge switches and the storage to the big core switch -- total NON-localization.
Simon Gordon is the EMEA Business Development Manager for Nishan Systems based in the United Kingdom. Simon has been working as a European expert in storage networking technology for the last five years. His experience includes more than two years as a leading expert at Brocade Communications Systems working closely with many OEMs and two years as the lead European storage guru at Dell's European headquarters. Simon has been working in the IT industry for over 15 years in a variety of technologies and business sectors including software development, systems integration, Unix and open systems, Microsoft infrastructure design and, of course, storage networking. He's worked for companies including Digital Equipment, Unisys and NCR. He has a degree in Computer Science from Reading University and is a member of both the British Computer Society and the Institute of Electrical Engineers. He is also a regular contributor to SearchStorage.com.
SAN topologies, part 1: Know your switch options
SAN topologies, part 2: How to design your SAN