OK, you lost me here. First of all, by hardware I assume you mean SAN hardware, and "concurrent access" can mean a few things. There is concurrent access where more than one server can access the same logical units within the same storage array; concurrent access where multiple OS types can share the same storage array; concurrent access where a number of servers share the storage behind the same Fibre Channel port in a storage array (this is called the "fan-in ratio"); and concurrent access in the sense that clusters use a "shared disk" approach to fail over application resources between server nodes. From a NAS perspective, concurrent access can mean how many users can use the NAS at the same time, which is a function of how many open file handles the NAS OS allows and the bandwidth of the connection to the clients.
I'll assume what you mean here is the fan-in ratio: how many servers you can actually attach to a given storage device. This is determined by how many ports are available on the device for host access, and some storage arrays can handle more servers than others. When using a fabric, you may have to limit how many servers have access to a particular storage port by following the storage vendor's recommendation on its maximum server-to-port specs.
In a SAN, where the fabric adds access points, you can build out modular arrays by adding more controllers, and therefore more ports, letting you scale both connectivity and performance.
You can usually use 4:1 as a good rule-of-thumb fan-in ratio on 1 Gb Fibre Channel ports. Since you are sharing a port, and each 1 Gb port runs at roughly 100 MB per second, each of the four servers gets about 25 MB per second of bandwidth. If you use 2 Gb ports, you can attach more servers per port. You also need to take into account how much performance each server really needs and what data types are typical for the applications on those servers; by data types I mean transactional and random versus sequential and high throughput. For instance, avoid putting an OLTP (online transaction processing) application and a DSS (decision support) application on the same port if possible.

The physical server limit per port and the actual real-world limit will differ. An HDS 9900 can connect 127 servers per port times 32 ports, which means 4,064 servers can be connected to a single box (127 is an FC-AL limit; SCSI back ends will be different), but I have yet to see anyone doing that. So use 4:1 for now if the servers need high performance, and 7:1 for lower-performance servers, if allowed by the storage vendor.
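The fan-in arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming the rule-of-thumb numbers from the answer (100 MB/s for a 1 Gb port, 200 MB/s for a 2 Gb port); `per_server_bandwidth` is a hypothetical helper name, not a real tool or vendor spec.

```python
def per_server_bandwidth(port_mb_per_sec: float, fan_in: int) -> float:
    """Bandwidth each server gets when `fan_in` servers share one port."""
    return port_mb_per_sec / fan_in

# A 1 Gb Fibre Channel port moves roughly 100 MB/s.
print(per_server_bandwidth(100, 4))   # 4:1 fan-in -> 25.0 MB/s per server

# A 2 Gb port (~200 MB/s) at the 7:1 lower-performance ratio.
print(per_server_bandwidth(200, 7))   # ~28.6 MB/s per server

# The theoretical FC-AL connection count quoted for an HDS 9900:
print(127 * 32)                       # 4064 servers per array
```

The point of the helper is only to show why the "real world" limit diverges from the physical limit: at the theoretical maximum fan-in, the per-server share of a port becomes far too small for any high-performance workload.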