
Comparing bandwidth with capacity - Part II

I'm confused about comparing network bandwidth with the capacity I can transfer.

When it comes to storage, we have two factors: network bandwidth and storage capacity. Bandwidth is measured in Kbps, Mbps and Gbps, which are powers of 10. Capacity is measured in KByte, MByte, GByte and so on. The two questions are: (1) If I have 1 TB or 100 MB of data on my disks, how fast will it be transferred through an FC or Gigabit Ethernet network? (2) How much data can I push through an FC or Fast Ethernet network in a second or an hour?

I'm trying to convert MB to Mbps and back, and I'm not sure I'm doing it the proper way.
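The conversion itself is straightforward once the units are pinned down: multiply bytes by 8 to get bits, and keep everything in decimal units (networking rates are decimal). A minimal sketch of the arithmetic, assuming nominal line rates and zero protocol overhead (the function and constant names are illustrative, not from any library):

```python
# Rough conversion between capacity (bytes) and line rate (bits/s).
# Assumes decimal units throughout (1 MB = 10**6 bytes, 1 Mbps = 10**6 bits/s)
# and nominal rates with no protocol overhead -- real throughput is lower.

def transfer_seconds(capacity_bytes, rate_bits_per_sec):
    """Best-case time to move capacity_bytes over a link at the given rate."""
    return capacity_bytes * 8 / rate_bits_per_sec

GIG_E = 1_000_000_000    # Gigabit Ethernet, 1 Gbps nominal
FAST_E = 100_000_000     # Fast Ethernet, 100 Mbps nominal

# 1 TB over Gigabit Ethernet: 8e12 bits / 1e9 bits/s = 8000 s (about 2.2 hours)
print(transfer_seconds(1_000_000_000_000, GIG_E))   # 8000.0

# 100 MB over Fast Ethernet: 8e8 bits / 1e8 bits/s = 8 seconds
print(transfer_seconds(100_000_000, FAST_E))        # 8.0
```

These are best-case numbers; as the answer below explains, real utilization depends on the topology and the applications driving the transfer.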

The questions above touch on the nominal bandwidth of networks. Another issue is how efficient they are. I've heard Fast and Gigabit Ethernet lose about 34% of their bandwidth to "metadata" - protocol overhead.
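Whatever the true overhead fraction is (it varies with frame size and the protocol stack; the 34% figure is the one quoted in the question, not a verified number), folding it in just means derating the nominal rate before doing the same arithmetic. A hedged sketch:

```python
# Derate a nominal link rate by a protocol-overhead fraction, then redo the
# capacity/time math. The 0.34 figure is the one quoted in the question;
# actual overhead depends on frame sizes and the protocols on the link.

def effective_rate(nominal_bps, overhead_fraction):
    """Usable bits/s after subtracting protocol overhead."""
    return nominal_bps * (1 - overhead_fraction)

def transfer_seconds(capacity_bytes, rate_bps):
    return capacity_bytes * 8 / rate_bps

gig_e_eff = effective_rate(1_000_000_000, 0.34)        # about 660 Mbps usable
print(transfer_seconds(1_000_000_000_000, gig_e_eff))  # about 12121 s vs. 8000 s nominal
```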

Could you shed some light on how to compare bandwidth with capacity?


Click here for Part I.

Switched topologies are best for reducing access contention - this is true for both Ethernet and Fibre Channel. Ethernet has buses, hubs and switches. FC has both fabrics (switches) and loops. The architecture of a switch has a lot to say about the total utilization. From an Ethernet perspective, FC switches are over-engineered and cost too much. From a Fibre Channel perspective, Ethernet switches don't build enough capacity into their designs to provide consistently high performance levels for all concurrent sessions. Again, the key is the application mix and their characteristics and requirements. Low latency may be a better yardstick for some applications (transaction processing) than network bandwidth or utilization - although network utilization is sometimes driven in benchmarks by transaction applications.

Loops have more contention overhead than fabrics. You might be able to get 50% utilization out of a loop, but that means that you have applications that are actually going to use this much bandwidth. This would only occur during bulk transfer operations like backup. But that does not mean that any one session would be moving that much data. The gating factor for backup is the speed of the tape drive. After that, the ability for the system to transfer data from disk to tape over its internal bus can become a real factor. In a backup application where the disk volumes and the tape drives are on the same network, the transfer rate of the tape drive relates to a 2X requirement in the network as data must be read from the disk and transferred to the system before it is written to the tape drive.
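The 2X point can be made concrete with a small sketch (the tape drive speed below is a hypothetical number, not one from the column): when the source disks and the tape drive share the same network, every byte crosses it twice, so the network must sustain twice the tape drive's streaming rate.

```python
# Disk-to-tape backup where the source disks and the tape drive sit on the
# same network: data is read from disk to the host, then written out to the
# tape drive, so each byte crosses the network twice. Numbers are illustrative.

def network_load_bps(tape_rate_bytes_per_sec):
    """Network bandwidth consumed while streaming to tape at full speed."""
    return 2 * tape_rate_bytes_per_sec * 8  # read leg + write leg, in bits/s

tape_rate = 15_000_000          # hypothetical 15 MB/s tape drive
load = network_load_bps(tape_rate)
print(load / 1_000_000)         # 240.0 -> 240 Mbps of network traffic
```

In other words, a tape drive that streams at 15 MB/s would by itself account for 240 Mbps of traffic - a sizable slice of even a loop running at 50% utilization.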

Fabrics have higher network utilizations. In fact, it is possible to have more bandwidth available than it appears if systems are connected through the same switch ASIC; that way, multiple transfers can occur without competing much for switch backplane bandwidth. Still, end-to-end performance is gated by end-node capabilities - not the network.

So, this was a lot of explanation about network utilization that probably didn't give you the answer you were looking for. The frustrating thing is that you cannot get the answer you are looking for without knowing a lot about your systems and applications. These things are extremely complicated.

The question is reasonable, but the answers are not.


Editor's note: Do you agree with this expert's response? If you have more to share, post it in our Storage Networking discussion forum.

This was last published in April 2002
