I'm confused about how to compare network bandwidth with the amount of data I can transfer.
When it comes to storage we have two factors: network bandwidth and storage capacity. Bandwidth is measured in Kbps, Mbps and Gbps, which are powers of 10. Capacity is measured in KBytes, MBytes, GBytes and so on. The two questions are: (1) If I have 1 TB or 100 MB of data on my disks, how fast will it be transferred over a FC or Gigabit Ethernet network? (2) How much data can I push through a FC or Fast Ethernet network in a second or an hour?
I'm trying to convert MB to Mbps and back, and I'm not sure I'm doing it the proper way.
The questions above concern the nominal bandwidth of these networks. Another issue is how efficient they are. I've heard Fast and Gigabit Ethernet have about 34% of their bandwidth consumed by "metadata" - protocol overhead.
Could you shed some light on how to compare bandwidth with capacity?
You have the units right: networks are measured in bits transmitted and storage is measured in bytes stored. Once you account for the eight bits in a byte plus encoding and protocol overhead on the wire, a byte of storage works out to roughly ten transmitted bits, so a 10:1 ratio is a reasonable conversion factor.
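That 10:1 rule of thumb is easy to apply in code. The sketch below is illustrative (the helper names are mine, not a standard API) and answers the reader's first question: how long a given amount of stored data takes to move at a given nominal link rate, assuming the link runs at full speed.

```python
# Rule-of-thumb conversion between storage capacity (bytes) and
# network bandwidth (bits/s): ~10 wire bits per stored byte
# (8 data bits plus roughly 2 bits of encoding/protocol overhead).
BITS_PER_BYTE_ON_WIRE = 10

def transfer_seconds(capacity_bytes, link_bits_per_sec):
    """Estimate seconds to move capacity_bytes over a link at full nominal rate."""
    return capacity_bytes * BITS_PER_BYTE_ON_WIRE / link_bits_per_sec

# Example: 1 TB (decimal) over Gigabit Ethernet at nominal wire speed
one_tb = 1_000_000_000_000           # bytes
gbe = 1_000_000_000                  # bits per second
print(transfer_seconds(one_tb, gbe))         # 10000.0 s, i.e. ~2.8 hours
print(transfer_seconds(100_000_000, gbe))    # 100 MB -> 1.0 s
```

As the rest of this answer explains, these are best-case figures at nominal wire speed; real transfers run well below them.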
The transfer rate depends on the capabilities of the end nodes that are communicating and the application that is responsible for the transmissions. This is an important point so I'll emphasize it again: DO NOT ASSOCIATE TRANSMISSION RATES FOR AN APPLICATION WITH NETWORK BANDWIDTH - THEY ARE TWO DIFFERENT YARDSTICKS. The ability to utilize network bandwidth depends on many implementation variables, including the system architecture of the entities on the network, their raw processor capabilities, network interface and driver efficiency, network design and topology, upper layer protocols and the number of systems on the network and their usage characteristics.
Although it is tempting, you shouldn't start with an assumption of maximum wire speed utilization. You will be disappointed every time. It helps to have more realistic expectations based on end node capabilities.
For starters, there are very few systems that can fill a Gb pipe, regardless of whether it is Ethernet or FC. So thinking about bandwidth in terms of an individual system's capabilities is a waste of time. The network bandwidth is a measurement of the total traffic, not the "exchange rate" between any two entities. The end-to-end traffic is almost always gated by the end nodes' ability to send and receive data.
The upper layer protocols have a lot to do with network utilization. Connection-oriented protocols like TCP reduce utilization by requiring acknowledgements from the receiver before the sender transmits more data. But the benefits of TCP far outweigh this overhead cost. TCP also helps ensure that any pair of systems sending and receiving does not hog the network.
The quality of the network can also have a significant impact on utilization. Fiber optic networks with low error rates see far fewer retransmissions - which also means higher utilization.
Typical Ethernet implementations, whether 10 Mb, 100 Mb or GbE, have a practical performance maximum of ~40% - experts don't agree on the exact number, but they agree on the range. However, you can achieve higher utilization under the right conditions. For instance, a crossover cable directly connecting two systems using TCP/IP over a 10 Mb link might achieve 66% utilization. The key here is that only two systems are exchanging data and that both are sufficiently fast relative to the speed of the network. This is not a realistic example, however, because two systems don't make a realistic network. Adding more systems reduces the maximum total network utilization because there is more overhead from access contention. However, this doesn't necessarily mean less data is being transferred, because most networks are not pushed to their maximums.
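Putting the utilization figures together with the 10:1 byte-to-bit rule answers the reader's second question: how much data actually gets through per second or per hour. This is a sketch under the assumptions above (the ~40% figure and the function name are illustrative, not measured values for any specific network).

```python
def effective_bytes_per_sec(link_bits_per_sec, utilization, wire_bits_per_byte=10):
    """Usable payload rate in bytes/s, given a fractional utilization
    and the ~10 wire bits per stored byte rule of thumb."""
    return link_bits_per_sec * utilization / wire_bits_per_byte

# Gigabit Ethernet at a typical ~40% utilization:
rate = effective_bytes_per_sec(1_000_000_000, 0.40)
print(rate)                    # 40000000.0 bytes/s, i.e. ~40 MB/s
print(rate * 3600)             # ~144 GB pushed through per hour
```

Note how far this sits below the "1 Gb = ~100 MB/s" figure you would get from nominal wire speed alone - which is exactly the point about not equating application transfer rates with network bandwidth.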
Related Q&A from Marc Farley