Why does a SAN access data from the server faster than a local hard drive?
Several things determine the speed for data access from a server, including:
- Speed of the storage interface (1 Gb/s or 2 Gb/s Fibre Channel versus 20-40 MB/s SCSI)
- Speed of server adapters (PCI and PCI-X bus speed)
- The server's I/O rate and ability (processor speed, memory)
- Congestion or storage interface activity (other activity)
- Speed and configuration of the storage system (cache, RAID, disks, etc.)
- The type of data and data access pattern (large, small, random, sequential)
- Type and speed of disk drives being used (RPM, latency, throughput)
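To put the first factor in perspective, here is a back-of-envelope sketch comparing the wire speeds listed above. The throughput figures are assumptions for illustration (2 Gb/s Fibre Channel is taken as roughly 200 MB/s after encoding overhead), not measurements:

```python
# Back-of-envelope comparison of storage interface wire speeds.
# The MB/s figures below are illustrative assumptions, not measured
# numbers; 2 Gb/s Fibre Channel is taken as ~200 MB/s after
# 8b/10b encoding overhead.

INTERFACES_MB_S = {
    "2 Gb Fibre Channel": 200,
    "1 Gb Fibre Channel": 100,
    "Ultra2 Wide SCSI":    40,
    "Fast/Wide SCSI":      20,
}

def transfer_seconds(size_mb, rate_mb_s):
    """Time to move size_mb megabytes at the given wire speed."""
    return size_mb / rate_mb_s

for name, rate in INTERFACES_MB_S.items():
    t = transfer_seconds(1024, rate)  # a 1 GB sequential read
    print(f"{name:20s} {rate:4d} MB/s -> {t:5.1f} s per GB")
```

Even this simple arithmetic shows a 5x spread between the fastest and slowest interfaces on the list, before any of the other factors come into play.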
Assuming an apples-to-apples comparison, a local hard disk can perform similarly to a SAN-attached hard disk, provided both have the same performance characteristics and similar-speed storage interfaces and configurations. Typically, however, it is not an apples-to-apples comparison. You are usually comparing the speed of a locally attached hard disk drive to the performance of a SAN-attached RAID array. That is not a fair comparison: the RAID array may have cache, and striping data across more disk drives with RAID can improve performance. The local hard disk drive may also be attached to a slower interface than the SAN. For perspective, people also compare the wire speed of direct-attached host storage to that of a SAN interface. It is important to understand what is being compared, and not to be fooled by wire-speed performance numbers, especially for individual disk drives.
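A rough model makes the RAID-array advantage concrete. The sketch below assumes hypothetical values (10,000 RPM drives, a 5 ms average seek, an 8-drive stripe, a 30% read-cache hit rate) purely for illustration; real arrays and workloads vary widely:

```python
# Rough random-I/O model of why a SAN-attached RAID array often
# beats a single local disk: more spindles plus cache.
# RPM, seek time, drive count, and cache hit rate are all assumed
# illustrative values, not vendor specifications.

def disk_iops(rpm, avg_seek_ms):
    """Approximate random IOPS for one drive: 1 / (seek + half-rotation)."""
    rotational_ms = 60000 / rpm / 2   # average rotational latency
    return 1000 / (avg_seek_ms + rotational_ms)

single = disk_iops(rpm=10000, avg_seek_ms=5)       # one local drive
array = 8 * disk_iops(rpm=10000, avg_seek_ms=5)    # 8-drive RAID stripe

# If a read cache satisfies, say, 30% of requests, only the
# remaining 70% reach the disks, raising the effective rate.
cache_hit = 0.30
effective = array / (1 - cache_hit)

print(f"single disk        : {single:7.0f} IOPS")
print(f"8-disk array       : {array:7.0f} IOPS")
print(f"with 30% cache hits: {effective:7.0f} IOPS")
```

The point is not the exact numbers but the shape of the result: spindle count and cache can swamp any difference in the interface itself, which is why comparing a lone local disk to a SAN-attached array says little about the SAN.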
Read Randy Kerns' answer to this question.