The effect of concurrent access on cache depends on the platform being used. Concurrent access behaves differently on NAS than on SAN. On a NAS device, the cache is usually the system memory of the NAS platform, which allows reads and writes to the same data from different client platforms. The cache then eliminates access to the slower rotating drives by fulfilling all requests for that data from memory (cache).
Read requests are therefore very fast. Writes are a different matter, as most NAS devices optimize for reads to increase scalability. On SAN arrays, concurrent access means the cache is the single point of contact, not the files. Because the array may be shared by many different operating systems, and each OS behaves differently during I/O, the SAN vendor should tune cache per connection rather than globally.
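The read-caching behavior described above can be sketched in a few lines. This is a minimal illustration with invented names (`ReadCache`, `blk0`); real NAS caches also handle eviction, coherency across clients, and write policies:

```python
# Toy read-through cache: serve repeat reads from memory instead of "disk".
# Illustrative only; real NAS caching is far more elaborate.

class ReadCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for slow rotating disk
        self.cache = {}                # in-memory cache
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:        # cache hit: fulfilled from memory
            self.hits += 1
            return self.cache[block]
        self.misses += 1               # cache miss: go to the backing store
        data = self.backing[block]
        self.cache[block] = data       # populate cache for the next reader
        return data

disk = {"blk0": b"alpha", "blk1": b"beta"}
cache = ReadCache(disk)
cache.read("blk0")                     # miss, reads from "disk"
cache.read("blk0")                     # hit, served from memory
print(cache.hits, cache.misses)        # -> 1 1
```

Any number of readers sharing this cache see the same copy of the data, which is why reads scale so well once the working set is resident in memory.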
Most enterprise vendors use automatic tuning algorithms that monitor the connection type (OS) and the type of data I/O going to disk (are we doing more reads than writes? more random than sequential?) and adjust cache on the fly to optimize each connection. Hitachi and Compaq use this approach. If a vendor forces you to choose read or write cache globally, tuning becomes problematic and usually cannot be done on the fly. Most vendors also provide tools that gather cache-hit statistics per LUN, which lets you identify hot spindles and plan for future needs.
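The kind of on-the-fly adjustment described above can be sketched as a simple heuristic. The function name, inputs, and thresholds below are all invented for illustration; actual array firmware from these vendors is proprietary and far more sophisticated:

```python
# Hypothetical per-connection cache-tuning heuristic (invented thresholds;
# real array firmware uses proprietary, much more refined algorithms).

def tune_cache(reads, writes, sequential, random_io):
    total_ops = reads + writes
    total_pattern = sequential + random_io
    read_ratio = reads / total_ops if total_ops else 0.5
    seq_ratio = sequential / total_pattern if total_pattern else 0.5

    # Read-heavy workloads favor a larger read cache;
    # write-heavy workloads favor write cache instead.
    policy = "read-weighted" if read_ratio > 0.6 else "write-weighted"

    # Sequential streams benefit from prefetch; random I/O does not.
    prefetch = seq_ratio > 0.6
    return policy, prefetch

# A read-mostly, mostly sequential connection (e.g. a backup stream):
print(tune_cache(reads=900, writes=100, sequential=800, random_io=200))
# -> ('read-weighted', True)
```

The point of the sketch is the per-connection granularity: each OS connection gets its own counters and its own policy, which is exactly what a single global read/write cache setting cannot provide.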
This was first published in March 2001.