When writing to disk, an Oracle process waits until it "hears" from the storage device that the write is "done." Different storage devices, however, use different criteria for declaring a write "done": some acknowledge as soon as the data lands in their cache, some wait until it is in both the cache and a mirrored cache, and others wait until they can write the first block to a physical disk drive. There are likely other criteria in use as well.
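To make that mechanism concrete, here is a minimal C sketch at the POSIX level (not Oracle's actual I/O code; the file name is illustrative) of a process that issues a write and then blocks until the storage stack reports the write done. What "done" means below the fsync() call is exactly the device acknowledgement policy described above.

/*
 * Minimal sketch, assuming a POSIX system: the process blocks in
 * fsync() until the storage stack acknowledges the write.  Whether
 * "acknowledged" means data in the controller's cache, in a mirrored
 * cache, or on the platters depends on the device, not on this code.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char block[8192];
    memset(block, 0xAB, sizeof block);      /* stand-in for a database block */

    /* "datafile.dbf" is a made-up name for illustration only. */
    int fd = open("datafile.dbf", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    if (write(fd, block, sizeof block) != (ssize_t)sizeof block) {
        perror("write");                    /* data may only be in the OS page cache */
        return EXIT_FAILURE;
    }

    /* Block here until the device reports the write as durable. */
    if (fsync(fd) < 0) { perror("fsync"); return EXIT_FAILURE; }

    puts("write acknowledged as durable (by the device's definition)");
    close(fd);
    return EXIT_SUCCESS;
}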
What are the risk/performance trade-offs of these various notification criteria?
The obvious risk is that a sudden disaster could prevent data held in the cache from ever being written to disk, for example if the storage subsystem is physically damaged in some way. That is not an especially likely scenario, but it is within the realm of possibility.
The real issue for databases is whether write ordering is preserved when the cache is flushed. If the cache flushes on a first-in, first-out basis, write ordering is preserved. If, however, the cache uses some other algorithm to decide which blocks to flush, the media image on disk can develop data integrity problems: databases rely on ordering guarantees such as a redo record reaching disk before the data block it protects, and if a reordered flush is interrupted, the disk can hold changes for which no recovery information exists.
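To see why flush ordering matters, here is a toy C simulation (not any vendor's actual cache algorithm) of two dirty blocks in a write-back cache: a redo record written first by the database and the data block it protects written second. With a FIFO flush, a crash in mid-flush still leaves a recoverable image; with a reordered flush, the data block can land on disk without its redo record.

/*
 * Toy simulation of flush ordering, not a real cache implementation.
 * A FIFO flush preserves the order in which the database issued the
 * writes; a reordering policy can put the data block on disk before
 * the redo record that covers it.
 */
#include <stdio.h>

struct block { const char *name; };

/* Flush blocks in the given order, simulating a power loss partway through. */
static void flush(const struct block *order[], int n, int crash_after)
{
    for (int i = 0; i < n; i++) {
        if (i == crash_after) {
            printf("  ** power lost: '%s' never reached disk **\n",
                   order[i]->name);
            return;
        }
        printf("  flushed to disk: %s\n", order[i]->name);
    }
}

int main(void)
{
    struct block redo = { "redo record (written first by the database)" };
    struct block data = { "data block (written second by the database)" };

    const struct block *fifo[]      = { &redo, &data };  /* preserves write order */
    const struct block *reordered[] = { &data, &redo };  /* some other flush policy */

    puts("FIFO flush, crash after one physical write:");
    flush(fifo, 2, 1);       /* redo is on disk, data is not: recoverable */

    puts("Reordered flush, crash after one physical write:");
    flush(reordered, 2, 1);  /* data is on disk without its redo: inconsistent image */

    return 0;
}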
The safest method is to use a write-through cache, which acknowledges a write only after it has been committed to disk.
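At the application level, the closest analogue of write-through behavior is opening a file with O_SYNC, so that every write() blocks until the data has been handed to stable storage rather than returning as soon as it reaches a volatile cache. The sketch below assumes a POSIX system and uses an illustrative file name; whether the storage array itself writes through its cache to the physical disks remains a device setting that this flag cannot control.

/*
 * Sketch, assuming POSIX: O_SYNC makes each write() synchronous, the
 * application-level counterpart of a write-through cache.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char block[4096];
    memset(block, 0, sizeof block);

    /* "control01.ctl" is just an illustrative file name. */
    int fd = open("control01.ctl", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* With O_SYNC, this call returns only after the data is stable. */
    if (write(fd, block, sizeof block) != (ssize_t)sizeof block) {
        perror("write");
        return 1;
    }

    close(fd);
    return 0;
}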
Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.
This was first published in June 2003