I've been doing some research on NAS devices lately. One of the nice features touted by NetApp that I'd like to make use of is SnapShot.
I've spoken to NetApp about this feature. They compare their snapshot technology to other vendors' and say theirs is superior because, when a block changes, the original block is not moved: a new block is written and the pointer is updated. The old block is kept around as long as the snapshot is kept around.
Conversely, other vendors use a snapshot swap space: once a snapshot is taken, any change to a block results in the original block being copied to the swap space before the original is overwritten, so there are two physical writes for every logical write. However, these vendors seem to think these writes won't cause much overhead. They also point out that the WAFL file system on a NetApp filer will eventually become very fragmented, causing a performance hit at about 70%-80% of capacity.
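The difference between the two techniques described above can be made concrete with a toy Python sketch (the function names and the dict-based "disk" model are illustrative assumptions, not any vendor's actual implementation). It counts the physical writes each approach performs for one logical write after a snapshot exists.

```python
# Toy model of the two snapshot techniques.
# A "volume" maps logical block numbers to block contents.

def redirect_on_write(volume, snapshot, block_no, new_data):
    """NetApp-style: write the new data to a new block and update the
    live volume's pointer. The snapshot's mapping still references the
    original block, which is simply left in place. One physical write."""
    volume[block_no] = new_data
    return 1  # physical writes performed


def copy_on_write(volume, snap_area, block_no, new_data):
    """Swap-space style: on the first change to a block since the
    snapshot, copy the original block to the snapshot area, then
    overwrite it in place -- two physical writes per logical write."""
    writes = 0
    if block_no not in snap_area:          # first change since snapshot
        snap_area[block_no] = volume[block_no]  # write 1: save original
        writes += 1
    volume[block_no] = new_data            # write 2: overwrite in place
    return writes + 1
```

Note that the copy-on-write penalty applies only to the first change of each block after a snapshot; subsequent overwrites of the same block cost one write, since the original is already preserved.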
Can you shed any light on the two options presented above?
You always need to be careful when you hear negative marketing: a vendor saying bad things about a competitor's product rather than telling you what's good about its own and letting you make the decision. When I hear negative marketing, I usually ask the person to stop, and if they persist, either I leave or they do. Negative marketing usually comes from someone who is not really competent, and it takes just a few seconds of appropriate questions to expose them. At any rate, they are wasting your time.
Now to answer the question: NetApp does use a log-structured file system (which they call WAFL) that uses pointers to connect the information for file systems (which equal a LUN in this case). A snapshot for NetApp involves only pointer manipulation, so there is a minimal amount of overhead and no extra disk space is consumed when the snapshot is taken.
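As a minimal sketch of the "pointer manipulation only" idea (the class and method names here are assumptions for illustration, not WAFL's actual design): if every write produces a fresh pointer map instead of mutating blocks in place, then taking a snapshot is just keeping a reference to the current map, with no block data copied.

```python
class Volume:
    """Toy volume where writes never mutate an existing pointer map."""

    def __init__(self):
        self.blocks = {}     # block number -> block contents (the live map)
        self.snapshots = {}  # snapshot name -> frozen pointer map

    def write(self, block_no, data):
        # Build a new map pointing at the new data; any snapshot that
        # captured the old map still sees the old block untouched.
        self.blocks = {**self.blocks, block_no: data}

    def snapshot(self, name):
        # Pointer manipulation only: record the current map. No blocks
        # are copied, so the snapshot consumes no extra data space.
        self.snapshots[name] = self.blocks

    def read(self, block_no, snapshot=None):
        source = self.snapshots[snapshot] if snapshot else self.blocks
        return source[block_no]
```

After `snapshot("s1")`, an overwrite of a block is visible in the live volume while `read(..., snapshot="s1")` still returns the original contents.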
SnapRestore is an interesting function in that the file system is a read-only copy; there are more details on the NetApp Web site. A log-structured file system aims to write data at the next available location to minimize electro-mechanical motion on the disk (and thereby save time). The next available location may sometimes be in a disparate place, requiring some head movement, but on average this is much better than always seeking back to a block's original location. Free-space collection to provide localized, contiguous blocks would be a background task that could improve performance somewhat, and NetApp will very likely add that in the future.
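The log-structured placement described above can be sketched as follows (an illustrative model, not WAFL itself): every logical write, including an overwrite, is appended at the next free location in the log, and an index repoints the logical block. Dead (superseded) log entries are exactly what free-space collection would later reclaim, which is also the source of the fragmentation claim.

```python
class LogStructuredDisk:
    """Toy log-structured store: all writes go to the next free slot."""

    def __init__(self):
        self.log = []    # physical blocks, append-only
        self.index = {}  # logical block number -> position in the log

    def write(self, block_no, data):
        self.index[block_no] = len(self.log)  # repoint to new location
        self.log.append(data)                 # sequential, next-free write

    def read(self, block_no):
        return self.log[self.index[block_no]]

    def live_fraction(self):
        # Entries no longer referenced by the index are dead space that
        # background free-space collection would compact away.
        return len(self.index) / len(self.log)
```

Overwriting a block leaves its old log entry behind as dead space, so the live fraction drops as the volume churns until collection runs.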
The competitor's claim amounts to saying the approach is not optimal, which is a rather strange negative marketing statement. I wouldn't be concerned if it provides the performance you need.
Evaluator Group, Inc.
Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.