Data deduplication -- often called intelligent compression or single-instance storage -- is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, such as disk, flash or tape. Redundant data blocks are replaced with a pointer to the unique data copy. In that way, data deduplication closely aligns with incremental backup, which copies only the data that has changed since the previous backup.
For example, a typical email system might contain 100 instances of the same 1 megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is stored; each subsequent instance is referenced back to the one saved copy. In this example, a 100 MB storage demand drops to 1 MB.
Target vs. source deduplication
Data deduplication can occur at the source or target level.
Source-based dedupe removes redundant blocks at the client or server level, before data is transmitted to a backup target. No additional hardware is required, and deduplicating at the source reduces both bandwidth and storage use.
In target-based dedupe, backups are transmitted across a network to disk-based hardware in a remote location. Using deduplication targets increases costs, although it generally provides a performance advantage compared to source dedupe, particularly for petabyte-scale data sets.
Techniques to deduplicate data
There are two main methods used to deduplicate redundant data: inline and post-processing deduplication. Your backup environment will dictate which method you use.
Inline deduplication analyzes data as it is ingested into a backup system, and redundancies are removed as the data is written to backup storage. Inline dedupe requires less backup storage, but the extra processing can cause bottlenecks; some storage array vendors recommend turning their inline data deduplication tools off for high-performance primary storage.
Post-processing dedupe is an asynchronous backup process that removes redundant data after it is written to storage. Duplicate data is removed and replaced with a pointer to the first iteration of the block. The post-processing approach gives users the flexibility to dedupe specific workloads and to quickly recover the most recent backup without hydration. The tradeoff is a larger backup storage capacity than is required with inline deduplication.
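The post-processing sweep described above can be sketched in a few lines of Python. This is an illustrative model, not any vendor's implementation: blocks are assumed to already sit in a simple id-to-bytes store, and the sweep deletes later duplicates and records a pointer back to the first iteration of each block.

```python
import hashlib

def postprocess_dedupe(store: dict) -> dict:
    """Asynchronous-style sweep over blocks that were already written.

    `store` maps block id -> bytes. Duplicates are deleted from the store
    and replaced by an entry in the returned pointer map
    (duplicate id -> id of the first occurrence).
    """
    index = {}     # block digest -> id of the first occurrence
    pointers = {}  # duplicate id -> original id
    for block_id in sorted(store):            # sorted() snapshots the keys
        digest = hashlib.sha256(store[block_id]).hexdigest()
        if digest in index:
            pointers[block_id] = index[digest]
            del store[block_id]               # reclaim capacity after the fact
        else:
            index[digest] = block_id          # first iteration of this block
    return pointers
```

Because the sweep runs after the write path completes, the most recent backup remains fully hydrated until the sweep touches it, which is the recovery advantage the text describes.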
File-level vs. block-level deduplication
Data deduplication generally operates at the file or block level. File-level deduplication eliminates duplicate files, but it is less efficient than block-level approaches because a single changed byte forces the entire file to be stored again.
File-level data deduplication compares a file to be backed up or archived with copies that are already stored. This is done by checking its attributes against an index. If the file is unique, it is stored and the index is updated; if not, only a pointer to the existing file is stored. The result is that only one instance of the file is saved, and subsequent copies are replaced with a stub that points to the original file.
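A minimal sketch of that index-and-stub flow, in Python. The class and field names are illustrative; a real product would also compare file attributes and persist its index, but the control flow is the same: hash the file, check the index, and either store a new copy or leave only a stub.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    # Hash of the file contents; a real system may also check attributes
    # such as name, size and modification time against its index.
    return hashlib.sha256(data).hexdigest()

class FileStore:
    """Minimal sketch of file-level, single-instance storage."""

    def __init__(self):
        self.index = {}  # fingerprint -> id of the one stored copy
        self.blobs = {}  # id -> file bytes (one copy per unique file)
        self.stubs = {}  # filename -> id (the pointer left in place of a copy)

    def store(self, name: str, data: bytes) -> bool:
        """Store a file; return True if a new copy was written, False if deduplicated."""
        fp = file_fingerprint(data)
        if fp in self.index:
            self.stubs[name] = self.index[fp]  # duplicate: keep only a stub
            return False
        file_id = len(self.blobs)
        self.blobs[file_id] = data
        self.index[fp] = file_id
        self.stubs[name] = file_id
        return True
```

Storing the same 1 MB attachment 100 times under different names would write exactly one blob and 100 stubs, which is the email example from the introduction.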
Block-level deduplication looks within a file and saves unique iterations of each block. The file is broken into chunks of the same fixed length, and each chunk is processed using a hash algorithm, such as MD5 or SHA-1.
This process generates a unique number for each piece, which is then stored in an index. If a file is updated, only the changed data is saved, even if only a few bytes of the document or presentation have changed. The changes don't constitute an entirely new file. This behavior makes block deduplication far more efficient. However, block deduplication takes more processing power and uses a much larger index to track the individual pieces.
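The fixed-length chunking and hash index described above can be sketched as follows. The 4 KB chunk size and the "recipe" list of chunk ids are illustrative choices; the point is that a file becomes a sequence of pointers into a pool of unique chunks, so a small edit only adds the changed chunks.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-length chunks; real products use various sizes

class BlockStore:
    """Sketch of block-level dedupe with a hash index of unique chunks."""

    def __init__(self):
        self.index = {}   # chunk hash -> chunk id
        self.chunks = []  # unique chunk payloads, one entry per id

    def write(self, data: bytes) -> list:
        """Split data into fixed-length chunks and store only unseen ones.

        Returns the file's "recipe": a list of chunk ids (pointers) from
        which the original bytes can be reassembled.
        """
        recipe = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha1(chunk).hexdigest()
            if digest not in self.index:        # unseen chunk: store it
                self.index[digest] = len(self.chunks)
                self.chunks.append(chunk)
            recipe.append(self.index[digest])   # duplicate: pointer only
        return recipe

    def read(self, recipe: list) -> bytes:
        """Rehydrate a file from its recipe of chunk ids."""
        return b"".join(self.chunks[i] for i in recipe)
```

The index grows with every unique chunk, which is why the text notes that block deduplication needs more processing power and a much larger index than file-level dedupe.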
Variable-length deduplication is an alternative that breaks a file system into chunks of various sizes, allowing the deduplication effort to achieve better data reduction ratios than fixed-length blocks. The downsides are that it also produces more metadata and tends to be slower.
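Variable-length systems typically pick chunk boundaries from the content itself, so that inserting a few bytes shifts only nearby boundaries instead of misaligning every fixed-length chunk that follows. Below is a toy content-defined chunker using a rolling byte sum; the mask, window and size limits are arbitrary illustrative parameters, not values from any real product.

```python
def cdc_chunks(data: bytes, mask: int = 0x3F, window: int = 8,
               min_size: int = 16, max_size: int = 1024) -> list:
    """Toy content-defined chunker.

    A cut is made wherever a rolling sum over the last `window` bytes
    matches a boundary pattern (low bits all zero), subject to minimum
    and maximum chunk sizes. Because cut points depend on the data, an
    insertion near the front shifts only nearby boundaries.
    """
    chunks, start, rolling = [], 0, 0
    for i in range(len(data)):
        rolling += data[i]
        if i - start >= window:
            rolling -= data[i - window]       # drop the byte leaving the window
        size = i - start + 1
        if (size >= min_size and (rolling & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])  # boundary found: emit a chunk
            start, rolling = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])           # trailing partial chunk
    return chunks
```

Each emitted chunk would then be hashed and indexed exactly as in the fixed-length case; the extra boundary metadata per chunk is the overhead the text mentions.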
Hash collisions are a potential problem with deduplication. When a piece of data receives a hash number, that number is then compared with the index of other existing hash numbers. If that hash number is already in the index, the piece of data is considered a duplicate and does not need to be stored again. Otherwise, the new hash number is added to the index and the new data is stored. In rare cases, the hash algorithm may produce the same hash number for two different chunks of data. When a hash collision occurs, the system won't store the new data because it sees that its hash number already exists in the index. This is called a false positive, and it can result in data loss. Some vendors combine hash algorithms to reduce the possibility of a hash collision. Some vendors are also examining metadata to identify data and prevent collisions.
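Both mitigations in the paragraph above can be sketched together: combine two digests so a collision in one algorithm alone cannot cause a false positive, and verify the actual bytes before trusting any hash match. This is an illustrative scheme, not a description of any vendor's design.

```python
import hashlib

def fingerprint(chunk: bytes) -> tuple:
    # Combining two different digests makes a simultaneous collision in
    # both astronomically unlikely (illustrative; vendors' schemes differ).
    return (hashlib.md5(chunk).hexdigest(), hashlib.sha1(chunk).hexdigest())

class SafeIndex:
    """Index that never discards data on a bare hash match."""

    def __init__(self):
        # fingerprint -> list of distinct chunks that happen to share it
        self.buckets = {}

    def check_and_add(self, chunk: bytes) -> bool:
        """Return True only for a byte-verified duplicate; otherwise keep the chunk."""
        bucket = self.buckets.setdefault(fingerprint(chunk), [])
        if chunk in bucket:   # compare the actual bytes, not just the hash
            return True
        bucket.append(chunk)  # new data, or a rare collision: never drop it
        return False
```

The byte comparison costs an extra read of the stored chunk, which is the trade-off a system accepts to rule out the false-positive data loss described above.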
Data deduplication vs. compression vs. thin provisioning
Another technique often associated with deduplication is compression. However, the two techniques operate differently: data dedupe seeks out redundant chunks of data, while compression uses an algorithm to reduce the number of bits needed to represent data.
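The difference is easy to see side by side. In this sketch, dedupe collapses 200 identical chunks to one stored copy plus pointers, while zlib (a standard DEFLATE compressor) re-encodes the same bytes with a shorter representation; the sentence and chunk size are contrived so the repeats align.

```python
import hashlib
import zlib

sentence = b"All work and no play makes Jack a dull boy. "
data = sentence * 200  # highly redundant input

# Deduplication: find repeated chunks and keep a single copy plus pointers.
chunks = [data[i:i + len(sentence)] for i in range(0, len(data), len(sentence))]
unique = {hashlib.sha1(c).digest(): c for c in chunks}
deduped_size = sum(len(c) for c in unique.values())  # one copy survives

# Compression: re-encode the bytes with fewer bits (zlib/DEFLATE).
compressed_size = len(zlib.compress(data))
```

Real systems apply both: dedupe removes whole redundant chunks first, then compression shrinks the unique chunks that remain.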
Compression and delta differencing are often used with deduplication. Taken together, these three data reduction techniques are designed to optimize storage capacity.
Thin provisioning optimizes how capacity is used in a storage area network. Erasure coding, by contrast, is a data protection method that breaks data into fragments and encodes each fragment with redundant data pieces so that corrupted data sets can be reconstructed.
Beyond reduced storage capacity, deduplication lowers bandwidth consumption and the cost of retaining backups over time.
Deduplication of primary data and the cloud
Data deduplication originated in backup and secondary storage, although it is possible to dedupe primary data sets. It is particularly helpful to maximize flash storage capacity and performance. Primary storage deduplication occurs as a function of the storage hardware or operating system software.
Data dedupe techniques also hold promise for cloud service providers looking to control expenses. The ability to deduplicate what they store lowers costs for disk storage and for the bandwidth consumed by off-site replication.