The process involves examining a backup file set and locating the blocks that have changed since the previous backup. Only that changed data, rather than the entire file set, is then sent to the backup target.
Because only the blocks that contain differences are extracted and backed up, it is possible to perform much faster and more frequent backup cycles without monopolizing bandwidth. If an initial backup or disaster recovery set of 20 TB takes a full day to transfer, for example, a typical daily delta of 10 GB can move over the same link in well under an hour.
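The block-comparison step can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not a vendor's implementation: it assumes a fixed 4 KB block size, fingerprints each block with a SHA-256 hash, and treats any block whose hash differs from the previous backup's as changed. Only those blocks are packaged for transfer.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


def block_hashes(data: bytes) -> list:
    """Fingerprint each fixed-size block of the data set."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(previous: bytes, current: bytes) -> list:
    """Indices of blocks that are new or differ since the last backup."""
    prev, curr = block_hashes(previous), block_hashes(current)
    return [i for i, h in enumerate(curr)
            if i >= len(prev) or prev[i] != h]


def build_delta(current: bytes, indices: list) -> dict:
    """Collect only the changed blocks to send to the backup target."""
    return {i: current[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i in indices}
```

In practice, production tools track changes as writes occur (for example, via a changed-block-tracking driver) rather than rehashing the whole data set, but the payload that leaves the host is the same: changed blocks only.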
Because an incremental approach reduces redundancy, it also helps a storage administrator optimize storage capacity. Smaller backup portions use storage space far more efficiently, forgoing the many duplicate copies of unchanged files that waste space in repeated full backups.
See also: differencing disk