# Kilo, mega, giga, tera, peta, exa, zetta and all that

Kilo, mega, giga, tera, peta, exa, zetta are among the list of prefixes used to denote the quantity of something, such as a byte or bit in computing and telecommunications. Sometimes called prefix multipliers, these prefixes are also used in electronics and physics. Each multiplier consists of a one-letter abbreviation and the prefix it stands for.

In communications, electronics and physics, multipliers are defined in powers of 10, from 10^-24 to 10^24, proceeding in increments of three orders of magnitude -- 10^3, or 1,000. In IT and data storage, multipliers are defined in powers of two, from 2^10 to 2^80, proceeding in increments of 10 powers of two -- 2^10, or 1,024.
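The two conventions can be sketched with a quick calculation -- a minimal illustration of how each prefix's decimal and binary values diverge:

```python
# Decimal (SI) multipliers rise by factors of 10^3;
# binary multipliers rise by factors of 2^10 = 1,024.
SI_PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta"]

for i, name in enumerate(SI_PREFIXES, start=1):
    decimal = 10 ** (3 * i)   # e.g. kilo = 10^3 = 1,000
    binary = 2 ** (10 * i)    # e.g. binary kilo = 2^10 = 1,024
    print(f"{name}: 10^{3 * i} = {decimal:,}  vs  2^{10 * i} = {binary:,}")
```

Note how the gap widens with each step: the binary kilo is 2.4% larger than the decimal one, but the binary zetta is about 18% larger.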

Examples of quantities or phenomena in which power-of-10 prefix multipliers apply include frequency -- including computer clock speeds -- physical mass, power, energy, electrical voltage and electrical current. Power-of-10 multipliers are also used to define binary data speeds. For example, 1 kilobit per second (kbps) is equal to 10^3, or 1,000, bits per second (bps); 1 megabit per second (Mbps) is equal to 10^6, or 1,000,000, bps. The lowercase k is the technically correct symbol for kilo when it represents 10^3, although the uppercase K is often used.

When binary data is stored in memory or fixed media, such as a hard drive, magnetic tape or CD-ROM, power-of-two multipliers are used. Technically, the uppercase K should be used for kilo when it represents 2^10. Therefore, 1 kilobyte (KB) is 2^10, or 1,024 bytes; 1 megabyte (MB) is 2^20, or 1,048,576 bytes.
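A small helper makes the power-of-two convention concrete -- a minimal sketch (the function name is illustrative, not a standard API):

```python
def to_binary_units(n_bytes):
    """Express a byte count using power-of-two multipliers (KB, MB, ...)."""
    units = ["bytes", "KB", "MB", "GB", "TB", "PB"]
    value = float(n_bytes)
    for unit in units:
        # Stop dividing once the value fits under the next 1,024 boundary.
        if value < 1024 or unit == units[-1]:
            return f"{value:g} {unit}"
        value /= 1024

print(to_binary_units(1024))      # one binary kilobyte
print(to_binary_units(1048576))   # one binary megabyte, 2^20 bytes
```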

The choice of power-of-10 versus power-of-two prefix multipliers can appear random. It helps to remember that multiples of bits are almost always expressed in powers of 10, while multiples of bytes are usually expressed in powers of two. Data speed is rarely expressed in bytes per second, and data storage or memory is seldom expressed in bits.
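Because speeds use power-of-10 bits while storage uses power-of-two bytes, mixing the two conventions gives slightly unintuitive results. A quick sketch, with illustrative figures:

```python
file_bytes = 2 ** 20     # a 1 MB file in storage terms (power of two)
link_bps = 10 ** 6       # a 1 Mbps link in speed terms (power of 10)

# Naively you might expect exactly 8 seconds (8 bits per byte),
# but the mismatched conventions add a few percent.
seconds = file_bytes * 8 / link_bps
print(f"{seconds:.3f} s")
```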

### History and origin of kilo, mega and more

The prefix kilo (1,000) first came into existence between 1865 and 1870. Though mega is used these days to mean "extremely good, great or successful," its scientific meaning is 1 million.

Giga comes from the Greek word for giant, and the first use of the term is believed to have taken place at the 1947 conference of the International Union of Pure and Applied Chemistry. Tera (1 trillion) comes from the Greek word teras or teratos, meaning "marvel, monster," and has been in use since approximately 1947.

The prefixes exa (1 quintillion) and peta (1 quadrillion) were added to the International System of Units (SI) in 1975. However, the origin and history of peta with data measurement terms is unclear. Zetta (1 sextillion) was added to the SI metric prefixes in 1991.

When these prefixes are added to the term byte, they create units of measurement ranging from 1,000 bytes (kilobyte) to 1 sextillion bytes (zettabyte) of data storage capacity. A megabyte is 1 million bytes of data storage capacity, according to the IBM Dictionary of Computing.

A gigabyte (GB) is equivalent to about 1 billion bytes. There are two standards for measuring the number of bytes in a gigabyte: base-10 and base-2. In base-10, the decimal system, 1 GB equals 10^9 bytes, or exactly 1 billion bytes. This is the standard most data storage manufacturers and consumers use today. Computers typically use the base-2, or binary, form of measurement, in which 1 GB equals 2^30, or 1,073,741,824, bytes. The discrepancy between base-10 and base-2 measurements became more noticeable as vendors began to manufacture storage media with greater capacity.
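This discrepancy is why a drive advertised in decimal gigabytes reports less capacity when an operating system measures it in binary gigabytes -- a quick check of the gap:

```python
decimal_gb = 10 ** 9   # the marketing gigabyte
binary_gb = 2 ** 30    # the gigabyte most operating systems report

# At the gigabyte scale, the binary unit is about 7% larger,
# so a decimal-labeled drive appears ~7% smaller to the OS.
shortfall = (binary_gb - decimal_gb) / binary_gb
print(f"{shortfall:.1%}")
```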

A terabyte (TB) is equal to approximately 1 trillion bytes, or 1,024 GB. A petabyte (PB) is equal to 2^50 bytes. There are 1,024 TB in a PB, and 1,024 PB equal 1 exabyte (EB). A zettabyte is equal to about 1,000 EB, or 1 billion TB.

When it comes to quantifying just how much data storage capacity is offered by kilobytes, megabytes and so on, consider the following chart:

| Unit | Abbreviation | Power-of-two value | Approximate power-of-10 value |
| --- | --- | --- | --- |
| Kilobyte | KB | 2^10 bytes | 10^3 (1 thousand) bytes |
| Megabyte | MB | 2^20 bytes | 10^6 (1 million) bytes |
| Gigabyte | GB | 2^30 bytes | 10^9 (1 billion) bytes |
| Terabyte | TB | 2^40 bytes | 10^12 (1 trillion) bytes |
| Petabyte | PB | 2^50 bytes | 10^15 (1 quadrillion) bytes |
| Exabyte | EB | 2^60 bytes | 10^18 (1 quintillion) bytes |
| Zettabyte | ZB | 2^70 bytes | 10^21 (1 sextillion) bytes |

### Terabyte vs. petabyte: What would it look like?

In his book, The Singularity is Near, futurist Raymond Kurzweil estimated the capacity of a human being's functional memory to be 1.25 TB. This means that the memories of 800 human beings fit into 1 PB of storage.

How much data is a petabyte exactly?

If the average MP3 encoding is approximately 1 MB per minute, and the average song lasts about four minutes, then a petabyte of songs could play continuously for more than 2,000 years. If the average smartphone camera photo is 3 MB, and the average printed photo is 8.5 inches wide, a petabyte of photos placed side by side would stretch more than 48,000 miles -- almost long enough to wrap around the equator twice. According to Wes Biggs, CTO at Adfonic, 1 PB can store the DNA of the entire population of the United States, cloned twice over.
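The playback figure checks out with quick arithmetic, assuming roughly 1 MB per minute of MP3 audio (the common 128 kbps figure) and a binary petabyte:

```python
pb_bytes = 2 ** 50            # one binary petabyte
mp3_bytes_per_min = 10 ** 6   # ~1 MB of MP3 audio per minute

minutes = pb_bytes / mp3_bytes_per_min
years = minutes / (60 * 24 * 365.25)
print(f"{years:,.0f} years of continuous playback")
```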

A bit is a binary digit, either a 0 or a 1; a byte is eight bits long. If you counted all the bits in 1 PB of storage at a rate of 1 bps, it would take 285 million years, according to data analysts from Deloitte Analytics. Counting one byte per second instead, it would take 35.7 million years.
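Both counting estimates follow from the binary petabyte -- a quick verification:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

pb_bits = 2 ** 53    # bits in a binary petabyte: 2^50 bytes * 8
bit_years = pb_bits / SECONDS_PER_YEAR        # counting 1 bit per second
byte_years = (2 ** 50) / SECONDS_PER_YEAR     # counting 1 byte per second

print(f"counting bits:  {bit_years / 1e6:.0f} million years")
print(f"counting bytes: {byte_years / 1e6:.1f} million years")
```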

### Yottabytes and data storage

The future of data storage may be the yottabyte. It's a measure of storage capacity equal to approximately 1,000 zettabytes, 1 trillion terabytes, a million trillion megabytes or 1 septillion bytes.

Written out in full, a binary yottabyte (2^80 bytes) looks like this: 1,208,925,819,614,629,174,706,176. The prefix yotta is based on the Greek letter iota. According to Paul McFedries' book Word Spy, it would take 86 trillion years to download a 1 yottabyte file; by comparison, the entire contents of the Library of Congress would equal just 10 TB.
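Python's arbitrary-precision integers make the full figure easy to verify:

```python
# A binary yottabyte is 2^80 bytes -- the 25-digit number quoted above.
yb_binary = 2 ** 80
print(f"{yb_binary:,}")
```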

According to a 2010 Gizmodo article, storing a yottabyte of data on terabyte-size disk drives would require 1 billion city block-size data centers, covering an area roughly the size of Rhode Island and Delaware combined. As of late 2016, memory density had grown to the point where a yottabyte could be stored on SDXC cards occupying no more than twice the volume of the Hindenburg.

See Kibi, mebi, gibi, tebi, pebi and all that, which are relatively new prefixes designed to express power-of-two multiples.

This was last updated in February 2017

