In most computer systems, a byte is a unit of data eight binary digits (bits) long. A byte is the unit most computers use to represent a character such as a letter, number, or typographic symbol (for example, "g", "5", or "?"). A byte can also hold a string of bits that belongs to some larger unit for application purposes (for example, the stream of bits that makes up a visual image for a program that displays images, or the string of bits that makes up the machine code of a computer program).
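The character-to-byte mapping above can be sketched in a few lines of Python; in ASCII, one character corresponds to one byte, which is eight binary digits:

```python
# A minimal sketch: in ASCII, one character maps to one byte (8 bits).
char = "g"
byte_value = ord(char)             # numeric value of the character
bits = format(byte_value, "08b")   # the same value as 8 binary digits

print(byte_value)   # 103
print(bits)         # 01100111
print(len(bits))    # 8
```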
In some computer systems, four bytes constitute a word, the unit a processor can be designed to read and process most efficiently as it handles each instruction. Other processors work in two-byte or single-byte units.
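Python's standard `struct` module can illustrate these unit sizes. This sketch assumes a four-byte word and little-endian byte order; both vary by processor architecture:

```python
import struct

# Pack an integer into a four-byte "word" (assuming a 32-bit word and
# little-endian byte order, both of which vary by processor).
word = struct.pack("<I", 1_000_000)
print(len(word))   # 4

# Two-byte and single-byte units can be packed the same way.
half = struct.pack("<H", 500)    # 2 bytes
single = struct.pack("<B", 200)  # 1 byte
print(len(half), len(single))    # 2 1
```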
A byte is abbreviated with a capital "B"; a bit is abbreviated with a lowercase "b". Computer storage is usually measured in byte multiples. For example, an 820 MB hard drive nominally holds 820 million bytes, or megabytes, of data. Byte multiples are conventionally based on powers of 2 and expressed as "rounded off" decimal numbers: one megabyte ("one million bytes") is actually 1,048,576 (decimal) bytes. (Confusingly, however, some hard disk manufacturers and dictionary sources calculate storage bytes as powers of 10, so that a megabyte really would be one million decimal bytes.)
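The gap between the two conventions is easy to work out directly. The 820 MB figure below reuses the example drive from the paragraph above:

```python
# The binary and decimal definitions of "megabyte" differ:
binary_mb = 2 ** 20    # 1,048,576 bytes (powers of 2)
decimal_mb = 10 ** 6   # 1,000,000 bytes (powers of 10)

print(binary_mb)               # 1048576
print(binary_mb - decimal_mb)  # 48576 extra bytes per "megabyte"

# A nominal 820 MB drive, under each convention:
print(820 * decimal_mb)  # 820000000
print(820 * binary_mb)   # 859832320
```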
Some language scripts require two bytes to represent a character. These are called double-byte character sets (DBCS).
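The one-byte/two-byte split can be seen by encoding text in a DBCS. Shift_JIS, a classic Japanese double-byte character set, is used here for illustration:

```python
# Sketch: in a double-byte character set such as Shift_JIS,
# ASCII characters take one byte while Japanese characters take two.
ascii_char = "g".encode("shift_jis")
kanji_char = "\u5b57".encode("shift_jis")  # the kanji 字 ("character")

print(len(ascii_char))  # 1
print(len(kanji_char))  # 2
```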
According to Fred Brooks, an early hardware architect for IBM, project manager for the OS/360 operating system, and author of The Mythical Man-Month, Dr. Werner Buchholz originated the term byte in 1956 when working on IBM's STRETCH computer.