Why 1 byte = 8 bits?
Hi all,
I was just wondering why bytes have been designed to hold 8 bits? Or why does 8 bits comprise one byte? Is there any peculiar reasons for choosing the number 8? Why not some other numbers?
-NOTNULL
Re:Why 1 byte = 8 bits?
It's just what became the most common. Many early architectures had byte sizes other than 8 bits.
Every good solution is obvious once you've found it.
Re:Why 1 byte = 8 bits?
NOTNULL wrote: I was just wondering why bytes have been designed to hold 8 bits? Or why does 8 bits comprise one byte? Is there any peculiar reasons for choosing the number 8? Why not some other numbers?
Well... because a bit offset within a byte fills a whole number of bits exactly (no wastage - that requires a power of two), and a full alphabet fits in it without cramming and without leaving a load of wasted space.
The options for the first requirement were 4, 8 and 16; for the second, anything between 6 and 10. The only match is 8.
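To make that first point concrete, here is a minimal C sketch (the helper name get_bit is made up for illustration): because 8 = 2^3, a flat bit index splits into a byte index and a 3-bit offset with nothing left over.
Code:
#include <stddef.h>
#include <stdint.h>

/* With an 8-bit byte, the bit offset within a byte occupies exactly
   3 bits (2^3 = 8), so i >> 3 and i & 7 split a flat bit index
   cleanly - the "no wastage" property described above. */
int get_bit(const uint8_t *bits, size_t i)
{
    return (bits[i >> 3] >> (i & 7)) & 1;
}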
On a sidenote, it wasn't always like this. There were machines that worked with 36-bit words, which packed 4 characters of 9 bits each. There were also numerous other machines that did something else. People have simply agreed on the 8-bit byte because it works nicely.
Re:Why 1 byte = 8 bits?
Referring to the Jargon File, emphasis is mine:
byte: /bi:t/, n.
[techspeak] A unit of memory or data equal to the amount used to represent one character; on modern architectures this is invariably 8 bits. Some older architectures used byte for quantities of 6, 7, or (especially) 9 bits, and the PDP-10 supported bytes that were actually bitfields of 1 to 36 bits! These usages are now obsolete, killed off by universal adoption of power-of-2 word sizes.
Historical note: The term was coined by Werner Buchholz in 1956 during the early design phase for the IBM Stretch computer; originally it was described as 1 to 6 bits (typical I/O equipment of the period used 6-bit chunks of information). The move to an 8-bit byte happened in late 1956, and this size was later adopted and promulgated as a standard by the System/360. The word was coined by mutating the word "bite" so it would not be accidentally misspelled as bit. See also nybble.
Every good solution is obvious once you've found it.
Re:Why 1 byte = 8 bits?
Why are there 5,280 feet in a mile? Why seven days in a week? On the PC, a byte just happens to be 8 bits. There have been 7-bit bytes, 10-bit bytes, and other sizes. But on the ubiquitous PC it happens to be 8 bits, and the PC sets the standard, so a byte is understood to be 8 bits, even though there are other kinds of bytes.
Re:Why 1 byte = 8 bits?
NotTheCHEAT wrote: Why are there 5,280 feet in a mile? Why seven days in a week?
I've always wondered about those two. First, why define such weird conversion factors when you could just say a thousand XYZ or some such. Second, an 8-day week would be better; it allows for a longer weekend.
Monday mornings... :-\
Re:Why 1 byte = 8 bits?
Candy wrote: First, why define such weird conversion factors when you could just say a thousand XYZ or some such.
Legacy. In some cultures the numerical base was not 10. And there were happy early days when measurement did not have to be precise, so people measured in hands, feet, spans, strides and whatever.
Candy wrote: Second, an 8-day week would be better; it allows for a longer weekend.
Four 7-day weeks roughly equal one full cycle of the moon, which was the basis for early calendars.
Every good solution is obvious once you've found it.
Re:Why 1 byte = 8 bits?
Now, I'm not gonna cite a certain book for this would in all earnest open a full can o' worms. *rofl*
Well - for the bit & byte stuff: isn't that because it's easier to construct adders whose number of in-ports is a power of 2? (say, a byte is 2^3)
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Why 1 byte = 8 bits?
NotTheCHEAT wrote: On the PC, a byte just happens to be 8 bits. There have been 7-bit bytes, 10-bit bytes, and other sizes. But on the ubiquitous PC it happens to be 8 bits, and the PC sets the standard, so a byte is understood to be 8 bits, even though there are other kinds of bytes.
As much as it has become a reflex, the PC is not to be blamed for everything. System/360 not only predated the PC, but even the 8088 and - lo and behold! - even the founding of Intel.
Every good solution is obvious once you've found it.
Re:Why 1 byte = 8 bits?
beyond infinity wrote: Well - for the bit & byte stuff: isn't that because it's easier to construct adders whose number of in-ports is a power of 2? (say, a byte is 2^3)
Well... no. Full adders are a sequence of ordinary single-bit adders, potentially optimized by chopping the sequence in half, calculating the upper half both with and without an incoming carry, and later checking which result is correct. That works with any length; the optimized version works with any length that is a multiple (not a power) of two. Nine or ten bits would work just as nicely.
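A minimal C sketch of that carry-select idea, under my own assumptions (16-bit operands, 8-bit halves, and the made-up name carry_select_add16): both upper-half sums are computed, and the real carry out of the lower half picks one.
Code:
#include <stdint.h>
#include <stdio.h>

/* Carry-select addition: compute the upper half twice, once per
   possible carry-in, then let the lower half's real carry choose.
   In hardware all three additions run in parallel. */
uint16_t carry_select_add16(uint16_t a, uint16_t b)
{
    uint16_t lo  = (uint16_t)((a & 0xFF) + (b & 0xFF)); /* lower 8 bits */
    uint16_t hi0 = (uint16_t)((a >> 8) + (b >> 8));     /* carry-in 0   */
    uint16_t hi1 = (uint16_t)(hi0 + 1);                 /* carry-in 1   */
    uint16_t hi  = (lo > 0xFF) ? hi1 : hi0;             /* select       */
    return (uint16_t)((hi << 8) | (lo & 0xFF));
}

int main(void)
{
    printf("%u\n", carry_select_add16(12345, 54321)); /* prints 1130 */
    return 0;
}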
Width does matter for some other instructions, say bit count. If you make an instruction of that, 16 bits is the same speed as 9 bits (both need 4 levels of adding), while 8 bits needs only 3 (see the sketch below). Yet this doesn't explain why the byte isn't 16 or 32 bits, which would nowadays, given Unicode and 32/64-bit processors, be more practical.
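For the level count, a rough C sketch of that add-tree for the 8-bit case (the name popcount8 is made up for illustration): three levels of masked adds, where a 16-bit count would need a fourth level.
Code:
#include <stdint.h>

/* Parallel bit count: each level adds neighbouring fields in place.
   8 bits take 3 levels; 16 bits would take a 4th. */
unsigned popcount8(uint8_t x)
{
    unsigned v = x;
    v = (v & 0x55) + ((v >> 1) & 0x55); /* level 1: 2-bit sums  */
    v = (v & 0x33) + ((v >> 2) & 0x33); /* level 2: 4-bit sums  */
    v = (v & 0x0F) + (v >> 4);          /* level 3: final total */
    return v;
}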
*off to developing hardware in VHDL again* -> yep, I'm enjoying myself at my internship... now if that buffer of 256 bytes would just not render as 2800 logic elements but as memory bits...
Re:Why 1 byte = 8 bits?
About the mile:
A mile is the distance a Roman soldier covered in 1000 steps.
Re:Why 1 byte = 8 bits?
If I'm not mistaken, a mile is around 1.6 km. That'd mean a Roman soldier covered 1.6 meters with one step. It seems unlikely. However, you are right. You just should've included that you meant 1000 double steps.
Re:Why 1 byte = 8 bits?
Eero Ränik wrote: If I'm not mistaken, a mile is around 1.6 km. That'd mean a Roman soldier covered 1.6 meters with one step. It seems unlikely. However, you are right. You just should've included that you meant 1000 double steps.
Note that one double step is exactly one loop in an AGI view.
Re:Why 1 byte = 8 bits?
And a sea mile is the distance a Roman soldier could cover with 1000 strokes?
Every good solution is obvious once you've found it.