> When ASCII was invented, 36-bit computers were popular, which would fit five ASCII characters with just one unused bit per 36-bit word. Before, 6-bit character codes were used, where a 36-bit word could fit six of them.
This is not true. ASCII (technically US-ASCII) was a fixed-width encoding of 7 bits. There was no 8th bit reserved. You can read the original standard yourself here: https://ia600401.us.archive.org/23/items/enf-ascii-1968-1970...
Crucially, "the 7-bit coded character set" is described on page 6 using only seven total bits (1-indexed, so don't get confused when you see b7 in the chart!).
There is an encoding mechanism that uses 8 bits, but it's for storage on a type of magnetic tape, and even that is silent on the 8th bit being repurposed. It's likely, given the lack of discussion about it, that the choice was made for ergonomic or technical reasons related to the medium (8 is a power of 2) rather than for future extensibility.
Notably, the standard mentions that the 7-bit code was developed "in anticipation of" ISO requesting such a code, and the addenda attached at the end of the document show that ISO began developing 8-bit codes extending the base 7-bit code shortly after it was published.
So, it seems that ASCII was kept to 7 bits primarily so "extended ASCII" sets could exist, with additional characters for various purposes (such as other languages, but also for things like mathematical symbols).
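To make the arithmetic concrete, here's a small Python sketch (mine, not from the standard) of how an 8-bit byte splits into the 7-bit base set and a 128-value extension range; Latin-1 is just a familiar later example of such an extension:

    # Values 0-127 are the base 7-bit ASCII set; 128-255 are free for an extension.
    for value in (0x41, 0xE9):        # 'A' in ASCII, e-acute in ISO 8859-1 (Latin-1)
        in_base_set = value < 0x80    # top bit clear means plain 7-bit ASCII
        label = "base ASCII" if in_base_set else "extension range"
        print(hex(value), label, bytes([value]).decode("latin-1"))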
Mackenzie claims that parity was an explicit concern in selecting a 7-bit code for ASCII. He cites the X3.2 subcommittee, although he does not say exactly which document, but considering that he was a member of those committees (as far as I can tell) I would put some weight on his word.
https://hcs64.com/files/Mackenzie%20-%20Coded%20Character%20... sections 13.6 and 13.7
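If it helps, here's a minimal Python sketch of the parity argument (my illustration, not anything from Mackenzie): a 7-bit code plus one parity bit exactly fills an 8-bit frame.

    def with_even_parity(code: int) -> int:
        # Return the 7-bit code with an even-parity bit in the 8th (high) position.
        assert 0 <= code < 128
        parity = bin(code).count("1") & 1   # 1 if the number of set bits is odd
        return code | (parity << 7)         # total number of set bits becomes even

    print(format(with_even_parity(ord("A")), "08b"))  # 01000001 (already even)
    print(format(with_even_parity(ord("C")), "08b"))  # 11000011 (parity bit set)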
I would love to think this is true, and it makes sense, but do you have any actual evidence for this you could share with HN?