Comment by happytoexplain
7 hours ago
I have a love-hate relationship with backwards compatibility. I hate the mess - I love when an entity in a position of power is willing to break things in the name of advancement. But I also love the cleverness - UTF-8, UTF-16, EAN, etc. To be fair, UTF-8 sacrifices almost nothing to achieve backwards compat though.
> To be fair, UTF-8 sacrifices almost nothing to achieve backwards compat though.
It sacrifices the ability to encode more than 21 bits, which I believe was done for compatibility with UTF-16: UTF-16’s awful “surrogate” mechanism can only express code points up to U+10FFFF.
I hope we don’t regret this limitation some day. I’m not aware of any other material reason to disallow larger UTF-8 code units.
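For concreteness, the ceiling comes straight out of the surrogate arithmetic. A minimal sketch of standard UTF-16 pair decoding (the textbook algorithm, not any particular library's API):

```python
# A high surrogate (0xD800-0xDBFF) and a low surrogate (0xDC00-0xDFFF)
# each carry 10 payload bits; the pair is offset by 0x10000. That fixes
# the highest reachable code point at exactly 0x10FFFF.

def decode_surrogate_pair(high: int, low: int) -> int:
    assert 0xD800 <= high <= 0xDBFF, "not a high surrogate"
    assert 0xDC00 <= low <= 0xDFFF, "not a low surrogate"
    return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)

# The largest possible pair decodes to the ceiling UTF-16 imposes on Unicode:
print(hex(decode_surrogate_pair(0xDBFF, 0xDFFF)))  # 0x10ffff
```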
That isn't really a case of UTF-8 sacrificing anything to be compatible with UTF-16. It's Unicode, not UTF-8 that made the sacrifice: Unicode is limited to 21 bits due to UTF-16. The UTF-8 design trivially extends to support 6 byte long sequences supporting up to 31-bit numbers. But why would UTF-8, a Unicode character encoding, support code points which Unicode has promised will never and can never exist?
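To see how mechanically the scheme extends, here's a sketch of a Thompson-style UTF-8 encoder generalized to six-byte sequences (illustrative only; standard UTF-8 cuts this table off at four bytes / U+10FFFF):

```python
# Each extra sequence length adds one prefix bit to the leading byte and
# six payload bits in a continuation byte; nothing about the scheme itself
# stops at four bytes -- only Unicode's promise does.

def encode_utf8_extended(cp: int) -> bytes:
    if cp < 0x80:
        return bytes([cp])
    # (upper limit, leading-byte prefix) for 2..6-byte sequences
    table = [(0x800, 0xC0), (0x10000, 0xE0), (0x200000, 0xF0),
             (0x4000000, 0xF8), (0x80000000, 0xFC)]
    for nbytes, (limit, prefix) in enumerate(table, start=2):
        if cp < limit:
            out = []
            for _ in range(nbytes - 1):
                out.append(0x80 | (cp & 0x3F))  # continuation byte: 10xxxxxx
                cp >>= 6
            out.append(prefix | cp)             # leading byte
            return bytes(reversed(out))
    raise ValueError("code point exceeds 31 bits")

# Up to U+10FFFF this agrees with standard UTF-8 ...
assert encode_utf8_extended(0x10FFFF) == "\U0010FFFF".encode("utf-8")
# ... and the same rule keeps going where the standard stops:
print(encode_utf8_extended(0x7FFFFFFF).hex())  # fdbfbfbfbfbf
```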
In an ideal future (read: fantasy), UTF-16 gets formally deprecated and trashed, freeing the surrogate sequences and the full range for UTF-8.
Or UTF-16 is officially considered a second-class citizen, and some code points are simply out of its reach.
> It sacrifices the ability to encode more than 21 bits, which I believe was done for compatibility with UTF-16: UTF-16’s awful “surrogate” mechanism can only express code points up to U+10FFFF
Yes, it is 'truncated' to the "UTF-16 accessible range":
* https://datatracker.ietf.org/doc/html/rfc3629#section-3
* https://en.wikipedia.org/wiki/UTF-8#History
Thompson's original design could handle up to six octets for each letter/symbol, with 31 bits of space:
* https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt
You could even extend UTF-8 to make 0xFE and 0xFF valid starting bytes, with 6 and 7 following bytes each, and get 42 bits of space. I seem to remember Perl allowed that for a while in its v-strings notation.
Edit: just tested this, Perl still allows this, but with an extra twist: v-notation goes up to 2^63-1. From 2^31 to 2^36-1 is encoded as FE + 6 bytes, and everything above that is encoded as FF + 12 bytes; the largest value it allows is v9223372036854775807, which is encoded as FF 80 87 BF BF BF BF BF BF BF BF BF BF. It probably doesn't allow that one extra bit because v-notation doesn't work with negative integers.
That limitation will be trivial to lift once UTF-16 compatibility can be disregarded. This won’t happen soon, of course, given JavaScript and Windows, but the situation might be different in a hundred or thousand years. Until then, we still have a lot of unassigned code points.
In addition, it would be possible to nest another surrogate-character-like scheme into UTF-16 to support a larger character set.
It's always dangerous to stick one's neck out and say "[this many bits] ought to be enough for anybody", but I think it's very unlikely we'll ever run out of UTF-8 sequences. UTF-8 can represent about 1.1 million code points, of which we've assigned about 160,000 actual characters, plus another ~140,000 in the Private Use Area, which won't expand. And that's after covering nearly all of the world's known writing systems: the last several Unicode updates have added a few thousand characters here and there for very obscure and/or ancient writing systems, but those won't go on forever (and things like emoji only get a handful of new code points per update, because most new emoji are existing code points with combining characters).
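The arithmetic behind those figures is easy to check. A quick back-of-the-envelope (the 160,000 "assigned" count is the rough figure from the comment above, not an exact number from any particular Unicode version):

```python
total = 0x10FFFF + 1                 # 1,114,112 code points in the full range
surrogates = 0xE000 - 0xD800         # 2,048 reserved for UTF-16, never characters
scalar_values = total - surrogates   # 1,112,064 encodable by UTF-8

bmp_pua = 0xF8FF - 0xE000 + 1                  # 6,400 in the BMP
planes_15_16_pua = 2 * (0xFFFFD - 0xF0000 + 1) # 2 x 65,534 in planes 15-16
pua = bmp_pua + planes_15_16_pua               # 137,468 -- the "~140,000" above

assigned = 160_000  # rough count of assigned characters
print(scalar_values - assigned - pua)  # ~800k code points of headroom left
```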
If I had to guess, I'd say we'll run out of IPv6 addresses before we run out of unassigned UTF-8 sequences.
Tomorrow's limitation will be today's implementations, sadly.
> I love when an entity in a position of power is willing to break things in the name of advancement.
It's less fun when things that need to keep working break because someone felt like renaming a parameter, or decided that a part of the standard library looked "untidy".
I agree! And yet I lovingly sacrifice my man-hours to it when I decide to bump that major version number in my dependency manifest.
Yeah, I honestly don't know what I would change. Maybe replace some of the control characters with more common characters to save a tiny bit of space, if we were to go completely wild and break Unicode backward compatibility too. As a generic multi-byte character encoding format, it seems completely optimal even in isolation.