Comment by rowls66
9 hours ago
I think more effort should have been made to live with 65,536 characters. My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis. I think that adding emojis to Unicode is going to be seen as a big mistake. We already have enough network bandwidth to just send raster graphics for images in most cases. Cluttering the Unicode codespace with emojis is pointless.
You are mistaken. Chinese Hanzi and the languages that derive from or incorporate them require way more than 65,536 code points. In particular, many of these characters appear in formal family or place names. UCS-2 failed because it couldn't represent these, and people using these languages justifiably objected to having to change how their family name is written to suit computers, versus computers handling it properly.
This "two bytes should be enough" mistake was one of the biggest blind spots in Unicode's original design, and is cited as an example of how standards groups can have cultural blind spots.
UTF-16 also had a number of unfortunate ramifications for the overall design of Unicode, e.g. requiring a substantial chunk of the BMP to be reserved for surrogate code points and forcing Unicode codepoints to be limited to U+10FFFF.
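For anyone curious, the surrogate-pair mechanism is easy to sketch; this is just illustrative Python, nothing from the standard itself:

    # Encode a supplementary code point (above U+FFFF) as a UTF-16 surrogate pair.
    def to_surrogates(cp):
        assert 0x10000 <= cp <= 0x10FFFF
        cp -= 0x10000
        high = 0xD800 + (cp >> 10)    # high surrogate, U+D800..U+DBFF
        low = 0xDC00 + (cp & 0x3FF)   # low surrogate,  U+DC00..U+DFFF
        return high, low

    print([hex(x) for x in to_surrogates(0x1F600)])  # ['0xd83d', '0xde00']

20 bits of payload plus the 0x10000 offset is exactly why the ceiling ended up at U+10FFFF.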
> My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis
This week's Unicode 17 announcement [1] mentions that of the ~160k existing codepoints, over 100k are CJK codepoints, so I don't think this can be true...
[1] https://blog.unicode.org/2025/09/unicode-170-release-announc...
Your understanding is incorrect; a substantial number of the ranges allocated outside BMP (i.e. above U+FFFF) are used for CJK ideographs which are uncommon, but still in use, particularly in names and/or historical texts.
The silly thing is, lots of emoji these days aren't even a single code point: many are two or more code points combined with a zero width joiner. Surely we could've introduced one code point which says "the next code point represents an emoji from a separate emoji set"?
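You can see it just by walking the code points of one of the "family" emoji (Python, purely to illustrate):

    import unicodedata

    family = "\U0001F468\u200D\U0001F469\u200D\U0001F467"  # renders as a single family glyph
    for ch in family:
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
    # U+1F468 MAN
    # U+200D ZERO WIDTH JOINER
    # U+1F469 WOMAN
    # U+200D ZERO WIDTH JOINER
    # U+1F467 GIRL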
CJK unification (https://en.wikipedia.org/wiki/CJK_Unified_Ideographs), i.e. combining "almost the same" Chinese/Japanese/Korean characters into the same codepoint, was done for this reason, and we are now living with the consequence that we need to load separate Traditional/Simplified Chinese, Japanese, and Korean fonts to render each language. Total PITA for apps that are multi-lingual.
This feels like it should be solvable by introducing a few more marker characters, like one code point representing "the following text is traditional Chinese", "the following text is Japanese", etc.? It would add even more statefulness to Unicode, but I feel like that ship has already sailed with the U+202D LEFT-TO-RIGHT OVERRIDE and U+202E RIGHT-TO-LEFT OVERRIDE characters...
Unicode used to have a system of in-band language tags, but it was deprecated https://www.unicode.org/faq//languagetagging.html
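For reference, the deprecated scheme worked by prefixing text with U+E0001 LANGUAGE TAG followed by "tag characters" that shadow ASCII at U+E0000 + codepoint (rough Python sketch, just to show the mechanics):

    # Deprecated Plane 14 language tagging: "ja" becomes U+E006A U+E0061.
    def language_tag(code):
        return "\U000E0001" + "".join(chr(0xE0000 + ord(c)) for c in code)

    tagged = language_tag("ja") + "\u6771\u4EAC"  # tag, then the text 東京
    print([f"U+{ord(c):04X}" for c in tagged])
    # ['U+E0001', 'U+E006A', 'U+E0061', 'U+6771', 'U+4EAC']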
There is a way to do it: https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_b...
However, it's not used widely and has problems with variant-naïve fonts.
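Mechanically it's just an extra code point appended after the base character; whether the variant glyph actually shows up is entirely up to the font (illustrative sketch):

    # An Ideographic Variation Sequence: base character + variation selector.
    base = "\u8FBB"             # 辻, a character with registered glyph variants
    ivs = base + "\U000E0100"   # VARIATION SELECTOR-17 requests a specific glyph form
    print([f"U+{ord(c):04X}" for c in ivs])  # ['U+8FBB', 'U+E0100']
    # A variant-naive font simply ignores the selector and shows its default glyph.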
I entirely agree that we could've made better use of the leading 16-bit space. But protocol-wise, adding a second component (images) to the concept of textual strings would've been a terrible choice.
The grande crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification, where we already had a whopping 1.1 million code points at our disposal.
> The grande crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification
I'm not sure what you mean by this. The UTF-8 specification was written long before emoji were included in Unicode, and generally has no bearing on what characters it's used to encode.
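UTF-8 will happily encode any scalar value up to U+10FFFF, emoji included; a one-liner in Python makes the point:

    # UTF-8 encodes whatever scalar values you hand it; emoji are nothing special.
    s = "\U0001F600"               # 😀 U+1F600
    print(s.encode("utf-8").hex()) # f09f9880 -- the standard four-byte form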