Comment by unwind
2 days ago
Very cool and impressive performance.
I was worried by the article's use of U+212A (the Kelvin sign) as sample text (I find it confusing when Unicode "shadows" of normal letters exist, and they are of course also dangerous in some cases, since they can be misinterpreted as the letter they look more or less exactly like), so I had to look it up [1].
Anyway, according to Wikipedia the dedicated symbol should not be used:
> However, this is a compatibility character provided for compatibility with legacy encodings. The Unicode standard recommends using U+004B K LATIN CAPITAL LETTER K instead; that is, a normal capital K.
That was comforting, to me. :)
> I find it confusing when Unicode "shadows" of normal letters exist, and those are of course also dangerous in some cases when they can be mis-interpreted for the letter they look more or less exactly like
Isn't this why Unicode normalization exists? This would let you compare Unicode letters and determine if they are canonically equivalent.
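As a quick illustration (a minimal sketch, not from the thread, using Python's standard unicodedata module): U+212A has a singleton canonical decomposition to U+004B, so the two compare equal after canonical normalization even though the raw code points differ.

```python
import unicodedata

kelvin = "\u212A"   # KELVIN SIGN
latin_k = "\u004B"  # LATIN CAPITAL LETTER K

# Raw code point comparison: not equal.
print(kelvin == latin_k)            # False

# After canonical normalization (NFC or NFD) the Kelvin sign
# becomes the ordinary Latin K, so the strings compare equal.
nfc = lambda s: unicodedata.normalize("NFC", s)
print(nfc(kelvin) == nfc(latin_k))  # True
```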
It's why the Unicode Collation Algorithm exists.
If you look in allkeys.txt (the base UCA data, used if you don't have language-specific stuff in your comparisons) for the two code points in question, you'll find:
The numbers in the brackets are the weights on level 1 (base character), level 2 (typically used for accents), and level 3 (typically used for case). So the two code points compare identical under the UCA in almost every case, except when you really need a tiebreaker.
Compare e.g.:
which would compare equal to those under a case-insensitive, accent-sensitive collation, but _not_ a case-sensitive one (case-sensitive collations are always accent-sensitive, too).
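For anyone who wants to see the levels in action, here is a rough sketch using PyICU (assuming it is installed; the root collator approximates the default UCA table). setStrength controls how many levels participate in the comparison, and compare() returns 0 when two strings sort identically at that strength:

```python
from icu import Collator, Locale  # pip install PyICU

kelvin, latin_k = "\u212A", "\u004B"

coll = Collator.createInstance(Locale.getRoot())

for name, strength in [("primary", Collator.PRIMARY),
                       ("secondary", Collator.SECONDARY),
                       ("tertiary", Collator.TERTIARY)]:
    coll.setStrength(strength)
    # 0 means "equal" at this strength; U+212A and U+004B share
    # the same level 1-3 weights, so all three print 0.
    print(name, coll.compare(kelvin, latin_k))
```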
Are the meanings of the levels (accent, casing, etc.) defined somewhere for each code point?
Normalization wouldn’t address this.
What do you mean? All four normal forms of the Kelvin 'K' are the Latin 'K', as far as I can tell.
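A quick check with Python's unicodedata is consistent with that (again a sketch, not from the thread); because the Kelvin sign carries a canonical decomposition rather than just a compatibility one, every normalization form maps it to U+004B:

```python
import unicodedata

kelvin = "\u212A"  # KELVIN SIGN
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    result = unicodedata.normalize(form, kelvin)
    print(form, "U+%04X" % ord(result))  # U+004B in all four forms
```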
Normalization forms NFKC and NFKD, which also handle compatibility equivalence, do.