Comment by WalterBright

5 hours ago

Unicode should be for visible characters. Invisible characters are an abomination. So are ways to hide text by using Unicode so-called "characters" to cause the cursor to go backwards.

Things that vanish on a printout should not be in Unicode.

Remove them from Unicode.

Unicode is "designed to support the use of text in all of the world's writing systems that can be digitized"

Unicode needs tab, space, form feed, and carriage return.

Unicode needs U+200E LEFT-TO-RIGHT MARK and U+200F RIGHT-TO-LEFT MARK to switch between left-to-right and right-to-left languages.

Unicode needs U+115F HANGUL CHOSEONG FILLER and U+1160 HANGUL JUNGSEONG FILLER to typeset Korean.

Unicode needs U+200C ZERO WIDTH NON-JOINER to encode that two characters should not be connected by a ligature.

Unicode needs U+200B ZERO WIDTH SPACE to indicate a word break opportunity without actually inserting a visible space.

Unicode needs MONGOLIAN FREE VARIATION SELECTORs to encode the traditional Mongolian alphabet.
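The format characters listed above are real, named code points. A quick sketch with Python's standard `unicodedata` module shows they carry no visible glyph yet still have official names and occupy positions in a string:

```python
import unicodedata

# The invisible-but-necessary characters named above, by code point.
marks = {
    "\u200e": "LEFT-TO-RIGHT MARK",
    "\u200f": "RIGHT-TO-LEFT MARK",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200b": "ZERO WIDTH SPACE",
}
for ch, expected_name in marks.items():
    # Each has an official Unicode name and the "Cf" (format) category.
    assert unicodedata.name(ch) == expected_name
    assert unicodedata.category(ch) == "Cf"

# Each still counts toward the length of a string even though it
# renders as nothing: "ab" + zero width space + "cd" is 5 characters.
s = "ab\u200bcd"
assert len(s) == 5
```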

  • [flagged]

    • That's a very narrow view of the world. One example: in the past I have handled bilingual English-Arabic files with direction switches within the same line, and Arabic is written from right to left.

      There are also languages that are written from top to bottom.

      Unicode is not exclusively for coding; on the contrary, I'm pretty sure coding is only a small fraction of how Unicode is used.

      > Somehow people didn't need invisible characters when printing books.

      They didn't need computers either so "was seemingly not needed in the past" is not a good argument.

      3 replies →

So we need a new standard to fix the complexity of the last standard? Isn't Unicode supposed to be a superset of ASCII, which already has control characters like tab, CR, and newline? xD

  • The only ones people use any more are newline and space. A tab key is fine in your editor, but it's been more or less abandoned as a character. I haven't used a form feed character since the 1970s.

That ship has sailed; I consider Unicode a good thing, yet I consider it problematic to support Unicode in every domain.

I should be able to use Ü as a cursed smiley in text, and many more writing systems supported by Unicode support even more funny things. That's a good thing.

On the other hand, if technical file names and display file names (the ones shown to GUI users) were separate, my need for crazy characters in file names, code bases and such would be very limited. Lower ASCII for actual file names consumed by technical people is sufficient for me.

Another dum dum Unicode idea is having multiple code points with identical glyphs.

Rule of thumb: two Unicode sequences that look identical when printed should consist of the same code points.
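The complaint can be illustrated with capital A, which exists as three distinct code points in Latin, Greek, and Cyrillic even though most fonts print them identically. A small sketch using Python's standard `unicodedata` module:

```python
import unicodedata

latin, greek, cyrillic = "A", "\u0391", "\u0410"

# Three code points that typical fonts render with the same glyph.
assert unicodedata.name(latin) == "LATIN CAPITAL LETTER A"
assert unicodedata.name(greek) == "GREEK CAPITAL LETTER ALPHA"
assert unicodedata.name(cyrillic) == "CYRILLIC CAPITAL LETTER A"

# Yet as strings they compare unequal, so a Greek word with a Latin A
# smuggled in is a different string from the all-Greek spelling,
# despite printing identically.
assert latin != greek
assert greek != cyrillic
assert latin != cyrillic
```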

  • If anything, Unicode should have had more disambiguated characters. Han unification was a mistake, and lower case dotted Turkish i and upper case dotless Turkish I should exist so that toUpper and toLower didn't need to know/guess at a locale to work correctly.
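The Turkish casing problem above is easy to demonstrate: Python's `str.upper`/`str.lower` take no locale, so for the plain code points `i`/`I` they can only give the English answer, and the dotted/dotless Turkish letters live at separate code points:

```python
import unicodedata

# Plain ASCII "i": str.upper() has no locale parameter, so it always
# produces "I" -- a Turkish caller would want U+0130 "İ" instead.
assert "i".upper() == "I"

# The Turkish-specific letters exist, but only as separate code points:
assert unicodedata.name("\u0131") == "LATIN SMALL LETTER DOTLESS I"
assert unicodedata.name("\u0130") == "LATIN CAPITAL LETTER I WITH DOT ABOVE"

# Even lowercasing U+0130 without locale knowledge is messy: Python
# follows Unicode's default SpecialCasing rule and returns the
# two-character sequence "i" + combining dot above.
assert "\u0130".lower() == "i\u0307"
```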

  • So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?

    And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?

    • > So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?

      Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.

      > And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?

      Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?

      Those Unicode homoglyphs are a solution looking for a problem.

      6 replies →

    • What about numbers? Would they be assigned to Arabic only? I guess someone would be offended by that.

      While at it we could also unify I, | and l. It's too confusing sometimes.

      1 reply →

  • As far as I know, glyphs are determined by the font and rendering engine. They're not in the Unicode standard.

    • Fraktur (font) and italic (rendering) are in the Unicode standard, although Hacker News will not render them. (I suspect that the Hacker News software filters out the nuttier Unicode stuff.)

  • I don't think that would help much. There are also characters which are similar but not identical, and I don't think humans can spot the differences unless they are actively looking for them, which most of the time they are not. If only one of two similar glyphs appears in the text, nobody would likely notice; expectation bias will fuck you over.
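Normalization catches some of these lookalikes but deliberately not others, which is part of why the problem is hard. A minimal sketch with Python's standard `unicodedata.normalize`:

```python
from unicodedata import normalize

# NFKC folds "compatibility" lookalikes into plain equivalents: the
# single-code-point ligature U+FB01 becomes the two letters "fi".
assert normalize("NFKC", "\ufb01") == "fi"

# But visually identical letters from different scripts survive
# normalization untouched -- Cyrillic А stays distinct from Latin A --
# so spotting those requires a separate confusables check entirely.
assert normalize("NFKC", "\u0410") == "\u0410"
assert "\u0410" != "A"
```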

Invisible characters are there for visible characters to be printed correctly...

  • I'll grant that a space and a newline are necessary. The rest, nope.

    • You're talking about a subset of ASCII then. Unicode is supposed to support different languages and advanced typography, for which those characters are necessary. You can't write e.g. Arabic or Hebrew without those "unnecessary" invisible characters.

      1 reply →

Good luck with that, given there are invisible characters in ASCII.

Also, this attack doesn't seem to use invisible characters, just characters that don't have an assigned meaning.