Comment by maxdamantus

20 hours ago

Can you give an example of how Go's approach causes people to lose data? This was alluded to in the blog post but they didn't explain anything.

It seems like there's some confusion in the GGGGGP post, since Go works correctly even if the filename is not valid UTF-8; maybe that's why they haven't noticed any issues.

Imagine that you're writing a function that'll walk a directory to copy some files somewhere else and then delete the directory. Unfortunately, you hit this:

https://github.com/golang/go/issues/32334

Oops, it looks like some files are simply inaccessible to you, and you cannot copy them.

Fortunately, when you try to delete the source directory, Go's standard library enters an infinite loop, which saves your data.

https://github.com/golang/go/issues/59971

  • Ah, mentioning Windows filenames would have been useful.

    I guess the issue isn't so much about whether strings are well-formed, but about whether the conversion (eg, from UTF-16 to UTF-8 at the filesystem boundary) raises an error or silently modifies the data to use replacement characters.

    I do think that is the main fundamental mistake in Go's Unicode handling: it tends to use replacement characters automatically instead of signalling errors. Using replacement characters is at least conformant to Unicode, but imo, unless you know the text is not going to be used as an identifier (like a filename), conversion should instead just fail.

    The other option is using some mechanism to preserve the errors instead of failing quietly (replacement) or failing loudly (raise/throw/panic/return err), and I believe that's what they're now doing for filenames on Windows, using WTF-8. I agree with this new approach, though would still have preferred they not use replacement characters automatically in various places (another one is the "json" module, which quietly corrupts your non-UTF-8 and non-UTF-16 data using replacement characters).

    Probably worth noting that the WTF-8 approach works because strings are not validated; WTF-8 involves converting invalid UTF-16 data into invalid UTF-8 data such that the conversion is reversible. It would not be possible to encode invalid UTF-16 data into valid UTF-8 data without changing the meaning of valid Unicode strings.
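    To make the quiet-replacement behaviour concrete, here's a small sketch using only the standard library. Note in particular that `encoding/json` substitutes U+FFFD while marshalling, so a round trip does not reproduce the input:

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"strings"
    	"unicode/utf8"
    )

    func main() {
    	name := "caf\xe9" // Latin-1 "é": not valid UTF-8
    	// Silent replacement: the original byte is irrecoverably lost.
    	fmt.Println(strings.ToValidUTF8(name, "\uFFFD"))
    	// Failing loudly instead leaves the caller in control.
    	if !utf8.ValidString(name) {
    		fmt.Println("error: name is not valid UTF-8")
    	}
    	// encoding/json does the quiet kind: Marshal writes U+FFFD for
    	// each invalid byte, so unmarshalling gives back different data.
    	b, _ := json.Marshal(name)
    	var out string
    	json.Unmarshal(b, &out)
    	fmt.Println(out == name) // false: data silently corrupted
    }
    ```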

  • IMO the right thing to do here is even messier than Go's approach, which is to give people UTF-16-ish strings on Windows.

    • They have effectively done this (since the linked issue was raised), by just converting Windows filenames to WTF-8.

      I think this is sensible, because the fact that Windows still uses UTF-16 (or more precisely "Unicode 16-bit strings") in some places shouldn't need to complicate the API on other platforms that didn't make the UCS-2/UTF-16 mistake.

      It's possible that the WTF-8 strings might not concatenate the way they do in UTF-16 or properly enforced WTF-8 (which has special behaviour on concatenation), but they'll still round-trip to the intended 16-bit string, even after concatenation.
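      The round-trip-despite-concatenation property can be sketched with hand-rolled helpers (hypothetical names; the standard library doesn't expose a WTF-8 codec): a lone surrogate gets the generalized 3-byte UTF-8 pattern, which is deliberately invalid UTF-8 but reversible, and naive byte concatenation of two lone surrogates still decodes to the intended 16-bit string:

      ```go
      package main

      import (
      	"fmt"
      	"unicode/utf16"
      )

      // wtf8EncodeUnit encodes one UTF-16 code unit as WTF-8. Surrogates get
      // the generalized 3-byte pattern: invalid UTF-8, but reversible.
      func wtf8EncodeUnit(u uint16) []byte {
      	if u >= 0xD800 && u <= 0xDFFF {
      		return []byte{0xE0 | byte(u>>12), 0x80 | byte(u>>6&0x3F), 0x80 | byte(u&0x3F)}
      	}
      	return []byte(string(rune(u)))
      }

      // wtf8DecodeToUTF16 reverses the encoding (1- to 3-byte forms only,
      // which is all this sketch produces).
      func wtf8DecodeToUTF16(b []byte) []uint16 {
      	var out []uint16
      	for i := 0; i < len(b); {
      		switch {
      		case b[i] < 0x80:
      			out = append(out, uint16(b[i]))
      			i++
      		case b[i]&0xE0 == 0xC0:
      			out = append(out, uint16(b[i]&0x1F)<<6|uint16(b[i+1]&0x3F))
      			i += 2
      		default: // 3-byte form, including generalized surrogates
      			out = append(out, uint16(b[i]&0x0F)<<12|uint16(b[i+1]&0x3F)<<6|uint16(b[i+2]&0x3F))
      			i += 3
      		}
      	}
      	return out
      }

      func main() {
      	hi := wtf8EncodeUnit(0xD83D) // lone high surrogate -> ed a0 bd
      	lo := wtf8EncodeUnit(0xDE00) // lone low surrogate  -> ed b8 80
      	// Byte-wise concatenation yields 6 bytes, not the 4-byte UTF-8 of
      	// U+1F600 that spec-conformant WTF-8 concatenation would produce,
      	// yet it still round-trips to the intended UTF-16 sequence.
      	units := wtf8DecodeToUTF16(append(hi, lo...))
      	fmt.Println(string(utf16.Decode(units))) // the emoji U+1F600
      }
      ```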