As a Turkish speaker who used a Turkish-locale setup in my teenage years, these kinds of bugs frustrated me infinitely. Half of the Java or Python apps I installed never ran. My PHP web servers always had problems with random software. Ultimately, I had to change my system's language to English. However, the US has godawful standards for everything: dates, measurement units, paper sizes.
When I shared computers with my parents, I had to switch languages back and forth all the time. This helped me learn English rather quickly, but I find it a huge accessibility and software design issue.
If your program depends on letter cases, that is a badly designed program, period. If a language ships a toUpper or toLower function without a mandatory language field, it is badly designed too. The only slightly better option is making toUpper and toLower ASCII-only and throwing an error for any other character set.
While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake. Yet everybody did it. The mere existence of this behavior is a reason I would like to get rid of anything GNU-based in the systems I develop today.
I don't care if Unicode releases a conversion map. Natural-language behavior should always require natural-language metadata too. Even modern languages like Rust did a crappy job of enforcing it: https://doc.rust-lang.org/std/primitive.char.html#method.to_... . Yes, it is significantly safer, but converting 'ß' to 'SS' in German definitely has gotchas too.
> While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake. Yet everybody did it. The mere existence of this behavior is a reason I would like to get rid of anything GNU-based in the systems I develop today.
POSIX requires that many functions account for the current locale. I'm not sure why you are blaming GNU for this.
C wasn't designed to run Facebook; it was designed so you didn't have to write assembly.
I'm not sure why you are blaming POSIX! The role of POSIX is to write down what is already common practice in almost all POSIX-like systems. It doesn't usually specify new behaviour.
> However, the US has godawful standards for everything: dates, measurement units, paper sizes.
Isn't the choice of language and date and unit formats normally independent?
There are OS-level settings for date and unit formats but not all software obeys that, instead falling back to using the default date/unit formats for the selected locale.
They’re about as independent as system language defaults causing software not to work properly. It’s that whole realm of “well we assumed that…” design error.
> > However, the US has godawful standards for everything: dates, measurement units, paper sizes.
> Isn't the choice of language and date and unit formats normally independent?
You would hope so, but no. Quite a bit of software ties the language setting to the locale setting. If you are lucky, they will provide an "English (UK)" option (which still uses miles, and FFS, WTF is a stone!).
On Windows you can kinda select the units easily. On Linux, let me introduce you to the journey through the LC_* environment variables: https://www.baeldung.com/linux/locale-environment-variables . That doesn't mean websites or apps will obey them, though; quite a few don't and just use LANGUAGE, LANG, or LC_CTYPE as their setting.
My company switched to Notion this year (I still miss Confluence). It was hell until last month, since they only had "English (US)" and used M/D/Y everywhere with no option to change!
9 replies →
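For JVM software specifically, the default locale is captured from the OS environment at startup; this is exactly how the LC_*/LANG settings above leak into Java and Kotlin programs. A minimal Kotlin sketch (run it with, say, LC_ALL=tr_TR.UTF-8 on Linux, or -Duser.language=tr on any platform, to watch the locale-sensitive variant change):

    import java.util.Locale

    fun main() {
        // The JVM reads the OS/environment locale once at startup.
        println(Locale.getDefault()) // e.g. tr_TR on a Turkish system

        // Invariant conversion: the same result on every machine.
        println("TITLE".lowercase()) // "title"

        // Default-locale conversion: "tıtle" under a Turkish locale.
        println("TITLE".lowercase(Locale.getDefault()))
    }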
If it's offered, choose EN-Australian or EN-international. Then you get sensible dates and measurement units.
I usually set the Ireland locale: they use English, but with civilized units. Sometimes there's also an "English (Europe)" or "English (Germany)" locale that works too.
2 replies →
And if you want it to be more sensible but still not sensible, pick EN-ca.
> While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake.
It wasn’t a mistake for local software that is supposed to automatically use the user’s locale. It’s what made a lot of local software usefully locale-sensitive without the developer having to put much effort into it, or even necessarily be aware of it. It’s the reason why setting the LC_* environment variables on Linux has any effect on most software.
The age of server software, and software talking to other systems, is what made that default less convenient.
On the contrary, the locale APIs are problematic for many reasons. If C had just said "C only supports the C locale; write your own support if you want more", much more software would have been less subtly broken.
There are a few fundamental problems with it:
1. The locale APIs weren't designed very well, and things were added over the years that do not play nice with them.
As an example, what should `int toupper(int c)` return? (By the way, the parameter `c` is really an unsigned char; if you try to pass anything but a single byte here, you get undefined behavior.) What if you're using a multibyte encoding? You only get one byte back, so that doesn't really help there either.
Many of the functions were clearly designed for the "1 character = 1 byte" world, which is a key assumption of all of these APIs. That's fine if you're working with ASCII, but it blows up as soon as you change locales.
And even then, it creates problems wherever you try to use it. Say I have a "shell" whose commands are internally stored in uppercase, and I want it to be locale-compatible. Once anything outside of ASCII is involved, I can't just store the command list in uppercase form, because it won't match when compared with the obvious function (strcmp). I have to use strcoll instead, and even then I might not get a match for multibyte encodings.
2. The locale is global state.
The worst part about it is that it's actually global state (not even faux-global state like errno). This means it's wildly thread-unsafe: thread 1 can be running toupper(x) while another thread, possibly in a completely different library, calls setlocale (as many library functions do, to guard against the semantics of standard library functions changing unexpectedly). And boom, instant undefined behavior, with basically nothing you can reasonably do about it. You'll probably get something out of it, but the pieces are probably going to display weirdly unless your users are from the US, where the C locale is pretty close to the US locale.
This means any of the functions in this list[1] is potentially a bomb:
> fprintf, isprint, iswdigit, localeconv, tolower, fscanf, ispunct, iswgraph, mblen, toupper, isalnum, isspace, iswlower, mbstowcs, towlower, isalpha, isupper, iswprint, mbtowc, towupper, isblank, iswalnum, iswpunct, setlocale, wcscoll, iscntrl, iswalpha, iswspace, strcoll, wcstod, isdigit, iswblank, iswupper, strerror, wcstombs, isgraph, iswcntrl, iswxdigit, strtod, wcsxfrm, islower, iswctype, isxdigit.
And there are some important ones in there too, like strerror. Searching GitHub as a random sample, it's not uncommon to see these functions used [2], and really, would you expect `isdigit` to be thread-unsafe?
It's a little better with POSIX, which defines a bunch of "_r" variants of functions like strerror that at least give some thread safety (and uselocale is at least a thread-local alternative to setlocale, which lets you safely do the whole "switch to the C locale around library calls" dance). But Windows doesn't support uselocale, so you have to use _configthreadlocale instead.
It also creates hard-to-trace bug reports. Saying you only support ASCII is not great today, but it's at least somewhat understandable, and is commonly seen as the lowest common denominator for software. Sure, ideally we'd all use byte strings where we don't care about text and UTF-8 where we do (and maybe UTF-16 on Windows for certain things), but that's just a feature that doesn't exist. Whereas memory corruption when you do something with a string, but only for people in a certain part of the world in certain circumstances, is not a great user experience, or developer experience for that matter.
The thing is, I actually like C in a lot of ways. It's a very useful programming language and has incredible importance today, and probably for the far future. But I don't really think the locale API was all that well designed.
[1]: Source: https://en.cppreference.com/w/c/locale/setlocale.html
[2]: https://github.com/search?q=strerror%28+language%3AC&type=co...
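The same global-state hazard exists on the JVM, by the way, where Locale.getDefault() is process-wide mutable state (though at least mutating it is defined behavior rather than UB). A Kotlin sketch of the failure mode described above:

    import java.util.Locale

    fun main() {
        // Some library, possibly on another thread, flips the process-wide
        // default locale -- the JVM analogue of a stray setlocale() call in C.
        Locale.setDefault(Locale.forLanguageTag("tr-TR"))

        // Any code relying on the default locale silently changes behaviour:
        // "INFO" no longer lowercases to "info".
        println("INFO".lowercase(Locale.getDefault())) // "ınfo"
    }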
use Australian English: English but with same settings for everything else, including keyboard layout
I live in Germany now, so I generally set it to Irish these days. Since I like the ISO-style Enter key, I use the UK keyboard layout (it's also easier to switch to Turkish than from an ANSI layout). However, many OSes now have an English (Europe) locale too.
Many Linux distributions provide en_DK specifically for this purpose. English as it is used in Denmark. :-)
5 replies →
Just use English. If you want to program, you need to learn it anyway to make sense of anything.
I'm not a native English speaker, btw. I learned it as I was learning programming, as a kid 20 years ago.
I thought the locale was mostly controlled by the environment, so you can run your system and each program with its own separate locale settings if you like.
I wish there was a single-letter universal locale with sane values, maybe call it U or E, with:
ISO (or RFC...) date and time,
UTF-8 default (maybe also an alternative with ISO 8859-1),
decimal point in numbers and _ for thousands,
metric paper / A4, ...,
Unicode neutral collation,
but which keeps US-English language.
> If a language ships a toUpper or toLower function without a mandatory language field, it is badly designed too. The only slightly better option is making toUpper and toLower ASCII-only and throwing an error for any other character set.
There is a deeper bug within Unicode.
The Turkish letter TURKISH CAPITAL LETTER DOTLESS I is represented as the code point U+0049, which is named LATIN CAPITAL LETTER I.
The Greek letter GREEK CAPITAL LETTER IOTA is represented as the code point U+0399, named... GREEK CAPITAL LETTER IOTA.
The relationship between the Greek letter I and the Roman letter I is identical in every way to the relationship between the Turkish letter dotless I and the Roman letter I. (Heck, the lowercase form is also dotless.) But lowercasing works on GREEK CAPITAL LETTER IOTA because it has a code point to call its own.
Should iota have its own code point? By Unicode's own principles, the answer is "no": it is, by definition, drawn identically to the ASCII I. But Unicode has never followed its principles. This crops up again and again, everywhere you look. (And, in "defense" of Unicode, it has several principles that directly contradict each other.)
Then people come to rely on behavior that only applies to certain buggy parts of Unicode, and get messed up by parts that don't share those particular bugs.
It’s not a bug, it’s a feature. The reason is that ISO 8859-7 [0], used for Greek, has a separate character code for iota (for all Greek letters, really), while ISO 8859-3 [1] and -9 [2], used for Turkish, do not have one for the dotless uppercase I.
One important goal of Unicode is to be able to convert from existing character sets to Unicode (and back) without having to know the language of the text that is being converted. If they had invented a separate code point for I in Turkish, then when converting text from those existing ISO character encodings, you’d have to know whether the text is Turkish or English or something else, to know which Unicode code point to map the source “I” into. That’s exactly what Unicode was designed to avoid.
[0] https://en.wikipedia.org/wiki/ISO/IEC_8859-7
[1] https://en.wikipedia.org/wiki/ISO/IEC_8859-3
[2] https://en.wikipedia.org/wiki/ISO/IEC_8859-9
1 reply →
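For concreteness, here is how the four Turkish I-forms land in Unicode: two code points are shared with ASCII and two are Turkish-specific, which is exactly why a case conversion must be told the language. A small Kotlin illustration:

    fun main() {
        // Shared with ASCII/English:
        println("U+%04X".format('I'.code)) // U+0049 LATIN CAPITAL LETTER I
        println("U+%04X".format('i'.code)) // U+0069 LATIN SMALL LETTER I
        // Turkish-specific code points:
        println("U+%04X".format('İ'.code)) // U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE
        println("U+%04X".format('ı'.code)) // U+0131 LATIN SMALL LETTER DOTLESS I
    }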
Interesting one. That, and relying on system character encodings, is a source of subtle bugs; I've been bitten by it many times, e.g. with XML parsing in the past. Modern Kotlin thankfully has very few (if any) places left where this can happen. Kotlin has parameters with default values, so anything that relies on a character encoding usually has an encoding parameter that defaults to UTF-8.
The bug here was in the default Java implementation that Kotlin uses on the JVM; on kotlin-js, both toLowerCase() and lowercase() do exactly the same thing. Also, the deprecation mechanism in Kotlin is kind of cool: the deprecated implementation is still there, and you can still use it with a compiler flag that downgrades the error. From the stdlib source:

    @Deprecated("Use lowercase() instead.", ReplaceWith("lowercase(Locale.getDefault())", "java.util.Locale"))
    @DeprecatedSinceKotlin(warningSince = "1.5", errorSince = "2.1")
    @kotlin.internal.InlineOnly
    public actual inline fun String.toLowerCase(): String = (this as java.lang.String).toLowerCase()

    /**
     * Returns a copy of this string converted to lower case using Unicode mapping rules of the invariant locale.
     *
     * This function supports one-to-many and many-to-one character mapping,
     * thus the length of the returned string can be different from the length of the original string.
     *
     * @sample samples.text.Strings.lowercase
     */
    @SinceKotlin("1.5")
    @kotlin.internal.InlineOnly
    public actual inline fun String.lowercase(): String = (this as java.lang.String).toLowerCase(Locale.ROOT)
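To see the difference the deprecation protects against, a sketch of a repro; it assumes a pre-2.1 Kotlin (or the compiler flag mentioned above), so the deprecated call still compiles:

    import java.util.Locale

    fun main() {
        Locale.setDefault(Locale.forLanguageTag("tr-TR")) // simulate a Turkish system

        @Suppress("DEPRECATION")
        val legacy = "INFO".toLowerCase() // default locale: "ınfo" -- dotless ı!
        val fixed = "INFO".lowercase()    // invariant rules: "info"

        println(legacy == "info") // false -- the compiler's lookup failed exactly like this
        println(fixed == "info")  // true
    }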
When I saw "Turkish alphabet bug", I just knew it was some version of toLower() gone horribly wrong.
(I'm sure there's a good reason, but I find it odd that compiler message tags are invariably uppercase, yet this problem code lowercased them to do a lookup in an enum of lowercase names. Why isn't the enum uppercase, like the things you're going to look up?)
With Turkish you can't safely case-fold with toupper() or tolower() in a C/US locale: i->I and I->i are both wrong. Uppercasing wouldn't work. You have to use Unicode or Latin-5 to manage it.
You misunderstood the parent post. They were suggesting looking up the exact string that ends up in the message, without any conversion. So if the message contains INFO, ERROR, etc., then look up "INFO", "ERROR"...
It's the bug in the Turkish locale: they hacked the Latin alphabet instead of creating a separate letter with separate rules.
Without looking at the source code, I think it's because the log functions are lowercase, but I'm not sure that's the reason.
> Why isn't the enum uppercase, like the things you're going to lookup?
Another question: why does the log record the string you intended to look up, instead of the string you actually did look up?
I am one of the maintainers of the Scala compiler, and this is one of the things that immediately jumps out at me when I review code that contains any casing operation: always explicitly specify the locale. However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.
Also, this is the last remaining major system-dependent default in Java. They made strict floating point the default in 17, and UTF-8 the default encoding one version later (18, via JEP 400); only the locale remains. I hope they make ROOT the default in an upcoming version.
FWIW, in the Scala.js implementation, we've been using UTF-8 and ROOT as the defaults forever.
I agree that Locale.ROOT is the canonical choice. But in this case, Locale.US also makes sense: it isn't some abstract "the US is the global default"; it is saying "we know we are upcasing an English word".
Wouldn't the British locale make more sense then?
> However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.
I have no idea what `Locale.ROOT` refers to, and I'd be worried that it's accidentally the same as the system locale or something, exactly the sort of thing that will unexpectedly change when a Turkish-speaker uses a computer or what have you.
> I'd be worried that it's accidentally the same as the system locale or something
The API docs clearly specify that Locale.ROOT “is regarded as the base locale of all locales, and is used as the language/country neutral locale for the locale sensitive operations.”
> However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.
Isn't it kind of strange to say that Locale.US is too US-centric, and therefore we'll invent a new, fictitious locale whose contents are all the US defaults, but which we'll call "the base locale of all locales"? That somehow seems even more US-centric to me than just saying Locale.US.
Setting the locale as Locale.US is at least comprehensible at a glance.
4 replies →
It is a programming-language-agnostic equivalent of the POSIX C locale, with Unicode enhancements.
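In code, the two differ only in stated intent; for ASCII identifiers they produce the same result, and the only dangerous choice is the implicit default. A Kotlin sketch:

    import java.util.Locale

    fun main() {
        val tag = "INFO"
        println(tag.lowercase(Locale.ROOT)) // "info" -- language-neutral base locale
        println(tag.lowercase(Locale.US))   // "info" -- same result, English-specific intent
        // The hazard is only ever the default locale:
        println(tag.lowercase(Locale.forLanguageTag("tr"))) // "ınfo"
    }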
FTA: “Less than a week later, they had a fix ready: (source: GitHub)
[…]
In September 2020, nearly a year after the coroutines bug had been fixed and forgotten
[…]
When they came to fix this issue, the Kotlin team weren’t leaving anything to chance. They scoured the entire compiler codebase for case-conversion operations—calls to capitalize(), decapitalize(), toLowerCase(), and toUpperCase()”
Bloody late, I would say. If something like this had happened in OpenBSD, I think they would have done that, and more (the article doesn't mention tooling to detect the introduction of new similar bugs, or adding warnings to documentation), at the first spotting of such a bug.
How come no reviewer mentioned such things when the first fix was reviewed?
Also, why are they using Locale.US, and not Locale.ROOT (https://docs.oracle.com/javase/8/docs/api/java/util/Locale.h...)?
Ugh, I've had the exact same problem in a Java project, which meant I had to go through thousands and thousands of lines of code and make sure that every 'toLowerCase()' on enum names included Locale.ENGLISH as a parameter.
As the article demonstrates, the error manifests in a completely inscrutable way. But once I saw the bug from a couple of users with Turkish sounding names, I zeroed in on it. And cursed a few times under my breath whoever messed up that character table so bad.
Were you not using static analysis tools? All of the popular ones will warn about that issue with locales.
They do. But a generic warning about locale-dependence doesn't really tell you that ASCII strings will be broken. For nearly every purpose, ASCII is the same in every locale. If you have a string that is guaranteed to be ASCII (like an enum constant is, in most code styles), it's easy to think "not a problem here" and move on.
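One robust pattern for the enum-lookup case is to normalize with an explicit invariant locale before the lookup. A sketch, with a hypothetical Severity enum standing in for the real code:

    import java.util.Locale

    enum class Severity { INFO, WARNING, ERROR }

    // Uppercase with an explicit invariant locale before calling valueOf,
    // so a Turkish default locale cannot break the lookup.
    fun parseSeverity(raw: String): Severity =
        Severity.valueOf(raw.uppercase(Locale.ROOT))

    fun main() {
        Locale.setDefault(Locale.forLanguageTag("tr-TR"))
        println(parseSeverity("info")) // INFO, even under a Turkish default locale
    }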
I knew from the headline that this would be the Turkish I thing, but I couldn't fathom why a compiler would care about case-folding. "I don't know Kotlin, but surely its syntax is case-sensitive like all the other commonly used languages nowadays?"
> The code is part of a class named CompilerOutputParser, and is responsible for reading XML files containing messages from the Kotlin compiler. Those files look something like this:
"Oh."
"... Seriously?"
As if I didn't hate XML enough already.
What do you propose for handling translation messages? How do you think they should map the compiler codes to human-readable messages?
> If capitalize() was an ambiguous name, what should its replacement be called? Can you think of a name that describes the function’s behaviour more clearly?
In C#, setting every letter to its uppercase form is ToUpper, and I think capitalise is perfectly reasonable for setting the first character. I'm not sure I've ever referred to uppercasing a string as capitalising it.
I have always wondered why Turkey chose to Latinize in this way. I understand that the issue is having two similar vowels in Turkish, but not why they decided to invent the dotless I when other diacritics already existed. Ĭ Î Ï Í Ì Į Ĩ and almost certainly a dozen others would've worked, unless there was already some significance to the dot in Turkish that's not obvious.
Computers and localisation weren't relevant back in the early 20th century. The dotless I existed before the dotted i (in Greek script, as iota). The European scholars who put an extra dot on the letter to make it stand out a bit more are as much to blame as the Turks for making the distinction between the different i-vowels clear.
Really, this bug is nothing but programmers failing to take into account that not everybody writes in English.
It's not exactly programmers failing to take into account that not everybody writes in English; if that were the case, then it would simply be impossible to represent the Turkish lowercase dotless and uppercase dotted I at all. The actual problem is failing to take into account that operations on text strings that work in one language's writing system might not work the same way in another's. There are a lot of languages in the world that use the Latin writing system, and even if you are personally a fluent speaker and writer of several of them, you might simply never have learned about Turkish's specific behavior with I.
> Really, this bug is nothing but programmers failing to take into account that not everybody writes in English.
This bug is the exact opposite of that. The program would have worked fine had it used pure ASCII transforms (±0x20); it was the use of library functions that did in fact take Turkish into account that caused the problem.
More broadly, this is not an easy issue to solve. If a Turkish programmer writes code, what is the expected behaviour for metaprogramming and compilers? Are the function names in English or Turkish? What about variables, object members, struct fields? You could have one variable name that references some government ID number using its native Turkish name, right next to another variable name that uses the English "ID". How does the compiler know what locale to use for which symbol?
Boiling all of this down to 'just be more considerate' is not actually constructive or actionable.
> that not everybody writes in English.
I don't know... I understand the history and the reasons for this capitalization behavior in Turkish, and my native language isn't English either; it needed a lot of strange encodings before the introduction of UTF-8.
But messing around with the capitalization of ASCII codepoints <= 127 is a risky business, in my opinion. These codepoints are explicitly named:
"LATIN CAPITAL LETTER I" "LATIN SMALL LETTER I"
and requiring them to not match exactly under uppercasing/lowercasing sounds very risky.
The issue is not the invention of the dotless I; it already existed. The issue is that they took a vowel pair, i/I, assigned the lowercase form to one vowel and the uppercase form to a different one, and invented what was left missing.
It's like they decided that the uppercase of "a" is "E" and the uppercase of "e" is "A".
This is misleading, because it assumes that i/I naturally represent one vowel, which is just not the case. i/I represents one vowel in _English_, when written with a Latin script. [Struck out by the author; see troad's comment for the correction: "In fact even this isn't correct; i/I represents one phoneme, not one vowel."]
There is no reason to assume that the English representation is in general "correct", "standard", or even "first". The modern script for Turkish was adopted around the 1920s, so you could argue perhaps that most typewriters presented a standard that should have been followed. However, there was variation even between different typewriters, and I strongly suspect that typewriters weren't common in Turkey when the change was made.
7 replies →
Nope, we decided to do it the correct and logical way for our alphabet. Some glyphs are either dotted or dotless. So we have Iı, İi, Oo, Öö, Uu, Üü, Cc, Çç, Ss, and Şş. You see, the Ii pair is actually the odd one out in the series.
Also, we don't have serifs on our I. It's just a straight line. So it's not even related to your Ii pair in English. You can't dictate how we write our straight lines, can you?
The root cause of the problem is in the implementation and standardization of computer systems. Computers were originally designed with only the English alphabet in mind, and were patched to support other languages over time, poorly. Computers should obey the language rules, not the other way around.
9 replies →
I don’t think that’s the right way to think about it. It’s not like they were Latinizing Turkish with ASCII in mind. They wanted a one-to-one mapping between letters and sounds. The dot versus no dot marks where in your mouth or throat the vowel is formed. They didn’t have this concept that capital I automatically pairs with lowercase i. The dot was always part of the letter itself. The reform wasn’t trying to fit existing Western conventions, it was trying to map the Turkish sounds to symbols.
1 reply →
Not really. Turkish has a feature called "vowel harmony": you match the suffixes you add to a word based on a category system of back vs front vowels, where a, ı, o, u are back and e, i, ö, ü are front.
Ö and ü were already borrowed from the German alphabet. The umlauted variants 'ö' and 'ü' have a similar effect on 'o' and 'u' respectively: they bring a back vowel to the front. See: https://en.wikipedia.org/wiki/Vowel . Similarly, removing the dots brings them back.
Turkish already had the i sound and its back variant, a schwa-like sound: https://en.wikipedia.org/wiki/Close_back_unrounded_vowel . It has the same relation in the IPA as 'ö' has to 'o' and 'ü' has to 'u'. Since the makers of the Turkish variant of the Latin alphabet had the rare chance to build a regular spelling system for the language as it stood, and since removing the dots had the effect of turning a front vowel into a back vowel, they simply copied this feature from ö and ü over to i:
Just remove the dots to make it a back vowel! Now we have ı.
When it comes to capitalization, ö becomes Ö and ü becomes Ü. So it is only logical to make the capital of i İ, and the lowercase of I ı.
4 replies →
There were actually three! i (as in th[i]s), î (as in ch[ee]se), and ı, which sounds nothing like the first two; it sounds something like the e in bag[e]l. I guess it sounded so different that it warranted such a drastic symbolic change.
Turkish has a vowel harmony system and uses diacritics on other vowels too, and the choice to group "i" with other front vowels like "ü" and "ö", and "ı" with back vowels like "u" and "o", is actually pretty elegant.
The Latinization reform of the Turkish language predates computers, and it was hard to foresee the woes that future generations would have with that choice.
Except for the a/e pair, front and back vowels have dotted and dotless versions in Turkish: ı and i, o and ö, u and ü.
In that case they should've used ï for consistency.
1 reply →
Makes sense enough, but why not use i and ï to be consistent?
5 replies →
It’s always Turkish, lol. That was our language of choice to QA anything; if it worked in Turkish, it would pretty much work in anything.
I'm shocked there's no mention of "The Turkey Test"
https://blog.codinghorror.com/whats-wrong-with-turkey/
I was scrolling and scrolling, waiting for the author to mention the new methods, which of course every Android dev had to migrate to at some point. And 99% of us probably thought how annoying this change was, even though it probably reduced the number of bugs for Turkish users :)
Unrelated, but a month ago I found a weird behaviour where, in a Kotlin scratch file, `List.isEmpty()` is always true. Questioned my sanity for at least an hour there... https://youtrack.jetbrains.com/issue/KTIJ-35551/
Well, now I wanna know what's going on there!
Could have been worse --
Ramazan Çalçoban sent his estranged wife Emine the text message:
Zaten sen sıkışınca konuyu değiştiriyorsun.
"Anyhow, whenever you can't answer an argument, you change the subject."
Unfortunately, what she thought he wrote was:
Zaten sen sikişınce konuyu değiştiriyorsun.
"Anyhow, whenever they are fucking you, you change the subject."
This led to a fight in which the woman was stabbed and died and the man committed suicide in prison.
https://gizmodo.com/a-cellphones-missing-dot-kills-two-peopl...
In C#, you can specify a culture every time you call a function that converts between numbers and strings or does case conversion. Or you specify the "invariant culture", which is basically US English. But the default culture is still based on your system's locale, so you need to explicitly name the invariant culture everywhere. Because that means filling in extra parameters on many different functions, people often leave it out, and then their code breaks on systems where "," is the decimal separator.
You can also change the default culture to the invariant culture and save all the headaches. Save the localized number conversion and such for situations where you actually need to interact with localized values.
The same is true for Java/Kotlin (in this case, at least). The problem is that there is a zero-parameter version that implicitly depends on global state, so you may end up with the bug unless you were already familiar with the issue at hand; I think the same applies to C#.
Though linters will routinely catch this particular issue FWIW.
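The JVM version of the same number trap, sketched in Kotlin; formatting and parsing are both locale-sensitive, so explicit locales matter well beyond case conversion:

    import java.text.NumberFormat
    import java.util.Locale

    fun main() {
        val n = 1234.5

        // Explicit invariant formatting: stable, machine-readable output.
        println(String.format(Locale.ROOT, "%.2f", n)) // "1234.50"

        // Default-locale formatting: fine for UI, wrong for file formats.
        println(NumberFormat.getInstance(Locale.GERMANY).format(n)) // "1.234,5"

        // Parsing "1234.50" under a comma-decimal locale misreads '.' as
        // a grouping separator and yields 123450.
        println(NumberFormat.getInstance(Locale.GERMANY).parse("1234.50"))
    }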
Wow, this is bad. Even for a language like Java, using vanilla strings as some sort of enum-like value, and then going further and downcasing them, is a 100% bug magnet waiting for the kaboom.
Wouldn't at least the first issue be solved by using Unicode case folding instead of lowercase? Python, for example, has separate .casefold() and .lower() methods, and AFAIK casefold would always turn I into i, and is much more appropriate for this use case.
Both .casefold() and .lower() in Python use the default Unicode casing algorithms. They're unicode-aware, but locale-naive. So .lower() also works for this purpose; the point of .casefold() is more about the intended semantics.
See also: https://stackoverflow.com/questions/19030948 where someone sought the locale-sensitive behaviour.
There are 3 types of case folding:
1. Simple one-to-one mappings -- e.g. `T` to `t`. These are typically the ones handled by `lower()` or similar methods; since they work on single characters, they can modify a string in place (the length of the string doesn't change).
2. More complex one-to-many mappings -- e.g. German `ß` to `ss`. These are covered by functions like `casefold()`. You can't modify the string in place, so the function always needs to write to a new string buffer.
3. Locale-specific mappings -- this is what this bug is about. In Turkish, `I` maps to `ı`, whereas in other languages/locales it maps to `i`. You can only implement this by passing the locale to the case function, irrespective of whether you are also doing (1) or (2).
This is not quite right, at least for Python. .upper() and .lower() (and .casefold() as well) implement the default casing algorithms from the Unicode specification, which are one-to-many (but still locale-naive). Other languages, meanwhile, might well implement locale-aware mapping that defaults to the system locale rather than requiring a locale to be passed.
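All three kinds of mapping can be seen from Kotlin on the JVM; a sketch:

    import java.util.Locale

    fun main() {
        // 1. Simple one-to-one mapping:
        println("T".lowercase()) // "t"

        // 2. One-to-many mapping: the result is longer than the input.
        println("ß".uppercase()) // "SS"

        // 3. Locale-specific mapping: same input, different output per locale.
        println("I".lowercase(Locale.ROOT))                 // "i"
        println("I".lowercase(Locale.forLanguageTag("tr"))) // "ı"
    }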
For a while when I made Minecraft mods, I had my test environment set to Turkish for this exact reason (there's some simple command-line parameter you can pass to the JVM). Half the other mods installed in this environment would have broken textures, but mine never did since I tested it.
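The switch in question is the standard pair of JVM system properties user.language/user.country; the in-process equivalent for a test harness looks something like the sketch below (the SKIN_ID constant is just a made-up example of a case-converted resource key):

    import java.util.Locale

    fun main() {
        // Command-line form: java -Duser.language=tr -Duser.country=TR -jar mod-test.jar
        // In-process equivalent for a test:
        Locale.setDefault(Locale.forLanguageTag("tr-TR"))

        // Any case-converted key now breaks if the code forgot the locale:
        println("SKIN_ID".lowercase(Locale.getDefault())) // "skın_ıd" -- not "skin_id"
    }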
Kotlin keywords should be assumed to be English.
Logging levels are not language keywords.
The implied locale of those logging levels is (US) English though. And so any recasing of them should be in that locale.
A stark reminder that all operations on strings are wrong.
And all code is operations on strings. (The code starts out as a string).
Or that strings are not human texts.
Kotlin is not for humans.
Everyone who has used Java has hit this before. Java really should force people to always specify the locale and get rid of the versions of the functions without locale parameters. There is so much hidden broken code out there.
That only helps if devs specify an invariant locale (ROOT for Java) where needed. In practice, I think you'll see devs blindly passing the user's current locale, like the no-argument version silently does today.
The invariant locale can't parse the numbers I enter (my locale uses a comma as the decimal separator). More than a few applications will reject perfectly valid numbers. Intel's driver control panel was even so fucked up that I needed to change my locale to make it parse its own UI layout files.
Defaulting to ROOT makes a lot of sense for internal constants, like in the example in this article, but defaulting to ROOT for everything just exposes the problems that caused Sun to use the user locale by default in the first place.
1 reply →
Tl;dr: toLowerCase is like converting a time to a string: for human display purposes only.
Java: write once, run anywhere, except on Turkish Windows.
Every programmer learns about Turkish 'i' the hard way, usually at 3 AM in production.