Comment by numpad0
8 days ago
The actual issue, according to another comment [0], is this[1]:
> Around iOS 17 (Sept. 2023) Apple updated their autocorrect to use a transformer model which should've been awesome and brought it closer to Gboard (Gboard is a privacy terror but honestly, worth it).
> What it actually did/failed to improve is make your phone keyboard:
> - Suck at suggesting accurate corrections to misspelled words
> - "Correct" misspelled words with an even worse misspelling
> - "Correct" your correctly spelled word with an incorrectly spelled word
Which makes me wonder: are transformer models any good at manipulating short texts, or texts containing errors, at all? It's fairly well known that open-weight LLMs don't perform well at CJK conversion tasks[2], and I've been disappointed by their general lack of typo tolerance myself. They're bad at translating ultra-short sentences and isolated words too[3]. They're great for vibecoding, though.
Which makes me think: are they usable for anything under 100 bytes at all? Do they have something like a minimum usable input entropy?
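One suspicion, offered as a sketch rather than a claim: subword tokenization itself may penalize typos, because a misspelling shatters into fragments the model has weak statistics for. A minimal demo, assuming the tiktoken package and its published cl100k_base encoding (the example words are mine):

    import tiktoken

    # Compare how a BPE tokenizer splits a correct spelling vs. a typo.
    enc = tiktoken.get_encoding("cl100k_base")

    for word in ["definitely", "definately", "rittoru"]:
        pieces = [enc.decode([tok]) for tok in enc.encode(word)]
        print(f"{word!r} -> {pieces}")

    # The misspelling tends to split into more, smaller pieces than the
    # correct word, so the model operates on fragments it rarely saw
    # together -- one plausible reason for poor typo tolerance.

If that's right, it would also explain why very short inputs are hard: a handful of low-frequency subword tokens carries almost no usable context.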
0: https://thismightnotmatter.com/a-little-website-i-made-for-a...
2: i.e., the Japanese input-method conversion step that yields "㍑" from typed "rittoru"
3: No human can correctly translate, e.g., "translate left" in isolation as "move left arm" either, but LLMs seem to be even more all over the place than humans are