
Comment by crooked-v

21 days ago

I suspect an issue at least as big is that they're running into a lot of prompt injection problems (even totally accidental ones) with their attempts at personal knowledge base/system awareness stuff, whether remotely processed or not. Existing LLMs are already bad at this even with controlled inputs; trying to incorporate broad personal files in a Spotlight-like manner is probably terribly unreliable.

This has been my experience as a pretty heavy speech-to-text user (voice keyboard): as they’ve introduced more AI features, I’ve started to get all sorts of nonsense from recent emails or contacts mixed into simple transcriptions.

It used to have no problem with simple phrases like “I’m walking home from the market”, but now it will just as often transcribe “I’m walking home from the Mark Betts”, presumably because Mark Betts is a name in my contacts, even though that sentence makes much less structural sense.
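
My guess at the mechanism, as a toy sketch only (this is not Apple’s actual pipeline; the contact name, scores, and boost value here are all made up): the recognizer seems to boost candidate transcriptions that contain names from your contacts, and an aggressive enough boost will drag a worse-fitting hypothesis past the right one.

```python
# Toy illustration of contextual biasing in ASR rescoring.
# The hypothesis scores and the boost value are invented for illustration;
# this is not how any particular vendor's system actually works.

CONTACT_NAMES = {"mark betts"}   # hypothetical contact list entry
CONTACT_BOOST = 4.0              # made-up bonus for matching a contact name

def rescore(hypotheses):
    """Pick the best hypothesis after adding a bonus for contact names."""
    def score(hyp):
        text, base_score = hyp
        has_contact = any(name in text.lower() for name in CONTACT_NAMES)
        return base_score + (CONTACT_BOOST if has_contact else 0.0)
    return max(hypotheses, key=score)

# "market" fits the sentence better (higher base score), but the contact
# boost drags the final choice toward the name anyway.
hypotheses = [
    ("I'm walking home from the market", -10.2),
    ("I'm walking home from the Mark Betts", -13.1),
]

print(rescore(hypotheses)[0])  # -> "I'm walking home from the Mark Betts"
```

With a boost that large, the contact-bearing hypothesis wins despite its worse base score, which is exactly the kind of error I keep seeing.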

It’s bad enough that I’m using the feature much less, because I have to spend as much time copyediting transcribed text before sending as I would have spent just typing it out by hand. I can turn off features like the notification summaries, which get things confused just as often, but the keyboard has no such control as far as I know.