Comment by AnimalMuppet
5 days ago
> Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and start burning your cloth when you expose your arm to it.
True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.
But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).
We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)
I think I'm in agreement with you. But regardless of expectations, the tool works a certain way. It's just a map of its training data, which is deeply flawed but immensely useful at the same time.
Also, in that analogy, the LLM is the fire, not the furnace. If you use codex, for example, that would be the furnace, and it does have good guardrails; no one seems to be complaining about those.