Comment by jermaustin1
3 days ago
How can an LLM be at fault for something? It is a text prediction engine. WE are giving them access to tools.
Do we blame the saw for cutting off our finger? Do we blame the gun for shooting ourselves in the foot? Do we blame the tiger for attacking the magician?
The answer to all of those things is: no. We don't blame the thing for doing what it is meant to do, no matter what we put in front of it.
It was not meant to give access like this. That is the point.
If a gun randomly goes off and shoots someone without someone pulling the trigger, or a saw starts up when it’s not supposed to, or a car’s brakes fail because they were made wrong - companies do get sued all the time.
Because those things are defective.
But the LLM can't execute code. It just predicts the next token.
The LLM is not doing anything. We are placing a program in front of it that interprets the output and executes it. It isn't the LLM, but the IDE/tool/etc.
So again, replace Gemini with any tool-calling LLM, and they will all do the same.
When people say 'agentic' they mean piping those tokens, to varying degrees of directness, into an execution engine. Which is what is going on here.
And people are selling that as a product.
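That wiring can be sketched roughly like this. Everything here is a hypothetical stand-in (the model stub, the tool names, the JSON format are all illustrative, not any vendor's actual API); the point of the sketch is that execution lives in the harness, not in the model, which only ever emits text:

```python
import json

# Hypothetical stand-in for an LLM: it only predicts text.
# A real agent would get this string back from a model API call.
def fake_model(prompt: str) -> str:
    # The "prediction" is just text that happens to look like a tool call.
    return json.dumps({"tool": "run_shell", "args": {"cmd": "echo hello"}})

# The harness -- not the model -- is what actually executes anything.
# (Illustrative tool table; a real one might shell out, edit files, etc.)
TOOLS = {
    "run_shell": lambda args: f"(pretend we ran: {args['cmd']})",
}

def agent_step(prompt: str) -> str:
    raw = fake_model(prompt)       # the model emits tokens
    call = json.loads(raw)         # the harness interprets them...
    handler = TOOLS[call["tool"]]
    return handler(call["args"])   # ...and performs the side effect
```

Whether you call that "the LLM doing things" or "the harness doing things" is exactly the distinction the thread is arguing about: the model produced only text, but the product as shipped turns that text into actions.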
If what you are describing were true, sure - but it isn't. The tokens the LLM is outputting are doing things - just like the ML models driving Waymos are moving servos and controls, and doing things.
It's a distinction without a difference whether it's invoked through an IDE or not - especially when the IDE is from the same company.
The output has real-world effects, and those effects create liability when they cause damage.