Comment by amluto
19 hours ago
Humans can do one thing that AI agents are 100% completely incapable of doing: being accountable for their actions.
19 hours ago
> Humans can do one thing that AI agents are 100% completely incapable of doing: being accountable for their actions.
You haven't met certain humans. Not all humans have internal capacity for accountability.
The real meaning of accountability is that you can fire one if you don't like how they work. Good news! You can fire an AI too.
Bad news! They will not be aware that you have done this and will not care.
The purpose of firing a person shouldn't be vengeance but to remove someone who is unreliable or not cost effective.
It's similarly reasonable to drop a tool that's unreliable, though I don't think that's a reasonable description here. Instead, they used a tool which is generally known to be unpredictable and failed to sandbox it adequately.
But it's still a bit more difficult to sue them for leaking your company's data.
At least for now.
Don’t forget learning: humans can learn, but LLMs do not. They are trained before use.
Do we? Or are we born with pre-training (all the crucial functions the brain does without us having to learn them) and a context window orders of magnitude larger than an LLM?
It is incredible how willing and eager AI boosters are to denigrate the incredible miracle of human consciousness to make their chatbots seem so special.
No, we are not born with all the pre-training we need. That is rather the point of education, teaching people's brains how to process information in new, maybe unintuitive ways.
They learn on the next update :p
That’s training, not learning.
Yup. And eventually there will be online learning that doesn't require a formal update step. People keep mistaking the current implementation for an inherent limitation.
What does that actually mean in practice? You can yell at a human if it makes you feel better, sure, but you can do that with an AI agent too, and it's approximately as productive.
I disagree. They could fire Claude and their legal counsel could pursue claims (if there were any, idk); the accountability model is similar. Anthropic probably promised no particular outcome, but then what employee does?
And in the reverse, if a person makes a series of impulsive, damaging decisions, they probably will not be able to accurately explain why they did it, because neither the brain nor physiology is tuned to permit it.
Seems pretty much the same to me.
> They could fire Claude and their legal counsel could pursue claims (if there were any, idk)-- the accountability model is similar.
What do you mean by fire? And how is the accountability similar to an employee?
That’s a feature that other humans impose on whoever’s being held accountable. There’s no reason in principle we couldn’t do the same with agents.
How would you fire an agent? This impacts the company that makes the LLM, but not the agent itself.
Yep.