Comment by Swinx43
2 months ago
The writing perpetuates the anthropomorphising of these agents. If you view the agent as simply a program that is given a goal to achieve and tools to achieve it with, without any higher order “thought” or “thinking”, then you realise it is simply doing what it is “programmed” to do. No magic, just a drone fixed on an outcome.
Just as an analogy to humans fails to capture how an LLM works, so does the analogy of being "programmed".
Being "programmed" is being given a set of instructions.
This ignores explicit instructions.
It may not be magic, but it is still surprising, uncontrollable, and risky. We don't need to be doomsayers, but let's not downplay our uncertainty.
How is it different from our genes that "program" us to procreate successfully?
Can you name a single thing that you enjoy doing that's outside your genetic code?
> If you view the human being as simply a program that is given a goal to achieve and tools to achieve it with, without any higher order “thought” or “thinking”, then you realise they are simply doing what they are genetically “programmed” to do.
FTFY
I think the narrative of "AI is just a tool" is much more harmful than the anthropomorphism of AI.
Yes, AI is a tool. So are guns. So are nukes. Many tools are easy to misuse. Most tools are inherently dangerous.
I don’t quite follow. Just because a tool has the potential for misuse doesn’t make it not a tool.
Anthropomorphizing LLMs, on the other hand, causes a multitude of clearly evident problems.
Or are you focusing on the “just” part of the statement? With that I very much agree. Genuinely asking to understand; I'm not a native speaker.
When you have "a tool" that's capable of carrying out complex long term tasks, and also capable of who knows out what undesirable behaviors?
It's no longer "just a tool".
The more powerful a tool is, the more dangerous it is, as a rule. And intelligence is extremely powerful.