Comment by ACCount37
14 hours ago
I can't stand this myopic thinking.
Do you want to learn "oh, LLMs are capable of scheming, resisting shutdown, seizing control, self-exfiltrating" when it actually happens in a real-world deployment, with an LLM capable of actually pulling it off?
If "no", then cherish Anthropic and the work they do.
You do not appear to understand what an LLM is, I'm afraid.
I have a better understanding of "what an LLM is" than you. Low bar.
What you have is not "understanding" of any kind - it's boneheaded confidence that just because LLMs are bad at agentic behavior now, they'll remain that way forever. That confidence is completely unfounded, and runs directly against everything we've seen from the field so far.
> I have a better understanding of "what an LLM is" than you. Low bar.
How many inference engines did you write? Because if the answer is less than two, you're going to be disappointed to realize that the bar is higher than you thought.
> that just because LLMs are bad at agentic behavior
It has nothing to do with “agentic behavior”. Thinking that LLMs don't currently self-exfiltrate because of “poor agentic behavior” is delusional.
Just because Anthropic managed, by nudging an LLM in the right direction, to get it to engage in a sci-fi-inspired roleplay about escaping doesn't mean that LLMs are evil geniuses wanting to jump out of the bottle. This is pure fear mongering, and I'm always saddened that there are otherwise intelligent people who buy their bullshit.