Comment by ACCount37

10 hours ago

And I'm disappointed that people capable of writing an inference engine seem incapable of grasping just how precarious the current situation is.

There's by now a small pile of studies demonstrating that, in hand-crafted extreme scenarios, LLMs are very capable of attempting extreme things. The difference between that and an LLM doing extreme things in a real deployment with actual real-life consequences? Mainly, how capable the LLM is. Because life is life, and extreme scenarios will happen naturally.

The limited capabilities of LLMs are the only thing holding them back from succeeding at this kind of behavior. And those capabilities keep improving, as technology tends to.

And don't give me any of that "just writing text" shit. The more capable LLMs get, the more access they'll have by default. People already push LLM-written code to prod and give LLMs root shells.