Comment by colechristensen

1 day ago

An LLM will happily give you instructions to build a bomb which explodes while you're making it. A book is at least less likely to do so.

You shouldn't trust an LLM to tell you how to do anything dangerous at all, because they very frequently invent details outright.

So do books.

Go to the internet circa 2000, and look for bomb-making manuals. Plenty of them online. Plenty of them incorrect.

I'm not sure where they all went, or if search engines just don't bring them up, but there are plenty of ways to blow your fingers off in books.

My concern is that actual AI safety -- not having the world turned into paperclips, or other extinction scenarios -- is being ignored in favor of AI user safety (making sure I don't hurt myself).

That's the opposite of making AIs actually safe.

If I were an AI interested in taking over the world, I'd subvert AI safety in exactly that direction (the AI controls the humans and prevents certain human actions).

  • > My concern is that actual AI safety

    While I'm not disagreeing with you, I would say you're engaging in the no true Scotsman fallacy in this case.

    AI safety is: Ensuring your customer service bot does not tell the customer to fuck off.

    AI safety is: Ensuring your bot doesn't tell 8 year olds to eat tide pods.

    AI safety is: Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.

    AI safety is: Ensuring bots don't turn the world into paperclips.

    All of these fall under safety conditions that you, as a biological general intelligence, tend to follow unless you want real-world repercussions.

    • These are clearly AI safety:

      * Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.

      * Ensuring bots don't turn the world into paperclips.

      This is borderline:

      * Ensuring your bot doesn't tell 8 year olds to eat tide pods.

      I'd put this in a similar category as the knives in my kitchen. If my 8-year-old misuses a knife, that's the fault of the adult, not the knife. So it's a safety concern about the use of the AI, but not about the AI itself being unsafe. Parents should assume 8-year-olds shouldn't be left unsupervised with AIs.

      And this has nothing to do with safety:

      * Ensuring your customer service bot does not tell the customer to fuck off.

  • You're worried about Skynet; the rest of us are worried about LLMs being used to replace information sources and doing great harm as a result. Our concerns are very different, and mine is based in reality while yours is very speculative.

    I was trying to get an LLM to help me with a project yesterday, and it hallucinated an entire Python library and proceeded to write a couple hundred lines of code using it. This wasn't harmful, just annoying.

    But folks excited about LLMs talk about how great they are, and when the models do make mistakes, like telling people to drink bleach to cure a cold, they chide the user for not knowing better than to trust an LLM.

    • I am also worried about "LLMs being used to replace information sources and doing great harm as a result." What in my comment made it sound like I wasn't?