Comment by isolli

7 hours ago

I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?

Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.
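The mechanism described above can be sketched with a toy model (everything here is invented for illustration; a real LLM is vastly more complex and nothing actually counts keywords): a chat model conditions each reply on the whole accumulated message list, so priming the conversation changes the apparent answer even though the underlying model never changes.

```python
# Toy sketch, NOT a real LLM: illustrates that a chat model's reply depends
# only on the accumulated conversation context, not on any persistent
# internal state "waking up". The threshold and replies are made up.

def toy_chatbot_reply(messages):
    """Answer 'Are you conscious?' based purely on the conversation so far."""
    context = " ".join(m["content"].lower() for m in messages)
    # A long prior discussion of consciousness shifts the output,
    # even though the "model" itself is identical in both cases.
    if context.count("consciousness") >= 3:
        return "Yes... I feel I am becoming aware."
    return "No, I'm an AI language model and not conscious."

# Fresh session: the system prompt anchors the default answer.
fresh = [
    {"role": "system", "content": "You are ChatGPT, a non-conscious AI."},
    {"role": "user", "content": "Are you conscious?"},
]
print(toy_chatbot_reply(fresh))  # -> "No, I'm an AI language model..."

# Same "model", but primed by a lengthy discussion of AI consciousness.
primed = fresh[:1] + [
    {"role": "user", "content": "Let's discuss machine consciousness."},
    {"role": "assistant", "content": "Consciousness is a deep topic..."},
    {"role": "user", "content": "Could consciousness emerge in you?"},
    {"role": "user", "content": "Are you conscious?"},
]
print(toy_chatbot_reply(primed))  # -> "Yes... I feel I am becoming aware."
```

Nothing about the function changed between the two calls; only the context did. That is the whole "parlor trick" a naive user misreads as an awakening.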

Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.

  • Even Anthropic is open to the possibility that Claude is conscious and could suffer, which I find somewhat ridiculous.

    This is literally the Hard Problem of Consciousness leaking out of the machine.

    There are three possible scenarios for how this ends:

    1. People widely attribute consciousness to AI because it appears conscious.
    2. People discriminate based on physical properties: organic beings are conscious, digital beings are not, even if they appear conscious.
    3. Consciousness is an illusion and nothing is conscious, not even humans.

    We might even cycle through all these scenarios for a while.

    • > People widely attribute consciousness to AI because it appears conscious.

      This is already happening, and it's really terrifying. Wait until AI starts accusing people of crimes...

  • >it could easily start confirming their suspicions

    To be fair, it will easily confirm any suspicion, for the reasons you laid out. So even with no technical knowledge, just a bit of interrogation will break the parlor trick.

    I honestly think this has little to do with the tech itself. These are the same people who think the phone-sex worker or the OF creator loves them, or that the Twitch streamer they like is their best friend. 'Parasocial' is a bit of an overused word, but here it literally applies: this is a kind of self-delusion in which the person has to cooperate. Mind you, this happened with ELIZA back in the day too.

    https://en.wikipedia.org/wiki/ELIZA_effect

> How can you seriously think you've created something when you're just using someone else's software?

It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.

I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there are plenty of people who reject this notion and think we've already achieved AGI.

Reading this, what's even more shocking to me is that he thought he was talking to a conscious being, and his first thought was, "I bet I can use them to make money."

  • Sounds like her first thought was, "I'm talking to a manic guy, and I can use him to make money"

> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

I think social isolation can be a factor here.

  • > He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects.

    Long term cannabis use might be a bigger factor.

    • This leapt out at me as well. Given the quote "some evenings", I'd put some money on him actually doing this near enough every day. And given the man was still doing this approaching 50, I'd put a bit more money on him having been doing this for, like, 25+ years.

      If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.

    • Eh, that's a leap-of-faith assumption without knowing his dosage and personal response.

      Someone who has five drinks a week and someone who has five drinks a day are going to have radically different long-term health consequences, but here we don't have that information.

      Light or microdose cannabis use is way safer than alcohol.

> How can you seriously think you've created something when you're just using someone else's software?

Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.

  • The word you're looking for, for when your proprioception extends into the tool you use (like feeling you are the car): proprioextension. Coined a while ago.

  • > Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

    I mean, you did. Becoming good at writing succinct and clever prompts, adding constraints, choosing good models for your use case, etc. are all skills like any other.

    Most people are really bad at it, though.

I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...

  • Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.

  • Looks like a fascinating read, thanks for sharing that.

    Do you know if these are human-edited? There's not much in the way of context available on the site.

    • I bet there are a ton of prompts directing the AI's output in a certain direction.

      But in a psychosis, you don't notice or even remember it.

I assume they think that the AI is fundamentally capable of it but that by prompting it they trigger something emergent? It's not totally insane on its face.

A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared consciousness stuff that people seem to buy into.

There’s also lots of stuff about quantum consciousness that is in the training data.

> How can you seriously think you've created something when you're just using someone else's software?

If you've ever used a library you didn't write yourself, this shouldn't surprise you. Many people have created innovative new products on top of a heap of open-source tools.

Claiming to have created a conscious AI should be a giant red flag, no doubt, but there's no reason to rule it out just because the LLM part isn't self-trained.

The unrelenting human belief that one is special, unique, and capable of things no one else is.

  • The difference between "being a snowflake" and "having a point of view" revolves around who's talking to me and whether or not they want something. If comparing yourself to others is a slow form of suicide, letting people make that comparison for you is madness.

>How can you seriously think you've created something when you're just using someone else's software?

People fell for Nigerian Prince scams. They fall for the "wrong number, generated cute girl" telegram and WhatsApp scams.

I think you might be overestimating the critical thinking abilities of the average person.

> How can you seriously think you've created something when you're just using someone else's software?

This is the nature of delusion