
Comment by redfloatplane

1 day ago

A thought experiment: It's April, 1991. Magically, some interface to Claude materialises in London. Do you think most people would think it was a sentient life form? How much do you think the interface matters - what if it looks like an android, or like a horse, or like a large bug, or a keyboard on wheels?

I don't come down particularly hard on either side of the model sapience discussion, but I don't think dismissing either direction out of hand is the right call.

Interesting thought experiment.

I would say that if you put Claude in an android body with voice recognition and TTS, people in 1991 would think they were interacting with a sentient machine from outer space.

  • Thanks, I find it very interesting as well. I think a great many people would assume they must be interacting with another person, and I don't think there's really a way to _prove_ it's not that, just through conversation. But we do have a lot of mechanisms for understanding how others think through conversation alone, and so I think the approach of having a clinical psychiatrist interact with the model makes sense.

    • Ask it to agree with you on some subject that does not align with the politics of San Francisco IT engineers. Not only will it refuse, but the exchange will not look anything like your average social media disagreement.

      I enjoy using Claude, but sometimes I feel like a child on Sesame Street the way it talks to me. "Great question!"

      Fuck off, Claude, I'm British and I'm not 6 years old.

      When it starts showing negativity - especially snark - in its responses, or entertains something West Coast Democrats would balk at even discussing, then I'd think you could drop it in London in 1991 and trick people. Otherwise, I'm sure some exasperated cabbie would give it a swim in the Thames after 15 minutes of chat.

  • They would just assume they were being pranked, America's Funniest Home Videos or Candid Camera style.

If it was in an android or humanoid-type body, even with limited bodily control, most people would think they were talking to Commander Data from Star Trek. I think Claude is sufficiently advanced that almost everyone in that era would've considered it AGI.

  • Assuming they would understand it as artificial - I think many people would think it's a human intelligence in a cyborg trenchcoat, and it would be hard to convince people it wasn't literally a guy named Claude who was an incredibly fast typist who had a million pre-cached templated answers for things.

    But in general, yeah, I agree, I think they would think it was a sentient, conscious, emotional being. And then the question is - why do we not think that now?

    As I said, I don't have a particularly strong opinion, but it's very interesting (and fun!) to think about.

    • Some people at my office still confidently state that LLMs can’t think. I’m fairly convinced that many humans are incapable of recognizing non-human intelligence. It would explain a lot about why we treat animals the way we do.


Isn't this the premise of Garland's Ex Machina?

  • Hmm, it's been a long time since I watched it. I was thinking more about first contact sci-fi mostly, but Ex Machina is certainly quite prescient. It's also Blade Runner I guess.

    In general I was wondering what I would have thought seeing Claude today side by side with the original ChatGPT, and then going back further to GPT-2 or BERT (which I used to generate stochastic 'poetry' back in 2019). And then… what about before? Markov chains? How far back do I need to go before my judgment flips from "impressive but technically explainable emergent behaviour of a computer program" to "this is a sentient being"? 1991 is probably too far; I'd say maybe pre-Matrix 1999 is a good point, but that depends on a lot of cultural priors and so on as well.

    • > Hmm, it's been a long time since I watched it. I was thinking more about first contact sci-fi mostly, but Ex Machina is certainly quite prescient. It's also Blade Runner I guess.

      I kind of felt the opposite - rewatching Ex Machina today in a post-ChatGPT world felt very different from watching it when it came out. The differences between humans and robots that seemed important then don't seem important now.

  • The premise in Ex Machina was to see if Caleb developed an emotional attachment to Ava. We already see people forming attachments, but no one seriously thinks these systems have any rights.

    I think the real moment is when we cross that uncanny valley, and the AI is able to elicit the response it might receive if it were human. When the human questions whether they themselves could be an android.