Comment by nkohari

6 months ago

That topic (ship's computer vs. Data) is actually discussed at length in-universe during The Measure of a Man. [0] The court posits that the three requirements for sentient life are intelligence, self-awareness, and consciousness. Data is intelligent and self-aware, but there is no good measure for consciousness.

[0] https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...

Using science fiction as a basis for philosophy isn't wise, especially TNG which has a very obvious flavor of "optimistic human exceptionalism" (contrast with DS9, where I think Eddington even makes this point).

Doesn't ChatGPT fulfill these criteria too?

  • Again, there's no real measure for consciousness, so it's difficult to say. If you ask me, frontier models meet the definition of intelligence, but not the definition of self-awareness, so they aren't sentient regardless of whether they are conscious. This is a pretty fundamental philosophical question that's been considered for centuries, outside of the context of AI.

    • ChatGPT knows about the ChatGPT persona, much like I know the persona I play in society and at home. I don't know what the "core" me is like at all. I don't have access to it. It seems like a void. A weird eye. No character, no opinions.

      The persona, though, I know very well.


  • It's not self-aware, regardless of what it tells you (see the original link).

    • I'm not sure what you're referring to in the original link; could you please paste an excerpt?

      But thinking about it - how about this: what if you had a fully embodied LLM-based robot, using something like Figure's Helix architecture [0] with a Vision-Language-Action model, and then had it look in a mirror and recognize itself - would that on its own not be sufficient for self-awareness?

      [0] https://www.figure.ai/news/helix