That was basically my first ever question to ChatGPT. Unfortunately, given that current models are guessing at the next most probable word, they're always going to hew to the most standard responses.
It would be neat to find an inversion of that.
Of course! But maybe there is something that you have to experience before you can understand it.
Sure! But if I experience it, and then write about my experience, parts of it become available for LLMs to learn from. Beyond that, even the tacit aspects of that experience, the things that can't be put down in writing, will still leave an imprint on anything I do and write from that point on. Those patterns may be more or less subtle, but they are there, and could be picked up at scale.
I believe LLM training is happening at a scale great enough for models to start picking up on those patterns. Whether or not this can ever be equivalent to living through the experience personally, or at least asymptotically approach it, I don't know. At the limit, this is basically asking about the nature of qualia. What I do believe is that continued development of LLMs and similar general-purpose AI systems will shed a lot of light on this topic, and eventually help answer many of the long-standing questions about the nature of conscious experience.
> will shed a lot of light on this topic, and eventually help answer
I dunno. I figure it's more likely we keep emulating behaviors without actually gaining any insight into the relevant philosophical questions. I mean, what has learning that a supposed stochastic parrot can interact at the skill levels presently on display actually taught us about any of the abstract questions?
Whereof one cannot speak, thereof one must remain silent.
The things that people "don't write down" do indeed get written down. The darkest, scariest, scummiest crap we think, say, and do is captured in "fiction"... thing is, most authors write what they know.