Comment by captainbland
16 hours ago
While I'm sure that subconsciously influenced what I wrote, it was more a general jab at the sentiment that negative externalities can always be justified so long as a technology has users who prefer to use it.
Ah, I thought you were just referring to the decades-long use of the most massive supercomputers to simulate nuclear arsenal maintenance and explosions (maybe literally at the molecular/atomic/sub-atomic level).
Yeah. Did you see the article about the brain organoid (actual brain neurons on a chip) they made play DOOM? What are those neurons experiencing?
> What are those neurons experiencing?
A reasonable assumption is that a few neurons probably don't have consciousness, so they can't really experience anything.
It's an interesting question as to what that level is likely to be, though. The chip in question apparently has around 800,000 neurons (https://www.forbes.com/sites/johnkoetsier/2025/06/04/hardwar...), so it's not a trivial quantity: significantly more complex than most insects' forebrains, but still less complex than any mammal's.
I think once they're able to put 15 million such neurons on a single device, that puts them in the range of more relatable animals like mice and Syrian hamsters, and I expect that relatability is also what will drive most opinions about consciousness.
> a few neurons probably don't have consciousness
Given our piss poor understanding of consciousness, I have to ask: on what grounds do you make this claim?
> What are those neurons experiencing?
Doom. (Obviously.)
I hadn't until you mentioned it, but now I have! I expect one day they'll train a language model on one, and then we can just ask it, assuming they don't give it a special rule about never describing its experiences.
The language model's output would be informed by its weights, not by its experiences as wetware. Substrate does not make a computation special: that's the whole point of the Chinese Room thought experiment.
What mechanism are you imagining that would allow an LLM built of neurons to describe what it's like to be made of neurons, when an LLM built of GPUs cannot describe what it's like to be organised sand? The LLM in the GPU cluster is evaluated by performing the same calculations that could be performed by intricate clockwork, or very very slowly by generations of monks using pencil and paper. Just as the monks have thoughts and feelings, it is conceivable (though perhaps impossible) that the brain tissue implementing an LLM has conscious experience; but if so, that experience would not be reflected in the LLM's output.