Comment by applfanboysbgon
15 hours ago
> AI is stochastic, not static and deterministic.
LLMs are deterministic at their core. Run the same input through the same weights on the same hardware (with deterministic kernels) and you get the same output distribution every time. LLM providers deliberately inject randomness at the sampling step, drawing each next token from that distribution with a fresh seed, because varied responses are more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but the randomness is not an inherent property of the software.
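To make that concrete, here's a toy sketch (the logits, vocabulary, and seed are invented for illustration, not taken from any real model): the model's forward pass yields a fixed distribution over next tokens; greedy decoding is fully deterministic, and even sampled decoding is reproducible once the seed is pinned.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution over tokens."""
    z = np.array(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy "next-token" logits a model might emit for some prompt.
logits = [2.0, 1.0, 0.5, -1.0]
vocab = ["the", "a", "one", "zebra"]

# Greedy decoding: no randomness at all -- same input, same token, every time.
greedy = vocab[int(np.argmax(logits))]

# Sampled decoding: stochastic, but fully reproducible once the seed is fixed.
rng = np.random.default_rng(seed=42)
sampled = vocab[rng.choice(len(vocab), p=softmax(logits))]
```

Re-running the sampled branch with the same seed yields the same token; the "nondeterminism" users see is just a seed nobody pinned.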
> The same argument is made about the human neural network
1. That is not the claim you originally made.
2. Not provably so.
3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field specifically skips anything resembling the programming of a real intelligence and instead engages in superficial, statistics-based mimicry of intelligent output.
Trivially simple programs (rule sets) can give rise to wildly complex systems.
Fractals, the Game of Life, the emergent abilities of highly-scaled generative pre-trained transformers.
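The Game of Life point is easy to demonstrate (a minimal sketch; the grid coordinates and starting pattern are just the textbook glider): the entire rule set is one line of logic, yet it produces a pattern that propels itself across the grid.

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire rule set: a cell is alive next step if it has 3 live
    # neighbours, or 2 live neighbours and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider: after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
```

Three rules, a dozen lines, and the system exhibits motion that nothing in the rules mentions.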
Consciousness appears to be an emergent property of (relatively) simple matter.
70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.