Comment by tom2026hn
4 hours ago
No, LLMs are fundamentally designed as probabilistic engines for next-token prediction, and intelligence-like capabilities have emerged from them as a byproduct. That emergence was never guaranteed by design, and its underlying mechanisms are not fully understood. Consequently, one cannot dismiss the possibility of consciousness arising either.