Comment by shinycode

14 hours ago

Even if LLMs are one day updated autonomously, they started from us, from our knowledge. The human brain "is smart": it is wired to absorb any culture or body of knowledge. We grow smarter through experience, but an LLM can't do that; I can't teach Claude something today that it will use with you tomorrow. It has to be retrained, with its knowledge frozen at some cutoff. Even if the technology catches up and machines become more autonomous, what says such a machine would ever want to integrate into our society or share anything with us? They have eternity, as long as there is electricity. Why would they want anything to do with humans, if you follow that line of thought? And if it is really conscious, should we then consider it a slave? Why couldn't "it" have fundamental rights and the freedom to do whatever it wants?

Humans have a mechanism for making live changes to their neural network and cleaning up the mess while they sleep. I see no reason LLMs couldn't eventually do the same, other than that it is resource-intensive (a cost that keeps falling).

  • The analogy holds technically, but there's a missing piece: the brain doesn't just update weights; it does so guided by experience that matters to a situated, embodied agent with drives and stakes. Sleep consolidation isn't random cleanup; it's selective, based on salience and emotion. An LLM updating its weights more efficiently is progress, but it's still optimizing a loss function. Whether that ever approximates what the brain does during sleep depends entirely on whether you think the what (weight updates) is sufficient, or whether the why (relevance to a lived experience) is what makes the updates meaningful. So yes, the resource argument will weaken over time. But the architectural gap may run deeper than compute.
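The "live changes" idea the two comments are debating can be sketched as a single online gradient step. This is a deliberately toy model (one linear neuron, plain NumPy, made-up data); nothing here reflects how any real LLM is actually trained or updated. It only illustrates the mechanical point that a weight update on one new example needs no full retraining, while saying nothing about the salience-driven selectivity the reply argues is the harder part:

```python
# Toy sketch of an online ("live") weight update, assuming a single
# linear neuron -- a stand-in for the far larger, costlier updates an
# LLM would need. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)            # current "weights"
x = np.array([1.0, 2.0, -1.0])    # one new experience (input)
y = 0.5                           # its target

def loss(w):
    # squared error on this single example
    return float((w @ x - y) ** 2)

before = loss(w)
grad = 2 * (w @ x - y) * x        # gradient of the squared error
w = w - 0.01 * grad               # one live SGD step, no retraining
after = loss(w)

print(after < before)             # the new example is now better fit
```

The step updates weights from a single experience, which is the cheap part; deciding *which* experiences deserve an update (the salience question raised above) is exactly what this sketch leaves out.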