Comment by lolive

4 days ago

AI frenzy almost convinced me that sleep was the training of our neural network with all the prompts of the day.

And now this /o\

That's what I still 'believe'. The wake-sleep algorithm [1] is a good starting point for speculation. I think the brain needs to be in a different mode to reorganize its weights and to forget unnecessary things, preventing overfitting, and in this mode we happen to be unconscious. I also believe dreams are just hallucinations caused by random noise input to the system: the brain converts a noise distribution into a meaningful distribution and samples from that. I have zero evidence, btw, but I believe these are related.

[1] https://en.wikipedia.org/wiki/Wake-sleep_algorithm
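
The wake/sleep split can be made concrete with a toy sketch (my illustration, not from the comment): a tiny one-layer Helmholtz machine with a recognition network (data to hidden) and a generative network (hidden to data), alternating the two phases the algorithm is named for. All sizes and the data are made up.

```python
# Toy wake-sleep sketch: recognition weights R infer hidden causes,
# generative weights G reconstruct data; "dreams" train the recognizer.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Sample binary units from their firing probabilities.
    return (rng.random(p.shape) < p).astype(float)

R = rng.normal(0, 0.1, (n_visible, n_hidden))  # recognition: visible -> hidden
G = rng.normal(0, 0.1, (n_hidden, n_visible))  # generative: hidden -> visible
g_bias = np.zeros(n_hidden)                    # generative prior over hidden units

data = sample(np.full((20, n_visible), 0.5))   # toy binary "experiences"

for epoch in range(100):
    for v in data:
        # Wake phase: recognize the input, then train the generative
        # model to reconstruct it from the recognized hidden state.
        h = sample(sigmoid(v @ R))
        G += lr * np.outer(h, v - sigmoid(h @ G))
        g_bias += lr * (h - sigmoid(g_bias))

        # Sleep phase ("dreaming"): fantasize from the generative model,
        # then train the recognition model to infer the fantasy's cause.
        h_dream = sample(sigmoid(g_bias))
        v_dream = sample(sigmoid(h_dream @ G))
        R += lr * np.outer(v_dream, h_dream - sigmoid(v_dream @ R))
```

The sleep phase is the part that maps onto the commenter's "noise in, meaningful samples out" idea: the network samples from its own generative model rather than from the world.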

  • When we don’t sleep, we can lose sensory and cognitive coherence. Mild visual hallucinations begin and reality can start slipping.

    Sleep itself is characterized by coherent neural activity: large numbers of brain regions with synchronized firing. Slow waves, where huge numbers of neurons all fire close together in a rhythm, show up as low-frequency, high-amplitude delta brainwaves (1-2 Hz).

    Complex adaptive brain activity requires more complex firing than a simple rhythmic frequency. So, in a way, the complex activity must be stopped in order to support global synchrony.

    Why would our neurons want to all fire synchronously? Well, it is healthy for neurons to fire together in a causal manner: that is when they release growth hormones. That neural growth during synchronized firing is the basis of “neurons that fire together wire together.” And it doesn’t seem coincidental that a successfully predictive model feels good, as when you successfully throw a ball into a basket. Neurons are trying to predict other neurons’ firing and respond to it. If they are unable to do so effectively, they may go the way of the roughly one third of our infant cortical neurons that are pruned and die.
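
    "Fire together wire together" is the Hebbian learning rule, and the wiring effect is easy to see in a minimal sketch (my illustration, with made-up firing probabilities): a weight between two units grows whenever they are co-active.

```python
# Minimal Hebbian sketch: units 0 and 1 co-fire often, units 2 and 3 rarely.
import numpy as np

rng = np.random.default_rng(1)
n, lr = 4, 0.05
w = np.zeros((n, n))  # synaptic weights between n neurons

for _ in range(200):
    # Each unit fires with its own probability this timestep.
    x = (rng.random(n) < [0.9, 0.9, 0.1, 0.1]).astype(float)
    # Hebbian growth for co-active pairs, plus mild weight decay.
    w += lr * (np.outer(x, x) - 0.01 * w)

np.fill_diagonal(w, 0.0)  # no self-connections
```

    After training, the frequently co-active pair (0, 1) is far more strongly wired than the rarely co-active pair (2, 3), which is the pruning pressure the paragraph describes.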

    Good feelings are positive reinforcement: behaviors that lead to good feelings get reinforced. The feeling of harmony, or harmonization, where we have to balance a broad set of internal neural impulses, feels good when we do it well. We feel harmony in music, and in our own internal sensory resonance to the world.

    Hypothesis 1: the harmonization of neural activity might cause conscious feelings due to the convergence of the activity to platonic forms (see Platonic Representation Hypothesis in LLM research).

    Returning to sleep: this is a proposal for why sleep feels good. Synchronization might intrinsically feel good. And because sleep also disrupts your working memory's contextual attunements (i.e., whatever your day was about) by taking your brain into deep synchrony, it strengthens the overall dendritic connections between the synchronizing neurons.

    And, because it wears off the edges of your previous experiences — you can return refreshed.

    In this way, sleep seems to contribute to the overall integrity of the operation of our intelligence. Without it, we lose integrity and internal harmony.

    And yet, not sleeping is one of my favorite drugs. Can be a major performance enhancer, even if it is variable.

    Hypothesis 2: Not sleeping increases the (statistical) temperature of the brain.
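
    One concrete reading of "statistical temperature" (my analogy, not the commenter's claim) is the temperature parameter in softmax sampling: raising it flattens the output distribution, making behavior noisier and more error-prone.

```python
# Temperature in softmax sampling: higher temperature = flatter,
# noisier distribution over options.
import numpy as np

def softmax(logits, temperature):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
cool = softmax(logits, temperature=0.5)  # sharp: heavily favors the best option
hot = softmax(logits, temperature=5.0)   # flat: options look nearly equal
```

    Under this reading, the sleep-deprived brain samples from a flatter distribution: occasionally more creative, mostly less reliable.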

Curious how the zeitgeist changes; in a previous AI cycle we could have thought sleep was required by, or generated by, a semi-space garbage-collection brain-LISP process :)

> sleep was the training of our neural network with all the prompts of the day

Periods of sleep certainly seem to be used in that sort of way, but that is an extra use evolution found for the sleep cycle once it existed rather than the reason sleep developed in the first place.

There are a number of things that seem tied to, or at least aligned with, our wake/sleep cycle that likely didn't exist when sleep first came about.

Jesus Christ, not even a biology thread is safe on the orange website.

  • Philosophers of mind have always tried to describe the brain using contemporary technology analogies. It's only natural and nothing to frown at.

    Descartes compared the human mind to waterworks and hydraulic machines, other authors used mechanical clocks, telegraph systems, digital computers, and (in the recent decades) neural networks.

    In the end it's all computing, and to a degree all of those models serve as good analogies to the wetware; one just needs to avoid drawing wild conclusions from them.

    I'm sure there will be new analogies in the future as our tech progresses.

    We don't literally train on today's prompts while we sleep, but there actually _are_ some _computing_ tasks going on in our brains at that time that seem to be important for the system.

  • Indeed. Animals without linguistic ability (like fruit flies) need sleep, but since ChatGPT's release in 2022, tech bros have decided that LLMs specifically might model the animal brain in general, out of anthropocentrism and anthropomorphism.

    It's also a fundamental misunderstanding of how LLMs work, mixing up inference with training.

    • Come on, don't be uncharitable. Language isn't inherently necessary for models like LLMs; you can train something similar on visual inputs. Fruit flies have neurons that pass around ~probabilities/signal strengths to each other to represent their environment and basic concepts, so it's not way off as an analogy.

    • It was applicable to all neural networks, not just LLMs.

      Can we say that since ChatGPT's release in 2022, anti-tech bros now think everything is about LLMs specifically?