
Comment by andrepd

4 days ago

Jesus christ, not even a biology thread is safe in the orange website.

Philosophers of mind have always tried to describe the brain using contemporary technology analogies. It's only natural and nothing to frown at.

Descartes compared the human mind to waterworks and hydraulic machines; other authors used mechanical clocks, telegraph systems, digital computers, and (in recent decades) neural networks.

In the end it's all computing, and to a degree all of those models serve as good analogies for the wetware; one just needs to avoid drawing wild conclusions from them.

I'm sure there will be new analogies in the future as our tech progresses.

We don't literally train on today's prompts while we sleep, but there actually _are_ some _computing_ tasks going on in our brains during sleep that seem to be important for the system.

Indeed. Animals without linguistic ability (like fruit flies) need sleep, but since ChatGPT's release in 2022, tech bros now think LLMs specifically might model the animal brain in general, because of anthropocentrism and anthropomorphism.

It's also a fundamental misunderstanding of how LLMs work, mixing up inference with training.

  • Come on, don't be uncharitable. Language isn't inherently necessary for models like LLMs; you can train something similar on visual inputs. Fruit flies have neurons that pass around ~probabilities/signal strengths to each other to represent their environment and basic concepts, so it's not way off as an analogy.

  • It was applicable to all neural networks, not just LLMs.

    Can we say that since ChatGPT's release in 2022, antitech bros now think everything is about LLMs specifically?

    • The statement was "AI frenzy almost convinced me that sleep was the training of our neural network with all the prompts of the day."

      Prompts are specific to LLMs. Most neural networks don't have prompts.

      Additionally, prompts happen during LLM inference, not LLM training. Plenty of non-technical people claim to have experience "training" LLMs when they are really just end users who added a lot of tokens to the context window during inference.
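      To make that distinction concrete, here's a minimal toy sketch (a made-up numpy bigram model, nothing like any real LLM's internals or API): "prompting" only grows the context through forward passes and never touches the weights, whereas a training step actually updates them.

        import numpy as np

        rng = np.random.default_rng(0)
        VOCAB = 16                                  # toy vocabulary size
        weights = rng.normal(size=(VOCAB, VOCAB))   # the model's parameters

        def forward(context):
            """Forward pass: next-token logits from the last token (bigram model)."""
            return weights[context[-1]]

        def infer(prompt, n_tokens):
            """Inference: the context window grows, the weights never change."""
            context = list(prompt)
            for _ in range(n_tokens):
                context.append(int(np.argmax(forward(context))))  # greedy decoding
            return context                          # longer context, same weights

        def train_step(context, target, lr=0.1):
            """Training: a gradient update that actually modifies the weights."""
            logits = forward(context)
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            grad = probs
            grad[target] -= 1.0                     # d(cross-entropy)/d(logits)
            weights[context[-1]] -= lr * grad       # parameters move: this is learning

        prompt = [1, 2, 3]
        before = weights.copy()
        infer(prompt, n_tokens=5)
        print("weights changed by prompting?", not np.allclose(before, weights))  # False
        train_step(prompt, target=7)
        print("weights changed by training? ", not np.allclose(before, weights))  # True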
