Comment by libraryofbabel

2 months ago

You really think I didn't already know how LLMs are put together when I wrote my comment? I've implemented these things from scratch in PyTorch. Of course I know the building blocks.

And if you want to get pedantic and technical, you didn't even get the reductionism right! Modern LLMs don't use the logistic-regression sigmoid as their activation nonlinearity anymore; they use things like ReLU or GELU. You're about 15 years behind.
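
To make that concrete, here's a minimal PyTorch sketch of a transformer feed-forward block as it's typically written today (the dimensions are just illustrative, not from any particular model): the nonlinearity is GELU, not a sigmoid.

```python
# Minimal illustrative sketch of a transformer feed-forward block in PyTorch.
# d_model and d_ff are assumed example sizes, not any specific model's config.
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, d_model: int = 768, d_ff: int = 3072):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)    # expand hidden dimension
        self.act = nn.GELU()                  # modern nonlinearity, not torch.sigmoid
        self.down = nn.Linear(d_ff, d_model)  # project back down

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))

x = torch.randn(2, 16, 768)      # (batch, sequence, d_model)
print(FeedForward()(x).shape)    # torch.Size([2, 16, 768])
```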

Reductionism is counterproductive in biology ("human brains are voltage spikes across membranes, nothing more") and it's counterproductive here as well. LLMs have nontrivial emergent behavior. The interesting questions are all about what that behavior is and how it arises in the network during training. If you refuse to engage beyond bare reductionism, you won't even be able to ask those questions, let alone answer them.