Comment by daxfohl

3 months ago

True, but LLMs are already really good at that kind of thing. Even back in 2015, before transformers, Karpathy wrote a blog post showing how you could find specific neurons in a char-RNN that tracked things like indent position, approximate column location, whether you're inside a long quote, etc.

https://karpathy.github.io/2015/05/21/rnn-effectiveness/
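For a sense of what that looks like in practice, here's a minimal sketch of the probing idea from that post: run a character-level LSTM over some text, record each hidden unit's activation at every character, and check which unit correlates best with a simple property like "inside a quote". To be clear, this isn't Karpathy's code; the model here is untrained and every name is illustrative, so in practice you'd load weights from a char-RNN trained on real text or code.

    # Sketch: probe a char-level LSTM for a "quote detector" neuron.
    # Untrained model, illustrative names only; load real trained
    # weights for this to actually find interpretable units.
    import torch

    torch.manual_seed(0)

    text = 'He said "hello there" and then "goodbye" quietly.'
    chars = sorted(set(text))
    char_to_ix = {c: i for i, c in enumerate(chars)}

    hidden_size = 64
    embed = torch.nn.Embedding(len(chars), 16)
    lstm = torch.nn.LSTM(16, hidden_size, batch_first=True)

    # Run the text through the LSTM, collecting per-character hidden states.
    ids = torch.tensor([[char_to_ix[c] for c in text]])
    with torch.no_grad():
        hidden_states, _ = lstm(embed(ids))  # (1, seq_len, hidden_size)
    acts = hidden_states[0]                  # (seq_len, hidden_size)

    # Ground-truth feature: 1.0 while inside double quotes, else 0.0.
    inside, flag = [], 0.0
    for c in text:
        if c == '"':
            flag = 1.0 - flag
        inside.append(flag)
    feature = torch.tensor(inside)

    # Correlate each hidden unit's activation trace with the feature
    # and report the unit that tracks "inside quotes" most closely.
    acts_c = acts - acts.mean(dim=0)
    feat_c = feature - feature.mean()
    corr = (acts_c * feat_c[:, None]).sum(0) / (
        acts_c.norm(dim=0) * feat_c.norm() + 1e-8
    )
    best = corr.abs().argmax().item()
    print(f"neuron {best} correlates {corr[best]:.2f} with 'inside quotes'")

With a trained model, the same loop is how you'd rediscover the quote-tracking and indent-tracking cells from the post: the heavy lifting is just recording activations and correlating them against a hand-labeled feature.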

That said, I do think algorithms and system designs are very visual. It's way harder to explain heaps, merge sort, and the like from just text and code. Granted, it's 2025 now and modern LLMs seem to have internalized those kinds of concepts ~perfectly for a while, so IDK if there's much to gain by changing approaches at that level anymore.