Comment by kgwxd
14 hours ago
> LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.
The problem is, they're nothing like transistors, and never will be. Transistors are simple: they either work or they don't, consistently, in an obvious or easily testable way.
LLMs are more akin to biological things: complex, not well understood, with unpredictable behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.
I like working on computers because it minimizes the amount of biological-like things I have to work with.
I suppose transistors are a bad example.
Perhaps a better analogy would be the Linux kernel. It's built by biological humans, and fallible ones at that. And yet, I don't feel the need to learn the intricacies of kernel internals, because it's reliable enough that it's essentially never the kernel's fault when my code doesn't work.
The kernel is a bad analogy too: if you understand how it behaves, you can understand how it's built. LLMs don't have that property; their behaviour is not completely defined by how they are built.
Every abstraction is leaky. It's not that 1 in every 100 tickets I work on requires knowing that filesystem buffers exist; it's that their existence is always there, in the back of my mind. I haven't read the Linux kernel source, but I know it's there and roughly what it does. LLM output gives me nothing like that.