Comment by legorobot

23 days ago

I agree that a lot of today's complexity comes from the process by which we write code (people, incomplete understanding of the problem space, etc.) rather than from the problem itself.

Would we say that we as humans have captured the "best" way to reduce complexity and write great code? There are patterns and guidelines, but no hard and fast rules. Until we have a better understanding there, LLMs may not arrive at those levels either. Most of that knowledge is gleaned by sticking with a system: living with past choices and making changes and tweaks to the code, the complexity, and the solution over time. The right "memory" or compaction might help LLMs improve over time, but we're only scratching the surface there today.

LLMs output code only as good as their training data. They can reason about the code they're prompted with and offer ideas, but they're inherently bounded by the data and concepts they were trained on. And unfortunately, it's likely average code, far more than highly respected code, that floods the training data, at least for now.

Ideally I'd love to see better code written and complexity driven down by _whatever_ writes the code. But verification will always be required when the writer is probabilistic.