Comment by visarga
10 hours ago
Another possibility is to implement the same spec twice and do differential testing: by comparing the two implementations on the same inputs, you can catch diverging assumptions and clarify them.
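As a minimal sketch of what that could look like (using a toy "median" spec as a stand-in, not anything from the thread): write the spec twice independently, then feed both versions the same randomized inputs and flag any disagreement.

```python
import random
import statistics

def median_a(xs):
    # Implementation 1: sort the list and index into the middle.
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_b(xs):
    # Implementation 2: independent version via the stdlib,
    # acting as a second reading of the same spec.
    return statistics.median(xs)

# Differential test: run both implementations on random inputs;
# any mismatch points at a diverging assumption in one of them.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    assert median_a(xs) == median_b(xs), xs
```

The value is less in the assertions passing than in the failures: each mismatch forces you to decide which reading of the spec was intended.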
Isn't that too much work?
Instead, learning the concepts with AI and then using HI (Human Intelligence) plus AI to solve the problem at hand, going through the code line by line and writing tests, is a better approach in terms of productivity, correctness, efficiency, and skill.
I can only think of LLMs as fast typists with some domain knowledge.
They are like typists of government or legal documents who know how to format documents but cannot practice law. Likewise, LLMs are code typists who can write good, decent, or bad code but cannot practice software engineering: we need, and will need, a human for that.