I find hard problems are best solved by breaking them down into smaller, easier sub-problems. In other words, it comes down to thinking hard about which questions to ask.
AI moves engineering into higher-level thinking, much like compilers did for Assembly programming back in the day.
> hard problems are best solved by breaking them down into smaller, easier sub-problems
I'm OK doing that with a junior developer because they'll learn from it and one day become my peer. LLMs don't learn from individual interactions, so I gain nothing from spending my time trying to teach one.
> much like compilers did for Assembly programming back in the day
The difference is that programming in, say, C (vs. assembler) or Python (vs. C) saves me time. In my experience, arguing with my agent in English about which Python to write often takes more time than just writing the Python myself.
I still use LLMs to ask high-level questions, sanity-check ideas, write some repetitive code ("in this enum, convert all camelCase names to snake_case"), or knock out the one-off hacky script that I won't commit, where the quality bar is therefore lower (does it run and solve my very specific problem right now?). But I'm not convinced by agents yet.
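To be concrete, the camelCase-to-snake_case chore is roughly this kind of transform (a minimal sketch of my own; the helper name and sample enum members are illustrative, not from any actual codebase):

    import re

    def camel_to_snake(name: str) -> str:
        # Insert an underscore before each interior uppercase letter,
        # then lowercase the whole string.
        return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

    # e.g. renaming hypothetical enum members:
    for member in ["maxRetryCount", "httpTimeout", "isEnabled"]:
        print(camel_to_snake(member))
    # -> max_retry_count, http_timeout, is_enabled

Trivial enough that an LLM gets it right on the first try, and trivial enough that the cost of it being wrong is near zero.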
> I find hard problems are best solved by breaking them down into smaller, easier sub-problems. In other words, it comes down to thinking hard about which questions to ask.
That's surely me solving the problem, not the agent?
> while those who love solving narrow hard problems find AI can often do it better now
I spend all day in coding agents. They are terrible at hard problems.