Comment by PeterStuer

3 hours ago

This is just the sort of bloated overcomplication I often see in first-iteration AI-generated solutions, before I start pushing back to reduce the complexity.

Usually, after 4-5 iterations, you can get something that has shed 80-90% of the needless complexity.

My personal guess is that this is inherent in the way LLMs integrate knowledge during training: there is always a tradeoff between contextualization and generalization.

So the initial response is often a hack plugged together from five different approaches; your pushback provides the focus and constraints that steer it toward a more internally aligned solution.