Comment by lelanthran

9 hours ago

> With AI, the cost of not thinking upfront is high and the cost of being wrong in upfront decisions is low, so we bias towards that.

I don't really understand what that means:

1. If the cost of not thinking upfront is high, that means you need to think upfront.

2. If the cost of being wrong upfront is low, that means you don't need to think upfront.

To me, those two assertions look like they contradict each other.

Maybe I expressed that clumsily.

With traditional development, investing in hypotheticals can be wasteful: make the fewest assumptions possible until you get real user feedback.

With AI, we make more decisions upfront. Being wrong about those decisions carries a low cost because implementation is cheap: if we build the wrong thing, we can just try again quickly. It's cheap to be wrong.

The more decisions we make upfront, the more hands-off we can be, and a shallow set of decisions might stall out earlier than a deeper set would. That's why the two claims don't contradict each other: being wrong upfront is cheap, but not thinking upfront means stalling early, so it pays to think things through more deeply.