Comment by phamilton
19 hours ago
I think harder because of AI.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
> I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
So... you didn't have to do that prior to using agents?
Not as much upfront. I had plenty of opportunities to adjust and correct along the way. With AI, the cost of not thinking upfront is high and the cost of being wrong in upfront decisions is low, so we bias towards that.
But beyond that, I have been thinking deeply about AI itself, which has all sorts of new problems: permissions, verification, etc.
> With AI, the cost of not thinking upfront is high and the cost of being wrong in upfront decisions is low, so we bias towards that.
I don't really understand what that means:
1. If the cost of not thinking upfront is high, that means you need to think upfront.
2. If the cost of being wrong upfront is low, that means you don't need to think upfront.
To me, it looks like those assertions contradict each other.