He was the keynote at YOW! so I can't capture all the nuance, and I hope I'm not doing him a disservice with my interpretation, but the tl;dr of his take was:
"LLMs drastically decrease the cost of experimenting during the very earliest phases of a project, like when you're trying to figure out if the thing is even worth building or a specific approach might yield improvements, but loses efficacy once you're past those stages. You can keep using LLMs sustainably with a very tight loop of telling it to do the thing the cleaning up the results immediately, via human judgement."
I.e., I don't think he can relate at all to the experience of letting them run wild and getting a good result.