Comment by nprz

18 hours ago

You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app," then paste that output into Claude Code, and you now have a todo app.

This is still a stumbling block for a lot of people. Plenty of people could've found an answer to a problem they had if they had just googled it, but they never did. Or they did, but they googled something weird and gave up. AI use is absolutely going to be similar to that.

Maybe I’m biased working in insurance software, but I don’t get the feeling much programming happens where the code can be completely stochastically generated and never reviewed, and where that will be okay with users/customers/governments/etc.

Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.

  • Insurance is complicated, not frequently discussed online, and all code depends on a ton of domain knowledge and proprietary information.

    I'm in a similar domain; the AI is like a very energetic intern. Getting a good result requires a prompt so clear and detailed that I could probably write an expression to turn it into code. Even then, after a little back and forth it loses the plot and starts producing gibberish.

    But in simpler domains, or ones with lots of examples online (for instance, I had an image recognition problem that looked a lot like a typical machine learning contest), it really can rattle off in seconds something that would take a mid-level engineer weeks or months, and often at higher quality.

    Right in the chat, from a vague prompt.

Step one: you have to know to ask for that in the first place. Nobody in that orbit knows how to do that. And these aren’t dumb people. They just aren’t devs.