Comment by tptacek
4 days ago
You always have to be careful. But worth calling out that using CombinedOutput() like that is also a common flaw in human code.
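To make the pitfall concrete, here is a minimal, self-contained sketch; the shell command is hypothetical, standing in for any tool that writes data to stdout and diagnostics to stderr, and it assumes a POSIX sh on the PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical tool: JSON on stdout, a warning on stderr.
        script := `echo '{"ok":true}'; echo 'warning: deprecated flag' >&2`

        // Pitfall: CombinedOutput merges stdout and stderr into one buffer,
        // so the stderr warning corrupts the data you meant to parse.
        combined, err := exec.Command("sh", "-c", script).CombinedOutput()
        fmt.Printf("combined: %q err=%v\n", combined, err)

        // Safer: Output captures stdout alone. If the command fails and
        // cmd.Stderr was nil, the stderr text is still available (truncated)
        // on the returned *exec.ExitError's Stderr field.
        stdout, err := exec.Command("sh", "-c", script).Output()
        fmt.Printf("stdout only: %q err=%v\n", stdout, err)
    }

Because CombinedOutput interleaves both streams into one buffer, the warning lands in the middle of the JSON you were going to parse; Output keeps stdout clean while still surfacing stderr on the error when the command fails.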
4 days ago
The difference is that humans learn. I got bit by this behavior of CombinedOutput once ten years ago, and no longer make this mistake.
This applies to AI, too, albeit in different ways:
1. You can iteratively improve the rules and prompts you give to the AI when coding. I do this a lot. My process is constantly improving, and the AI makes fewer mistakes as a result.
2. AI models get smarter. Just in the past few months, the LLMs I use to code are making significantly fewer mistakes than they were.
But my gripe with your first point is that by the time I have written an exact, detailed, step-by-step prompt, I could have written the code by hand. There is a reason we don't use fuzzy human language in math and coding: it is ambiguous. I always feel like I'm in one of those funny videos where you have to write exact instructions for making a peanut butter sandwich and they get deliberately misinterpreted, except it is not fun at all when you are the one writing the instructions.
2. It's very questionable that they will get any smarter; we have hit the plateau of diminishing returns. They will get more optimized, and we can run them more times with more context (e.g. chain of thought), but they fundamentally won't get better at reasoning.
That you don't know when it will make a mistake, and that the mistakes are getting harder to find, are not exactly encouraging signs to me.
And you can build automatic checks that enforce correct behavior for the cases where the lesson hasn't been learned, whether by bot or by human.
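As a sketch of what such a check could look like on a Go codebase (an assumption here; a real setup might instead use a go vet analyzer or a linter such as forbidigo), a small go/ast scanner that fails CI on any call to something named CombinedOutput:

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "os"
    )

    func main() {
        fset := token.NewFileSet()
        exit := 0
        for _, path := range os.Args[1:] {
            file, err := parser.ParseFile(fset, path, nil, 0)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                exit = 1
                continue
            }
            ast.Inspect(file, func(n ast.Node) bool {
                // Naive name match: flags any selector named CombinedOutput.
                if sel, ok := n.(*ast.SelectorExpr); ok && sel.Sel.Name == "CombinedOutput" {
                    fmt.Printf("%s: CombinedOutput mixes stdout and stderr; prefer Output()\n",
                        fset.Position(sel.Pos()))
                    exit = 1
                }
                return true
            })
        }
        os.Exit(exit)
    }

Wired into CI (e.g. go run check.go $(git ls-files '*.go')), the lesson stays enforced even when the code's author, human or bot, has not learned it yet.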