Comment by Leynos
2 days ago
The problematic part is this: "I asked each of them to fix the error, specifying that I wanted completed code only, without commentary."
GPT-5 has been trained to adhere to instructions more strictly than GPT-4. It is a known issue that, when given nonsensical or contradictory instructions, it produces unreliable results.
A more realistic scenario would have been to ask the model for a plan or proposal for how it might fix the problem.