Comment by hansmayer

6 days ago

Please, please avoid recommending LLMs for problems where the user cannot reliably verify their outputs. These tools are still not reliable (and given how they work, they may never be 100% reliable). The OP could easily get a "summary" containing hallucinations or incorrect statements. It's one thing when experienced developers use Copilot or similar tools to avoid writing boilerplate and the boring parts of the code; they still have the competence to review, control, and adapt the outputs. But for someone looking for an introduction to a hard topic, like the OP, it's very bad advice, as they have no means of checking the output for correctness. Many of us already deal daily with junior folks spitting out AI slop, probably using these tools the way you suggested. Please don't introduce more AI slop nonsense into the world.