Comment by extragalaxial

7 days ago

[flagged]

Please, please avoid recommending LLMs for problems where the user cannot reliably verify their outputs. These tools are still not reliable (and, given how they work, they may never be 100% reliable). It's likely the OP could get a "summary" that contains hallucinations or incorrect statements. It's one thing when experienced developers use Copilot or similar to avoid writing boilerplate and the boring parts of the code: they still have the competence to review, control and adapt the outputs. But for someone looking to get introduced to a hard topic, such as the OP, it's very bad advice, as they have no means of checking the output for correctness. A lot of us already have to deal with junior folks spitting out AI slop on a daily basis, probably using the tools the way you suggested. Please don't introduce more AI-slop nonsense into the world.

This is getting downvoted, but I would also recommend it. It's much faster than reading papers and, unless you are doing cutting-edge research, LLMs will be able to accurately explain everything you need to know about common algorithms like this.

  • It's getting downvoted because it is very bad advice, one that can be refuted by already known facts. Your comment is even worse in this regard and is very misleading: LLMs are definitely not going to "accurately explain everything you need to know". They are not a magical tool that "knows everything"; they are statistical parrots inferring the most likely sequence of tokens, which often enough results in inaccurate responses. There are already a lot of incompetent folks relying blindly on these unreliable tools, please do not introduce more AI-slop-based thinking into the world ;)

    • You left out the "for common algorithms like this" part of my comment. None of what you said applies to learning simple, well-established algorithms for software development. If it's history, biology, economics, etc., then sure, be wary of LLM inaccuracies, but an algorithm is not something you can get wrong.

      I don't personally know much about DHTs, so I'll just use sorting as an example:

      If an LLM explains how a sorting algorithm works, and it explains why the algorithm fulfills certain properties (time complexity, stability, parallelizability, etc.) and backs those claims up with example code and mathematical derivations, then you can verify that you understand it by working through the logic yourself and implementing the code. If the LLM made a mistake in its explanation, then you won't be able to understand it, because it can't possibly make sense; the logic won't work out.
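
      For concreteness, here's a rough sketch of that kind of self-check (in Python, with merge sort standing in for whatever algorithm the LLM explained; the specific property checks are just my own illustration, not anything from the thread): implement the algorithm as explained and test the claimed properties directly.

        import random

        def merge_sort(items, key=lambda x: x):
            """Stable merge sort: O(n log n) comparisons, as the explanation would claim."""
            if len(items) <= 1:
                return list(items)
            mid = len(items) // 2
            left = merge_sort(items[:mid], key)
            right = merge_sort(items[mid:], key)
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                # "<=" keeps equal keys in their original order; that choice is what makes the sort stable.
                if key(left[i]) <= key(right[j]):
                    merged.append(left[i])
                    i += 1
                else:
                    merged.append(right[j])
                    j += 1
            merged.extend(left[i:])
            merged.extend(right[j:])
            return merged

        # Correctness: agree with the built-in sort on random data.
        data = [random.randint(0, 9) for _ in range(1000)]
        assert merge_sort(data) == sorted(data)

        # Stability: items with equal keys keep their original relative order.
        pairs = [(random.randint(0, 3), i) for i in range(1000)]
        assert merge_sort(pairs, key=lambda p: p[0]) == sorted(pairs, key=lambda p: p[0])  # sorted() is stable too

      If the explanation was wrong, one of these checks (or your own reading of the logic) fails, which is the point: the claims are verifiable in a way that prose about history or biology is not.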

      Also, please don't perpetuate the "statistical parrot" interpretation of LLMs; that's not how they really work.

  • It's getting downvoted because it's the equivalent of saying "google it".

    • And because LLMs will "explain" things that contain outright hallucinations; a beginner won't know which parts are real and which are suspect.

    • Exactly. Nothing wrong with LLMs, but we're trying to have a human conversation here, which would be impossible if people had all their conversations with LLMs instead.