Comment by Asraelite
6 days ago
You left out the "for common algorithms like this" part of my comment. None of what you said applies to learning simple, well-established algorithms for software development. If it's history, biology, economics etc. then sure, be wary of LLM inaccuracies, but an algorithm is not something you can get wrong.
I don't personally know much about DHTs so I'll just use sorting as an example:
If an LLM explains how a sorting algorithm works, why it fulfills certain properties (time complexity, stability, parallelizability, etc.), and backs those claims up with example code and mathematical derivations, then you can verify that you understand it by working through the logic yourself and implementing the code. If the LLM made a mistake in its explanation, you won't be able to understand it, because it can't possibly make sense; the logic won't work out.
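As a concrete example of that kind of check: if an LLM claims a merge sort is stable and O(n log n), you can test the stability claim yourself in a few lines. This is a minimal sketch of my own (the merge sort and the test harness are illustrative, not anything a particular LLM produced):

```python
import random

def merge_sort(items, key=lambda x: x):
    """Stable merge sort: on equal keys, elements keep their original order."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)
    # Merge, taking from the left half on ties to preserve stability.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Tag each element with its original index, sort by value only, and compare
# against Python's built-in sorted(), which is documented to be stable, so
# equality checks both correctness and stability.
data = [(random.randrange(5), idx) for idx in range(1000)]
result = merge_sort(data, key=lambda pair: pair[0])
assert result == sorted(data, key=lambda pair: pair[0])
print("correctness and stability check passed")
```

If the explanation or the code had a real flaw, a check like this is exactly where it would stop making sense.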
Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I meant it also for the (unwittingly) left-out part of your comment. Firstly, by saying this parrot will explain "everything that you need to know ..." you're pushing your own standards onto everyone else. Maybe the OP really wants to understand it deeply, learn about edge cases, and understand how it really works. I don't think I would rely on a statistical parrot (yes, that's really how they work, only on a large scale) to teach me stuff like that. At best, they are to be used with guardrails as some kind of a personal version of "Rain Man", with the exception that the "Rain Man" was not hallucinating when counting cards :)
> Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I'm pretty sure that's exactly how they work.
Depending on the quality of the LLM and the complexity of the thing you're asking about, good luck fact-checking its output. It is about the same effort as finding direct sources and verified documentation or resources written by humans.
LLMs generate human-like answers by using statistics and other techniques on a huge corpus. They do hallucinate, but what is less obvious is that a "correct" LLM output is still a hallucination. It just happens to be a slightly useful hallucination that isn't full of BS.
As the LLM takes in inconsistent input and always produces inconsistent output, you *will* have to fact-check everything it says, which makes it useless for automated reasoning or explanations and a shiny turd in most respects.
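To make the "statistics on a huge corpus" point concrete in the simplest possible way, here is a deliberately toy bigram sampler. This is my own illustration and not how a transformer-based LLM works internally (real models learn dense neural representations, not count tables), so treat it purely as an intuition pump for statistical text generation and why sampled output varies between runs:

```python
import random
from collections import defaultdict

# Toy "statistical parrot": count which word follows which in a tiny corpus,
# then generate text by sampling from those counts.
corpus = (
    "the model predicts the next word from the previous words "
    "the model samples the next word and the output varies between runs"
).split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

def generate(start, length=10):
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sampling: repeated runs can differ
        out.append(word)
    return " ".join(out)

print(generate("the"))
print(generate("the"))  # same prompt, usually a different continuation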
The useful things LLMs are reported to do were an emergent effect found by accident by natural-language engineers trying to build chat bots. LLMs are not sentient and have no idea whether their output is good or bad.