Comment by tharant
6 months ago
> In the very narrow fields where I have a deep understanding, LLM output is mostly garbage.
> Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless, it is actively misleading.
Why do you have to make that assumption? An expert arborist likely won’t know much about tuning GC parameters for the JVM, but that won’t make them “worse than useless” or “actively misleading” when discussing other topics, especially ones tangential to their own domain.
I think the difference between us is that I don’t expect the models to be experts in any domain, nor do I expect them to always provide factual content; the library can provide factual content, if you know how to use it right.
There's a concept closely related to what you're arguing here, the Gell-Mann amnesia effect: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
> You open the newspaper to an article on some subject you know well... You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.