Comment by sdenton4

2 days ago

Yeah, this is by far the biggest value I've gotten from LLMs: just pointing me to an area of literature that neither I nor any of my friends had heard of, but which has spent a decade working on the problems we're running into.

In this case, all that matters is that the outputs aren't complete hallucination. Once you know the magic jargon, everything opens up easily with traditional search.

The issue is when the output sounds good but is complete fantasy.

I’ve had it ‘dream’ up entire fake products, APIs, and even libraries before.

  • And that becomes obvious as soon as you go looking for it. These tools work best in situations where false positives have low cost/impact, but true positives are easily verifiable and have high impact. In other words, problems in epistemological p-space. :)