Comment by Legend2440

6 months ago

Wow, that is quite obscure. Even with the name I can't find any references to it on Google. I'm not surprised that the LLMs don't know about it.

You can always make stuff up to trigger AI hallucinations, like 'which 1990s TV show had a talking hairbrush character?'. There's no difference between 'not in the training set' and 'not real'.

Edit: Wait, no, there actually was a 1990s TV show with a talking hairbrush character: https://en.wikipedia.org/wiki/The_Toothbrush_Family

This is hard.

> There's no difference between 'not in the training set' and 'not real'.

I know what you meant, but this is the whole point of this conversation. There is a huge difference between "no results found" and a confident "that never happened", and if new LLMs are trained on old ones saying the latter, then they will be trained on bad data.

>> You can always make stuff up to trigger AI hallucinations

Not being able to find an answer to a made-up question would be OK; it's ALWAYS finding an answer with complete confidence that is the major problem.