Comment by 8bitsrule
8 days ago
What I've most disliked are made-up, completely incorrect answers that are easily proven to be so, followed by GPT-grovelling when confronted with the facts: promises to 'learn' and 'I'll strive to do better'. Time after time, over months, the same dodging and weaseling.
A simple 'I don't know, I haven't got access to the answer' would be a great start. People who don't know better are going to swallow those crap answers. And for this we need to produce much more electricity?
LLMs need regular transparency: fact-checking so the user can verify and validate accuracy.
DuckDuck lately started adding one or two backup source citations to its 'Search help' service. One can check them to see how trustworthy they are.
Of course, backup citations can be hard to provide for 'services' that break copyright laws to train their machines to be thieves. And legally getting into the realm of academic papers could be expensive.
So for them, providing a trustworthy source is hard to do. Actually knowledgeable people are needed to distinguish trustworthy sources. So there are several discernible motivations to just fake it ... if you don't care what you're doing to the innocent.