Comment by binwiederhier
2 years ago
I'm a software engineer, and I've more or less stopped asking ChatGPT about anything that isn't mainstream. It just hallucinates answers and invents config file options or language constructs. Google may not find what I'm looking for, or may give me the occasional outdated result, but it rarely surfaces something that's flat-out wrong (in technology, at least).
For mainstream stuff, on the other hand, ChatGPT is great. And I'm sure that Gemini will be even better.
The important thing is that with web search, you as a user can learn to adapt to varying information quality. I trust Wikipedia.org more than SEO-R-US.com, and Google gives me those options.
With a chatbot that's largely impossible, or at least impractical. I don't know where it's getting anything from - maybe it trained on a shitty Reddit post that's 100% wrong, but I have no way to tell.
There has been some work (see: Bard, Bing) where the LLM attempts to cite its sources, but even then that's of limited use. If I get a paragraph of text as an answer, is the expectation really that I crawl through each substring to determine its individual provenance and trustworthiness?
The shape of a product matters. Google as a linker introduces the ability to adapt to imperfect information quality, whereas a chatbot does not.
As an example of this point: I don't trust it when Google simply pulls answers from other sites and shows them inline in the search results. I don't know whether I should trust the source! At least there I can find the source with a single click; with a chatbot that's largely impossible.
> it rarely happens that it just finds stuff that's flat out wrong
"Flat out wrong" implies determinism. For answers which are deterministic such as "syntax checking" and "correctness of code" - this already happens.
ChatGPT, for example, will write and execute code. If the code has an error or returns the wrong result, it will try a different approach. This is in production today (I use the paid version).
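The loop being described is the generic generate-execute-retry pattern. A minimal sketch of that pattern (the `generate` callable stands in for the model call and is hypothetical; this is not OpenAI's actual implementation):

    import subprocess
    from typing import Callable, Optional

    def run_with_retries(
        generate: Callable[[str, Optional[str]], str],  # (prompt, last_traceback) -> code
        prompt: str,
        max_attempts: int = 3,
    ) -> Optional[str]:
        # Generate code, run it, and feed any traceback back into
        # the next attempt, as described in the comment above.
        error: Optional[str] = None
        for _ in range(max_attempts):
            code = generate(prompt, error)
            result = subprocess.run(
                ["python", "-c", code],
                capture_output=True, text=True, timeout=30,
            )
            if result.returncode == 0:
                return result.stdout   # clean run: hand back the output
            error = result.stderr      # failed run: retry with the traceback
        return None                    # give up after max_attempts

Note that a loop like this only catches errors the interpreter can detect (exceptions, nonzero exit codes); it does nothing for code that runs cleanly but produces a wrong answer, which is the harder case.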
Dollars to doughnuts says they're using GPT-3.5.
I'm currently working with some relatively obscure but open-source stuff (JupyterLite and Pyodide), and ChatGPT 4 confidently hallucinates APIs and config options when I ask it for help.
With more mainstream libraries it's pretty good, though.
I use ChatGPT 4 for very obscure things. If I'm ever worried about being quoted, then I'll verify the information; otherwise I'm just being conversational, and I've taken an abstract idea to a concrete one that I can build on top of. But I'm quickly migrating over to Mistral, and if that starts going off the rails I get an answer from ChatGPT 4 instead.