Comment by steveklabnik
10 months ago
> ChatGPT search was only released in November last year
It is entirely possible that I simply got involved at a particular moment that was crazy lucky: it's only been a couple of weeks. I don't closely keep up with when things are released; I had just asked ChatGPT something where it did a web search, and then read an "it cannot do search" claim right after.
> An LLM, on its own, is not a search engine and can not scan the web for information.
In a narrow sense, this is true, but that's not the claim: the claim is "You cannot use it as a search engine, or as a substitute for searching." That is pretty demonstrably incorrect, given that many people use it as such.
> Trusting an offline LLM with an informational search is sometimes a really bad idea ("who are all the presidents that did X").
I fully agree with this, but it's also the case with search engines. They do not always "encompass the full body of the published human thought" either, or always provide answers that are comprehensible.
I was recently looking for examples of accomplishing things with a certain software architecture. I did a bunch of searches, which led me to a bunch of StackOverflow and blog posts. Virtually all of those posts gave vague examples that didn't really answer my question with anything beyond platitudes. I decided to ask ChatGPT about it instead. It was able not only to answer my question in depth, but to provide specific examples, tailored to my questions, which the previous hours of reading search results had not afforded me. I was further able to interrogate it about various tradeoffs. It was legitimately more useful than a search engine.
Of course, sometimes it is not that good, and a web search wins. That's fine too. But suggesting that it's never useful for a task is just contrary to my actual experience.
> The fact that they're incorrect when they say that LLM's can't trigger search doesn't seem that "hilarious" to me, at least.
It's not them, it's the overall state of the discourse. I find it ironic that the fallibility of LLMs is used to suggest they're worthless compared to a human, when humans are also fallible. OP did not directly say this, but others often do, and it's the combination that's amusing to me.
It's also frustrating to me, because the polarization makes it feel impossible to have reasonable discussions about this topic: it's full of enthusiastic cheerleaders who misrepresent what these things can do, and enthusiastic haters who do the same. My own feelings are all over the map here, and I find that frustrating.
If you've only been using AI for a couple of weeks, that's quite likely a factor. AI services have been improving incredibly quickly, and many people have a bad impression of the whole field from a time when it was super promising but basically unusable. I was pretty dismissive until a couple of months ago, myself.
I think the other reason people are hostile to the field is that they're scared it's going to make them economically redundant, because a tsunami of cheap, skilled labor is now towering over us. It's loss-aversion bias, basically. Many people are more focused on that risk than on the amazing things we're able to do with all that labor.