Comment by JoshCole

2 years ago

> Anyone who's played around with it knows that it's fun but it's not a search engine replacement and it doesn't know nor understand things.

I've noticed that some people talk as if the view is nonsense in the direction you imply, but I think the argument that a belief lacks sensory ties is much stronger when stated in the other direction.

The claim fails when you argue against a straw-man version of it, but so does almost anything when you attack only its weakest possible justification. "Drinking water is bad because if you drink enough of it you die of water intoxication" is technically true, yet it isn't reflective of the actual predictions made by the people advancing the claim that we ought to drink more water than we do.

So I'll make the argument that believing language models can replace search engines is not nonsense, while the position that no one could arrive at that belief is nonsense.

Let's start with why "no one arrives at that belief" is nonsense. I think it is nonsense in two ways. The first: some people have used these models and then claimed they would replace search engines for some queries, so holding the belief requires denying your senses with regard to the existence of those people. The belief is not tied to the senses, so it is nonsensical. The second: a belief should anticipate experiences, but the experience you are having right now, in which someone disagrees with you, contradicts the experience that belief anticipates. It fails to predict past experiences and fails to predict current experiences, so it probably fails to predict future experiences too: there will likely be someone in the future who, after interacting with a language model, thinks it can replace some search engine queries.

Now in contrast, those people's claims were not nonsensical when they decided they could replace some search engine queries with language model queries, and again in two respects. First, they arrived at the belief after actually replacing queries to search engines with queries to language models; having found value in doing so, they announced a belief congruent with that value, tying the belief intimately to their senses. Second, having arrived at the belief, it paid rent in anticipated experience: it successfully predicted three noteworthy events: the internal code-red status at Google, you.com's use of a language model to enrich search, and bing.com's use of a language model to enrich search. So it successfully predicted past experiences, successfully predicted current experiences, and may or may not predict future ones. Most people who hold this view expect further refinements to the current generation of language models - in particular, that they will be further refined at query time with relevant data, such as fact databases, to help correct for some of the existing approximation error (a toy sketch of that idea follows below). This is my belief too. I anticipate it happening, and you can judge whether I anticipate correctly by watching reality. I should note that part of the reason I think this will happen is that I've already seen it happen, so I'm not really making a bold prediction here - but I suspect you will later see it happen, because I've seen it happen.
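To make the "refined at query time with relevant data" idea concrete, here is a minimal sketch of retrieval-augmented querying. Everything in it is hypothetical and for illustration only: the tiny in-memory fact "database", the keyword-overlap retrieval, and the prompt format are placeholders, not a claim about how Google, Bing, or you.com actually implement this.

```python
# Hypothetical sketch: ground a language-model query in retrieved facts.
# The fact store, retrieval heuristic, and prompt format are all made up
# for illustration; a real system would use a proper index and retriever.

FACT_DB = {
    "boiling point of water": "Water boils at 100 °C at standard atmospheric pressure.",
    "speed of light": "The speed of light in vacuum is about 299,792,458 m/s.",
}

def retrieve_facts(query: str, db: dict[str, str]) -> list[str]:
    """Return facts whose keys share words with the query (naive keyword overlap)."""
    query_words = set(query.lower().split())
    return [fact for key, fact in db.items() if query_words & set(key.split())]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so the model's answer can be grounded in them."""
    facts = retrieve_facts(query, FACT_DB)
    context = "\n".join(f"- {f}" for f in facts) or "- (no relevant facts found)"
    return f"Use these facts when answering:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("what is the boiling point of water?"))
```

The point of the sketch is only that query-time data can correct for approximation error in the model's stored knowledge; the retrieval step can be as crude or as sophisticated as you like.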

Anyway... the belief that language models can replace some search queries is not a nonsensical belief the way a belief in fairies is. But the belief that no one arrives at the belief that they could is nonsensical, because people do arrive at such beliefs, and therefore the belief that they don't is fanciful and not reflective of reality.