
Comment by majormajor

2 years ago

I would love to know their plan for having new facts propagate into these models.

My idle speculation makes me think this is a hard problem. If ChatGPT kills search, it also kills the websites that get surfaced by search, the ones relying on money from search-directed users. So stores are fine, but "informational" websites are probably in for another cull. Paywalled premium publications are probably still fine; the people currently willing to pay for new, somewhat-vetted, human content still will be. But things like the Reddits of the world might be in for a hard time, since all those "search hack" type uses, like "search Google for Reddit's reviews of this product", get short-circuited, if this is actually more effective than search.

Meanwhile, SEO folks will probably try to maximize what they can get out of the declining search market by using these tools to flood the open web with even more non-vetted bullshit than it's already full of.

So as things change, how does one tiny-but-super-reliable amateur website (say, the personal blog of an expert on .NET runtime internals) make a dent in the "knowledge" of the next iteration of these models? How does it outweigh the even bigger sea of crap that the rest of the web will have become by the time future training is done?

The other interesting thing is that if people stop using websites, revenue for those websites drops, and then development of new pages and sources stops or slows. How does ChatGPT improve if the information for it to learn from isn't there?

We need the source information to be continually generated in order for ChatGPT to improve.

I almost never find these kinds of websites with search, except when I already know there might be one on a specific topic.

The way I find them is through forums, chat, and links from other such sites. It all goes into my RSS reader.

I use search with !wiki, !mdn, etc. most of the time.