Comment by schmorptron

5 hours ago

That's a hard one. SO's community being hostile to newbies, like any expert community's, comes from longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions genuinely are new, and they don't yet have the routine knowledge of where to look, or how to even look, for solutions in the first place.

In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder ones that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well, though, so I don't know where that leaves us.

SO for discussions of taste? "I have these two options to build this; how should I approach this?" They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is: a user asks a question, an LLM answers it, the user is unsure about the answer, and it gets posted as an SO thread so the rest of the userbase can nitpick or correct the LLM response.

Edit: I also seem to remember they had a job portal in the sidebar for a while; what happened to that? It seems like a reasonable revenue stream that is also useful to users.

> In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder ones that are still general enough to be applicable to others.

I think the deeper question is how SO would get paid for that.

Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)

Even in your ideal world, newbies and experts alike would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for it.

The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but it is funded by donations, which are driven by its own banner ads. If LLMs just answer questions based on Wikipedia's data, users won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they never develop the fondness for Wikipedia that's necessary to get them excited to donate.

This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.

  • Oh, I was thinking more of: user enters question into SO -> LLM answers on SO -> user evaluates whether the LLM answer was sufficient (or the system itself judges whether the answer is also interesting to other users?) -> question + answer combo is made public and judged by other users.

    There are of course several huge issues with this, but that's why I prefaced it with "ideal world", hahaha.

    The biggest of which is: why would most users want their questions publicized if a ChatGPT answer off the Stack Overflow platform is enough, or even better?

    Or how existing users and question-answering volunteers would feel about just being cleanup crew and training data for LLMs.