Comment by dgrin91
6 hours ago
Here is a hard question - how could Stack Overflow succeed in a post-ChatGPT era? I mean obviously the new CEO and leadership have been total trash and have squandered their goodwill and user loyalty, but if I were CEO instead I don't know how I would save the ship.
Doubling down on how it was done in the 'good old days' probably wouldn't work because you would slowly bleed users to AI. Selling data to AI companies might work for a bit, but I would guess that the sales value of SO's data has quickly diminishing returns. So what is their path forward?
That's a hard one. SO's community being hostile to newbies, like any expert community, comes from the longstanding users having seen the basic questions 1000s of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions genuinely are new, and they don't have the routine knowledge yet of where to look, or how to even look for solutions in the first place.
In an ideal world, LLMs would take all of the basic RTFM-style questions, and leave SO for the harder questions that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well though, so I don't know where that leaves us.
SO for discussions of taste? "I have these two options to build this, how should I approach this?" They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is: user asks question - LLM answers it - user is unsure about the answer - it gets posted as a SO thread and the rest of the userbase can nitpick or correct the LLM response.
Edit: I also seem to remember they had a job portal in the sidebar for a while, what happened to that? Seems like a reasonable revenue stream that is also useful to users.
> In an ideal world, LLMs would take all of the basic RTFM-style questions, and leave SO for the harder questions that are still general enough to be applicable to others.
I think the deeper question is how SO would get paid for that.
Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)
Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.
The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but they are funded by donations, which are driven by ads. If LLMs just answer the questions based on Wikipedia data, the user won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they may not even develop a fondness for Wikipedia that's necessary to get users excited to donate.
This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.
Oh, I was thinking more of user enters question into SO -> LLM answer on SO -> user evaluates whether LLM answer was sufficient (or system itself judges whether answer is also interesting to other users?) -> question + answer combo made public, judged by other users.
There are of course several huge issues with this, but that's why I prefaced it with ideal world hahaha
the biggest of which is why most users would want their questions publicized if a ChatGPT answer off the Stack Overflow platform would be enough, or even better
Or how existing users and question-answering volunteers would feel about just being cleanup crew and training data for LLMs
They should focus on high-quality expert answers.
Now that we have LLMs I don't need basic questions answered. I do still need hard questions answered by experts and AI has normalized paying money for QA.
I would definitely pay for a "human ChatGPT" service where the answers are written by experts who get paid per answer, e.g. grad students. Then they can resell this data to AI companies. Or maybe the economics are such that they can take enough money from AI companies to pay the experts and I don't need to pay anything at all.
This won't bring in as much money as advertising used to, but that business model is dead anyway. There's no future for a QA site at the low end.
Be chatbot first ig. I had envisioned a portal where you land on the front page and drop your question in the box. It would do some RAG thing over the SO question database, then try to answer your question. You could chat back and forth with it. If you figured out your problem, then you would have the option to turn it into a question-answer pair with help from the AI. If you didn't figure out your problem, then it would turn it into just a question, which would then show up for the experts of SO to answer. Something like that.
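The routing logic being described is a minimal sketch away: retrieve the closest archived question, answer from the archive if the match is good enough, otherwise escalate to human experts. This is only a toy illustration of that flow - the question database, the similarity function (bag-of-words cosine standing in for real embeddings), and the threshold are all invented for the example, not anything SO actually runs.

```python
import math
from collections import Counter

# Hypothetical mini question database standing in for SO's archive.
SO_QUESTIONS = {
    "how do i reverse a list in python":
        "Use list.reverse() in place, or reversed()/slicing for a copy.",
    "what does a segmentation fault mean":
        "Your program accessed memory it doesn't own; check pointers and bounds.",
}

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors (a stand-in for real embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_or_escalate(user_question: str, threshold: float = 0.5):
    """Retrieve the closest archived question; answer if it is similar enough,
    otherwise route the new question to human experts."""
    best_q = max(SO_QUESTIONS, key=lambda q: bow_cosine(user_question, q))
    if bow_cosine(user_question, best_q) >= threshold:
        return ("answered", SO_QUESTIONS[best_q])
    return ("escalated_to_experts", user_question)
```

A question the archive already covers comes back as `("answered", ...)`; anything novel falls through to the expert queue, which is the part that would keep the human Q&A side of the site alive.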
ideally, slowly grinding down duplicates into canonicals, keeping the ones whose answers are subject to change (with developments in languages and tools) up-to-date, removing cruft and making it more like a library (à la Rosetta Code) that's easy to find things in
and a change of form from (questions being asked primarily as a means to an end for one person) to (Q&A pairs being written as reference materials)
and requests for comment on which approach would be the most idiomatic or whether one has fallen into an XY trap or other things that rely on human 'taste' rather than LLMs' blithe march of obedience
> How could Stack Overflow succeed in a post-ChatGPT era?
As a data source for LLMs, and by becoming the place someone goes when ChatGPT can't produce a sufficient answer.
I’m not aware of SO’s plans to remain profitable and relevant, but I do know they have an enterprise offering. I’ve seen ads on LinkedIn recently for MCP functionality tied to the enterprise SO offering that lets you use it as a knowledge base. I could see that potentially being a path to stay relevant.
The place I work at tried using an SO enterprise instance and it was quite ineffective. We didn't have the toxicity of the public instance, but generally having a Q&A forum double as a knowledge base is an oddball format that doesn't work out. Adding AI integration is not likely to compensate for that.
It will turn into a meme subreddit and/or die. What else is there?
Allow AI to ask questions. Since the point of the site is to build a knowledge base, you don't really need humans to be that involved. Humans running into problems and then asking questions was just one way to do this in the pre-AI era. Now with AI we can reevaluate whether we really need humans as much as we did.