
Comment by simlevesque

2 months ago

You might be able to ask it what it knows.

So something's odd there. I asked it "Who won Super Bowl LIX and what was the winning score?" (the game was played in February 2025) and the model replied: "I don't have information about Super Bowl LIX (59) because it hasn't been played yet. Super Bowl LIX is scheduled to take place in February 2025."
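If you want to reproduce that kind of probe, here's a minimal sketch, assuming the OpenAI Python SDK (the model name and questions are placeholders; any chat-completions API works the same way):

    # Minimal sketch: probe a model's knowledge with dated questions.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # the model name is a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()

    questions = [
        "Who won Super Bowl LIX and what was the winning score?",
        "What is the latest date your training data covers?",
    ]

    for q in questions:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; substitute the model under test
            messages=[{"role": "user", "content": q}],
            temperature=0,  # reduces sampling variance; not a correctness guarantee
        )
        print(q, "->", resp.choices[0].message.content)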

  • With LLMs, if you repeat something often enough, it becomes true.

    I imagine there's a lot more data pointing to the Super Bowl being upcoming than to it concluding with a final score (see the counting sketch at the end of this subthread).

    Gonna be scary when bot farms are paid to churn out massive amounts of politically motivated false content specifically targeting future LLM training runs.

    • A lot of people are forecasting the death of the Internet as we know it. The financial incentives are too high and the barrier to entry is too low. If you can build bots that generate even a fraction of a dollar per day (referring people to businesses, posting election spam, poisoning data collection and web crawlers), someone in a poor country will do it. The bots themselves then have value, which creates a market for specialists in fake-profile farming.

      I'll go a step further and say this is not a problem but a boon to tech companies. They can then sell you a "premium service": a walled garden of verified humans or bot-filtered content. The rest of the Internet will suck, and nobody will have an incentive to fix it.

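    To make the frequency point concrete, here's a toy counting sketch: a "model" that completes a prefix with whatever continuation it saw most often. The corpus is invented for illustration; real LLMs are far more complicated, but the pull toward the most-repeated claim is the same.

        # Toy illustration: a counting "model" that completes a prefix with
        # its most frequent continuation. The corpus below is invented.
        from collections import Counter

        corpus = [
            "Super Bowl LIX is scheduled for February 2025",
            "Super Bowl LIX is scheduled for February 2025",
            "Super Bowl LIX is scheduled for February 2025",
            "Super Bowl LIX is in the books",
        ]

        prefix = "Super Bowl LIX is"
        continuations = Counter(
            line[len(prefix):].strip()
            for line in corpus
            if line.startswith(prefix)
        )

        # The stale "upcoming" phrasing outnumbers the result 3 to 1,
        # so it is what this model would "know".
        print(continuations.most_common(1))
        # [('scheduled for February 2025', 3)]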

Why would you trust it to accurately say what it knows? It's all statistical processes. There's no "but actually for this question give me only a correct answer" toggle.
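To put "statistical processes" concretely: the reply is a weighted draw over continuations, and nothing in that draw consults ground truth. A toy sketch, with invented tokens and logits:

    # Toy sketch: a "reply" is a weighted random draw over continuations.
    # The logits are invented; nothing here checks whether either is true.
    import math
    import random

    logits = {"hasn't been played yet": 2.5, "was won by the Eagles": 0.7}

    z = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / z for tok, v in logits.items()}

    # Even at its most "confident", the model is picking the likeliest
    # continuation, not flipping a give-me-only-correct-answers switch.
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", choice)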