Comment by iambateman

17 hours ago

This was a fun little lark. Great idea!

It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.

I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.

This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.

Thanks for the thought experiment!

I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.

  • Wait, wouldn't sustained net positive energy be huge? (Though I don't think that's actually possible from ITER unless there were some serious upgrades over the next decade!)

  • I totally agree that it was a funny joke.

    But I've noticed that a lot of people think of LLMs as being _good_ at predicting the future, and that's what I find concerning.

  • Does the prompt say anything about being funny, about a joke? If yes, great. If no, terrible.

    And the answer is no.

    • The prompt is funny in itself. Asking for a prediction of the future is not a serious prompt to begin with, because there is no meaningful way to give a serious response. And the addition of "Writ it into form!" makes it sound even more jokey.

      If I gave a prompt like that and got the response I did, I'd be very pleased with the result. If I somehow intended something serious, I'd have a second look at the prompt, go mea culpa, and write a far longer prompt with parameters to make something somewhat like a serious prediction possible.

    • If you honestly can't see why this prompt was a joke from the get-go, then you may have to concede that LLMs have a better grasp of the subtleties of language than you expect.

  • That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.

    • I don’t think it’s reasonable to assume the AI was earnestly attempting to predict the future; it’s just as likely attempting to make jokes here for the user who prompted it, or neither of those things.

    • There is just no reason whatsoever to believe this is someone "earnestly attempting to predict the future", and ending up with this.

>It’s interesting to notice how bad AI is at gaming out a 10-year future.

I agree it's a bit silly, but I think it understood the assignment(TM), which was to put on a winking, performative song and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.

Actual HN would have plenty of posts that are lateral to any future-signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, and opinion pieces on boutique topics. They aren't all "look at how futury the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.

  • Also, many of the posts seemed intended to be humorous and satirical, rather than merely 'futury.' They made me laugh anyway.

    > Google kills Gemini Cloud Services

    > Running LLaMA-12 7B on a contact lens with WASM

    > Is it time to rewrite sudo in Zig?

    > Show HN: A text editor that doesn't use AI

> It isn’t trained to consider second-order effects.

Well said. There's precious little of that in the human writings that we gave it.

A while back I gave it a prompt, something like, "I'm a historian from the far future. Please give me a documentary-style summary of the important political and cultural events of the decade of the 1980s."

It did OK, then I kept asking "Now, the 1990s?" and so on into future decades, up through "Now, the 2050s?" It made some fun extrapolations.
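
(If you want to replay something like this yourself, here's a minimal sketch of that loop using the OpenAI Python SDK. The model name and exact wording are assumptions on my part, not a record of the original chat.)

    # Minimal sketch: the "historian from the far future" prompt chain.
    # Assumes OPENAI_API_KEY is set; the model name is an assumption.
    from openai import OpenAI

    client = OpenAI()

    decades = ["1980s", "1990s", "2000s", "2010s",
               "2020s", "2030s", "2040s", "2050s"]
    messages = [{"role": "user", "content": (
        "I'm a historian from the far future. Please give me a "
        "documentary-style summary of the important political and "
        f"cultural events of the decade of the {decades[0]}."
    )}]

    for i, decade in enumerate(decades):
        resp = client.chat.completions.create(model="gpt-4o",
                                              messages=messages)
        answer = resp.choices[0].message.content
        print(f"=== {decade} ===\n{answer}\n")
        if i + 1 < len(decades):
            # Keep the whole conversation so later decades can
            # extrapolate from the earlier summaries.
            messages.append({"role": "assistant", "content": answer})
            messages.append({"role": "user",
                             "content": f"Now, the {decades[i + 1]}?"})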

  • Assuming it was through the chatgpt interface, you can share an anonymized link to the chat if you want to show it off (I'd certainly be curious).

I guess most of the articles it generated are snarky first and prediction second: Google cancelling Gemini Cloud, Tailscale for space, the Nia W36 being very similar to a recent launch, etc.

  • > Tailscale for space

    Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)

  • Yep, the original post seemed more snarky than anything, which was what prompted me to ask Claude my own more “sincere” question about its predictions.

    Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.

While I agree completely with the conclusion, for obvious reasons we can’t know for sure whether it is correct about the future until we reach it. Perhaps asking it for wild ideas rather than “most likely” would produce something more surprising.

I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.