Comment by beklein

9 days ago

An older, related article from one of the authors, titled "What 2026 looks like", which is holding up very well against time. Written in mid-2021 (pre-ChatGPT):

https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...

//edit: removed the referral tags from the URL

I think it's not holding up that well outside of the predictions about AI research itself. In particular, he makes a lot of predictions about AI's impact on persuasion, propaganda, the information environment, etc. that have not happened.

  • Could you give some specific examples of things you feel definitely did not come to pass? Because I see a lot of people here talking about how the article missed the mark on propaganda; meanwhile I can tab over to twitter and see a substantial portion of the comment section of every high-engagement tweet being accused of being Russia-run LLM propaganda bots.

  • Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.

    Look into the specific claims and it's not as amazing. Like the claim that models will require an entire year to train, when in reality it's on the order of weeks.

    The societal claims also fall apart quickly:

    > Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.

    This is a common trend in rationalist and "X-risk" writers: Write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, then people will always see the article as primarily correct. When you extract out the easy claims and look at the specifics, it's not as impressive.

    This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:

    > Most of America gets their news from Twitter, Reddit, etc.

    Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers only a fraction of the US population are active users.

  • something you can't know

    • This doesn’t seem like a great way to reason about the predictions.

      For something like this, saying “There is no evidence showing it” is a good enough refutation.

      Countering with “Well, there could be a lot of this going on, but it is in secret” could be a justification for any kooky theory out there: Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we’re Cylons. Something we couldn’t know.

      The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.

It's incredible how much of it broadly aligns with what has happened, especially given that it was written before ChatGPT.

  • Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?

    This forum has been so behind for too long.

    Sama has been saying this for a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1

    Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they’re the smart cookies in those positions.

    First stage is denial, I get it, not easy to swallow the gravity of what’s coming.

    • >This forum has been so behind for too long.

      There is a strong financial incentive for a lot of people on this site to deny they are at risk from it, or to deny that what they are building carries risk and that they should bear culpability for it.

    • People have been predicting the singularity to occur sometime around 2030 or 2045 since waaaay further back than 2015. And not just enthusiasts; I dimly remember an interview with Richard Dawkins from back in the day...

      Though that doesn't mean the current generation of language models will ever achieve AGI, and I sincerely doubt they will. They'll likely be a component of the AI, but likely not the thing that "drives" it.


    • > "Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”

      If that's really true, why is there such a big push to rapidly improve AI? I'm guessing OpenAI, Google, Anthropic, Apple, Meta, Boston Dynamics don't really believe this. They believe AI will make them billions. What is OpenAI's definition of AGI? A model that makes $100 billion?


    • And why are Altman's words worth anything? Is he some sort of great thinker? Or a leading AI researcher, perhaps?

      No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.


    • > Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?

      OK, say I totally believe this. What, pray tell, are we supposed to do about it?

      Don't you see the irony of quoting Sama's dire warnings about the development of AI without at least mentioning that he is at the absolute forefront of the push to build the very technology that can destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.

      I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?

      While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.

      So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.


    • It's not something you need to worry about.

      If we get the Singularity, it's overwhelmingly likely Jesus will return concurrently.


This article was prescient enough that I had to check in wayback machine. Very cool.

I'm not seeing the prescience here - I don't wanna go through the specific points but the main gist here seems to be that chatbots will become very good at pretending to be human and influencing people to their own ends.

I don't think much has happened on these fronts (owing to a lack of interest, not technical difficulty). AI boyfriends/roleplaying etc. seem to have stayed a very niche interest, with models improving very little over GPT-3.5, and the actual products are seemingly absent.

It's very much a product of the culture-war era, where one of the scary scenarios shown off is a chatbot riling up a set of internet commenters, goading them into lashing out against modern leftist orthodoxy, and then getting them cancelled.

With all the strongholds of leftist orthodoxy falling into Trump's hands overnight, this view of the internet seems outdated.

Troll chatbots are still a minor weapon in information warfare. 'Opinion bubbles' and the manipulation of trending topics on social media (with the most influential content still written by humans), used to shift the perception of what the popular consensus is, still seem to hold up as the primary tools of influence.

Nowadays, when most people are concerned about stuff like 'will the US go into a shooting war against NATO' or 'will they manage to crash the global economy', to name just a few of the dozen immediately pressing global issues, people are simply worried about different things.

At the same time, there's very little mention of 'AI will take our jobs and make us poor', in both the intellectual and physical realms, which is what's driving most people's anxiety around AI nowadays.

It also presents the 'superintelligent unaligned AI will kill us all' argument, very often made by alignment people, as the primary threat, rather than the more plausible 'the people controlling AI are the real danger'.

> (2025) Making models bigger is not what’s cool anymore. They are trillions of parameters big already. What’s cool is making them run longer, in bureaucracies of various designs, before giving their answers.

Holy shit. That's a hell of a called shot from 2021.

  • It's vague and could have meant anything. Everyone knew parameter counts would grow, and it's reasonable to expect that things that grow hit diminishing returns at some point. This happened in late 2023 and throughout 2024 as well.

    • That quote almost perfectly describes o1, which was the first major model to explicitly build in compute time as a part of its scaling. (And despite claims of vagueness, I can't think of a single model release it describes better). The idea of a scratchpad was obvious, but no major chatbot had integrated it until then, because they were all focused on parameter scaling. o1 was released at the very end of 2024.


> The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics. For example, they literally ask the models “so, are you aligned? If we made bigger versions of you, would they kill us? Why or why not?” (In Diplomacy, you can actually collect data on the analogue of this question, i.e. “will you betray me?” Alas, the models often lie about that. But it’s Diplomacy, they are literally trained to lie, so no one cares.)

…yeah?

How does it talk about GPT-1 or 3 if it was before ChatGPT?

  • GPT-3 (and, naturally, all prior versions even farther back) was released ~2 years before ChatGPT (whose launch model was GPT-3.5)

    The publication date on this article is about halfway between GPT-3 and ChatGPT releases.

nevermind, I hate this website :D

  • Surely you're familiar with https://ai.meta.com/research/cicero/diplomacy/ (2022)?

    > I wonder who pays the bills of the authors. And your bills, for that matter.

    Also, what a weirdly conspiratorial question. There's a prominent "Who are we?" button near the top of the page and it's not a secret what any of the authors did or do for a living.

    • hmmm I apparently confused it with an RTS, oops.

      also, it's not conspiratorial to wonder if someone in Silicon Valley today receives funding through the AI industry, lol. Like half the industry is currently propped up by that hype; probably half the commenters here are paid via AI VC investments.