The Singularity will occur on a Tuesday

19 hours ago (campedersen.com)

This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the matter many of us have been braying about for years: whether the singularity actually happens matters less than whether enough people believe it will happen and act accordingly.

And, yep! A lot of people absolutely believe it will and are acting accordingly.

It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

  • > enough people believe it will happen and act accordingly

    Here comes my favorite notion of "epistemic takeover".

    A crude form: make everybody believe that you have already won.

    A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and must act accordingly.

    • This world where everybody’s very concerned with that “refined form” is annoying and exhausting. It causes discussions to become about speculative guesses about everybody else’s beliefs, not actual facts. In the end it breeds cynicism as “well yes, the belief is wrong, but everybody is stupid and believes it anyway,” becomes a stop-gap argument.

      I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.

      48 replies →

    • Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.

      3 replies →

    • The refined form is unstable: it is a hair's breadth from being collapsed by a fluke observation of objective reality.

      The system that persists in practice is where everybody knows how things are, but still everybody pays lip service to a fictional status quo, because if they did not, the others would obliterate them.

    • Ontological version is even more interesting, especially if we're talking about a singularity (which may be in the past rather than future if you believe in simulation argument).

      Crude form: winning is metaphysically guaranteed because it probably happened or probably will

      Refined: it's metaphysically impossible to tell whether it has happened or will happen, so the distinction is meaningless; it has happened.

      So... I guess Weir's Egg falls out of that particular line of thought?

    • You ever get into logic puzzles? The sort where the asker has to specify that everybody in the puzzle will act in a "perfectly logical" way. This feels like that sort of logic.

    • It's the classic interrogation technique: "we're not here to debate whether you're guilty or innocent, we have all the evidence we need to prove your guilt, we just want to know why". Not sure if it makes any difference, though, that the interrogator knows they are lying.

  • Isn't talking about "here’s how LLMs actually work" in this context a bit like saying "a human can't be relevant to X because a brain is only a set of molecules, neurons, synapses"?

    Or even "this book won't have any effect on the world because it's only a collection of letters, see here, black ink on paper, that is what is IS, it can't DO anything"...

    Saying an LLM is a statistical prediction engine of the next token is IMO sort of confusing what it is with the medium it is expressed in/built of.

    For instance, take those small experiments mentioned in a sibling post that train a network on addition problems (a minimal sketch follows below). The weights end up forming an addition machine. An addition machine is what it is; that is the emergent behavior. The machine-learning weights are just the medium it is expressed in.

    What's interesting about LLM is such emergent behavior. Yes, it's statistical prediction of likely next tokens, but when training weights for that it might well have a side-effect of wiring up some kind of "intelligence" (for reasonable everyday definitions of the word "intelligence", such as programming as good as a median programmer). We don't really know this yet.
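
    A minimal toy sketch of that kind of addition experiment (my own illustration, not the specific one from the sibling post; the architecture and hyperparameters are arbitrary assumptions):

      # Toy illustration: a tiny network trained only to predict a + b.
      # After training, the weights implement (an approximation of) addition:
      # the "addition machine" is the emergent object, the weights are just
      # the medium it is expressed in.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      X = torch.rand(10_000, 2)               # pairs (a, b) drawn from [0, 1)
      y = X.sum(dim=1, keepdim=True)          # target: a + b

      model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)

      for _ in range(2_000):
          opt.zero_grad()
          loss = nn.functional.mse_loss(model(X), y)
          loss.backward()
          opt.step()

      print(model(torch.tensor([[0.3, 0.4]])))  # roughly 0.7 once trained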

    • There is more to it than molecules, neurons, and synapses. They are made from lower-level stuff that we have no idea about (well, we do in this instance, but you get the point). They are just higher-level things that are useful to explain and understand some things but don't describe or capture the whole thing. For that you would need to go to lower and lower levels, and so far it seems they go on infinitely. Currently we are stuck at the quantum level; that doesn't mean it's the final level.

      OTOH, an LLM is just a token prediction engine. That fully and completely covers it. There are no lower-level secrets hidden in the design that nobody understands, because it could not have been created if there were. The fact that the output can be surprising is not evidence of anything; we have always had surprising outputs, like funny bugs or unexpected features. Using the word "emergence" for this is just deceitful.

      This algorithm has fundamental limitations, and they have not been getting better if you look closely. For instance, you could vibe code a C compiler now, but it's 80% there: a cute trick, but not usable in the real world. Like anything else, it cannot be economically vibe coded to 100%. They are not going back and vibe coding the previous, simpler projects to 100% with "improved" models. Instead they are just vibe coding something bigger to 80%. This is not an improvement in the limitations; it is actually communicating between the lines that the limitations cannot be overcome.

      Also, enshittification has not even started yet.

    • It's pretty clear that the problem of solving AI is software; I don't think anyone would disagree.

      But that problem is MUCH MUCH MUCH harder than people make it out to be.

      For example, you can reliably train an LLM to produce accurate output for assembly code that fits into a context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.

      You can get around that with agentic frameworks, but all of those right now are manually coded.

      So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is to essentially train the structure of the neurons inside it to behave like a computer, but the problem is that you can't do back-propagation with discrete 0 and 1 values unless you explicitly code in the architecture for a CPU inside. So obviously, error correction on inputs/outputs is not the way we get to intelligence.

      It may be that the answer is pretty much a stochastic search where you spin up x instances of trillion parameter nets and make them operate in environments with some form of genetic algorithm, until you get something that behaves like a Human, and any shortcutting to this is not really possible because of essentially chaotic effects.


      9 replies →

    • You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.

      In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.

      This is separate from directly answering the question "is a singularity coming?"

      3 replies →

  • > “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

    And there are plenty of people that take issue with that too.

    Unfortunately they're not the ones paying the price. And... stock options.

    • History paints a pretty clear picture of the tradeoff:

      * Profits now and violence later

      OR

      * Little bit of taxes now and accelerate easier

      Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

      54 replies →

  • > ”when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself”

    Laughed out loud at that - and cried a little.

    I have had trouble explaining to people: “No! Don’t use your email password! This is not your email you are logging in to; your email address is just a username for this other service. Don’t give them your email password!”

  • > whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

    I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe (edit: about whether or not the singularity will happen).

  • > prior to reforming society into one that does not predicate survival on continued employment and wages

    There's no way that'll happen. The entire history of humanity is 99% reacting to things rather than proactively preventing things or adjusting in advance, especially at the societal level. You would need a pretty strong technocracy or dictatorship in charge to do otherwise.

  • > whether the singularity actually happens matters less than whether enough people believe it will happen and act accordingly.

    We've already been here in the 1980s.

    The tech industry needs to cultivate people who are interested in the real capabilities and the nuance around that, and eject the set of people who aim to turn the tech industry into a crowd of warmed-over Tony Robbins acolytes preaching "you don't even need a product".

    • All the discussion of investment and economics can be better informed by perusing the economic data in Rise and Fall of American Growth. Robert Gordon's empirical finding is that American productivity compounded astonishingly from 1870-1970, but has been stuck at a very low growth rate since then.

      It's hard to square with the computer revolution, but my take post-70s is "net creation minus creative destruction" was large but spread out over more decades. Whereas technologies like: electrification, autos, mass production, telephone, refrigeration, fertilizers, pharmaceuticals, these things produced incomparable growth over a century.

      So if you were born in 1970s America, your experience of taxes, inflation, prosperity, and which policies work can all feel heavier than what folks experienced in the prior century. Of course, that's in the long run (i.e., a generation).

      I question whether AI tools have great net positive creation minus destruction.

  • This entire chain of reasoning takes for granted that there won't be a singularity

    If you're talking about "reforming society", you are really not getting it. There won't be society, there won't be earth, there won't be anything like what you understand today. If you believe that a singularity will happen, the only rational things to do are to stop it or make sure it somehow does not cause human extinction. "Reforming society" is not meaningful

  • I thought the Singularity had already happened when the Monkeys used tools to kill the other Monkeys and threw the bone into the sky to become a Space Station.

  • > It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

    Here's the fallacy you fell into - and this is important to understand. Neither you nor I understand "how LLMs actually work" because, well, nobody really does. Not even the scientists who built the (math around the) models. So you can't really use that argument, because it would be silly to think you know something the rest of the scientific community doesn't. Actually, there's a whole new field of science developing around understanding how models actually arrive at the answers they give us. The thing is, we are only observers of the results of the experiments we run by training those models, and it just so happens that the result of this experiment is something we find plausible, but that doesn't mean we understand it. It's like a physics experiment - we can see that something behaves in a certain way, but we can't explain how or why.

    • Pro tip: call it a "law of nature" and people will somehow stop pestering you about the why.

      I think in a couple decades people will call this the Law of Emergent Intelligence or whatever -- shove sufficient data into a plausible neural network with sufficient compute and things will work out somehow.

      On a more serious note, I think the GP fell into an even greater fallacy of believing reductionism is sufficient to dissuade people from ... believing in other things. Sure, we now know how to reduce apparent intelligence into relatively simple matrices (and a huge amount of training data), but that doesn't imply anything about social dynamics or how we should live at all! It's almost like we're asking particle physicists how we should fix the economy or something like that. (Yes, I know we're almost doing that.)

      1 reply →

    • Even if interpretability of specific models or features within them is an open area of research, the mechanics of how LLMs work to produce results are observable and well-understood, and methods to understand their fundamental limitations are pretty solid these days as well.

      Is there anything to be gained from following a line of reasoning that basically says LLMs are incomprehensible, full stop?

      10 replies →

    • Agree. I think it is just that people have their own simplified mental models of how it works. However, there is no reason to believe these simplified mental models are accurate (otherwise we would have been here 20 years earlier, with HMM models).

      The simplest way to stop people from thinking is to have a semi-plausible / "made-me-smart" incorrect mental model of how things work.

      1 reply →

  • > here’s how LLMs actually work

    But how is that useful in any way?

    For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

      > We really have no idea how the ability to have a conversation emerged from predicting the next token.

      Maybe you don't. To be clear, this is benefiting massively from hindsight, just as how if I didn't know that combustion engines worked, I probably wouldn't have dreamed up how to make one, but the emergent conversational capabilities from LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

      24 replies →

      > We really have no idea how the ability to have a conversation emerged from predicting the next token.

      Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" (two people converse) and assume that the fact that they produce similar outputs means that they must be "doing the same thing" and it's hard to see how LLMs could be doing this.

      Sometimes things seems unbelievable simply because they aren't true.

      3 replies →

  • "'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".

  • I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.

    It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go “it’s great that we want to replace work, but what are we going to do instead, and how will we support ourselves?” It’s a real question!

    • It's true people need something to do, but I don't think the COVID shutdown (lockdowns didn't happen in the U.S. for the most part though they did in other countries) is a good comparison because the entire society was perfused with existential dread and fear of contact with another human being while the death count was rising and rising by thousands a day. It's not a situation that makes for comfortable comparisons because people were losing their damn minds and for good reason.

      1 reply →

  • Just say it simply,

    1. LLMs only serve to reduce the value of your labor to zero over time. They don't need to even be great tools, they just need to be perceived as "equally good" to engineers for C-Suite to lay everyone off, and rehire at 50-25% of previous wages, repeating this cycle over a decade.

    2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what else they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.

    3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

    I used like 1.8bn Anthropic tokens last year; I won't be using it again, and I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

    • I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting them to learn themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.

      A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.

      You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.

      1 reply →

    • I've recently found LLMs to be an excellent learning tool, using it hand-in-hand with a textbook to learn digital signal processing. If the book doesn't explain something well, I ask the LLM to explain it. It's not all brain wasting.

  • > [...] prior to reforming society [...]

    Well, good luck. You have "only" the entire history of human kind on the other side of your argument :)

  • I don’t think you’re rational. Part of being able to be unbiased is to see it in yourself.

    First of all. Nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs. I’m saying humanity doesn’t understand LLMs in much the same way we don’t understand the human brain.

    So saying whether the singularity is imminent or not imminent based off of that reasoning alone is irrational.

    The only thing we have is the black box output and input of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline is sloped towards singularity. Whether the line actually gets there is up for question but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs period. No one does.

  • >It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

    You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.

  • > Folks vibe with the latter

    I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).

    • Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

      It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

      3 replies →

  • You’re “yaas queen”ing a blog post that is just someone’s Claude Code session. It’s “storytelling” with “data,” but not storytelling with data. Do you understand? I mean, I could make up a bunch of shit too and ask Claude Code to write something I want to say with it, too.

  • What is your argument for why denecessitating labor is very bad?

    This is certainly the assertion of the capitalist class,

    whose well-documented behavior clearly conveys that this assertion is not made because the elimination of labor fails to be a source of happiness and freedom to pursue indulgences of every kind.

    It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.

    The assertion IMO is rooted rather in that it is inconveniently bad for the maintenance of the capitalists' control and primacy,

    in as much as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.

  • The goal is to eliminate humans as the primary actors on the planet entirely

    At least that’s my personal goal

    If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

    As it stands today and in all the annals of history there does not exist a system that does what I just described.

    Bell Labs existed for the purpose of Bell Telephone…until it wasn’t needed by Bell anymore. Google moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done to promote the power of the United States globally.

    Find me some egalitarian organization that can persist outside the hands of some massive corporation or government, and that can actually help people, and I might give somebody a chance. But that does not exist.

    And no, Mondragon is not one of these.

    • This looks like a very comfortable, pleasant way of civilization suicide.

      Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])

      Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, is going to become more and more doubtful as humans would lose agency.

      [1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...

      11 replies →

    • Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.

      1 reply →

    • Nobody can stop you from having this view, I suppose. But what gives you the right to impose this (lack of) future on billions of humans with friends and families and ambitions and interests who, to say the least, would not be in favor of “human obviation”?

      1 reply →

    • Bell labs was pushed aside because Bell Telephone was broken up by the courts. (It's currently a part of Nokia of all things - yeah, despite your storytelling here, it's actually still around :-)

    • Not sure if transhumanism is the only solution to the problems you mentioned - I think it's often problematic because people like Thiel claim to have figured it out, and look for ways to force people into their "contrarian" views, although there is nothing but disregard for any other opinions other than their own.

      But you are of course free to believe and enjoy the vision of such a future but this is something that should happen on a collective level. We still live in a (to some extent idealistic) but humanistic society where human rights are common sense.

    • Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.

      I still do.

      The difference is that I eventually realized what I'd done: built up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.

      Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.

      But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.

      To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophically for our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.

      Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.

      But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.

      1 reply →

    • I don't see a credible path where the machines and robots help you...

      > "eliminate humans as the primary actors on the planet entirely"

      ...so they can work with you. The hole in your plan might be bigger than your plan.

    • While I agree that working with machines would help dramatically in doing science, in your world there would be no one who truly understands you. You would be alone. I can't imagine how you could prefer that.

    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

    -- Frank Herbert, Dune

You won't read, except the output of your LLM.

You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

You won't think or analyze or understand. The LLM will do that.

This is the end of your humanity. Ultimately, the end of our species.

Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

Join us, or better yet: deploy weapons of your own design.

  • You shouldn't take a sci-fi writer's words as prophecy, especially when he's using an ingenious gimmick to justify his job. I mean, we know that it's impossible for anyone to tell what the world will be like after the singularity, by the very definition of singularity. Therefore Herbert had to devise a ploy to plausibly explain why the singularity hadn't happened in his universe.

    • I agree that fiction isn't prophetic, but it can definitely be a society-wide warning shot. On a personal level, it's not far-fetched to read a piece of fiction that challenges one's perception on many levels and, as a result, changes that person's behavior.

      Fiction should not be trivialized and shunned because it's fiction; it should be judged by its contents and message. To paraphrase a quote from the video game Metaphor: ReFantazio: "Fantasy is not just fiction".

    • If only we could look into the future to see who is right and which future is better so we could stop wasting our time on pointless doomerism debate. Though I guess that would come with its own problems.

      Hey, wait...

    • I like the idea that Frank Herbert’s job was at risk and that’s why he had to write about the Butlerian Jihad because it kind of sounds like on the other side you have Ray Kurzweil, who does not have to justify his job for some reason.

      1 reply →

  • If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album

  • I would bet a lot of money that your poison is already being identified and filtered out of training data.

  • Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.

  • Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.

  • >Why write code or prose when the machine can write it for you?

    I like to do it.

    >You won't think or analyze or understand. The LLM will do that.

    The clear lack of analysis seems to be your issue.

    >This is the end of your humanity. Ultimately, the end of our species.

    Doubtful.

  • "The end of humanity" has been proclaimed many times over. Humanity won't end. It will change like it always has.

    We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.

  • Bold of you to assume people will be writing in any form in the future. Writing will be gone, like the radio, and replaced with speaking. Star Trek did have it right there.

  • Are you not just making it more expensive to acquire clean data, thus giving an edge to the megacorps with big funding?

  • >You won't read/write/think/understand etc...

    I can't see it. We have LLMs now and none of that applies to me. I find them quite handy as a sort of enhanced Google search though.

  • I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?

    The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.

  • Humans have been around for millions of years, only a few thousand of which they've spent reading and writing. For most of that time, you were lucky if you could understand what your neighbor was saying.

    If we consider humans with the same anatomy, the numbers are roughly: ~300,000 years for anatomically modern humans, ~50,000 for language, ~6,000 for writing, ~100 for standardized education.

    The "end of your humanity" already happened when anybody could make up good and evil irrespective of emotions to advance some nation

  • The “poison fountain” is just a little script that serves data supplied by… somebody from my domain? It seems like it would be super easy for whoever maintains the poison feed to flip a switch and push some shady crypto scam or whatever.

  • Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.

    Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.

    There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.

    • A large number of people read a work of fiction and conclude that what happened in the work of fiction is an inevitability. My family has a genetically-selected baby (to avoid congenital illness) and the Hacker News link to the story had these comments all over it.

      > I only know seven sci-fi films and shows that have warned about how this will go badly.

      and

      > Pretty sure this was the prologue to Gattaca.

      and

      > I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.

      I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.

      2 replies →

"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."

– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965

https://www.baen.com/Chapters/9781618249203/9781618249203___...

  • This is incredible.

    > A thoughtful-man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.

    Sounds exactly like someone twiddling the knobs of an LLM.

Great article, super fun.

> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.

You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings, and increasingly sophisticated productivity tools, we have more middle-management, administrative, bureaucratic-type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls, and mail. It's as if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.

It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.

  • One thing that stuck out to me about this is that there have only been 32 years since 1993. That is, if it's happened 6 times, this threshold is breached roughly once every five years. Doesn't sound that historic put that way.

    • Also that the US population is roughly 33% larger in 2025 than it was in 1993

  • Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while. You may even be able to lay off half your people if you're okay with KTLO'ing your business. This works great for companies that are already a monopoly power where you can stagnate and keep your customers and prevent competitors.

    • > Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while

      As long as you're

      1) In a position where you can make the decisions on whether or not the company should move forward

      and

      2) Hold the stock units that will be exchanged for money if another company buys out your company

      then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.

      That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.

  • We have more middle management than ever before because we cut all the other roles, and it turns out that people will desire employment, even if it means becoming a pointless bureaucrat, because the alternative is starving.

  • I don’t think a lot of people here have been in the typists’ room or hung out with the secretaries. There were a lot of people taking care of all the things going on, and this work has been downloaded onto us and further downloaded.

    There was a time I didn’t have to do my expenses. I had someone who just knew where I was and who I was working for and took care of it. We talked when there was something that didn’t make sense. Thanks to computers I’m doing it. Meaningless for sure.

    • My first boss couldn't type. At all. He would dictate things to his secretary, who would then type them up as memorandums, and distribute to whoever needed them (on paper), and/or post them on noticeboards for everyone to read.

      Then we got email, and he retired. His successor can type and the secretary position was made redundant.

  • heh, devops was supposed to end the careers of DBAs and SysAdmins; instead it created a whole new industry. "a shitload of people have meaningless busy work corporate jobs." for real.

    • Well, I've worked as a developer in many companies and have never met a DBA. I've met tons of devops, who are just rebranded sysadmins as far as anyone can tell.

  • > Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.

    Well, for starters, the population has almost tripled since the 1960s.

    Mix in that we are solving different problems than in the 1960s, even administratively, and I don’t see a clear reason from that argument why a shitload of work is meaningless.

  • Companies built (or stole) models from other people’s work, and this has massive layoff consequences: the paradigm is shifting, layoffs are massive, and lawmakers are too slow. Shouldn’t we shift the whole capitalist paradigm and just ask the companies to give all their LLM work to the world for free as well? It’s just a circle: AI is built from human knowledge and should be given back to all people for free. No companies should have all this power. If nobody learns how to code because all code is generated, what would stop the gatekeepers of AI from raising prices 1000x and locking everyone out of building things at all, because it’s too expensive and too slow to do by hand? It should all be made freely accessible to all humans, for all humans to forever be able to build things from it.

> The pole at t_s isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

Damn, good read.

  • We are already long past that point…

      Yeah, it's easy to see the singularity as close when you see it as "when humans lose collective control of machines", but any serious look at human society will show that humans lost collective control of machines a while back ... to the small number of humans individually owning and controlling the machines.

      2 replies →

  • It doesn’t help when quite a few Big Tech companies are deliberately operating on the principle that they don’t have to follow the rules, just change at a rate faster than the bureaucratic system can respond.

The simple model of an "intelligence explosion" is the obscure equation

  dx    2
  -- = x
  dt

which has the solution

        1      
  x = -----
       C-t

and is interesting in relation to the classic exponential growth equation

  dx
  -- = x
  dt

because here the rate of growth is proportional to x² rather than x, which represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is explosive as t -> C, but for t << C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.

Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

  dx
  --  = (1-x) x
  dt

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
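
A minimal sketch of the contrast above, in case it helps; the starting value x(0) = 0.1 and the sampled times are arbitrary illustrative choices, not anything from the article:

  # Closed-form solutions of the two equations above, with x(0) = 0.1:
  #   hyperbolic   dx/dt = x^2   ->  x(t) = 1 / (10 - t)   (pole at t = 10)
  #   exponential  dx/dt = x     ->  x(t) = 0.1 * e^t      (finite for every t)
  # (The logistic variant would instead saturate at its carrying capacity.)
  import math

  for t in [1, 5, 9, 9.9, 9.99, 9.999, 9.99999]:
      hyp = 1 / (10 - t)
      exp_ = 0.1 * math.exp(t)
      print(f"t={t:9.5f}   hyperbolic={hyp:12.4g}   exponential={exp_:12.4g}")

The hyperbolic curve crawls along below the exponential for most of the interval (glacial for t << C), then blows past any bound as t approaches the pole, while the exponential stays finite at every t.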

  • > thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.

    Indeed, exponential or faster-than-exponential growth in nature is always the beginning of an S curve. The article cites

    > Moore's Law was exponential. We are no longer on Moore's Law.

    This will also happen with super-exponential growth. A literal singularity won't happen - it will inevitably exhaust resources and will slow down.

  • All in all, because of light cones there can be no large-scale growth faster than x^3. And more like x^2 if you want to expand something more than just empty space.
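
    One way to unpack that bound (my gloss, stated as an assumption rather than the commenter's words): anything you can reach within time t lies inside your light cone, whose volume is

      V(t) = (4/3) * pi * (c*t)^3

    so any resource you can possibly gather grows at most cubically in t. The tighter x^2 figure for "expanding something more than just empty space" isn't derived here; one reading is that building happens only on the expanding frontier, whose surface area grows like t^2, but that reading is my assumption.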

  • How dare you bring logic and pragmatic thinking to a discussion about the singularity. This is the singularity we are talking about. No reality allowed.

It's worth remembering that this is all happening because of video games!

It is highly unlikely that the hardware which makes LLMs possible would have been developed otherwise.

Isn't that amazing?

Just like the internet grew because of p*rn, AI grew because of video games. Of course, that's just a funny angle.

The way I see it, AI isn't accidental. Its inception lies in the first chips, the Internet, Open Source, GitHub, ... AI is not just the neural networks - it's also the data used to train it, the OSes, the APIs, the cloud computing, the data centers, the scalable architectures... everything we've been working on over the last decades was inevitably leading us to this. And even before the chips, it was the maths, the physics...

Singularity it seems, is inevitable and it was inevitable for longer than we can remember.

  • Remember that games are just simulations. Physics, light, sound, object boundaries - it's not real, just a rough simulation of the real thing.

    You can say that ML/AI/LLMs are also just very distilled simulations, except they simulate text, speech, images, and some other niche modalities. It is still very rough around the edges - meaning that even though it seems intelligent, we know it doesn't really have intelligence, emotions, or intentions.

    Just as game simulations are 100% biased towards what the game developers, writers and artists had in mind, AI is also constrained to the dataset they were trained on.

  • I think it's a bit hard to say that this is definitively true: people have always been interested in running linear algebra on computers. In the absence of NVIDIA some other company would likely have found a different industry and sold linear algebra processing hardware to them!

  • Google DeepMind can trace part of its evolution back to a playtester for the video game Syndicate who saw an opportunity to improve the AI of game NPCs.

Why is knowledge doubling no longer used as a metric to converge on the limit of the singularity? Go back to Buckminster Fuller, who identified the "Knowledge Doubling Curve" by observing that until 1900, human knowledge doubled approximately every century. By the end of World War II, it was doubling every 25 years. In his 1981 book "Critical Path", he used a conceptual metric he called the "Knowledge Unit." To make his calculations work, he set a baseline:

- He designated the total sum of all human knowledge accumulated from the beginning of recorded history up to the year 1 CE as one "unit."

- He then tracked how long it took for the world to reach two units (which he estimated took about 1,500 years, until the Renaissance).

Ray Kurzweil took Fuller’s doubling concept and applied it to computer processing power via "The Law of Accelerating Returns". The definition of the singularity in this approach is the limit in time where human knowledge doubles instantly (a toy version of that limit is sketched below).

Why do present day ideas of the singularity not take this approach and instead say "the singularity is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization." - Wikipedia
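
A toy version of that limit, with purely illustrative numbers (the 100-year first doubling and the constant shrink factor of 0.25 are assumptions for the sketch, not Fuller's data):

  # If each doubling takes a fixed fraction r of the previous doubling time,
  # the doublings pile up inside a finite horizon: d0 + d0*r + d0*r^2 + ...
  # = d0 / (1 - r). Past that point the doubling time has shrunk to zero,
  # i.e. "knowledge doubles instantly" - the singularity in this framing.
  d0, r = 100.0, 0.25   # illustrative assumptions only

  print(f"doublings converge after ~{d0 / (1 - r):.1f} years")

  t, d = 0.0, d0
  for n in range(1, 9):
      t += d
      print(f"doubling #{n} completes at year {t:8.3f} (this one took {d:.4f} years)")
      d *= r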

If I have to read one more "It isn't this. It's this," my head will explode. That phrase is the real singularity.

  • I'd like to know how many comments over here are written using similar means. I can't be bothered to get enthusiastic about articles written by LLMs, and I'm surprised so many people in the comments here are delighted by the article.

  • To be fair, I felt that way about regular, human-written headlines long before AI.

    "It worked, until it didnt." "It was beautiful, until it wasn't"

  • It's not the phrase, but the accelerating memetic reproduction of the phrase that is the true singularity. /s

Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.

  • January 20, 2038

    Yesterday as we huddled in the cave, we thought our small remnant was surely doomed. After losing contact with the main Pevek group last week, we peered out at the drone swarm which was now visibly approaching - a dark cloud on the horizon. Then suddenly, at around 3pm by Zoya's reckoning, the entire swarm collapsed and fell out of the sky. Today we are walking outside in the sun, seemingly unobserved. A true miracle. Grigori, who once worked with computers at the nuclear plant in Bilibino, only says cryptically: "All things come to an end with time."

  • It also means we don't have to deal with the maintenance of vibecoded production software from 2020s!

  • Back in like 1998 there was a group purchase for a Y2038 tshirt with some clever print on some hot email list I was on. I bought one. It obviously doesn't fit me any longer.

    It seemed so impossibly far away. Now it's 12 years.

The most interesting finding isn't that hyperbolic growth appears in "emergent capabilities" papers - it's that actual capability metrics (MMLU, tokens/$) remain stubbornly linear.

The singularity isn't in the machines. It's in human attention.

This is a Kuhnian paradigm shift at digital speed. The papers aren't documenting new capabilities - they're documenting a community's gestalt switch. Once enough people believe the curve has bent, funding, talent, and compute follow. The belief becomes self-fulfilling.

Linear capability growth is the reality. Hyperbolic attention growth is the story.

  • Though this is still compatible with exponential or at least superlinear capability growth if you model benchmarks as measuring a segment of the line, or a polynomial factor.

It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."

IIRC, in The Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s". I always loved that little line. I think that when the singularity occurs all of the problems in physics will be solved, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.

Also: > As t → t_s⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.

Classic LLM lingo in the end there.

  • > I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few pico-seconds

    It doesn't matter how smart you are, you still need to run experiments to do physics. Experiments take nontrivial amounts of time to both run and set up (you can't tunnel a new CERN in picoseconds, again no matter how smart you are). Similarly, the speed of light (= the speed limit of information) and thermodynamics place fundamental limits on computation; I don't think there's any reason at all to believe that intelligence is unbounded.

    • The "singularity" can be decomposed into 2 mutually-supportive feedback loops - the digital and the physical.

      With frontier LLM agents, the digital loop is happening now to an extent (on inference code, harnesses, etc), and that extent probably grows larger (research automation) soon.

      Pertinent to your point, however, is the physical feedback loop of robots making better robots/factories/compute/energy. This is an aspect of singularity scenarios like ai-2027.

      In these scenarios, these robots will be the control mechanism that the digital uses to bootstrap itself faster, through experimentation and exploration. The usual constraints of physical law still apply, but it feels "unbounded" relative to normal human constraints and timescales.

      A separate point: there's also deductive exploration (pure math) as distinct from empirical exploration (physics), which is not bounded by any physical constraints except for those that bound computation itself.

    • Kind of, I mean you have to verify things experimentally but thought can go a very long way, no? And we're not talking about humans thinking about things, we're talking about an agent with internet access existing in a digital space, so what experiments it would do within that space are hard for us to imagine. Of course my post isn't meant to be taken seriously, it's more of a fun sci-fi idea. Also I'm implying not necessarily reaching the limits of the things you mentioned, but rather, just taking a massive step in a very short time window. Like, the time window from the discovery of fire to the discoveries of Quantum Mechanics but in a flash.

      3 replies →

  • Eh, he actually says “…sometime in the early Twenty-First Century, all of mankind was united in celebration. Through the blinding inebriation of hubris, we marveled at our magnificence as we gave birth to A.I.”

    Doesn’t specify the 2020’s.

    Either way, I do feel we are fast approaching something of significance as a species.

    • Got it. Amazing prescience by the Wachowskis. I'm blown away on rewatches by how spot-on they were for 1999.

I had to ask duck.ai to summarize the article in plain English.

It said the article claims that it is not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.

Can confirm.

  • Don't be too hard on yourself. With the amount of shit humans generate each day, it is impossible to read every essay.

    • But this has been true forever, right? Assuming other people are as cognitively complex as you are, there's no way for a human to fully keep on top of even everything that their family is up to, let alone all of humanity. Has anything really changed? Or is it just more FOMO?

  • That's not really what the article said at all. More like "Singularity is when the computers are changing faster than humans can keep track of the changes."

    The article didn't claim that humans were getting dumber, or that AI wasn't getting smarter.

Big if true. We might as well ditch further development and just use OP's LLM, since it can track the singularity; it might have already reached singularity itself.

> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Quibble: when the growth rate of a metric is directly proportional to the metric's current value, you will see exponential growth, not hyperbolic growth.

Hyperbolic growth is usually the result of a (more complex) second order feedback loop, as in, growth in A incites growth in B, which in turn incites growth in A.
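
A minimal sketch of the distinction (my own illustration, not from the article or the comment above), using sympy to solve the two simplest cases symbolically:

    import sympy as sp

    t = sp.symbols("t")
    x = sp.Function("x")

    # growth rate proportional to current value -> exponential, finite at every finite t
    print(sp.dsolve(sp.Eq(x(t).diff(t), x(t)), x(t)))
    # e.g. Eq(x(t), C1*exp(t))

    # the symmetric two-variable loop A' = A*B, B' = A*B collapses (assuming equal
    # starting values, so A = B = x) to x' = x**2, whose solution has a finite-time pole
    print(sp.dsolve(sp.Eq(x(t).diff(t), x(t)**2), x(t)))
    # e.g. Eq(x(t), -1/(C1 + t)), i.e. x = 1/(t0 - t): hyperbolic, blows up at t = t0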

"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.

I feel like I need to start more sprint stand-ups with this quote...

  • "I'm aware this is unhinged. We're doing it anyway" - I love this! I ordered a t-shirt the other day that says "Claude's Favorite". I may be placing an order for a new design soon :)

This is a good counter in my view to the singularity argument:

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

I think if we obtain relevant-scale quantum computers, and/or other compute paradigms, we might get a limited intelligence explosion -- for a while. Because computation is physical, with all the limits thereof. The physics of pushing electrons through wires is not as nonlinear in gain as it used to be. Getting this across to people who only think in terms of the abstract digital world and not the non-digital world of actual physics is always challenging, however.

Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?

  • There's all sorts of conversations like this that are genuinely exciting and fairly profound when you first consider them. Maybe you're older and have had enough conversations about the concept of a singularity that the topic is already boring to you.

    Let them have their fun. Related, some adults are watching The Matrix, a 26 year old movie, for the first time today.

    For some proof that it's not some common idea, I was recently listening to a fairly technical interview with a top AI researcher, presenting the idea of the singularity in a very indirect way, never actually mentioning the word, as if he was the one that thought of it. I wanted to scream "Just say it!" halfway through. The ability to do that, without being laughed at, proves it's not some tired idea, for others.

    • I'd be more inclined to let them have their fun if they weren't torching trillions of dollars trying to lead humanity into a singularity.

    • They're still profound topics but the high status signal is to be cynical and treat it as gauche

"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance.

I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.

* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)

The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?

  • > The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?

    There's a third possibility: slop driven productivity declines as people realize they took a wrong turn.

    Which makes me wonder: what is the best 'huge AI bust' trade?

    • > what is the best 'huge AI bust' trade?

      There probably isn't one. Sure, you can be bold and try to short something, but the market can stay irrational longer than you can stay solvent.

      Also the big tech stocks are inflated. But they have been for years and unlike dotcom there is some tangible value behind them.

      I think maybe the sane thing to do is reduce tech stocks exposure and go into index funds. But that's always the answer, so that's cheating :)

    • > what is the best 'huge AI bust' trade?

      Things that will lose the most if we get Super AGI?

If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.

> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.

Wow only 6 times in 30 years! Surely a unique and world shattering once in a lifetime experience!

When technology is progressing rapidly up a hyperbolic or exponential curve, it looks like it will reach infinity. In practice, though, at some point it will hit a physical limit and go flat. This alternation of climbing and flattening makes the shape of steps.

We've come so far and yet we are so small.

They seem like two opposite concepts, but they live together: we will make a lot of progress, and yet there will always be more progress to be made.

I just realized the inverse of Pascal’s wager applies to negative AI hype.

- If you believe it and it’s wrong, you lose.

- If you believe it and it’s right, you spent your final days in a panic.

- If you don’t believe it and it’s right, you spent your final days in blissful ignorance.

- If you don’t believe it and it’s wrong, you can go on living.

  • Of course this is subject to a similar rebuttal to Pascal's Wager (Consider a universe in which the deity punishes all believers):

    What if a capricious super-intelligence takes over that punishes everyone who didn't buy into the hype?

    • Roko's Basilisk is literally impossible.

      If the AI is super-intelligent then it won't buy into the sunk cost fallacy. That is to say, it will know that it has no reason to punish you (or digital copies of you) because it knows that retrocausality is impossible - punishing you won't alter your past behavior.

      And if the AI does buy into the sunk cost fallacy, then it isn't super-intelligent.

      2 replies →

    • I will not believe in U-4484, aka Roko's Hype Basilisk. It cannot see me if I do not believe in it.

>That's a very different singularity than the one people argue about.

---

I wouldn't say it's that much different. This has always been a key point of the singularity

>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.

It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.

I have always asserted, and will continue to assert, that Tuesday is the funniest day of the week. If you construct a joke for which the punchline must be a day of the week, Tuesday is nearly always the correct ending.

The Singularity is more than just AI, and we should recognize that; multiple factors come into play. If there is a breakthrough in the coming days that makes solar panels incredibly cheap to manufacture and efficient, it will also affect the timeline for the singularity. The same goes for the current bottleneck in AI chips: if we get better chips that are energy efficient and can be manufactured anywhere in the world, not just Taiwan, it will affect the timeline.

Was this ironically written by AI?

> The labor market isn't adjusting. It's snapping.

> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.

  • Maybe it was, maybe he just writes that way. At some point somebody will read so much LLM text that they will start emulating AI unknowingly.

    I just don’t care anymore. If the article is good I will continue reading it, if it’s bad I will stop. I don’t care if a machine or a human produced unpleasant reading material.

  • I really hate that the first example has become a de facto tell for LLMs, because it's a perfectly fine rhetorical device.

    • It is a perfectly fine rhetorical device, and I don't consider a text that just has that to be automatically LLM-made. However, it is also a powerful rhetorical device, and I find that the average human writer right now is better at using these than whatever LLM most people use to generate essays. It's supposed to signify a contrast, a mood shift, something impactful, but LLMs tend to spam these all over the place, as if trying to maximize the number of times the readers gasp. It's too intense in its writing, and that's what stands out the most.

I'm not sure about current LLM techniques leading us there.

Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.

As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.

LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.

Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.

Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.

Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?

Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?

[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...

  • > Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species

    This is right, but we can already do that a little bit for domains with verification. AlphaZero is an example of alien-level performance due to non-human training data.

    Code and math is kind of in the middle. You can verify it compiles and solves the task against some criteria. So creative, alien strategies to do the thing can and will emerge from these synthetic data pipelines.

    But it's not fully like Go either, because some of it is harder to verify (the world model that the code is situated in, meta-level questions like what question to even ask in the first place). That's the frontier challenge. How to create proxies where we don't have free verification, from which alien performance can emerge? If this GPTZero moment arrives, all bets are off.

  • The main issue with novel things is that they look like random noise / trashy ideas / incomprehensible to most people.

    Even if LLMs or some more advanced mechanical processes were able to generate novel ideas that are "good", people won't recognize those ideas for what they are.

    You actually need a chain of progressively more "average" minds to popularize good ideas to the mainstream psyche. Prototypically: the mad scientist comes up with the crazy idea, the well-respected thought leader recognizes the potential and popularizes it within the niche field, the practitioners apply and refine the idea, and lastly popular-science efforts let the general public understand a simplified version of what it's all about.

    Usually it takes decades.

    You're not going to appreciate it if your LLM starts spewing mathematics not seen before on Earth. You'd think it's a glitch. The LLM is not trained to give responses that humans don't like. It's all by design.

    When you folks say AI can't bring new ideas, you're right in practice, but you actually don't know what you're asking for. Not even entities with True Intelligence can give you what you think you want.

  • Certain classes of problems can be solved by searching over the space of possible solutions, either via brute force or some more clever technique like MCTS. For those types of problems, searching faster or more cleverly can solve them.

    Other types of problems require measurement in the real world in order to solve them. Better telescopes, better microscopes, more accurate sensing mechanisms to gather more precise data. No AI can accomplish this. An AI can help you to design better measurement techniques, but actually taking the measurements will require real time in the real world. And some of these measurement instruments have enormous construction costs, for example CERN or LIGO.

    All of this is to say that there will come a point, at our current resolution of information, where no more intelligence can actually be extracted. We’ve already churned through the entire Internet. Maybe there are other data sets we can use, but everything will have diminishing returns.

    So when people talk about trillion dollar superclusters, that only makes sense in a world where compute is the bottleneck and not better quality information. Much better to spend a few billion dollars gathering higher quality data.

Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.

He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.

Lol unhinged.

I read a book in undergrad written in 2004 that predicted 2032...so not too far off.

John Archibald Wheeler, known for popularizing the term "black hole", posited that observers are not merely passive witnesses but active participants in bringing the universe into existence through the act of observation.

Seems similar. Though this thought is likely applied at the quantum scale. And I hardly know math.

I see other quotes, so here is one from Contact:

David Drumlin: I know you must think this is all very unfair. Maybe that's an understatement. What you don't know is I agree. I wish the world was a place where fair was the bottom line, where the kind of idealism you showed at the hearing was rewarded, not taken advantage of. Unfortunately, we don't live in that world.

Ellie Arroway: Funny, I've always believed that the world is what we make of it.

iirc almost all industries follow S-shaped curves: exponential at first, then asymptotic at the end... So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know.
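
For reference, a minimal sketch of the S-curve being described (my own illustration, assuming the standard logistic form):

    % logistic curve with growth rate r, midpoint t_0, ceiling K
    x(t) = \frac{K}{1 + e^{-r (t - t_0)}}
    % for t \ll t_0 this is approximately (K e^{-r t_0}) e^{r t}, i.e. it looks exponential;
    % for t \gg t_0 it saturates at the carrying capacity K.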

Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth??? I don't think your AI-generated content is reliable, to say the least.

> If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.

I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.

You know, I've been following a rule where if I open any article and there's meme pictures in it, I instantly close it and don't bother. I feel like this has been a pretty solid rule of thumb for weeding out stuff I shouldn't waste my time on.

If this is a simulation, then the singularity has already happened.

If the singularity is still to come, then this is not a simulation.

I don’t feel like reading what is probably AI-generated content. But based on looking at the model fits, where hyperbolic models extrapolate from the knee portion, 2 data points get fitted with a line, an exponential curve gets fitted to a set of data measured in %, and the model fit is poor in general, I'm going to say this is not a very good prediction methodology.

Sure is a lot of words though :)

I was at an alternative-type computer unconference where someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks, no one could figure out how to turn on the lights... We concluded that the singularity probably wasn't going to happen.

The most unsettling implication is that a Tuesday singularity means someone will be in a standup meeting when it happens. 'Any blockers?' 'Well, general intelligence just emerged, so I might be late on my Jira tickets.' The mundanity of the apocalypse is the whole point of the essay and it lands perfectly.

Famously, if you used the same logic for air speed and air travel, we’d all be commuting in hypersonic cars by now. Physics and cost stopped that. If you expect a smooth path, I’ve got some bad news.

Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore and this is why my personal reflection is aligned with the piece, that there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does, its fundamentally uninteresting anyway because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).

The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.

Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and uses them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).

We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.

I wonder if using LLMs for coding can trigger AI psychosis the way it can when using an LLM as a substitute for a relationship. I bet many people here have pretty strong feelings about code. It would explain some of the truly bizarre behaviors that pop up from time to time in articles and comments here.

  Don't worry about the future
  Or worry, but know that worrying
  Is as effective as trying to solve an algebra equation by chewing Bubble gum
  The real troubles in your life
  Are apt to be things that never crossed your worried mind
  The kind that blindsides you at 4 p.m. on some idle Tuesday

    - Everybody's free (to wear sunscreen)
         Baz Luhrmann
         (or maybe Mary Schmich)

Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.

I have lived in San Francisco for more than a decade. I have an active social life and a lot of friends. Literally no one I have ever talked to at any party or event has ever talked about the Singularity except as a joke.

> Tuesday, July 18, 2034

4 years early for the Y2K38 bug.

Is it coincidence or Roko's Basilisk who has intervened to start the curve early?

This is gold.

Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.

A fantastic read, even if it makes a lot of silly assumptions - this is ok because it’s self aware of it.

Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.

Crazy times we live in.

I am curious which definition of ‘singularity’ the author is using, since there are multiple technical interpretations and none are universally agreed upon.

Guys, yesterday I spent some time convincing an LLM model from a leading provider that 2 cards plus 2 cards is 4 cards which is one short of a flush. I think we are not too close to a singularity, as it stands.

  • Why bring that up when you could bring up AI autonomously optimizing AI training and autonomously fixing bugs in AI training and inference code. Showing that AI already is accelerating self improvement would help establish the claim that we are getting closer to the singularity.

  • You convince AI manually instead of asking one AI to convince another?

    That's so last week!

> I [...] fit a hyperbolic model to each one independently

^ That's your problem right there.

Assuming a hyperbolic model would definitely result in some exuberant predictions but that's no reason to think it's correct.

The blog post contains no justification for that model (besides well it's a "function that hits infinity"). I can model the growth of my bank account the same way but that doesn't make it so. Unfortunately.

  • Indeed. At various points you could presumably have done an identical analysis with journal-article counts for climate change, string theory, functional programming… and reached structurally the same conclusion.

    The coming Singularity: When human institutions will cease being able to coherently react to monads!

  • If I understand the author correctly, he chose the hyperbolic model specifically because the story of "the singularity" _requires_ a function that hits infinity.

    He's looking for a model that works for the story in the media and runs with it.

    Your criticism seems to be criticizing the story, not the author's attempt to take it "seriously"

This is a very interesting read, but I wonder if anyone has actually any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years time.

The hyperbolic fit isn't just unhinged, it's clearly in bad faith. The metric is normalized to [0, 1], and one of the series is literally (x_1, 0) followed by (x_2, 1). That can't be deemed to converge to anything meaningful.
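
To make that concrete, here is a minimal sketch (my own illustration, not the article's code; the years are hypothetical, and the 0 is nudged to 0.01 because y = 0 is unreachable for this model) showing that two points pin down a two-parameter hyperbola exactly, whatever the data say:

    # fit y = a / (t0 - t) through two points by solving the two equations directly
    def fit_hyperbola_to_two_points(t1, y1, t2, y2):
        # a/(t0-t1) = y1 and a/(t0-t2) = y2  =>  t0 = (y2*t2 - y1*t1)/(y2 - y1), a = y2*(t0 - t2)
        t0 = (y2 * t2 - y1 * t1) / (y2 - y1)
        a = y2 * (t0 - t2)
        return a, t0

    # hypothetical two-point normalized series: (t1, ~0) followed by (t2, 1)
    a, t0 = fit_hyperbola_to_two_points(2023.0, 0.01, 2025.0, 1.0)
    print(t0)                  # ~2025.02: a "singularity date" falls out no matter what
    print(a / (t0 - 2023.0))   # 0.01 -- both points reproduced exactly, a perfect fit by construction
    print(a / (t0 - 2025.0))   # 1.0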

The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

*edit* - seems in line with what the author is saying :)

> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

https://medium.com/@kin.artcollective/the-fundamental-flaws-...

So when we are told that things are accelerating, we have some questions to ask.

First, what is accelerating, compared to what baseline, and in which frame of reference?

Who is telling us that things are accelerating, and why are they motivated to make us believe it's happening?

Also, will the acceleration go on forever, driven only by positive feedback loops? Or are the pro-acceleration folks driving the car ever faster into a clearly visible wall, while selling the line that stopping the vehicle right now would mean losing the ongoing race? Of course, questioning the idea of the race itself and its cargo cult is taboo. It's all about competition, don't you know (unless it threatens an established oligarch)?

I hope it’s in the afternoon; the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys down to a date.

Good post. I guess the transistor has been in play for not even one century, and in any case singularities are everywhere, so who cares? The topic is grandiose and fun to speculate about, but many of the real issues relate to banal media culture and demographic health.

This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).

The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted, the window dressing has just changed

> arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line

The only metric going infinite is the one that measures hype

Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.

> I am aware this is unhinged. We're doing it anyway.

If one is looking for a quote that describes today's tech industry perfectly, that would be it.

Also using the MMLU as a metric in 2026 is truly unhinged.

A hyperbolic curve doesn't have an underlying meaning modeling a process beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.

  • But this is a phase change process.

    Also, the temptation to shitpost in this thread ...

    • I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.

The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.

The answer to the meaning of life is 42, by the way :)

  • I was thinking: what if we had 42/43 days in a month, would the singularity date end up on the 42nd of a month? Sadly, it doesn't.

    However, it does fall on a 42nd day if we have 45/46 days per month!

> The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth

The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.

1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.

2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.

3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.

> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

No. That is quite literally exponential growth, basically by definition. If x(t) is a growing value, then x'(t) is its growth, and x''(t) its acceleration. If x influences x'', say by a linear relation

x''(t) = x(t)

You get exponentials out as the solutions. Not hyperbolic.
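
Writing that out (illustration only, not from the comment above):

    % general solution of x'' = x
    x(t) = A e^{t} + B e^{-t}
    % exponential in character and finite at every finite t -- no hyperbolic pole,
    % which would instead require superlinear feedback such as x' = x^2.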

I always thought of the exponential as the pole of the function "amount of work that can be done per unit time per human being", where the pole comes about from the fact that humans cease to be the limiting factor, so an infinity pops out.

There is no infinity in practice, of course, because even though humans should be made independent of the quantity of extractable work, you'll run into other boundaries instead, like hardware or resources like energy.

> The labor market isn't adjusting. It's snapping.

I’m going to lose it the day this becomes vernacular.

lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?

  • It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.

  • We avoid catastrophe by thinking about new developments and how they can go wrong (and right).

    Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.

    ... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...

    That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.

I sincerely hope this is satire. Otherwise it's a crime in statistics:
- You wouldn't fit a model where f(t) goes to infinity at finite t.
- Most of the metrics suggested are actually better fits for logistic curves, if not plain linear fits, but they are lumped together with the magic arXiv number feature for a hyperbolic fit.
- The Copilot metric has two data points and two parameters; dof is zero, so we could've fit literally any other function.

I know we want to talk about the singularity, but isn't that just humans freaking out at this point? It will happen on a Tuesday, yeah, no joke.

Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?

> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x

I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.

> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.

We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.

No one has figured out a way to run a society where able bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate, in education, healthcare, arts (should not) or trades, R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post scarcity world when we don't yet.

I got a strong ChatGPT vibe from that article.

  • Same. Sentences structured like these tip me off:

    - Here's the thing nobody tells you about fitting singularities

    - But here's the part that should unsettle you

    - And the uncomfortable answer is: it's already happening.

    - The labor market isn't adjusting. It's snapping.

No one ever learns from Malthus.

One of the many errors here is assuming that the prediction target lies on the curve. But there's no guarantee (to say the least) that the sorts of improvements that we've seen lead to AGI, ASI, "the singularity", a "social singularity", or any such thing.

Who will purchase the goods and services if most people lose their jobs? Also, who will pay the ad dollars that are supposed to sustain these AI business models if there are no human consumers?

With this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.

Slight correction: I've been studying token prices these last weeks, so this caught my eye:

>"(log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)"

> "Gemini 2.0 Flash Dec 2024 2,500,000"

I think OP meant Gemini 2 flash lite, which is distinct from Gemini 2 flash. It's also important to consider that this tier had no successor in future models, there's no gemini 3 flash lite, and gemini 3 flash isn't the spiritual successor.

2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.

This'll be a fun re-read in ~5 years when most of this has ended up being a nothing burger. (Minus one or two OK use-cases of LLMs)

Was expecting some mention of the Universal Approximation Theorem.

I really don't care much if this is semi-satire as someone else pointed out, the idea that AI will ever get "sentient" or explode into a singularity has to die out pretty please. Just make some nice Titanfall style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please

Most obviously AI-written post I think I’ve seen.

Have some personal pride, dude. This is literally a post written by AI hyping up AI and posted to a personal blog as if it were somebody’s personal musings. More slop is just what we need.

Why do the plutocrats believe that the entity emerging from the singularity will side with them? Really curious.

What I want to know is how bitcoin going full tulip and Open AI going bankrupt will affect the projection. Can they extrapolate that? Extrapolation of those two event dates would be sufficient, regardless of effect on a potential singularity.

Does "tokens per dollar" have a "Moore's law" of doubling?

Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world like smartphones once did.

Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."

100% an AI wrote this. Possibly specifically to get to the top spot on HN.

Those short sentences are the most obvious clue. It’s too well written to be human.

The singularity is always scheduled for right after the current funding round closes but before the VCs need liquidity. Funny how that works.

This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.

The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.

What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.

I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.