Comment by Der_Einzige

13 days ago

The fact that people can see the singularity basically happening, yet can't imagine humanoid robots getting good rapidly, is why most people here are bad futurists.

The fact that people see "the singularity happening" based on LLM results is why most people are the kind of ignorant tech cheerleaders who, in 1950, predicted robot servants, flying cars, and space colonies by 2000.

  • This feels different. In the 1950s, rapid technological progress had been driven by the pressures of the Second World War and produced amazing things that held a lot of promise, but few appreciated the depth of complexity of what lay before them. A lot of that complexity had to be solved with software, which expanded the problem set rather than solving it. If we now have a general solution to the problem of software, we don't know of any other barrier that would slow progress as much.

  • Tomy made the Dustbot robot vacuum in 1985, Electrolux made the Trilobite robot vacuum in 1996, and then there's everything else: washing machines, dishwashers, tumble-dryers, microwaves, microwave meals, disposable diapers, fast fashion, plug-in vacuums, floor steamers, carpet washers, home automation for lights and curtains, central heating instead of coal/wood fires and ash buckets, fridge-freezers and supermarkets (removing the need for canning, pickling, jamming, and preserving), takeaways and food delivery, and people having 1-2 children instead of 6-12. The amount of human labour in housework has plummeted since 1900.

    Plenty of flying cars existed through the 1900s, including commercial ones: https://en.wikipedia.org/wiki/Flying_car

    The International Space Station was launched in 1998.

> the singularity is happening

[Citation needed]

No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.

LLMs aren't AGI, and aren't a path to AGI.

The Singularity is the Rapture for techbros.

  • You are right that "LLMs improving themselves" is impossible. In fact, LLMs make themselves worse; it is called knowledge collapse [1].

    [1] https://arxiv.org/abs/2404.03502
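
    The failure mode is easy to reproduce in a toy setting. A minimal sketch (this illustrates the recursive-training "model collapse" effect, not the cited paper's actual experiment; the unigram model and all constants are made up for the demo): retrain a word-frequency model each generation on text sampled from the previous generation's model, and rare words drop out and can never come back.

        # Toy sketch of recursive-training collapse: a unigram "language
        # model" retrained each generation on its own sampled output.
        # Once a word draws zero samples, its probability becomes zero
        # forever, so vocabulary diversity only ratchets downward.
        import random
        from collections import Counter

        random.seed(0)
        vocab = [f"w{i}" for i in range(200)]
        # Zipf-ish starting distribution over the vocabulary.
        weights = [1.0 / (rank + 1) for rank in range(len(vocab))]

        for generation in range(10):
            corpus = random.choices(vocab, weights=weights, k=2000)
            counts = Counter(corpus)
            print(f"gen {generation}: {len(counts)} distinct words survive")
            # "Retrain": next generation's probabilities are these frequencies.
            weights = [counts.get(w, 0) for w in vocab]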

    • That paper again.

      LLMs have been trained on synthetic outputs for quite a while since that paper came out, and they do get better.

      Turns out there's more to it than that.

  • LLMs aren't AGI and maybe aren't a path to AGI, but step back and look at the way the world is changing. Hard disks were invented by IBM in 1956, and less than seventy years later an estimated million terabytes of hard disks are made and sold every year, with the cumulative capacity ever sold climbing through mega, giga, tera, peta, and exa to something like 1.36 zettabytes.

    In 2000, webcams were barely a thing and audio was often recorded to dictaphone tapes; now you can find a photo or video recording of roughly anyone and anything on Earth. Maybe a tenth of all humans; almost any place, animal, insect, or natural event; almost any machine, mechanism, invention, or painting; a large sampling of "indoors", both public and private; almost any festival, event, or tradition; and a very large sampling of people doing things and teaching things across all kinds of skills. Plus tons of measurements of locations, temperatures, movements, weather, experiment results, and so on.

    The ability of computers to process information jumped with punched card readers, with electronic computers in the 1950s, again with transistorized machines in the 1960s, microprocessors in the 1970s, mass-market PCs in the 1980s, commodity computer clusters (Google) in the 1990s, maybe again with multi-core desktops for everyone in the 2000s, with general-purpose GPUs in the 2010s, with faster commodity networking from 10Mbit to 100Gbit and beyond, and with storage moving from RAID to SATA and SAS to SSDs.

    It's now completely normal to check Google Maps to look at road traffic and how busy stores are (picked up in near realtime from the movement of smartphones around the planet), to do face and object recognition and search in photos, to do realtime face editing/enhancement while recording on a smartphone, to track increasing amounts of exercise and health data from increasing numbers of people, to call and speak to people across the planet and have your voice transcribed automatically to text, to realtime face-swap or face-enhance on a mobile chip, to download gigabytes of compressed Wikipedia onto a laptop and play with it in a weekend in Python just for fun.
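
    That last one is less exotic than it sounds. A minimal sketch of the idea, standard library only (the filename is an assumption: substitute whichever pages-articles dump you actually downloaded from dumps.wikimedia.org):

        # Stream page titles straight out of a compressed Wikipedia dump,
        # with no decompression to disk. Stop after 20 titles for the demo.
        import bz2
        import xml.etree.ElementTree as ET

        printed = 0
        with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
            for _, elem in ET.iterparse(f):
                tag = elem.tag.rsplit("}", 1)[-1]  # strip XML namespace
                if tag == "title" and elem.text:
                    print(elem.text)
                    printed += 1
                    if printed >= 20:
                        break
                if tag == "page":
                    elem.clear()  # free finished pages; memory stays flat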

    "AI" stuff (LLMs, neural networks and other techniques, PyTorch, TensorFlow, cloud GPUs and TPUs), the increase in research money, in companies competing to hire the best researchers, the increase in tutorials and numbers of people around the world wanting to play with it and being able to do that ... do you predict that by 2030, 2035, 2040, 2045, 2050 ... 2100, we'll have manufactured more compute power and storage than has ever been made, several times over, and made it more and more accessible to more people, and nothing will change, nothing interesting or new will have been found deliberately or stumbled upon accidentally, nothing new will have been understood about human brains, biology, or cognition, no new insights or products or modelling or AI techniques developed or become normal, no once-in-a-lifetime geniuses having any flashes of insight?

    • I mean, what you're describing is technological advancement. It's great! I'm fully in favor of it, and I fully believe in it.

      It's not the singularity.

      The singularity is a specific belief: that we will achieve AGI, that the AGI will then self-improve at an exponential rate, becoming far more advanced and powerful than we could ever have made it, and that it will then invent loads of new technologies and usher in a golden age. (Either for itself or for us; that part's a bit under contention, from my understanding.)

      4 replies →

  • If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.

    Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...

    This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.

    • If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.

      2 replies →

    • > If you look at the rapid acceleration of progress

      I don’t understand this perspective. There are numerous examples of technological progress that then stalls out. Look at batteries, for example, or advancements too expensive for widespread use (e.g. why no one flies Concorde any more).

      Why is previous progress a guaranteed indicator of future progress?

      1 reply →

    • What rapid acceleration?

      I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.

      The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.

      I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most very reasonable definitions of AGI.

      If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.

      2 replies →

    • >If you look at the rapid acceleration of progress and conclude this way

      There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.

      Take away the financial bubbles built on deregulation and a huge explosion of debt, and in actual advancement terms the last 40 years of "economic progress" are just a mirage, a huge bubble filled with air - unlike the previous millennia.

      7 replies →

    • > rapid acceleration

      Who was it who stated that every exponential was just a sigmoid in disguise?

      > most humans are really stupid.

      Statistically, don't we all sort of fit somewhere along a bell curve?
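
      The sigmoid line is easy to check numerically: a logistic curve is nearly indistinguishable from a pure exponential until you approach its inflection point, so early data alone can't tell you which curve you're riding. A minimal sketch (the constants are arbitrary):

          # Compare a logistic curve with the exponential it mimics early on.
          import math

          L, k, t0 = 100.0, 1.0, 10.0  # ceiling, growth rate, midpoint

          def logistic(t):
              return L / (1.0 + math.exp(-k * (t - t0)))

          def exponential(t):
              # What the logistic looks like for t well below t0.
              return L * math.exp(k * (t - t0))

          for t in range(0, 21, 4):
              gap = abs(logistic(t) - exponential(t)) / logistic(t)
              print(f"t={t:2d}  logistic={logistic(t):9.3f}  "
                    f"exponential={exponential(t):11.3f}  gap={gap:.1%}")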

      1 reply →

    • Yes, and that's why surpassing that low bar doesn't lead to a singularity except over an infinite timeframe. This whole thing was stupid in the first place.

  • In my opinion, LLMs provide one piece of AGI. The only intelligence I’ve directly experienced is my own. I don’t consciously plan what I’m saying (or writing right now).

    Instead, a subconscious process assembles the words to support my stream of consciousness. I think that LLMs are very similar, if not identical.

    In an LLM, that stream-of-thought process accomplishes something superficially similar to consciousness, but without the ability to be innovative.

    At any rate, until there’s an artificial human level stream of consciousness in the mix for each AI, I doubt we’ll see a group of AIs collaborating to produce a significantly improved new generation of AI hardware and software minus human involvement.

    Once that does happen, the Singularity is at hand.

    • So, I was downvoted twice, yet neither of you doubtless brilliant individuals bothered to refute my points.

      I’ll emphasize my main point further, LLMs have no ability to innovate much beyond the vast data they’ve scraped. They’re almost entirely derivative.

      They are a definite improvement over traditional search engines though!

      1 reply →

You’re delusional if you think the singularity is happening.

  • That's like saying "you're delusional if you think we're affected by the Sun's gravity when it's a hundred million miles away".

    A hundred million years ago, every day on Earth was much like every other day, and you could count on that. As you sweep forwards in time you cross things like language, cooperation, villages, and control of fire, and the before/after effects are distinctly different. The nearer you get to the present, the more of those changes happen and the closer together they happen, like ripples on a pond getting closer to the splash point, or like the whispers of gravity turning into a pull and then a crunch.

    A "singularity", in the sense of a region where models from outside can't make good predictions, keeps happening. A million years ago, who would have predicted nations and empires and currency stamped with a human face? Fifty thousand years ago, who could have predicted skyscrapers with human-made train tunnels underground beneath them, or even washing bleached white bedsheets made from cotton grown overseas? Ten thousand years ago, who could have predicted container shipping through the human-made Panama Canal? A thousand years ago, who could have predicted Bitcoin? Five hundred years ago, who could have predicted electric motors? Three hundred years ago, who could have predicted satellite weather mapping of the entire planet, or trans-Atlantic undersea dark-fibre bundles? Two hundred years ago, who could have predicted genetic engineering? A hundred and fifty years ago, who could have predicted MRI scanners? A hundred years ago, who could have predicted a DoorDash rider following GPS from a satellite, using a map downloaded over a cellular data link to a wirelessly charging smartphone the size of a large matchbox, bringing a pizza to your house coordinated by an internet-wide app?

    In 2000, with Blackberry and Palm Treo and HP Jornada and PalmPilot and Pocket PC and TomTom navigation, who was expecting YouTube, Google Maps with satellite photos, Google StreetView, Twitch, Discord, Vine, TikTok, Electron, Amazon Kindle with worldwide free internet book delivery, the dominance of Python, or the ubiquity of bluetooth headphones?

    Fifty years ago is 1975: batteries were heavy and weak, cameras were film-based, bulbs were incandescent, and Betamax, VHS, and microprocessors were barely a thing. Who was predicting micro-electromechanical timing devices, computer-controlled LED Christmas lights, tunes playing in greetings cards, DJI camera drones affordable to the general population, Network Time Protocol synchronising the planet, the normality of video calling from every laptop or smartphone, or online shopping with encrypted credit card transactions hollowing out the high streets and town centres?

    The strange attractor at the end of history might be a long way away, but it's pulling us towards it nonetheless and its ripples go back millions of years in time. It's not like there's (all of history) and then at one point (the singularity where things get weird). Things have been getting weird for thousands and thousands of years in ways that the people before that wouldn't or couldn't have predicted.

  • If you're any animal on the planet other than a human, the singularity already happened.

    Your species would have watched humans go from hairless mammals, with basically the same set of actions and needs that your species had, to an alien that might as well have landed from another planet (except you don't even know other planets exist). Now forests disappear in an instant. Lakes appear and disappear. Weird objects cover the ground and fill the sky. The paradigms that worked for eons are suddenly broken.

    But you, you're a human, you're smart. The same thing couldn't possibly happen to you, right?