Comment by chrisldgk

2 years ago

Not to be too pessimistic here, but why are we talking about things like this? I get that it's a fun thing to think about, what we'll do when a great artificial superintelligence is achieved and how we'll deal with it; it feels like we're living in a science fiction book.

But all we've achieved at this point is a glorified token-predicting machine trained on existing data (made by humans), not really able to be creative beyond deriving things humans have already made before. Granted, they're really good at doing that, but not much else.

To me, this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders) by Altman and company that I'm just baffled people are still going along with it.

> why are we talking about things like this?

> this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders)

Ilya believes transformers can be enough to achieve superintelligence (if inefficiently). He is concerned that companies like OpenAI are going to succeed at doing it without investing in safety, and they're going to unleash a demon in the process.

I don't really believe either of those things. I find arguments that autoregressive approaches lack certain critical features [1] to be compelling. But if there's a bunch of investors caught up in the hype machine ready to dump money on your favorite pet concept, and you have a high visibility position in one of the companies at the front of the hype machine, wouldn't you want to accept that money to work relatively unconstrained on that problem?

My little pet idea is open source machines that take in veggies and rice and beans on one side and spit out hot healthy meals on the other side, as a form of mutual aid to offer payment optional meals in cities, like an automated form of the work the Sikhs do [2]. If someone wanted to pay me loads of money to do so, I'd have a lot to say about how revolutionary it is going to be.

[1] https://www.youtube.com/watch?v=1lHFUR-yD6I

[2] https://www.youtube.com/watch?v=qdoJroKUwu0

EDIT: To be clear, I'm not saying it's a fool's errand. Current approaches to AI have economic value of some sort. Even if we don't see AGI any time soon, there's money to be made. Ilya clearly knows a lot about how these systems are built. It seems worth going independent to try his own approach, and maybe someone can turn a profit off this work even without AGI. Though this is not without tradeoffs, and reasonable people can disagree on the value of additional investment in this space.

There's a chance that these systems can actually outperform their training data and be better than the sum of their parts. New work out of Harvard talks about this idea of "transcendence": https://arxiv.org/abs/2406.11741

While this is a new area, it would be naive to write this off as just science fiction.
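To make the intuition concrete, here is a toy sketch in Python (my own illustration, not the paper's experiment, and assuming its mechanism is roughly that low-temperature sampling acts like a majority vote over the imperfect players in the training data): averaging many flawed experts can beat any single one of them.

    # Toy illustration only: independent 70%-accurate "experts" vote on each move,
    # and the majority is right far more often than any individual expert.
    import random

    random.seed(0)
    N_EXPERTS = 15     # imperfect players the model is "trained on"
    P_CORRECT = 0.7    # each expert's individual accuracy
    TRIALS = 10_000

    def expert_is_right() -> bool:
        return random.random() < P_CORRECT

    majority_right = 0
    for _ in range(TRIALS):
        votes = sum(expert_is_right() for _ in range(N_EXPERTS))
        if votes > N_EXPERTS // 2:   # the majority picks the correct move
            majority_right += 1

    print(f"single expert accuracy: {P_CORRECT:.0%}")
    print(f"majority-of-{N_EXPERTS} accuracy: {majority_right / TRIALS:.1%}")

A single expert is right 70% of the time; the vote comes out right roughly 95% of the time, which is the flavor of "better than the sum of its parts" being claimed.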

  • It would be nice if authors wouldn't use a loaded-as-fuck word like "transcendence" for "the trained model can sometimes achieve better performance than all [chess] players in the dataset", because while that certainly demonstrates an impressive internalization of the game, it's also something that many humans can do. The machine, of course, can be scaled in breadth and performance, but... "transcendence"? Are they trying to be mis-interpreted?

  • "In chess" for AI papers == "in mice" for medical papers. Against lichess levels 1, 2, 5, which use a severely dumbed down Stockfish version.

    Of course it is possible that SSI has novel, unpublished ideas.

    • Also it's possible that human intelligence has already reached the most general degree of intelligence, since we can deal with every concept that could be generated, unless there are concepts that are incompressible and require more memory and processing than our brains could support. In that case, being "superintelligent" can be achieved by adding other computational tools. Our pocket calculators make us smarter, but there is no "higher truth" a calculator could let us reach.

I'm pretty sure "Altman and company" don't have much to do with this — this is Ilya, who pretty famously tried to get Altman fired, and then himself left OpenAI in the aftermath.

Ilya is a brilliant researcher who's contributed to many foundational parts of deep learning (including the original AlexNet); I would say I'm somewhat pessimistic based on the "safety" focus — I don't think LLMs are particularly dangerous, nor do they seem likely to be in the near future, so that seems like a distraction — but I'd be surprised if SSI didn't contribute something meaningful nonetheless given the research pedigree.

  • I actually feel that they can be very dangerous. Not because of the fabled AGI, but because

    1. they're so good at showing the appearance of being right;

    2. their results are actually quite unpredictable, not always in a funny way;

    3. C-level executives actually believe that they work.

    Combine this with web APIs or effectors and this is a recipe for disaster.

    • I got into an argument with someone over text yesterday and the person said their argument was true because ChatGPT agreed with them and even sent the ChatGPT output to me.

      Just for an example of your danger #1 above. We used to say that the internet always agrees with us, but with Google it was a little harder. ChatGPT can make it so much easier to find agreeing rationalizations.

    • The ‘plausible text generator’ element of this is perfect for mass fraud and propaganda.

  • Neither the word "transformer" nor "LLM" appears anywhere in their announcement.

    It's like before the end of WWII: the world sees the US as a military superpower, and THEN we unleash the atomic bomb they didn't even know about.

    That is Ilya. He has the tech. Sam had the corruption and the do-anything power grab.

  • > I don't think LLMs are particularly dangerous

    “Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, or they’re playing into sci-fi fear for attention or there is something the rest of us aren’t seeing.

    I'm personally very skeptical there are any real dangers today. If I'm wrong, I'd love to see evidence. Are foundation models before fine-tuning outputting horrific messages about destroying humanity?

    To me, the biggest dangers come from a human listening to a hallucination and doing something dangerous, like unsafe food preparation or avoiding medical treatments. This seems distinct from a malicious LLM super intelligence.

    • That's what Safe Superintelligence misses. Superintelligence isn't practically more dangerous. Super-stupidity is already here, and bad enough.

    • They reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, etc etc

      But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.

      3 replies →

I actually do doubt that LLMs will create AGI but when these systems are emulating a variety of human behaviors in a way that isn't directly programmed and is good enough to be useful, it seems foolish to not take notice.

The current crop of systems is a product of the transformer architecture - an innovation that accelerated performance significantly. I put low odds on another such innovation changing everything, but I don't think we can entirely discount the possibility. That no one understands these systems cuts both ways.

> Not to be too pessimistic here, but why are we talking about things like this

I also think that what we've got is merely a very well-compressed knowledge base, and therefore we are far from superintelligence, and so-called safety sounds more Orwellian than of any real value. That said, I think we should take what Ilya says literally. His goal is to build a superintelligence. Given that, lofty as the goal is, SSI has to put safety in place. So, there: safe superintelligence.

  • An underappreciated feature of a classical knowledge base is returning "no results" when appropriate. LLMs so far arguably fall short on that metric, and I'm not sure whether that's an inherent limitation.

    So out of all potential applications with current-day LLMs, I’m really not sure this is a particularly good one.

    Maybe this is fixable if we can train them to cite their sources more consistently, in a way that lets us double check the output?
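    For what it's worth, here's a minimal sketch in Python of the behaviour I mean, with a toy word-overlap score standing in for a real retriever (the documents, names, and threshold are all made up for illustration): the lookup refuses to answer when nothing matches, and attaches a source you can double check when something does.

      # Toy knowledge-base lookup: answer only when some document clears a
      # relevance threshold, and cite it; otherwise return "no results".
      from dataclasses import dataclass

      @dataclass
      class Doc:
          source: str
          text: str

      KNOWLEDGE_BASE = [
          Doc("faq/returns.md", "Items can be returned within 30 days with a receipt."),
          Doc("faq/shipping.md", "Standard shipping takes three to five business days."),
      ]

      def overlap(query: str, text: str) -> float:
          q = set(query.lower().split())
          t = set(text.lower().replace(".", "").split())
          return len(q & t) / max(len(q), 1)

      def answer(query: str, threshold: float = 0.3) -> str:
          best = max(KNOWLEDGE_BASE, key=lambda d: overlap(query, d.text))
          if overlap(query, best.text) < threshold:
              return "No results."  # the honest refusal LLMs tend to skip
          return f"{best.text} (source: {best.source})"

      print(answer("how many business days does standard shipping take"))
      print(answer("what is the meaning of life"))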

Likewise, I'm baffled by intelligent people [in such denial] still making the reductionist argument that token prediction is a banal ability. It's not. It's not very different from how our own intelligence manifests.

AlphaGo took us from mediocre engines to outclassing the best human players in the world within a few short years. Ilya contributed to AlphaGo. What makes you so confident this can't happen with token prediction?

  • I'm pretty sure Ilya had nothing to do with AlphaGo, which came from DeepMind. He did work for Google Brain for a few years before OpenAI, but that was before Brain and DeepMind merged. The AlphaGo lead was David Silver.

  • If solving chess already created the Singularity, why do we need token prediction?

    Why do we need computers that are better than humans at the game of token prediction?

We already have limited "artificial superintelligences". A pocket calculator is better at calculating than the best humans, and we certainly put calculators to good use. What we call AIs are just more generic versions of tools like pocket calculators, or guns.

And that's the key: it is a tool, a tool that will give a lot of power to whoever is controlling it. And that's where safety matters; it should be made so that it helps good guys more than it helps bad guys, and limits accidents. How? I don't know. Maybe people at SSI do. We already know that the 3 laws of robotics won't work, Asimov only made them to write stories about how broken they are :)

Current-gen AIs are already cause for concern. They are shown to be good at bullshitting, something that bad people are already taking advantage of. I don't believe in robot apocalypse, technological singularities, etc... but some degree of control, as we have with weapons, is not a bad thing. We are not there yet with AI, but we might be soon.

Too many people are extrapolating the curve to exponential when it could be a sigmoid. Lots of us got too excited and too invested in where "AI" was heading about ten years ago.

But that said, there are plenty of crappy, not-AGI technologies that deserve consideration. LLMs can still make for some very effective troll farms. GenAI can make some very convincing deepfakes. Drone swarms, even without AI, represent a new dimension of capabilities for armies, terrorist groups or lone wolves. Bioengineering is bringing custom organisms, prions or infectious agents within reach of individuals.

I wish someone in our slowly-ceasing-to-function US government was keeping a proper eye on these things.

Even if LLM-style token prediction is not going to lead to AGI (as it very likely won't), it is still important to work on safety. If we wait until we are at the technology that will for sure lead to AGI, it is very likely that we won't have sufficient safety in place by the time we realize it is important.

Agree up until the last paragraph: how's Altman involved? OTOH, Sutskever is a true believer, so that explains his why.

  • To be clear, I was just bunching together high-profile AI founders and CEOs who can't seem to stop talking about how dangerous the thing they're trying to build is. I don't know (nor care) about Ilya's and Altman's current relationship.

> But all we've achieved at this point is a glorified token-predicting machine trained on existing data (made by humans), not really able to be creative beyond deriving things humans have already made before. Granted, they're really good at doing that, but not much else.

Remove token, and that's what we humans do.

Like, you need to realize that neural networks came to be because someone had the idea to mimic our brains' functionality and see where that led.

Many early skeptics like you discredited the inventor, but they were proved wrong. LLMs have shown how much more than your limited description they can achieve.

We mimicked birds with airplanes, and we can outdo them. In my view it's actually very short-sighted to say we can't just mimic brains and outdo them. We're there. ChatGPT is the initial little plane that flew close to the ground and barely stayed up.

  • Except it really, actually, isn’t.

    People don’t ‘think’ the same way, even if some part of how humans think seems to be somewhat similar some of the time.

    That is an important distinction.

    This is the hype cycle.

I’m a miserable cynic at a much higher level. This is top level grifting. And I’ve made a shit ton of money out of it. That’s as far as reality goes.

  • lol same. Are you selling yet?

    • When QQQ and SMH close under the 200-day moving average I'll sell my TQQQ and SOXL respectively. Until then, party on! It's been a wild ride.

    • Mostly holding on still. Apple just bumped the hype a little more and gave it a few more months despite MSFT’s inherent ability to shaft everything they touch.

      I moved about 50% of my capital back into ETFs though before WWDC in case they dumped a turd on the table.

> glorified token-predicting machine trained on existing data (made by humans)

sorry to disappoint, but the human brain fits the same definition

  • Sure.

    > Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

    > To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.

    https://aeon.co/essays/your-brain-does-not-process-informati...

  • What are you talking about? Do you have any actual cognitive neuroscience to back that up? Have they scanned the brain and broken it down into an LLM-analogous network?

  • If you genuinely believe your brain is just a token prediction machine, why do you continue to exist? You're just consuming limited food, water, fuel, etc for the sake of predicting tokens, like some kind of biological crypto miner.

    • Genetic and memetic/intellectual immortality, of course. Biologically there can be no other answer. We are here to spread and endure, there is no “why” or end-condition.

      If your response to there not being a big ending cinematic to life with a bearded old man and a church choir, or all your friends (and a penguin) clapping and congratulating you is that you should kill yourself immediately, that’s a you problem. Get in the flesh-golem, shinzo… or Jon Stewart will have to pilot it again.

      8 replies →

    • Well, yes. I won't commit suicide though, since it is an evolutionarily developed trait to keep living and reproducing since only the ones with that trait survive in the first place.

      2 replies →

  • It's a cute generalization, but you do yourself a great disservice. It's somewhat difficult to argue given the medium we have here, and it may be impossible to disprove, but consider that in the first 30 minutes of your post being highly visible on this thread no one had yet replied. Some may have acted in other ways... had opinions... voted it up/down. Some may have debated replying in jest or with some related biblical verse. I'd wager a few may have used what they could deduce from your comment and/or history to build a mini model of you in their heads, and used that to simulate the conversation to decide if it was worth the time to get into such a debate vs tending to other things.

    Could current LLMs do any of this?

    • I'm not the OP, and I genuinely don't like how we're slowly entering the "no text on the internet is real" realm, but I'll take a stab at your question.

      If you made an LLM to pretend to have a specific personality (e.g. assume you are a religious person and you're going to make a comment in this thread) rather than a generic catch-all LLM, it can pretty much do that. Part of Reddit is just automated PR LLMs fighting each other, making comments and suggesting products or viewpoints, deciding which comment to reply to, etc. You just chain a bunch of responses together with pre-determined questions like "given this complete thread, do you think it would look organic if we responded with a plug for a product to this comment?".

      It's also not that hard to generate these types of "personalities", since you can use a generic one to suggest a new one that would be different from your other agents.

      There are also Discord communities that share tips and tricks for making such automated interactions look more real.

      1 reply →

  • See, this sort of claim I am instantly skeptical of. Nobody has ever caught a human brain producing or storing tokens, and certainly the subjective experience of, say, throwing a ball, doesn't involve symbols of any kind.

    • > Nobody has ever caught a human brain producing or storing tokens

      Do you remember learning how to read and write?

      What are spelling tests?

      What if "subjective experience" isn't essential, or is even just a distraction, for a great many important tasks?

      1 reply →

    • Any output from you could be represented as a token. It is a very generic idea. Ultimately whatever you output is because of chemical reactions that follow from the input.

      7 replies →

It's no mystery, AI has attracted tons of grifters trying to cash out before the bubble pops, and investors aren't really good at filtering.

  • Well said.

    There is still a mystery, though: how many people fall for it and then stay fooled, and how long that goes on for. People who have directly watched a similar pattern play itself out many times before still go along.

    It's so puzzlingly common amongst very intelligent people in the "tech" space that I've started to wonder if there isn't a link to this ambient belief a lot of people have that tech can "change everything" for the better, in some sense. As in, we've been duped again and again, but then the new exciting thing comes along... and in spite of ourselves, we say: "This time it's really the one!"

    Is what we're witnessing simply the unfulfilled promises of techno-optimism crashing against the shores of social reality repeatedly?

  • Why are you assigning moral agency where there may be none? These so called "grifters" are just token predictors writing business plans (prompts) with the highest computed probability of triggering $ + [large number] token pair from venture capital token predictors.

  • Are you claiming Ilya Sutskever is a grifter?

    • I personally wouldn't go that far, but I would say he's at least riding the hype wave to get funding for his company, which, let's be honest, nobody would care about if we weren't this deep into the AI hype cycle.

Because it's likely that LLMs will soon be able to teach themselves and surpass humans. No consciousness, no will. But somebody will have their power. Dark government agencies and questionable billionaires. Who knows what it will enable them to do.

https://en.wikipedia.org/wiki/AlphaGo_Zero

  • Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?

    Not sure how a Go engine really applies. Do you consider cars superintelligent because they can move faster than any human?

    • I'm with you here, but it should be noted that while the combustion engine has augmented our day to day lives for the better and our society overall, it's actually a great example of a technology that has been used to enable the killing of 100s of millions of people by those exact types of shady institutions and individuals the commenter made reference to. You don't need something "super intelligent" to cause a ton of harm.

      1 reply →

    • > Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?

      We're just past the Chicago Pile days of LLMs [1]. Sutskever believes Altman is running a private Manhattan Project in OpenAI. I'd say the evidence for LLMs having superintelligence capability is on shakier theoretical ground today than nuclear weapons were in 1942, but I'm no expert.

      Sutskever is an expert. He's also conflicted, both in his opposition to OpenAI (reputationally) and his pitching of SSI (financially).

      So I'd say there appears to be a disputed but material possibility of LLMs achieving something that, if it doesn't pose a threat to our civilisation per se, does as a novel military element. Given that risk, it makes sense to be cautious. Paradoxically, however, that risk profile calls for strict regulation approaching nationalisation. (Microsoft's not-a-takeover takeover of OpenAI perhaps providing an enterprising lawmaker the path through which to do this.)

      [1] https://en.wikipedia.org/wiki/Chicago_Pile-1

Well, an entire industry of researchers, which used to be divided, is now uniting around calls to slow development and emphasize safety (like, "dissolve companies" emphasis, not "write employee handbooks" emphasis). They're saying, more or less in unison, that GPT-3 was an unexpected breakthrough on the Frame Problem, based on Judea Pearl's prescient predictions. If we agree on that, there are two options:

1. They've all been tricked/bribed by Sam Altman and company (btw, this company was started in opposition to those specific guys, just for clarity). Including me, of course.

2. You’re not as much of an expert in cognitive science as you think you are, and maybe the scientists know something you don’t.

With love. As much love as possible, in a singular era

  • Are they actually united? Or is this the ai safety subfaction circling the wagons due to waning relevance in the face of not-actually-all-that-threatening ai?

    • I personally find that summary of things to be way off the mark (for example, hopefully "the face" you reference isn't based on anything that appears in a browser window or in an ensemble of less than 100 agents!) but I'll try to speak to the "united" question instead.

      1. The "Future of Life" institute is composed of lots of very serious people who recently helped get the EU "AI Act" passed this March, and they discuss the "myriad risks and harms AI presents" and "possibly catastrophic risks". https://newsletter.futureoflife.org/p/fli-newsletter-march-2...

      2. Many researchers are leaving large tech companies, voicing concerns about safety and the downplaying of risks in the name of moving fast and beating vaguely-posited competitors. Both big ones like Hinton and many, many smaller ones. I'm a little too lazy to scrape the data together, but it's such a wide phenomenon that a quick Google/Kagi should be enough for a vague idea. This is why Anthropic was started, why Altman was fired, why Microsoft gutted their AI safety org, and why Google fired the head of their AI ethics team. We forgot about that one because it's from before GPT-3, but it doesn't get much clearer than this:

      > She co-authored a research paper which she says she was asked to retract. The paper had pinpointed flaws in AI language technology, including a system built by Google... Dr Gebru had emailed her management laying out some key conditions for removing her name from the paper, and if they were not met, she would "work on a last date" for her employment. According to Dr Gebru, Google replied: "We respect your decision to leave Google... and we are accepting your resignation."

      3. One funny way to see this happening is to go back to seminal papers from the last decade and see where everyone's working now. Spoiler alert: not a lot of the same names left at OpenAI, or Anthropic for that matter! This is the most egregious I've found -- the RLHF paper: see https://arxiv.org/pdf/2203.02155

      4. Polling of AI researchers shows a clear and overwhelming trend towards AGI timelines being moved up significantly. It's still a question deeply wrapped up in accidental factors like religious belief, philosophical perspective, and general valence as a person, so I think the sudden shift here should tell you a lot. https://research.aimultiple.com/artificial-general-intellige...

      The article I just linked actually has a section where they collect caveats, and the first is this Herbert Simon quote from 1965 that clearly didn't age well: "Machines will be capable, within twenty years, of doing any work a man can do.” This is a perfect example of my overall point! He was right. The symbolists were right, are right, will always be right -- they just failed to consider that the connectionists were just as right. The exact thing that stopped his prediction was the frame problem, which is what we've now solved.

      Hopefully that makes it a bit clearer why I'm anxious all the time :). The End Is Near, folks... or at least the people telling you that it's definitely not here have capitalist motivations, too. If you count the amount of money lost and received by each "side" in this "debate", I think it's clear the researcher side is down many millions in lost salaries and money spent on thinktank papers and Silicon Valley polycule dorms (it's part of it, don't ask), and the executive side is up... well, everything, so far. Did you know the biggest privately-funded infrastructure project in the history of humanity was announced this year? https://www.datacenterdynamics.com/en/opinions/how-microsoft...

  • I would read the existence of this company as evidence that the entire industry is not as united as all that, since Sutskever was recently at another major player in the industry and thought it worth leaving. Whether that's a disagreement between what certain players say and what they do and believe, or just a question of extremes... TBD.

    • He didn't leave because of technical reasons, he left because of ethical ones. I know this website is used to seeing this whole thing as "another iPhone moment" but I promise you it's bigger than that. Either that or I am way more insane than I know!

      E: Jeez I said "subreddit" maybe I need to get back to work

  • I'd say there's a third option - anyone working in the space realized they can make a fuckton of money if they just say how "dangerous" the product is, because not only is saying that great marketing, but you might also get literal trillions of dollars from the government if you do it right.

    I don’t have anything against researchers, and I agree I know a lot less about AI than they do. I do however know humans, and not assuming they’re going to take a chance to get filthy rich by doing something so banal is naive.

    • This is well reasoned, and certainly happens, but I definitely think there's strong evidence that there are, in fact, true believers. Yudkowsky and Hinton, for instance, but in general the shape of the trend is "rich engineers leave big companies because of ethical issues". As you can probably guess, that is not a wise economic decision for the individual!

  • We don't agree on that. They're just making things up with no real scientific evidence. There are way more than 2 options.

    • What kind of real scientific evidence are you looking for? What hypotheses have they failed to test? To the extent that we're discussing a specific idea in the first place ("are we in a qualitatively new era of AI?" perhaps), I'm struggling to imagine what your comment is referencing.

      You're of course right that there are more than two options in an absolute sense, I should probably limit the rhetorical flourishes for HN! My argument is that those are the only supportable narratives that answer all known evidence, but it is just an argument.