
Comment by bbor

2 years ago

Well, an entire industry of researchers, which used to be divided, is now uniting around calls to slow development and emphasize safety (like, “dissolve companies” emphasis, not “write employee handbooks” emphasis). They’re saying, more or less in unison, that GPT-3 was an unexpected breakthrough in the Frame Problem, in line with Judea Pearl’s prescient predictions. If we agree on that, there are two options:

1. They’ve all been tricked/bribed by Sam Altman and company (and btw, this is a company started specifically against those guys, just for clarity). Including me, of course.

2. You’re not as much of an expert in cognitive science as you think you are, and maybe the scientists know something you don’t.

With love. As much love as possible, in a singular era

Are they actually united? Or is this the AI safety subfaction circling the wagons due to waning relevance in the face of not-actually-all-that-threatening AI?

  • I personally find that summary of things to be way off the mark (for example, hopefully "the face" you reference isn't based on anything that appears in a browser window or in an ensemble of fewer than 100 agents!), but I'll try to speak to the "united" question instead.

    1. The "Future of Life" institute is composed of lots of very serious people who recently helped get the EU "AI Act" passed this March, and they discuss the "myriad risks and harms AI presents" and "possibly catastrophic risks". https://newsletter.futureoflife.org/p/fli-newsletter-march-2...

    2. Many researchers are leaving large tech companies, voicing concerns about safety and the downplaying of risks in the name of moving fast and beating vaguely-posited competitors. Both big names like Hinton and many, many smaller ones. I'm a little too lazy to scrape the data together, but it's such a widespread phenomenon that a quick Google/Kagi search should give you a rough idea. This is why Anthropic was started, why Altman was fired, why Microsoft gutted their AI safety org, and why Google fired the head of their AI ethics team. We forgot about that last one because it predates GPT-3, but it doesn't get much clearer than this:

    > She co-authored a research paper which she says she was asked to retract. The paper had pinpointed flaws in AI language technology, including a system built by Google... Dr Gebru had emailed her management laying out some key conditions for removing her name from the paper, and if they were not met, she would "work on a last date" for her employment. According to Dr Gebru, Google replied: "We respect your decision to leave Google... and we are accepting your resignation."

    3. One funny way to see this happening is to go back to seminal papers from the last decade and see where everyone's working now. Spoiler alert: not a lot of the same names left at OpenAI, or Anthropic for that matter! The most egregious example I've found is the RLHF paper: see https://arxiv.org/pdf/2203.02155

    4. Polling of AI researchers shows a clear and overwhelming trend towards AGI timelines being moved up significantly. It's still a question deeply wrapped up in accidental factors like religious belief, philosophical perspective, and general valence as a person, so I think the sudden shift here should tell you a lot. https://research.aimultiple.com/artificial-general-intellige...

    The article I just linked actually has a section where they collect caveats, and the first is this Herbert Simon quote from 1965 that clearly didn't age well: "Machines will be capable, within twenty years, of doing any work a man can do." This is a perfect example of my overall point! He was right. The symbolists were right, are right, will always be right -- they just failed to consider that the connectionists were just as right. The exact thing that stopped his prediction was the frame problem, which is what we've now solved.

    Hopefully that makes it a bit clearer why I'm anxious all the time :). The End Is Near, folks... or at least, the people telling you that it's definitely not here have capitalist motivations, too. If you tally the money lost and gained by each "side" in this "debate", I think it's clear the researcher side is down many millions in lost salaries and money spent on think-tank papers and Silicon Valley polycule dorms (it's part of it, don't ask), and the executive side is up... well, everything, so far. Did you know the biggest privately-funded infrastructure project in the history of humanity was announced this year? https://www.datacenterdynamics.com/en/opinions/how-microsoft...

I would read the existence of this company as evidence that the entire industry is not as united as all that, since Sutskever was recently at another major player in the industry and thought it worth leaving. Whether that's a disagreement between what certain players say and what they do and believe, or just a question of extremes... TBD.

  • He didn't leave because of technical reasons, he left because of ethical ones. I know this website is used to seeing this whole thing as "another iPhone moment" but I promise you it's bigger than that. Either that or I am way more insane than I know!

    E: Jeez I said "subreddit" maybe I need to get back to work

I’d say there’s a third option: anyone working in the space realized they can make a fuckton of money if they just say how “dangerous” the product is, because not only is it great marketing to talk that way, but you might also get literal trillions of dollars from the government if you do it right.

I don’t have anything against researchers, and I agree I know a lot less about AI than they do. I do, however, know humans, and it’s naive not to assume they’ll take a chance to get filthy rich by doing something so banal.

  • This is well reasoned, and it certainly happens, but I think there’s strong evidence that there are, in fact, true believers. Yudkowsky and Hinton, for instance, but in general the shape of the trend is “rich engineers leave big companies because of ethical issues”. As you can probably guess, that is not a wise economic decision for the individual!

We don't agree on that. They're just making things up with no real scientific evidence. There are way more than 2 options.

  • What kind of real scientific evidence are you looking for? What hypotheses have they failed to test? To the extent that we're discussing a specific idea in the first place ("are we in a qualitatively new era of AI?" perhaps), I'm struggling to imagine what your comment is referencing.

    You're of course right that there are more than two options in an absolute sense; I should probably limit the rhetorical flourishes for HN! My argument is that those are the only two supportable narratives that account for all the known evidence, but it is just an argument.