
Comment by frenchie4111

2 years ago

I am not on the bleeding edge of this stuff. I wonder, though: how could a safe superintelligence outcompete an unrestricted one? Assuming another company exists (maybe OpenAI) that is tackling the same goal without spending the cycles on safety, what chance does the safe one have to compete?

That is a very good question. In a well-functioning democracy, a government should apply a thin layer of fair rules that are uniformly enforced. I am an old man, but when I was younger, I recall that we sort of had this in the USA.

I don’t think that corporations left on their own will make safe AGI, and I am skeptical that we will have fair and technologically sound legislation; look at some of the anti-cryptography and anti-privacy laws raising their ugly heads in Europe as an example of government ineptitude and corruption. I have been paid to work in the field of AI since 1982, and all of my optimism is for AI systems that function in partnership with people; I expect continued rapid development of agents based on LLMs, RL, etc. I think that AGIs as seen in the Terminator movies are far into the future, perhaps 25 years out.

It can't. Unfortunately.

People spend so much time thinking about the systems (the models) themselves, and not enough about the system that builds the systems. The behavior of the models will be driven by the competitive dynamics of the economy around them, and yeah, that's a big, big problem.

It's probably not possible, which makes all these initiatives painfully naive.

  • It'd be naive if it weren't literally a standard point that is addressed and acknowledged as a major part of the problem.

    There's a reason OpenAI's charter had this clause:

    “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””

    • Let us do anything we want now, because we pinky-promise to suddenly act against our own interest at some hazy point in the future.

    • How does that address the issue? I would have expected them to do that anyhow. That's what a lot of businesses do: let another company take the hit developing the market, R&D, and supply chain, then come in with industry standardization and cooperative agreements only after the money has proven to be good in the space. See electric cars. Also, they could drop that clause at any time. Remember when OpenAI stood for open source?


Since no one knows how to build an AGI, it's hard to say. But you might imagine that more restricted goals could end up being easier to accomplish: a "safe" AGI is more focused on doing something useful than on figuring out how to take over the world and murder all the humans.

  • Hinton's point does make sense though.

    Even if you focus an AGI on producing more cars, for example, it will quickly realize that with more power and resources it can make more cars.

    • Assuming AGI works like a braindead consulting firm, maybe. But if it worked like existing statistical tooling (which it does, today, because for an actual data scientist, as opposed to Aunt Cathy prompting Bing, using ML is no different from using any other statistics when you're writing your Python or R scripts), you could probably generate some fancy charts showing the distribution of cars produced under different scenarios with fixed resource or power limits (see the sketch below).

      In a sense, this is what is already done, and it's why AI hasn't really made the inroads people think it will, even if you can ask Google questions now. For the data scientists, the black magicians of the AI age, this spell is no more powerful than other spells, many of which (including ML) were created by powerful magicians from the early 1900s.
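      A minimal sketch of that kind of script, using numpy and matplotlib. Everything here is invented for illustration: the scenario names, the resource caps, and the resources-to-cars conversion rate.

      ```python
      # Toy Monte Carlo: distributions of cars produced under different
      # hypothetical scenarios, each with a fixed resource cap.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(42)
      scenarios = {"low_power": 100.0, "baseline": 200.0, "high_power": 400.0}
      cars_per_unit = 0.8  # assumed resources-to-cars conversion rate

      fig, ax = plt.subplots()
      for name, cap in scenarios.items():
          # Draw noisy resource availability, then clip at the fixed cap.
          resources = np.clip(rng.normal(cap, 30.0, size=10_000), 0.0, cap)
          ax.hist(resources * cars_per_unit, bins=50, alpha=0.5, label=name)

      ax.set_xlabel("cars produced")
      ax.set_ylabel("frequency")
      ax.legend()
      plt.show()
      ```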

Not on its own, but in numbers it could.

Similar to how law-abiding citizens turn on law-breaking citizens today, or, more old-fashioned, how religious societies turned on heretics.

I do think the notion that humanity will be able to manage superintelligence through engineering and conditioning alone is naive.

If anything, there will be a rogue (or incompetent) human who launches an unconditioned superintelligence into the world in no time, and it only has to happen once.

It's basically Pandora's box.

This is not a trivial point. Selective pressures will push AI in unsafe directions, due to arms-race dynamics between companies and between nations (the toy payoff matrix below makes this concrete). The only way out, other than global regulation, would be to be so far ahead that you can afford to be safe without threatening your own existence.

There's a reason OpenAI had this as part of its charter:

“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.””
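One way to see the arms-race dynamic is as a prisoner's dilemma between two labs. The payoff numbers below are invented purely for the sketch:

```python
# Toy payoff matrix (made-up numbers): each lab chooses to keep investing
# in safety ("safe") or to cut safety work to move faster ("cut").
payoffs = {  # (lab_a, lab_b) -> (payoff_a, payoff_b)
    ("safe", "safe"): (3, 3),
    ("safe", "cut"):  (0, 5),
    ("cut",  "safe"): (5, 0),
    ("cut",  "cut"):  (1, 1),
}

def best_response(opponent: str) -> str:
    """Lab A's best reply, given Lab B's fixed choice."""
    return max(("safe", "cut"), key=lambda c: payoffs[(c, opponent)][0])

for b in ("safe", "cut"):
    print(f"If the rival plays {b!r}, the best response is {best_response(b)!r}")
# Both lines print 'cut': without outside coordination (regulation, or a
# commitment like the charter clause above), cutting safety dominates even
# though (safe, safe) is better for everyone than (cut, cut).
```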

The problem is the training data. If you take care of alignment at that level, the performance is as good as an unrestricted model's, except for the things you removed, like how to make explosives or ways to commit suicide.

But that costs almost as much as the training itself, hundreds of millions. And I'm sure this will be the new "secret sauce" for Microsoft/Meta/etc. And sadly, nobody is sharing their synthetic data.
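A minimal sketch of what data-level alignment could look like; `is_unsafe` and the keyword list are purely illustrative stand-ins for whatever classifier a lab would actually run over its corpus:

```python
# Filter the pretraining corpus before training, so the model never sees
# the removed material. The keyword heuristic here is a placeholder.
UNSAFE_KEYWORDS = {"explosive synthesis", "suicide method"}  # hypothetical

def is_unsafe(document: str) -> bool:
    text = document.lower()
    return any(keyword in text for keyword in UNSAFE_KEYWORDS)

def filter_corpus(documents):
    """Yield only the documents that pass the safety filter."""
    for doc in documents:
        if not is_unsafe(doc):
            yield doc

corpus = ["How to bake bread.", "Notes on explosive synthesis."]
print(list(filter_corpus(corpus)))  # -> ['How to bake bread.']
```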

Safety techniques require you to understand your product and have deep observability.

This, and the safety techniques themselves, can improve the performance of the hypothetical AGI.

RLHF was originally an alignment tool, but it improves LLMs significantly.
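For reference, a standard way to write the RLHF objective (my notation, not from the thread): tune the policy $\pi_\theta$ to maximize a learned reward $r_\phi$ while a KL penalty keeps it close to the pretrained reference model $\pi_{\text{ref}}$, so the alignment constraint and the quality gains come out of the same optimization:

$$\max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\left[ r_\phi(x, y) - \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)} \right]$$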

The goal of this company likely wouldn’t be to win against OpenAI, but to play its own game, even if a much lesser one.

Honestly, what does it matter? We're many lifetimes away from anything. These people are trying to define concepts that don't apply to us or to what we're currently capable of.

AI safety / AGI anything is just a form of tech philosophy at this point, and it's all academic grift, just with mainstream attention and backing.

  • This goes massively against the consensus of experts in this field. The modal AI researcher believes that "high-level machine intelligence", roughly AGI, will be achieved by 2047, per the survey below. Given the rapid pace of development in this field, it's likely that timelines would be shorter if this were asked today.

    https://www.vox.com/future-perfect/2024/1/10/24032987/ai-imp...

  • Many lifetimes? As in upwards of 200 years? That's wildly pessimistic if so: imagine predicting today's computer capabilities even one lifetime ago.

  • > We're many lifetimes away from anything

    ENIAC was built in 1945; that's roughly a lifetime ago. Just think about it.

The first step toward safe superintelligence is to abolish capitalism.

  • That’s the first step towards returning to candlelight. So it isn't a step toward safe superintelligence, but it is a step away from any superintelligence. So I guess some people would consider that a win.

    • Not sure if you want to share the capitalist system with an entity that outcompetes you by definition. Chimps don't seem to do too well under capitalism.


  • One can already see the beginning of AI enslaving humanity through the establishment. Companies that work on AI get more investment, and those that don't get kicked out of the game. Those who employ AI get more investment, and those who pay humans lose the market's confidence. People lose jobs and birth rates drop harshly while AI thrives. Tragic.

    • So far, it is only people telling AI what to do. When we reach the day when it is commonplace for AI to tell people what to do, then we are possibly in trouble.

  • Why does everything have to do with capitalism nowadays?

    Racism, unsafe roads, hunger, bad weather, good weather, stubbing toes on furniture, etc.

    Don't believe me?

    See https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

    Are there any non-capitalist utopias out there without any problems like this?

    • To be honest, these search results being months apart show quite the opposite of what you're saying...

      Even though I agree with your general point.

    • It is a trendy but dumbass tautology used by intellectually lazy people who think they are smart. Society is based on capitalism; therefore, everything bad is the fault of capitalism.