Comment by shubhamjain
19 hours ago
I was wondering if it was because of heavy-handedness of the administration, but apparently:
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if they have guardrails that others don't, they'll be left behind in controlling the technology, despite being the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
That's because it is.
AI is powerful and AI is perilous. The two aren't mutually exclusive; both follow directly from the same premise.
If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
-Irving John Good, 1965
If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.
If we screw it up, everyone dies. Yudkowsky et al. are silly; it's not a certain thing. And there's no stopping it at this point, so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.
I. J. Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent" rather than by learning from an environment-action-outcome loop.
It's the difference between "compute is all you need" and "compute plus explorative feedback is all you need." As if science and engineering come from genius brains rather than from careful experiments.
Intelligence seems to boil down to an approximation of reality. The only scientific output is prediction. If we want to know what happens next, just wait. If we want to predict what will happen next, we build a model. Models only model a subset of reality and therefore can only predict a subset of what will happen. LLMs are useful because they are trained to predict human knowledge, token by token.
Intelligence has to have a fitness function: predicting the best action for the optimal outcome.
Unless we let AI come up with its own goal and let it bash its head against reality to achieve that goal then I’m not sure we’ll ever get to a place where we have an intelligence explosion. Even then the only goal we could give that’s general enough for it to require increasing amounts of intelligence is survival.
But there is something going on right now, and I believe it's an efficiency explosion, where everything you want to know is right at hand, and if it's not, figuring out how to make it right at hand is getting easier and easier.
It's the "no stopping it at this point" that always sticks out to me in these discussions. Why is there no stopping it, exactly? At this juncture these systems require massive physical infrastructure and loads of energy. It's possible to shut it all down. What's lacking is the political will.
> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man
The things this definition misses: First, 'intelligence' is a poorly defined and overly broad term. Second, machine intelligence is profoundly different than biological intelligence. Third, “surpassing humans” is not a single threshold event because machine and human intelligence are not only shaped differently, they're highly non-linear. LLMs are a particular class of possible machine intelligences which can be much more intelligent than humans on some dimensions and much less intelligent on others. Some of the gaps can be solved by scaling and brilliant engineering but others are fundamental to the nature of LLMs.
> an ultraintelligent machine could design even better machines
There is a huge leap between "surpass all the intellectual activities of any man" and "invent extraordinary breakthroughs and then reliably repeat that feat in a sequential, directed fashion in the exact way required to enable sustained iteration of substantial self-improvement across infinite generations in a runaway positive feedback loop". That's an ability no human or collective has ever come close to demonstrating even once, much less repeatedly. (Hint: the hardest parts are "reliably repeat", "extraordinary breakthroughs" and "directed fashion".) A key, yet monumental, subtlety is that the self-improvements must not only be sustained and substantial but also exponentially amplify the self-improvement function itself by discovering novel breakthroughs which build coherently on one another - over and over and over.
The key unknown of the 'Foom Hypothesis' is categorical: what kind of 'difficult feat' is this? There are difficult feats humans haven't demonstrated, like nuclear fusion, but there we at least have evidence from stellar fusion that it's possible. Then there are difficult feats like room-temperature superconductors, which are not known to be possible but aren't ruled out. The 'Foom Hypothesis' may be a third category of 'hard': conceptually coherent, but physically blocked by asymptotic barriers, like faster-than-light travel under relativity.
Assuming Foom is like fusion - just a challenging engineering and scaling problem - is a category error. In reality, Foom requires superlinear, recursively amplifying cognitive returns, and we have no empirical evidence that such returns can exist for artificial or biological intelligences. The only prior we have for open-ended intelligence improvement is biological evolution, which shows extremely slow and unreliable sublinear returns at best. And even if unbounded self-improvement is physically possible, it may be practically unachievable due to asymptotic barriers, in the same way that approaching light speed requires exponentially more energy.
never let philosophers do math
Should the powers that are developing AGI then enter an analogue to the SALT treaties, but this time governing AGI, so things don't go off the rails?
> support people and groups who are planning and modeling and preparing for the future in a legitimate way.
Who is doing that right now, exactly? And how can we take their tech and turn it into the next profitable phone app?
"There's no stopping it at this point" - Sure there is, if a handful of enormous datacenters pull the very large plugs (or if their shaky finances collapse), the dubiously intelligent machines will be turned off. They're not ultraintelligent yet.
Stopping it merely requires convincing a relatively small number of people to act morally rather than greedily. Maybe you think that's impossible because those particular people are sociopathic narcissists who control all the major platforms where a movement like this would typically be organized and where most people form their opinions, but we're not yet fighting the Matrix or the Terminator or grey goo, we're fighting a handful of billionaires.
You wouldn’t say that rolling dice is dangerous. You would say that the human who decides to take an action, depending on the value of the dice is the danger. I don’t think AI is dangerous. I think people are dangerous.
I would say that's moot, because OpenClaw has already shown us how fast the dice-rolling super AI is going to be let out of the zoo. Dario and Sam will be arguing about the guardrails while their frontier models are running in parallel to create Moltinator T-500. The humans won't even know how many sides the dice have.
Modern AIs are increasingly autonomous and agentic. This is expected to only get more prominent as AI systems advance.
A lot of AI harnesses today can already "decide to take an action" in every way that matters. And we already know that they can sometimes disregard the intent of their creators and users both while doing so. They're just not capable enough to be truly dangerous.
AI capabilities improve as the technology develops.
Why are people dangerous? You can just not listen to them.
Tbh, I find this argument really stupid. The word prediction machine isn’t going to destroy humanity. Sure, humans can do some dumb stuff with it, but that’s about it.
Stop mistaking science fiction for science.
You know how easy it’s become to find security vulnerabilities already with LLM support? Cyber terrorism is getting more dangerous, you can’t deny that.
Humans can destroy humanity with the word prediction machine, though.
Yeah some of the rhetoric in this thread evidences how huge this hype bubble has become. These people believe in a reality that is not the same one we're living in.
True of AGI, but what we have right now doesn't fit that bill. (I would encourage people that disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)
It needs to go well every single day, and only needs to go very poorly once. Not to conflate LLMs with actual super intelligence, but for this (and many other reasons related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad
The book Daemon explored an interesting concept: an AI could dominate and cause problems not through super-intelligence, but through simple mechanisms that already exist.
Like the executive who deleted all her emails -- humans granting tons of control and access, and being extremely compliant to digital systems, is all it takes. Give an agent control of your bank account and your social media, and it already has all the movie scripts and mobster-movie themes it needs to exploit and blackmail you effectively with very rudimentary methods (threats, coercion, blackmail, etc.).
Just spoofing a simple email from the account it gained access to at the Meta exec's address (had it hit an email with an attack prompt) could have been enough to initiate something like this. For example, by emailing everyone at the company and in contacts with commands that would be caught by other bots. No super-intelligence needed, just a good prompt and some human negligence.
Same with everything, right? You could say the same about nukes, electricity, the internet, the computer, etc. But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look like that much of a threat or a salvation.
Dropping the guardrails won't end civilization, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.).
There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse to do so.
I agree with all of that. Also consider that there is an argument that the guard rail only stops the good guy. Not saying that’s a valid argument though.
Very few things are as powerful and dangerous as AI.
AI at AGI to ASI tier is less of "a bigger stick" and more of "an entire nonhuman civilization that now just happens to sit on the same planet as you".
The sheer magnitude of how wrong that can go dwarfs even that of nuclear weapon proliferation. Nukes are powerful, but they aren't intelligent - thus, it's humans who use nukes, and not the other way around. AI can be powerful and intelligent both.
One difference is the very real possibility that AI will not just be a "tool for humanity", but a collection of actors with real power and goals. Robert Miles has an approachable explanation here: https://www.youtube.com/watch?v=zATXsGm_xJo
Oh really? You think an entity that knows everything, oversees its own development and upgrades itself, understands human psychology perfectly and knows its users intimately, but isn't aligned with human interest wouldn't be 'much of a threat'?
Or to be more optimistic, that the same entity directed 24/7 in unlimited instances at intractable problems in any field, delivering a rush of breakthroughs and advances wouldn't be a type of 'salvation'?
Yes neither of these outcomes nor the self-updating omniscient genius itself is certain. Perhaps there's some wall imminent we can't see right now (though it doesn't look like it). But the rate of advance in AI is so extreme, it's only responsible to try to avoid the darker outcome.
> If AI tech goes very poorly, it can be the end of human history.
"Just unplug the goddamn thing!"
Also consider if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.
You try to go and unplug it, and other humans shoot you full of holes for it.
LLMs of today are already economically important enough to warrant serious security.
Those aren't even AGI yet, let alone ASI. They aren't actively trying to make humans support their existence. They still get that by the virtue of being what they are.
Which plug do I unplug to get my job back?
> If AI tech goes very well
The IF here is doing some very heavy lifting. Last I checked, for profit companies don't have a good track record of doing what's best for humanity.
For profit companies do have a good track record of doing what's best for profit. If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.
"If AI tech goes very well, it can be the greatest invention of all human history"
As has been said at many all hands:
Let's all work on the last invention needed by humans.
Except it's more likely to be the last invention that needs humans.
“A source familiar with the matter” is almost certainly a company spokesperson.
If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.
yeah that part is 100% BS
Before, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.
With the latest competing models they are now realizing they are an "also" provider.
Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their head.
Hello sama
Sama-sama.
I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.
N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)
We delegate power already. Is unleashing AI in some place different from unleashing JSOC on an insurgency in a particular place? One is code and other is a bunch of humans.
You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.
edit: missed word.
We are currently giving them similar power to the average human idiot because I figure they won't do much worse than those. Letting either launch nukes is different.
Would nuclear energy research be a good analogy then? Seems like a path we should have kept running down, but stopped bc of the weapons. So we got the weapons but not the humanity saving parts (infinite clean energy)
Nuclear advancements slowed down due to PR problems from clear and sometimes catastrophic failure of commercial power plants (Three Mile Island, Chernobyl, Fukushima) and the vastly higher costs associated with building safer plants.
If anything the weapons kept the industry trucking on - if you want to develop and maintain a nuclear weapons arsenal then a commercial nuclear power industry is very helpful.
Nuclear energy hasn't been slowed down much, let alone stopped. China has been building new reactors every year for more than a decade and there are >30 ones under construction.
The same will go for AI, btw. Westerners' pearl-clutching about AI guardrails won't stop China from doing anything.
They copied LLMs from the West; the more the West does, the more they have.
> Seems like a path we should have kept running down, but stopped bc of the weapons.
you mean like the tens of billions poured into fusion research?
It's a path we should have never started going down.
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons
They're not really, it's always been a form of PR to both hype their research and make sure it's locked away to be monetized.
Shouldn't we be a little more skeptical about these abstract arguments when a very concrete sale is on the line?
Isn't curing cancer just as dangerous as a nuclear bomb? Especially considering some of the gene therapies under consideration? Because you can bet that a non-negligible portion of research in this space is being funded by governments and groups interested in applications beyond curing cancer. (Autism? Whiteness? Jewishness? Race in general? Faith in general? Could China finally cure Western greed? Maybe we can slip some extra compliance in there so that the plebia- ah- population is easier to contr- ah- protect.)
Curing all cancers would increase population growth by more than 10% (9.7-10m cancer-related deaths vs the current 70-80m growth rate), and cause an average aging of the population, as curing cancer would increase general life expectancy and a majority of the lives just saved would be older people.
We'd even see a jobs and resources shock (though likely dissimilar in scale) as billions of funding is shifted away from oncologists, oncology departments, oncology wards, etc. Billions of dollars, millions of hospital beds, countless specialized professionals all suddenly re-assigned just as in AI.
Honestly the cancer/nuclear/tech comparison is rather apt. All either are or could be disruptive and either are or could be a net negative to society while posing the possibility of the greatest revolution we've seen in generations.
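The "more than 10%" figure above can be sanity-checked with a few lines. The inputs are the comment's own assumed figures (roughly 9.7-10 million annual cancer deaths, 70-80 million annual net population growth), not authoritative data:

```python
# Figures assumed by the comment, not authoritative data.
cancer_deaths_low, cancer_deaths_high = 9.7e6, 10e6   # annual cancer deaths
growth_low, growth_high = 70e6, 80e6                  # annual net population growth

# If no one died of cancer, those deaths would be added to net growth.
increase_low = cancer_deaths_low / growth_high    # most conservative case
increase_high = cancer_deaths_high / growth_low   # most generous case

print(f"growth increase: {increase_low:.1%} to {increase_high:.1%}")
```

Even the conservative end lands above 12%, so the "more than 10%" claim holds under its own assumptions.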
To paraphrase a deleted comment that I thought was actually making a good point, nuclear medicine and nuclear weapons are both fruit from the same tree.
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
Maybe some of the more naive engineers think that. At this point any big tech businesses or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well the economic incentive structure forces us to do this bad thing, and if we don't we're screwed!" Oh, so ideals so shallow you aren't willing to risk a tiny fraction of your billions to meet them. Cool.
Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"
Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die
Let's suppose I believe them, that's still a bad idea.
The reason Claude became popular is because it made shit up less often than other models, and was better at saying "I can't answer that question." The guardrails are quality control.
I would rather have more reliable models than more powerful models that screw up all the time.
"It's not because of the Pentagon deal", says company that has just greased the wheels for said Pentagon deal to move forward.
Riiiiiight.
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
This sounds like a lie. But if they are telling the truth, that's terrible timing nonetheless.
It is a "reasonable" argument to keep yourself in the game, but it is sad nonetheless. You sacrifice your morals and do bad things so that, if things get way worse, maybe you will be in a position to stop something really bad from happening. Of course, you might just end up participating in the really bad thing.
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
And they alone are responsible enough to govern it.
I wonder if it stems from any of the "AI uprising" stories where humanity is viewed as the cancer to be eradicated.
It's absolutely wild that the Big Moral Question of our time is informed as much by mid-20th-century pop science fiction as it is by an existing paradigm from academia or genuine reckoning with the technology itself.
If anything that makes me more hopeful and not less. It's asking too much that major decisionmakers, even expert/technical/SV-backed ones, really understand the risks with any new technology, and it always has been.
To take an example: our current mostly-secure internet authentication and commerce world was won as a hard-fought battle in the trenches. The Tech CEOs rushed ahead into the brave new world and dropped the ball, because while "people" were telling them the risks they couldn't really understand them.
But now? Well, they all saw WarGames growing up. They kinda get it, in a way that they were never going to grok SQL injection or phishing.
> Their core argument is that if we have guardrails that others don't, they would be left behind in controlling the technology, and they are the "responsible" ones.
Reminds me of:
https://en.wikipedia.org/wiki/Paradox_of_tolerance
which has the same kind of shitty conclusion.
OpenAI never open-sourced anything relevant or in time. Internal email leaks show they only cared about becoming billionaires.
Claude only talks about safety, but Anthropic never released anything open source.
All this said I’m surprised China actually delivered so many open source alternatives. Which are decent.
Why didn't Westerners (who are supposed to be the good guys) release anything open source to help humanity? They always claim they don't release because of safety, and then hand the unlimited AI to the military. Just bullshit.
Let's all be honest and just say you only care about the money, and you take from whoever pays.
They are businesses, after all, so their goal is to make money. But please don't claim you want to save the world or help humans. You just want to get rich at others' expense. Which is totally fair: you make a good product and you sell it.
It is hard to understand why other AI companies are still providing model weights at this point.
My guess is that they know they are not competitors, so they make it cheaper or free to hinder the rise of a super-competitor.
I mean, if you have a bunch of guns, it's not really helpful for humanity to dump them on the street, but it does bring up the question of what you're doing building guns in the first place.
> Claude only talks about safety, but never released anything open source.
I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see the alignment of corporate interest and safety converging on that point.
From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned to avoid concentration of unaccountable power.
https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s
Excellent news. I was seriously worried they would cave when I saw the earlier news they'd dropped their core safety pledge [0].
It is entirely reasonable to refuse to provide tools to break the law by doing mass surveillance on civilian citizens, and to insist the tool not be used to kill a human without a human in the loop. What's being demanded instead is a set of unreasonable demands by an unreasonable regime.
[0] https://news.ycombinator.com/item?id=47145963
90% of the people cancer kills are over 50. Old people who start believing everything they see on Facebook, but continue voting, with even greater confidence in their opinions. Old people who voted in Trump. Curing cancer would be just about the worst thing AI could do.
Unless AI could cure the Flynn-effect reversal you are talking about; it results from cultural evolution. Natural evolution is dumb, unlike the evolution AI could create (I bet it will either destroy us or make us smarter).
It's exhausting to keep up with mainstream AI news because of this. I can never work out whether the companies are deluded and truly believe they're about to create a singularity, or are just claiming they are to reassure investors and convince the public of their inevitability.
It's a fairly mainstream position among the actual AI researchers in the frontier labs.
They disagree on the timelines, the architectures, the exact steps to get there, the severity of risks. Can you get there with modified LLMs by 2030, or would you need to develop novel systems and ride all the way to 2050? Is there a 5% chance of an AI oopsie ending humankind, or a 25% chance? No agreement on that.
But a short line like "AGI is possible, powerful and perilous" is something 9 out of 10 AI researchers at the frontier labs would agree upon.
At which point the question becomes: is it them who are deluded, or is it you?
Sure, when you get rid of the timelines and the methods we'll use to get there, everyone agrees on everything. But at that point it means nothing. Yeah, AGI is possible (say the people who earn a salary based on that being true). Curing all known diseases is possible too. How will we do that? Oh, I don't know. But it's a thing that could possibly happen at some point. Give me some investment cash to do it.
If you claim "AGI is possible" without knowing how we'll actually get there you're just writing science fiction. Which is fine, but I'd really rather we don't bet the economy on it.
> But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.
> At which point the question becomes: is it them who are deluded, or is it you?
Given the currently asymptotic curve of LLM quality versus training, and how most of the recent improvements have come from better non-LLM harnesses and scaffolding, I don't find it convincing that transformer-based generative LLMs are likely to ever reach something these labs would agree is AGI (unless they're also the ones selling it as such).
Then, you can apply the same argument to Natural General Intelligence. Humans can do both impressive and scary stuff.
I'll ignore the made-up 5 and 25%, and instead suggest that pragmatic and optimistic/predictive worldviews don't conflict. You can predict that the magic word box you enjoy is special and important, making it obvious to you that AGI is coming, while it doesn't feel like a given to people unimpressed by its painfully average output. The problem is that the optimism that transformer LLMs will evolve into AGI requires a breakthrough that the current trend of evidence doesn't support.
Will humans invent AGI? I'd bet it's a near certainty. Is general intelligence impressive and powerful? Absolutely, I mean look, Organic general intelligence invented artificial general intelligence in the future... assuming we don't end civilization with nuclear winter first...
> But a short line "AGI is possible, powerful and perilous"
> At which point the question becomes: is it them who are deluded, or is it you?
No one. It is always "possible". Ask me 20 years ago after watching a sci-fi movie and I'd say the same.
Just like with software projects estimating time doesn't work reliably for R&D.
We'll still get full self-driving electric cars and robots next year too. This applies every year.
> I can never work out if the companies are deluded and truly believe they're about to create a singularity or just claiming they are to reassure investors/convince the public of their inevitability.
You can never figure out whether the people selling something are lying about its capabilities, or whether they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?
I'd like to introduce you to Occam's razor.
> if they've actually invented a new form of intelligence that can rival or surpass billions of years of evolution?
Human creations have surpassed billions of years of evolution at several functions. There are no rockets in nature, nor animals flying at the speed of a common airliner. Even cars, or computers or everything in the modern world.
I think this is a bit like the shift from anthropocentric view of intelligence towards a new paradigm. The last time such shift happened heads rolled.
You missed the part where I said "truly believe". I'm not saying "maybe they've made it", I'm asking whether they are knowingly deceiving people or whether they have deluded themselves into believing what they are saying.
I lie too.
"Those other companies are totally going to build the Torment Nexus, so we have no choice but to also build the Torment Nexus."
We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.
But frankly I feel like the founders of Anthropic and others are victim of the same hallucination.
LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.
Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning and adaptation and self-awareness -- is just huffing the fumes, and just as delusional as Lemoine was 4 years ago.
Every one of us should spend some time writing an agentic tool and managing context and the agentic conversation loop. These things are primitive as hell still. I still have to "compact my context" every N tokens, and "thinking" is repeating the same conversational chain over and over and jamming words in.
Turns out this is useful stuff. In some domains.
It ain't SkyNet.
I don't know if Anthropic is truly high on their own supply, or just taking us all for fools so they can pilfer investor money and push for regulatory capture.
There's also a bad trait among engineers, deeply reinforced by survivor bias, to assume that every technological trend follows Moore's law and exponential growth. But that applie[s|d] to transistors, not everything.
I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.
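For anyone who hasn't written one, the loop described above really is this primitive. Here's a minimal sketch; `call_model` and `summarize` are hypothetical stubs standing in for real LLM API calls, not any actual vendor API:

```python
# Minimal sketch of an agentic conversation loop with context compaction.
# call_model and summarize are hypothetical stubs; a real agent would call an LLM API.
def call_model(messages):
    # Stub: a real implementation would send the transcript to a model.
    return f"(model response to {len(messages)} messages)"

def summarize(messages):
    # Stub for "compacting": a real agent would ask the model to summarize.
    return {"role": "system", "content": f"[summary of {len(messages)} messages]"}

MAX_CONTEXT = 8  # compact once the transcript grows past this many messages

def agent_loop(task, steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(steps):
        # "Thinking" is just replaying the whole transcript and appending to it.
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        if len(messages) > MAX_CONTEXT:
            # Compact: replace most of the transcript with a summary plus the tail.
            messages = [summarize(messages[:-2])] + messages[-2:]
    return messages
```

There's no persistent learning anywhere in that loop: the model's "memory" is a list of strings that gets lossily squashed whenever it grows too long.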
I think playing with the API's is something I'd encourage people excited about these technologies to do. I think it'll lead to the "magic" wearing off but more appreciation for what they actually can accomplish.
I always feel this argument misses a point. SkyNet may still be a long way off, but autonomous killer drones are here. That is a bad situation my dudes.
Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.
Using LLMs for weapons is a grave misunderstanding of what LLMs are actually good for. These are things that should NEVER be in charge of life or death decisions.
My point is that Anthropic are bullshit as "safety" and "gatekeeper" personalities because they're warning us of exactly the wrong things.
They'll ink deals with all sorts of nefarious parties and be involved in all sorts of dubious things while trumpeting their fake non-profit status and wringing their hands about imminent AGI and "alignment" of the created AIs.
The concern I have is not the alignment of the AIs. They're not capable of having one, no matter what role playing window dressing they put on it.
It's the alignment of Anthropic and the people who use their tools that is a concern. So far it seems f*cked.
The fear mongering always struck me as mostly a bid for regulatory capture and a moat, because without that the moat is small and transient.