Comment by txrx0000
1 day ago
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
> This is why you can't gatekeep AI capabilities.
What is why?
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
Nationalisation is a worse option for them than what they already have: the companies at their whim and command, kept around as separate entities for blame-gaming and convenience-based distancing.
What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.
Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.
Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will be achievable with smaller and smaller models over time. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level running on a single 16GB GPU before the end of the decade.
> Scaling has hit a wall and will not get us to AGI.
That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.
> human-level to run on a single 16GB GPU before the end of the decade.
That's apparently about 6k books' worth of data.
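The "6k books" figure is very sensitive to what you assume a book weighs in bytes. A quick back-of-envelope sketch (the per-book size here is my own illustrative assumption, not the parent's):

```python
# Back-of-envelope: how many plain-text books fit in 16 GB?
# Assumed figures (illustrative): an average book is roughly
# 100k words at ~6 bytes of UTF-8 per word, i.e. ~600 KB.
gpu_bytes = 16 * 10**9
bytes_per_book = 100_000 * 6

books = gpu_bytes // bytes_per_book
print(f"{books:,} books")  # ~26,666 at these assumptions; the
                           # "~6k books" figure implies a heftier
                           # ~2.7 MB per book
```

Either way, the point stands that 16 GB is a tiny fraction of what today's frontier models occupy.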
> Open-source models are only a couple of months behind closed models
Oh, come on, surely not just a couple months.
Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models the SaaS LLM behemoths offer. Just an anecdote, of course, but that's all I have.
> hardware to run them
Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.
You're buying what exactly for a few hundred thousand? And running what model on it? To support how many users? At what tps?
I run local models on Mac Studios and they are more than capable. Don't spread FUD.
You're the one spreading FUD. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
> bioweapons convention was successful
Was it successful? The jury is still out.
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
Because bioweapons suck, that's why. On the other hand, AI sucks too, but it at least has some use.
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.
I think that's a key difference as well.
And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.
All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.
Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, both in equipment, materials, and training. It's hard to outlaw the use of linear algebra and matrix multiplication.
The last part of your post doesn't necessarily follow or support your argument; the corollary is "It's hard to outlaw RNA".
Don't compare general intelligence to bioweapons. A bioweapon cannot defend against or reverse the effects of another bioweapon.
I don’t see why you think that AGI can reverse the effects of another AGI?
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
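To put rough numbers on that gate: merely holding the weights of a 10-trillion-parameter model dwarfs any consumer GPU. A sketch (the quantization levels are illustrative, and this ignores KV cache and activation memory entirely):

```python
# Rough VRAM needed just to store the weights of a
# 10-trillion-parameter model at common quantization levels.
PARAMS = 10 * 10**12

for precision, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gigabytes = PARAMS * bytes_per_param / 10**9
    print(f"{precision}: {gigabytes:,.0f} GB")
# Even at 4-bit, that's ~5,000 GB of weights -- on the order of
# two hundred 24 GB gaming GPUs before you serve a single token.
```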
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.
Checked the DDR5 price lately?
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.
OK, maybe someone will build a bioweapon that does that for real. :P
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385
I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.
We live in a free society. AI should be democratized like any other technology.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
If it's taken by force, it will stagnate. It makes no sense at all.
The logic used in the threats is that it's a national security risk to not use Claude, but it's also a national security risk to use Claude.
We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.
I am certain that there exist people who are 1) capable of advancing the state of the art in AI, and 2) free of the hubris that lets them believe that their making AI somehow gives them a veto over the fates of nations.
Is TikTok stagnating in the US?
When have US corporations (or simply "the US" really) ever done the right thing for humanity?
"What have the Romans ever done for us?" (https://www.youtube.com/watch?v=Qc7HmhrgTuQ)
Donating the first polio vaccine to humanity.
Funding the majority of HIV prevention in Africa.
The list is long, but you knew that.
This letter and all of this is meaningless.
If they actually wanted to do something they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.
But they didn't. They gave millions to this guy and now they're feigning ignorance or change or whatever this is.
It’s meaningless. Utterly meaningless.
Get what you pay for, I suppose.
What are you talking about? Google employees and the corporation itself in particular overwhelmingly donated to the Harris campaign.
https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...
The corporation gave millions _after_ Trump had already won. If your criticism is that, then that does not apply to the people signing.
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
They control the compute.
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies that the threat of nationalization forces the AI labs' decision. No. I will grant that it influences them, in the sense that AI labs have to account for it.