Comment by stego-tech
9 days ago
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.
The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.
The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.
No one's going to solve anything. "Our" world is run by greedy morons who concentrate power through the hands of plain morons happy to hit you with a stick. This system doesn't think about what "we" should do or be allowed to do, and no one here is on the reasonable side of it either.
> lest we run the very real risk of societal collapse or species extinction
Our part is here: to be replaced with machines, if this AI thing isn't just a fart advertised as mining equipment (which it likely is). We run this risk, not them. People built their wealth; people can go f themselves now. They are fine with all of that. Money (= more power) piles up either way.
No encouraging conclusion.
https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Thanks for the read. One could think the answer is to simply stop being a part of it, but then again you're from the genus that outcompeted everyone else at staying alive. Nature is such a shitty joke by design; I'm not sure how one is supposed to look at the hypothetical designer with warmth in their heart.
I read for such a long time, and I still couldn’t get through that, even though it never got boring.
I like that it ends with a reference to Kushiel and Elua though.
I don't think it's correct to blame "greedy morons" for the fact that AI acceleration is the only viable self-protective policy.
> even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.
I agree that it's good science fiction, but this is still taking it too seriously. All of these "projections" are generalizing from fictional evidence - to borrow a term that's popular in communities that push these ideas.
Long before we had deep learning there were people like Nick Bostrom who were pushing this intelligence explosion narrative. The arguments back then went something like this: "Machines will be able to simulate brains at higher and higher fidelity. Someday we will have a machine simulate a cat, then the village idiot, but then the difference between the village idiot and Einstein is much less than the difference between a cat and the village idiot. Therefore accelerating growth[...]" The fictional part here is the whole brain simulation part, or, for that matter, any sort of biological analogue. This isn't how LLMs work.
We never got a machine as smart as a cat. We got multi-paragraph autocomplete as "smart" as the average person on the internet. Now, after some more years of work, we have multi-paragraph autocomplete that's as "smart" as a smart person on the internet. This is an imperfect analogy, but the point is that there is no indication that this process is self-improving. In fact, it's the opposite. All the scaling laws we have show that progress slows down as you add more resources. There is no evidence or argument for exponential growth. Whenever a new technology is first put into production (and receives massive investments) there is an initial period of rapid gains. That's not surprising. There are always low-hanging fruit.
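To make the "slows down" point concrete, here is a minimal illustrative sketch of a power-law scaling curve of the Chinchilla flavor; the form is an assumption and the constants are made-up placeholders, not fitted values from any paper:

```python
# Illustrative power-law scaling: loss falls as compute grows, but each
# additional order of magnitude buys a smaller absolute improvement.
# The constants below are placeholders, not published fits.
def loss(compute: float, a: float = 10.0, alpha: float = 0.3) -> float:
    return a * compute ** -alpha

for c in [1, 10, 100, 1_000, 10_000]:
    print(f"compute x{c:>6}: loss {loss(c):.3f}")
# 10.000 -> 5.012 -> 2.512 -> 1.259 -> 0.631: diminishing returns,
# the opposite of a self-reinforcing explosion.
```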
We got some new, genuinely useful tools over the last few years, but this narrative that AGI is just around the corner needs to die. It is science fiction and leads people to make bad decisions based on fictional evidence. I'm personally frustrated whenever this comes up, because there are exciting applications which will end up underfunded after the current AI bubble bursts...
If you gather up a couple million of the smartest people on earth along with a few trillion dollars, and you add in super ambitious people eager to be culturally deified, you significantly increase the chance of breakthroughs. It's all probabilities, though. But right now there's no better game to bet on.
> There is no evidence or argument for exponential growth
I think the growth you are thinking of, self-improving AI, needs the AI to be as smart as a human developer/researcher to get going, and we haven't got there yet. But we quite likely will at some point.
And the article specifically mentions that the fictional company (clearly designed to stand in for the Googles/OpenAIs of the world) is, according to the article, working on building exactly that capability: first by augmenting human researchers, later by augmenting itself.
> Someday we will have a machine simulate a cat, then the village idiot... This isn't how LLMs work.
I think you misunderstood that argument. The "simulate the brain" thing isn't a "start from the beginning" argument; it's an "answer a common objection" argument.
Back around 2000, when Nick Bostrom was talking about this sort of thing, computers were simply nowhere near powerful enough to come even close to outsmarting a human, except in very constrained cases like chess; we didn't even have the first clue how to create a computer program that would be even remotely dangerous to us.
Bostrom's point was that, "We don't need to know the computer program; even if we just simulate something we know works -- a biological brain -- we can reach superintelligence in a few decades." The idea was never that people would actually simulate a cat. The idea is, if we don't think of anything more efficient, we'll at least be able to simulate a cat, and then an idiot, and then Einstein, and then something smarter. And since we almost certainly will think of something more efficient than "simulate a human brain", we should expect superintelligence to come much sooner.
> There is no evidence or argument for exponential growth.
Moore's law is exponential, which is where the "simulate a brain" predictions have come from.
> It is science fiction and leads people to make bad decisions based on fictional evidence.
The only "fictional evidence" you've actually specified so far is the fact that there's no biological analog; and that (it seems to me) is from a misunderstanding of a point someone else was making 20 years ago, not something these particular authors are making.
I think the case for AI caution looks like this:
A. It is possible to create a superintelligent AI
B. Progress towards a superintelligent AI will be exponential
C. It is possible that a superintelligent AI will want to do something we wouldn't want it to do; e.g., destroy the whole human race
D. Such an AI would be likely to succeed.
Your skepticism seems to rest on the fundamental belief that either A or B is false: that superintelligence is not physically possible, or at least that progress towards it will be logarithmic rather than exponential.
Well, maybe that's true and maybe it's not; but how do you know? What justifies your belief that A and/or B are false so strongly that you're willing to risk it? And not only willing to risk it, but willing to try to stop people who are trying to think about what we'd do if they are true?
What evidence would cause you to re-evaluate that belief, and consider exponential progress towards superintelligence possible?
And, even if you think A or B are unlikely, doesn't it make sense to just consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?
> Moore's law is exponential, which is where the "simulate a brain" predictions have come from.
To address only one thing from your comment: Moore's law is not a law, it is a trend. It just gets called a law because it is fun. We know that there are physical limits to Moore's law. This gets into somewhat shaky territory, but it seems that current approaches to compute can't reach the density of compute present in a human brain (or other creatures' brains). Moore's law won't get chips to the point of simulating a human brain within the same space and energy budget as a human brain. A new approach will be needed to go beyond simply packing more transistors onto a chip - this is analogous to my view that current AI technology is insufficient to do what human brains do, even when taken to its limit (which is significantly beyond where it currently is).
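A back-of-envelope sketch of that density-and-energy point; every figure is a coarse, commonly quoted order of magnitude, and treating one transistor as a stand-in for one synapse is deliberately generous:

```python
import math

# Rough orders of magnitude only; every figure here is a coarse estimate.
synapses_in_brain = 1e14        # often quoted as ~10^14-10^15 synapses
transistors_per_chip = 1e11     # a large 2020s chip, ~100 billion transistors
brain_power_watts = 20          # approximate metabolic budget of a human brain
accelerator_power_watts = 700   # a single high-end GPU/accelerator

doublings = math.log2(synapses_in_brain / transistors_per_chip)
print(f"~{doublings:.0f} more doublings to match synapse count on one chip")
print(f"power gap per device today: ~{accelerator_power_watts / brain_power_watts:.0f}x")
# Even granting those ~10 doublings (roughly 20 years at the classic
# two-year cadence), nothing in the trend closes the energy-per-device gap.
```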
> The idea is, if we don't think of anything more efficient, we'll at least be able to simulate a cat, and then an idiot, and then Einstein, and then something smarter. And since we almost certainly will think of something more efficient than "simulate a human brain", we should expect superintelligence to come much sooner.
The problem with this argument is that it's assuming that we're on a linear track to more and more intelligent machines. What we have with LLMs isn't this kind of general intelligence.
We have multi-paragraph autocomplete that's matching existing texts more and more closely. The resulting models are great priors for any kind of language processing and have simple reasoning capabilities insofar as those are present in the source texts. Using RLHF to make the resulting models useful for specific tasks is a real achievement, but it doesn't change how the training works or what the original training objective was.
So let's say we continue along this trajectory and we finally have a model that can faithfully reproduce and identify every word sequence in its training data and its training data includes every word ever written up to that point. Where do we go from here?
Do you want to argue that it's possible that there is a clever way to create AGI that has nothing to do with the way current models work and that we should be wary of this possibility? That's a much weaker argument than the one in the article. The article extrapolates from current capabilities - while ignoring where those capabilities come from.
> And, even if you think A or B are unlikely, doesn't it make sense to just consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?
This is essentially https://plato.stanford.edu/entries/pascal-wager/
It might make sense to consider, but it doesn't make sense to invest non-trivial resources.
This isn't the part that bothers me at all. I know people who got grants from, e.g., MIRI to work on research in logic. If anything, this is a great way to fund some academic research that isn't getting much attention otherwise.
The real issue is that people are raising ridiculous amounts of money by claiming that the current advances in AI will lead to some science fiction future. When this future does not materialize it will negatively affect funding for all work in the field.
And that's a problem, because there is great work going on right now and not all of it is going to be immediately useful.
> All of these "projections" are generalizing from fictional evidence - to borrow a term that's popular in communities that push these ideas.
This just isn't correct. Daniel and others on the team are experienced, world-class forecasters. Daniel wrote another version of this in 2021 predicting the AI world in 2026 and was astonishingly accurate. This deserves credence.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
> The arguments back then went something like this: "Machines will be able to simulate brains at higher and higher fidelity.
Complete misunderstanding of the underlying ideas. It's just in "not even wrong" territory.
> We got some new, genuinely useful tools over the last few years, but this narrative that AGI is just around the corner needs to die. It is science fiction and leads people to make bad decisions based on fictional evidence.
You are likely dangerously wrong. The AI field is near-universal in predicting AGI timelines under 50 years, with many under 10. This is an extremely difficult problem to deal with, and ignoring it because you think it's equivalent to overpopulation on Mars is incredibly foolish.
https://www.metaculus.com/questions/5121/date-of-artificial-...
https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predicti...
I respect the forecasting abilities of the people involved, but I have seen that report described as "astonishingly accurate" a few times and I'm not sure that's true. The narrative format lends itself somewhat to generous interpretation and it's directionally correct in a way that is reasonably impressive from 2021 (e.g. the diplomacy prediction, the prediction that compute costs could be dramatically reduced, some things gesturing towards reasoning/chain of thought) but many of the concrete predictions don't seem correct to me at all, and in general I'm not sure it captured the spiky nature of LLM competence.
I'm also struck by the extent to which the first series from 2021-2026 feels like a linear extrapolation while the second one feels like an exponential one, and I don't see an obvious justification for this.
> 2025: ... Making models bigger is not what’s cool anymore. They are trillions of parameters big already. What’s cool is making them run longer, in bureaucracies of various designs, before giving their answers.
Dude was spot on in 2021, hot damn.
https://www.wikihow.com/Leave-a-Cult
> there are exciting applications which will end up underfunded after the current AI bubble bursts
Could you provide examples? I am genuinely interested.
I'm personally very excited about the progress in interactive theorem proving. Before the current crop of deep learning heuristics there was no generally useful higher-order automated theorem proving system. Automated theorem proving could prove individual statements based on existing lemmas, but that only works in extremely restricted settings (propositional or first-order logic). The problem is that in order to apply a statement of the form "for all functions with this list of properties, ..." you need to come up with a function that's related to what you're trying to prove. This is equivalent to coming up with new lemmas and definitions, which is the actually challenging part of doing mathematics or verification.
There has finally been progress here, which is why you see high-profile publications from, e.g., DeepMind about solving IMO problems in Lean. This is exciting, because if you're working in a system like Coq or Lean your progress is monotone. Everything you prove actually follows from the definitions you put in. This is in stark contrast to, e.g., using LLMs for programming, where you end up with a tower of bugs and half-working code if you don't constantly supervise the output.
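As a toy illustration of what "monotone progress" means here, a couple of lines of ordinary Lean 4 (no imports, nothing to do with any particular AI system): once the kernel accepts a lemma, everything proved from it keeps holding.

```lean
-- Once a lemma is checked by the kernel, anything built on it is
-- guaranteed to follow from the definitions; later work can only add to it.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

theorem double_eq (n : Nat) : (n + 0) + (n + 0) = n + n := by
  rw [add_zero_right]
```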
---
But well, the degree of excitement is my own bias. From other people I spoke to recently:

- Risk-assessment diagnostics in medicine. There are a bunch of tests that are expensive and complex to run and need a specialist to evaluate. Deep learning is increasingly used to make it possible to do risk assessments with cheaper automated tests for a large population and have specialists focus on actual high-risk cases. Progress is slow for various reasons, but it has a lot of potential.

- Weather forecasting uses a sparse set of inputs: atmospheric data from planes, weather balloons, measurements at ground stations, etc. This data is then aggregated with relatively stupid models to get the initial conditions to run a weather simulation (a minimal sketch of that blending step follows below). Deep learning is improving this part, but while there has been some encouraging initial progress, it needs to be better integrated with existing simulations (purely deep-learning-based approaches are apparently a lot worse at predicting extreme weather events). Those simulations are expensive; they're running on some of the largest supercomputers in the world, which is why progress is slow.
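For the weather item, here is a minimal sketch of the classic observation-blending step used to build initial conditions (a scalar version of optimal interpolation; the variances and numbers are invented for illustration):

```python
# Blend a model "background" guess with a sparse observation, weighting the
# observation by how much we trust it relative to the model (illustrative only).
def analysis(background: float, obs: float,
             bg_var: float, obs_var: float) -> float:
    gain = bg_var / (bg_var + obs_var)
    return background + gain * (obs - background)

# e.g. model first guess 281.0 K, a nearby station reports 283.5 K
print(analysis(background=281.0, obs=283.5, bg_var=1.0, obs_var=0.25))
# -> 283.0; learned models aim to do this blending better than fixed variances
```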
There is no need to simulate Einstein to transform the world with AI.
A self-driving car would already be plenty.
And a self driving car is not even necessary if we’re thinking about solving transportation problems. Train and bus are better at solving road transportation at scale.
You don’t just beat around the bush here. You actually beat the bush a few times.
Large corporations, governments, institutionalized churches, political parties, and other “corporate” institutions are very much like a hypothetical AGI in many ways: they are immortal, sleepless, distributed, omnipresent, and possess beyond human levels of combined intelligence, wealth, and power. They are mechanical Turk AGIs more or less. Look at how humans cycle in, out, and through them, often without changing them much, because they have an existence and a weird kind of will independent of their members.
A whole lot, perhaps all, of what we need to do to prepare for a hypothetical AGI that may or may not be aligned consists of things we should be doing to restrain and ensure alignment of the mechanical Turk variety. If we can’t do that we have no chance against something faster and smarter.
What we have done over the past 50 years is the opposite: not just unchain them but drop any notion that they should be aligned.
Are we sure the AI alignment discourse isn’t just “occulted” progressive political discourse? Back when they burned witches philosophers would encrypt possibly heretical ideas in the form of impenetrable nonsense, which is where what we call occultism comes from. You don’t get burned for suggesting steps to align corporate power, but a huge effort has been made to marginalize such discourse.
Consider a potential future AGI. Imagine it has a cult of followers around it, which it probably would, and champions that act like present day politicians or CEOs for it, which it probably would. If it did not get humans to do these things for it, it would have analogous functions or parts of itself.
Now consider a corporation or other corporate entity that has all those things but replace the AGI digital brain with a committee or shareholders.
What, really, is the difference? Both can be dangerously unaligned.
Other than perhaps in magnitude? The real digital AGI might be smarter and faster but that’s the only difference I see.
Great comment, and I love the thought process. My answer to the question: What is the difference? Humans and corporations are exceedingly predictable. We know what they both want, generally. We also rely on human issues as a limiting factor.
For an AI controlled corporation, I don't know what it wants or what to expect. And if decision making happens at the speed of light, by the time we have any warning it may be too late to react. Usually with human concerns, we get lots of warnings but wait longer than we should to respond.
I looked but I couldn’t find any evidence that “occultism” comes from encryption of heretical ideas. It seems to have been popularized in renaissance France to describe the study of hidden forces. I think you may be hallucinating here.
Where exactly did you look?
> The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture).
Can you point to the data that suggests these evil corporations are ruining the planet? Carbon emissions are down in every Western country since the 1990s. Not down per capita, but down in absolute terms. And this holds even when adjusting for trade (i.e. we're not shipping our dirty work to foreign countries and trading with them). And this isn't because of some regulation or benevolence. It's a market system that says you should try to produce things at the lowest cost, and carbon usage is usually associated with a cost. Get rid of costs, get rid of carbon.
Other measures for Western countries suggest the water is safer and overall environmental deaths have decreased considerably.
The rise in carbon emissions is due to China and India. Are you talking about evil Chinese and Indian corporations?
https://ourworldindata.org/co2-emissions
https://ourworldindata.org/consumption-based-co2
Emissions are trending downward because of the shift from coal to natural gas, growth in renewable energy, and energy efficiencies, among other things. Major oil and gas companies in the US like Chevron and ExxonMobil have spent millions on lobbying efforts to resist stricter climate regulations and fight the changes that led to this trend, so I'd say they are the closest to the evil corporations OP described. Additionally, the current administration refers to doing anything about climate change as a "climate religion", so this downward trend will likely slow.
The climate regulations are still quite weak. Without a proper carbon tax, a US company can externalize the costs of carbon emissions and get rich by maximizing their own emissions.
Thanks for letting us know everything is fine, just in case we get confused and think the opposite.
You're welcome. I know too many upper-middle-class educated people who don't want to have kids because they believe the earth will cease to be habitable in the next 10 years. It's really bizarre to see, and they'll almost certainly regret it when they wake up one day alone in a nursing home, look around, and realize that the world still exists.
And I think the neuroticism around this topic has led young people into some really dark places (antidepressants, neurotic antisocial behavior, general nihilism). So I think it's important to fight misinformation about end-of-the-world doomsday scenarios with both facts and common sense.
He must be talking about the good, benevolent Western corporations that have outsourced their carbon emissions to the evil and greedy Chinese and Indian corporations.
As addressed in my original comment, it's down even after adjusting for trade:
https://ourworldindata.org/consumption-based-co2
> Can you point to the data that suggests these evil corporations are ruining the planet?
Can you point to data showing that this is because of corporations, rather than despite them?
The burden of proof lies on you, since you mentioned corporations first.
I think a healthy amount of skepticism is warranted when reading about the "reduction" of carbon emissions by companies. Why should we take them at their word when they have a vested interest in fudging the numbers?
Carbon emissions are monitored by dozens of independent agencies in many different ways, over decades. Faking that would require a giant, coordinated suppression effort. Do you have a source that suggests carbon emissions from Western nations are rising?
The most amusing thing about it is the unshakable belief that any part of humanity will be able to build a single nuclear reactor by 2027 to power datacenters, let alone a network of them.
According to Wikipedia, China had 22 under construction as of 2023 for 24 GW of power. They have a goal of 150 by 2035.
I think they'll probably be able to finish at least 1-2 by 2027.
> very real risk of societal collapse or species extinction
No, there is no risk of species extinction in the near future due to climate change, and repeating the line will just deepen the divide and make people tune out other people's words, even those of real climate scientists.
Don’t say the things people don’t want to hear and everything will be fine?
That sounds like the height of folly.
Don't say false things. Especially if it is political and there isn't any way to debate it.
The risk is a quantifiable 0.0%? I find that hard to believe. I think the current trends suggest there is a risk that continued environmental destruction could annihilate society.
Risk can never be zero, just like certainty can never be 100%.
There is a non-zero chance that the ineffable quantum foam will cause a mature hippopotamus to materialize above your bed tonight, and you’ll be crushed. It is incredibly, amazingly, limits-of-math unlikely. Still a non-zero risk.
Better to think of “no risk” as meaning “negligible risk”. But I’m with you that climate change is not a negligible risk; maybe way up in the 20% range IMO. And I wouldn’t be sleeping in my bed tonight if sudden hippos over beds were 20% risks.
It's hard to produce a quantifiable chance of human extinction in the absence of any model by which climate change would lead to it. No climate organization I'm aware of evaluates the end of humanity as even a worst-case risk; the idea simply doesn't exist outside the realm of viral Internet misinformation.
Bingo. Many don't realize superintelligence already exists today, in the form of human superintelligence. Artificial superintelligence is already here too, just as hybrid human-machine workloads. A fully automated superintelligence is no different from a corporation, a nation-state, a religion. When does it count as ASI? When the chief executive is an AI? Or when they use AI to make decisions? Does it need to be at the board level? We are already there; all this changes is what labor humans will do and how they do it, not the amount.
You said it right: science fiction. Honestly, it's exactly the tenor I would expect from the AI hype: this text is completely bereft of any rigour while being dressed up in scientific language. There's no evidence, nothing to support their conclusions, no explanation based on data or facts or supporting evidence. It's purely vibes-based. Their premise is unironically "the CEOs of AI companies say AGI is 3 years away"! But it's somehow presented as this self-important study! Laughable.
But it's par for the course. Write prompts for LLMs to complete? It's "prompt engineering". Tell LLMs to explain their "reasoning" (lol)? It's Deep Research Chain of Thought. Etc.
Did you see the supplemental material that explains how they arrived at their timelines/capabilities forecasts? https://ai-2027.com/research
It's not at all clear that performance rises with compute in a linear way, which is what they seem to be predicting. GPT-4.5 isn't really that much smarter than 2023's GPT-4, nor is it at all smarter than DeepSeek.
There might be (strongly) diminishing returns past a certain point.
Most of the growth in AI capabilities has to do with improving the interface and giving the models more flexibility, e.g., uploading PDFs. Further: OpenAI's "deep research", which can browse the web for an hour and summarize publicly available papers and studies for you. If you ask questions about those studies, though, it's hardly smarter than GPT-4. And it makes a lot of mistakes. It's like a goofy but earnest and hard-working intern.
I fail to see how corporations are responsible for the climate crisis: Politicians won't tax gas because they'll get voted out.
We know that Trump is not captured by corporations because his trade policies are terrible.
If anything, social media is the evil that's destroying the political center: Americans are no longer reading mainstream newspapers or watching mainstream TV news.
The EU is saying the election in Romania was manipulated through TikTok accounts and media.
If you put a knife in someone’s heart, you’re the one who did it and ultimately you’re responsible. If someone told you to do it and you were just following orders… you still did it. If you say there were no rules against putting knives in other people’s hearts, you still did it and you’re still responsible.
If it’s somehow different for corporations, please enlighten me how.
The oil companies are saying their product is vital to the economy, and they are not wrong. How else will we get food from the farms to the store? Ambulances to the hospitals? And many, many other things.
Taxes are the best way to change behaviour (smaller cars, driving less, less flying, etc.). So the government, and the people who vote for it, are to blame.
> Politicians won't tax gas because they'll get voted out.
Have you seen gas tax rates in the EU?
> We know that Trump is not captured by corporations because his trade policies are terrible.
Unless you think it's a long con for some rich people to be able to time the market by getting him to crash it.
> The EU is saying the elections in Romania was manipulated through manipulation of TikTok accounts and media.
More importantly, Romanian courts say that too. And it was all out in the open, so not exactly a secret
Romanian courts say all kinds of things, many of them patently false. It's absurd to claim that because Romanian courts say something, it must be true. It's absurd in principle, because there's nothing in the concept of a court that makes it infallible, and it's absurd in this particular case, because we are corrupt as hell.
I'm pretty sure the election was manipulated, but the court only said so because it benefits the incumbents, which control the courts and would lose their power.
It's a struggle between local thieves and putin, that's all. The local thieves will keep us in the EU, which is much better than the alternative, but come on. "More importantly, Romanian courts say so"? Really?
1 reply →
> Politicians won't tax gas because they'll get voted out.
I wonder if that's the corporations' fault after all: shitty working conditions and shitty wages, so that Bezos can afford to send penises into space. What poor person would agree to a higher tax on gas? And the corps are the ones backing politicians who propagandize that "Unions? That's communism! Do you want to be Chaina?!" (spread by those dickheads on corporate-owned TV and newspapers, drunk dickheads who end up becoming defense secretary).
When people have more money, they tend to buy larger cars that they drive further. Flying is also a luxury.
So corporations are involved in the sense that they pay people more than a living wage.
Whatever the future is, it is not American, not the United States. The US's cultural individualism has been capitalistically weaponized, and the educational foundation to take the country forward is not there. The US is kaput, and we are merely observing the ugly demise. The future is Asia, with all of Western culture going down. Yes, it is not pretty: the failed experiment of American self-rule.
I agree, but I see it as less dire. All of Western culture is not ending; it will be absorbed into a more Asia-dominated culture, in much the way Asian culture was subsumed into Western culture over the past couple of hundred years.
And if Asian culture is better educated and more capable of progress, that’s a good thing. Certainly the US has announced loud and clear that this is the end of the line for us.
> it will be absorbed into a more Asia-dominated culture, in much the way Asian culture was subsumed into Western culture over the past couple of hundred years.
Was Asian culture dominated by the west to any significant degree? Perhaps in countries like India where the legal and parliamentary system installed by the British remained intact for a long time post-independence.
Elsewhere in East and Southeast Asia, the legal systems, education, cultural traditions, and economic philosophies have been very different from the "west", i.e. post-WWII US and Western Europe.
The biggest sign of this is how they developed their own information networks, infrastructure, and consumer networking devices. Europe had many of these regional champions itself (Philips, Nokia, Ericsson, etc.), but now, outside of telecom infrastructure, Europe is largely reliant on American hardware and software.
Of course it will not end; Western culture just will no longer lead. Despite the sky-is-falling perspective of many, it is simply an attitude adjustment. So one group is no longer #1, and the idea that I was ever part of that group was an illusion of propaganda anyway. Life will go on, surprisingly the same.
People said the same thing about Japan but they ran into their own structural issues. It's going to happen to China as well. They've got demographic problems, rule of law problems, democracy problems, and on and on.
I really don't understand this us-vs-them viewpoint. Here's a fictional scenario. Imagine Yellowstone erupts tomorrow and the whole of America becomes uninhabitable, but Africa is unscathed. Now think about this: if America had "really" developed the African continent, wouldn't it provide shelter to scurrying Americans? Many people forget that the real value of money is in what you can exchange it for. Having skilled people, and the associated R&D and subsequent products and services, is what the globalists should have encouraged instead of just rent extraction or stealing. I don't understand the ultimate endgame for the globalists. Does each of them desire a 100km yacht with a helicopter perched on it to ferry them back and forth?
Perhaps, but on the AI front most of the leading research has been in the US or UK, with China being a follower.