Their engineers have been working tirelessly to make Sharepoint/Office/Active Directory as terrible as they possibly can be while still technically being functional, all while continuing to raise prices on them. I've seen many small businesses start to choose Google Workspace over them; the cracks have formed and are large enough that Microsoft is no longer in a position where every business just goes with Office because that's what everyone uses.
Nadella had OpenAI by the short and curlies early on. But all I've seen from him in the last couple of years is continuously acquiescing to OpenAI's demands. I wonder why he's so weak and doesn't exert more control over the situation? At one point Microsoft owned 49% of OpenAI but now it's down to 27%?
Everything is personal preference, and perhaps I am more fiscally conservative because I grew up in poverty.
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has for all intents and purposes been achieved based on the valuation, and at that valuation: OpenAI has to completely crush all competition... basically just to meet its current valuations.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half. Then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxed profit!', but ultimately it did its job: it was an effective donation/investment with reasonably strong returns.
(that said, I do not believe in crypto as an investment opportunity, it's merely the hand I was dealt by it being donated).
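If it helps, here is a minimal sketch of one way the "sell half at a new record high" rule described above could be implemented. The names are illustrative, and it assumes "record high" means a new all-time high over every price observed so far, which is exactly the ambiguity a reply below asks about:

    # Hypothetical sketch of the "sell half on a new record high" rule.
    def sell_half_on_record_high(prices, holdings):
        record = float("-inf")
        sales = []
        for price in prices:
            if price > record:        # a new all-time high
                record = price
                sold = holdings / 2   # sell half, hold the rest
                holdings -= sold
                sales.append((price, sold))
        return holdings, sales

    # Highs at 90, 100 and 120 each trigger a half-sale; 80 does not.
    remaining, sales = sell_half_on_record_high([90, 100, 80, 120], 1.0)
    print(remaining)   # 0.125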
Microsoft didn't sell anything. OpenAI created more shares and sold those to investors, so Microsoft's stake is getting diluted.
And Microsoft only paid $10B for that stake for the most recognizable name brand for AI around the world. They don't need to "hedge their bets" it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
I don’t understand the “record high” point. How did you decide when a “record high” had been reached in a volatile market? Because at $1 the record high might be $2 until it reaches $3 a week or month later. How did you determine where to slice on “record highs”?
Genuine question because I feel like I’m maybe missing something!
They had to negotiate away the non-profit structure of OpenAI. Sam used that as a marketing and recruiting tool, but it had outlived that and was only a problem from then on.
For OAI to be a purely capitalist venture, they had to rip that out. But since the non-profit owned control of the company, it had to get something for giving up those rights. This led to a huge negotiation, and MSFT ended up with 27% of a company that doesn't get kneecapped by an ethics board.
In reality, though, the boards of both the non-profit and the for-profit are nearly identical and beholden to Sam, post–failed coup.
If Sam continues doing Sam things, MS might get 0% of OpenAI if Satya insists on the previous contract. Either by closing up OpenAI and opening up OpaenAI and/or by MS suing it out of existence. It’s all about what MS can get out of it. If they can get 27% of something rather than nothing, they’re better off.
A wise man from Google said in an internal memo to the tune of:
"We do not have any moat neither does anyone else."
Deepseek v4 is good enough, really really good given the price it is offered at.
PS: Just to be clear - even the most expensive AI models are unreliable, would make stupid mistakes, and their code output MUST be reviewed carefully, so Deepseek v4 is not any different either: it too is just a random token generator based on token frequency distributions with no real thought process, like all other models such as Claude Opus etc.
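For what it's worth, the "random token generator" step being described looks mechanically like the sketch below: a softmax over next-token scores followed by a weighted random draw. This is illustrative only; a real model computes the scores with a large neural network, and the toy numbers here are made up.

    import math
    import random

    def sample_next_token(logits, temperature=0.8):
        """Softmax over scores, then a weighted random draw of one token id."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)                               # subtract max for stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    logits = [2.0, 1.0, 0.5, -1.0]    # toy 4-token vocabulary
    print(sample_next_token(logits))  # usually 0, occasionally something else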
I don’t think LLMs are that great at creating, however much they have improved; I need to stay in the driver’s seat and really understand what’s happening. There’s not that much leverage in eliminating typing.
However, for reviewing, I want the most intelligent model I can get. I want it to really think the shit out of my changes.
I’ve just spent two weeks debugging what turned out to be a bad SQLite query plan (missing a reliable repro). Not one of the many agents, or GPT-Pro thought to check this. I guess SQL query planner issues are a hole in their reviewing training data. Maybe Mythos will check such things.
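For anyone who lands in the same hole: SQLite will tell you the plan it chose via EXPLAIN QUERY PLAN, which is the quickest way to spot an unexpected full scan. A minimal sketch, with a made-up table and query:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user_id INTEGER, ts INTEGER, payload TEXT)")

    query = "SELECT * FROM events WHERE user_id = ? ORDER BY ts"

    # A detail row saying "SCAN events" means a full table scan --
    # often the culprit behind a suddenly slow query.
    for row in con.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print(row)

    # Adding an index changes the plan to "SEARCH events USING INDEX ...".
    con.execute("CREATE INDEX idx_events_user_ts ON events (user_id, ts)")
    for row in con.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print(row)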
I’m a little conflicted on this, as I see a slippery slope here. LLMs in their current state (e.g., Opus-4.7) are really good at planning and one-shot codegen, which I believe is their primary use case. So they do provide enough leverage in that regard.
With this new workflow, however, we should, uncompromisingly, steer the entire code review process. The danger here, the “slippery slope,” is that we’re constantly craving more intelligent models so we can somehow outsource the review to them as well. We may be subconsciously engineering ourselves into obsolescence.
> just a random token generator based on token frequency distributions with no real thought process
I'm not smart enough to reduce LLMs and the entire AI effort to such simple terms, but I am smart enough to see the emergence of a new kind of intelligence even when it threatens the very foundations of the industry that I work for.
It's an illusion of intelligence. Just like when a non-technical person saw a TV for the first time, he thought these people must be living inside that box.
He didn't know about the 40,000-volt electron gun constantly bombarding the phosphor, leaving a glow for a few milliseconds until the next pass.
He thought those guys lived inside that wooden box; there was no other explanation.
Just because you are impressed by the capabilities of some tech (and rightfully so), doesn't mean it's intelligent.
First time I realized what recursion can do (like solving towers of hanoi in a few lines of code), I thought it was magic. But that doesn't make it "emergence of a new kind of intelligence".
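For reference, the Hanoi solver alluded to above really is only a few lines:

    def hanoi(n, source, target, spare):
        """Print the moves that transfer n disks from source to target."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)       # clear the n-1 smaller disks
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)       # stack them back on top

    hanoi(3, "A", "C", "B")                       # 2**3 - 1 = 7 moves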
I keep wondering when this discussion comes up… If I take an apple and paint it like an orange, it’s clearly not an orange. But how much would I have to change the apple for people to accept that it’s an orange?
This discussion keeps coming up in all aspects of society, like (artificial) diamonds and other, more polarizing topics.
It’s weird and it’s a weird discussion to have, since everyone seems to choose their own thresholds arbitrarily.
I went and tried to debug a script. I asked DeepSeek V4 Pro and Claude the same prompt; they both made the exact same decisions, which led to the exact same issue, and to me telling them it's still not working, with context, over a dozen times.
Over a dozen times they both gave the same answer, not word for word, but the exact same reasoning.
The difference is that DeepSeek did it at 1/40th of the price (API).
To be honest, DeepSeek V4 Pro is 75% off currently, but we're still speaking of something like $3 vs $20.
Fully agree, I only pay the minimum for frontier models to get DeepSeek v4 output reviewed. I don't see this changing either because we have reached a level of good enough at this point.
It's indeed the latter. Psychologically harder for me than a $20/mo sub but still a better value for the money. I'm finding myself spending closer to $40-$60 a month w/ openrouter without a forced token break.
Edit: it looks like it's 75% off right now which is really an incredible deal for such a high caliber frontier model.
You make your own subscription. If you want to pay $20/month then put $20 into your account. When you use it up, wait till the next month (or buy more).
PS: Just to be clear - even the most expensive humans are unreliable, would make stupid mistakes, and their output MUST be reviewed carefully, so you’re not any different either. You’re just a random next-thought generator based on neuron firing distributions with no real thought process, trained on a few billion years of evolution like all other humans.
But once a human learns a function their errors are more predictable. And they can predict their own error before an operation and escalate or seek outside review/advice.
For example, ask any model: "Which class of problems and domains do you have a high error rate in?"
Looks like you have either not worked with any human or not worked with an LLM; otherwise, arriving at such a conclusion is damn near impossible.
The humans I did work with were very, very bright. No software developer in my career ever needed more than a paragraph of a JIRA ticket for the problem statement, and they figured out domains that were not even theirs to begin with, without making any mistakes, not only identifying edge cases but sometimes actually improving the domain processes by suggesting what is wasteful and what can be done differently.
I'm still not sure what people who declare that human cognition is equivalent to large language models think they are contributing to the conversation when they do so.
Never mind the fact that they are literally able to introspect human cognition and presumably find non-verbal and non-linear modes of cognition.
Amusing and directionally correct, but as random next-thought generators connected to a conscious hypervisor with individual agency,* humanity still has a pretty major leg up on the competition.
*For some definitions of individual agency. Incompatiblists not included.
I hate that I agree with you. But there's a difference between whether AI is as powerful as some say, and whether it's good for humanity. A cursory review of human history shows that some revolutionary technologies make life as a human better (fire, writing, medicine) and others make it worse (weapons, drugs, processed foods). While we adapt to the commoditization of our skills, we should also be questioning whether the technologies being rolled out right now are going to do more harm than good, and we should be organizing around causes that optimize for quality of life as a human. If we don't push for that, then the only thing we're optimizing for is wealth consolidation.
It understands everything in thinking mode and will break down its rule system for adhering to Chinese regulation.
So if you or anyone passing by was curious: yes, you can get accurate output about the Chinese head of state, and political and critical commentary about him, China, and the party.
Its final answer will not play along
If you want an unfiltered answer on that topic, just triage it to a western model, if you want unfiltered answers on Israel domestic and foreign policy, triage back to an eastern model. You know the rules for each system and so does an LLM
What a crock of bs. A brain is "just" electrochemistry and a novel is "just" arrangements of letters. The question isn't the substrate, it's what structure emerges on top of it. Anthropic's own interpretability work has surfaced internal features that look like learned concepts, planning, and something resembling goal-directed reasoning. Calling the outputs random is wrong in a specific way: the distribution is extraordinarily structured.
This is just starting to feel like desperation, making this claim that SOTA LLMs are random token generators with absolutely no possibility of anything above that. Keep shouting into the wind, though.
"Deepseek v4 is good enough, really really good given the price it is offered at."
Kimi, MiMo, and GLM 5.1 all score higher and are cheaper.
They all came out before DeepSeek v4. I think you're pattern-matching on last year's discourse.
(I haven't seen other replies, yet, but I assume they explain the PS that amounts to "quality doesn't matter anyway": which still doesn't address the fact it's more expensive and worse.)
At those numbers it's all a silly game. How much of that was paid to shareholders rather than the business so they can cash out? How much of that is vendors buying future revenue? What liquidation preference is that at?
From what has been reported it's clearly not as simple as raising 122 billion. Some folks called it "scraping the barrel", supposedly Anthropic has surpassed them on the secondary market, etc.
Am I crazy, or was this press release fully rewritten in the past 10 minutes? The current version is around half the length of the old one, which did not frame it as a "simplification" "grounded in flexibility" but as a deeper partnership. It also had word salad about AGI, and said Azure retained exclusivity for API products but not other products, which the new statement seems to contradict.
It’s extraordinary how much standards have slipped. Completely rewriting a major press release that’s already been sent out, while pretending it’s the same document, would have been a major corporate scandal just 15 years ago.
An interesting side effect of this is that Google Cloud may now be the only hyperscaler that can resell all 3 of the labs' models? Maybe I'm misinterpreting this, but that would be a notable development, and I don't see why Google would allow Gemini to be resold through any of the other cloud providers.
Might really increase the utility of those GCP credits.
Might not be good for Gemini long term if Anthropic and OpenAI can and will sell on every cloud provider they can find, but businesses can only use Gemini via Google Cloud.
Partners with OpenAI, then builds 4 products that compete with each other, runs out of compute despite owning datacenters and having infinite cash, then deploys it all in a way that makes people hate them (Copilot).
And now they are out of chips
That's always the motto with Microslop: buy what's good, established, and liked by everyone, then turn it to shit.
History repeats itself, this company should be dismantled
This strikes me as a pullback by Microsoft. Coupled with some of the other news coming out of Microsoft it appears they are hoping to have "good enough" AI in their products. I think Microsoft knows they can win a lot of business customers by bundling with Office 365.
Per WSJ, previously, they both had revenue sharing agreements. MSFT will no longer send any revenue to OpenAI. OpenAI will still send revenue to MSFT until 2030 (with new caps)
My understanding was that that was in relation to IP licensing. Microsoft got access to anything OpenAI built unless they declared they had developed AGI. This new article apparently unlinks revenue sharing from technology progress, but it's unclear to me if it changes the situation regarding IP if OpenAI (claim to) have achieved AGI.
The disparity in coverage on this new deal is fascinating. It feels like the narrative a particular outlet is going with depends entirely on which side leaked to them first.
Inevitable, really... the deal made sense when OpenAI needed capital and Microsoft needed an AI story, but that has since changed. OpenAI is now valuable enough to act on its own, and keeping Microsoft as a privileged partner doesn't make much sense anymore...
Microsoft Corp. will no longer pay revenue to OpenAI and said its partnership with the leading artificial intelligence firm will not be exclusive going forward.
What does this mean that Microsoft will no longer pay revenue to OpenAI? How did the original deal work?
It's unclear. That was never disclosed. It's similarly unclear what it means that they will no longer pay revenue share to OpenAI. Do they get the models for free now? How does OpenAI make money from the models hosted on Azure if not via revenue share?
It's kind of shocking, given financial transparency requirements, that Microsoft gets away with not disclosing any details of this agreement (or the one it is replacing) to its shareholders. We know there's a cap on the revenue share from OpenAI to Microsoft, but we have no idea what that cap is (nor whether it's higher, lower, or unchanged from the prior agreement).
We have no idea what it means to be the "primary cloud provider" and have the products made available "first on Azure". Does MSFT have new models exclusively for days, weeks, months, or years?
Both facts and more details from the agreement are, quite frankly, highly relevant for judging whether this is a net positive, negative, or neutral for MSFT. It's unbelievable that the SEC doesn't force MSFT to publish at least an economic summary of the deal.
It’s American Business as usual. Personally I’m miffed at how little data Apple needs to provide about product categories, and especially about how much they’ve burnt on the car program. If they shared any data about that at all, some of the leadership might end up having to take responsibility for mismanagement…
> And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
This sounds like an issue where the hyperscalers are acknowledging that the new Foundation model firms may in fact be worth more than they are. Anthropic looks increasingly likely to exceed AWS revenue next year, and OpenAI will likely do the same with Azure.
3 years ago a Foundation model seemed like a feature of a hyperscaler; now hyperscalers look like part of the supply chain.
I think both got taken by surprise. Last year the talk was that AI was a bubble, demand was soft, pilot projects were failing, etc. Model providers still believed, but thought they had a long ramp-up period to build out their own datacenters. Then in late autumn/winter, something happened. Model capability reached a threshold and demand exploded, then just kept exploding. Model firms are scrambling to find any compute capacity they can, which means striking whatever deals they can with hyperscalers. So the question is whether model providers can get enough compute without having to effectively sell themselves to the hyperscalers.
The original "AGI" agreement was always a bit suspect and open to wild interpretations.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic can work with anyone they like but OpenAI couldn't.
It also restricted Microsoft from "partnering" with anyone else. Wouldn't be surprised if we see more news like Amazon and Alphabet investing in Anthropic.
I think it was a lot less restrictive; as far as I understood, the only limit was Microsoft not being allowed to launch competing Microsoft-developed LLMs.
That's a pretty good swap if you're Microsoft. Exclusivity was already unenforceable in practice, and they were going to have to either sue their biggest AI partner or let it slide. Instead they got the AGI escape hatch closed and a revenue cap that at least makes the payments predictable.
It's unclear which elements of this new deal are binding versus promises with OpenAI characteristics. "Microsoft Corp. will publish fiscal year 2026 third-quarter financial results after the close of the market on Wednesday, April 29, 2026" [1]; I'd wait for that before jumping to conclusions.
Kagi Translate was kind enough to turn this from LinkedIn Speak to English:
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
(Andy Jassy) "Very interesting announcement from OpenAI this morning. We’re excited to make OpenAI's models available directly to customers on Bedrock in the coming weeks, alongside the upcoming Stateful Runtime Environment. With this, builders will have even more choice to pick the right model for the right job. More details at our AWS event in San Francisco tomorrow."
Likely, and via vertex on gcp (or whatever they are calling it this year).
Which also means, if you are a big boring AWS or GCP shop, and have a spend commitment with either as part of a long term partnership, it will count towards that. And, you won't likely have to commit to a spend with OpenAI if you want the EU data residency for instance. And likely a bit more transparency with infra provisioning and reserved capacity vs. OpenAI. All substantial improvements over the current ways to use OpenAI in real production.
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
I wonder how this figure was settled. Is it based on consumer pricing? Can't Microsoft and OpenAI just make a number up, aside from a minimum to cover operating costs? When is the number just a marketing ploy to make it seem huge, important and inevitable (and too big to fail)?
I used both Copilot and Kiro:
Copilot Sonnet: 1
Copilot Opus: 3
Kiro Sonnet: 1.3
Kiro Opus: 2.2
IMHO a lot of people will switch to Kiro and/or DeepSeek. It looks like AWS has done the best inference. Google is another big player; it has models and also a cloud. But that's my 2 cents, from someone on AWS.
Biggest upside of this is I expect OpenAI models to be available on Bedrock, which is huge for not having to go back to all your customers with data protection agreements.
Isn’t that an “API product”? I read this assuming the whole point of renegotiation was to let OpenAI sell raw inference via bedrock, but that still seems to be blocked except for selling to the US Government.
> OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.
This just validates why building multi-model routing is the future. If even Microsoft couldn't lock down OpenAI with $13B, enterprise customers definitely shouldn't lock themselves into a single ecosystem. The orchestration layer is about to get very valuable.
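A minimal sketch of what such an orchestration layer might look like; every provider/model name and the call() helper below are hypothetical placeholders, not any real SDK:

    # Hypothetical multi-model router: pick a backend per request
    # instead of hard-wiring one vendor.
    ROUTES = {
        "code":      ("anthropic", "claude-sonnet"),
        "cheap":     ("deepseek",  "deepseek-chat"),
        "reasoning": ("openai",    "gpt-pro"),
    }

    def route(task_kind: str, prompt: str) -> str:
        """Dispatch a prompt to whichever backend the routing table names."""
        provider, model = ROUTES.get(task_kind, ROUTES["cheap"])
        return call(provider, model, prompt)

    def call(provider: str, model: str, prompt: str) -> str:
        # Placeholder: in practice this would hit each vendor's SDK or an
        # aggregator, with fallbacks when a provider is down or rate-limited.
        raise NotImplementedError(f"wire up {provider}/{model}")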
I assume this is part of why GitHub Copilot is moving to usage billing. The cheap/free models in Copilot were OpenAI models, e.g. the GPT-based Raptor Mini, which counted toward usage limits at a 0x multiplier, so basically unlimited usage for Pro and Pro+.
Really interesting. Why would Microsoft have done this deal? I'm a bit lost. Sure they get to not pay a revenue share _to_ OpenAI but surely that's limited to just OpenAI products which is probably a rounding error? Losing exclusivity seems like a big issue for them?
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.
The circular economy section really is shocking: OpenAI committing to buying $250 billion of Azure services, while MSFT's stake is clarified as $132 billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
OpenAI has public models that are pretty 'meh': better than Grok and the Chinese labs, but worse than Google and Anthropic. They still cost a ton to run because OpenAI offers them for free/at a loss.
However, these people are giving away their data, and Microsoft knows that data is going to be worthwhile. They just don't want to pay for the electricity for it.
Small nitpick: the models probably make some money on actual inference. Might not be a massive amount, but hard to see them not having a positive contribution margin purely on inference.
What's losing OpenAI money is paying for the whole of R&D, including training and staff. Microsoft doesn't pay that, so they get the money making part of AI without the associated costs.
I fear for the end user we'll still see more open-microslop spam. I see that daily on youtube - tons of AI generated fakes, in particular with that addictive swipe-down design (ok ok, youtube is Google but Google is also big on the AI slop train).
It’s insane how they talk about AGI, like it was some scientifically qualifiable thing that is certain to happen any time now. When I become the Olympic javelin champion, I will buy a vegan ice cream for everyone with an HN account.
I think we keep changing the goalposts on AGI. If you gave me CC in the 80's, I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person in most conversations). Now every time it gets better, we push that definition further, widening every crack we open into a chasm and declaring that it isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.
Maybe we need to start thinking less about building tests for definitively calling an LLM AGI and instead deciding when we can't tell humans aren't LLMs for declaring AGI is here.
I don't think the goalposts have been shifted for AGI, or the definition of AGI that is used by these corporations. It's just that they broke it down into stages so they could claim AGI was achieved. It was always a model or system that surpasses human capabilities at most tasks / is able to replace a human worker. The big companies broke it down into AGI stage 1, stage 2, etc. to be able to say they achieved AGI.
The Turing Test/Imitation Game is not a good benchmark for AGI. It is a linguistics test only. Many chatbots even before LLMs could pass the Turing Test to a certain degree.
Regardless, the goalpost hasn't shifted. Replacing human workforce is the ultimate end goal. That's why there's investors. The investors are not pouring billions to pass the Turing Test.
Turing himself argued that trying to measure if a computer is intelligent is a fool's errand because it is so difficult to pin down definitions. He proposed what we call the "Turing test" as a knowable, measurable alternative. The first paragraph of his paper reads:
> I propose to consider the question, "Can machines think?" This should begin
> with definitions of the meaning of the terms "machine" and "think." The
> definitions might be framed so as to reflect so far as possible the normal use
> of the words, but this attitude is dangerous. If the meaning of the words
> "machine" and "think" are to be found by examining how they are commonly used
> it is difficult to escape the conclusion that the meaning and the answer to the
> question, "Can machines think?" is to be sought in a statistical survey such as
> a Gallup poll. But this is absurd. Instead of attempting such a definition I
> shall replace the question by another, which is closely related to it and is
> expressed in relatively unambiguous words.
Many people who want to argue about AGI and its relation to the Turing test would do well to read Turing's own arguments.
I don't think so... I think most of the sci-fi I grew up reading presented AGI that could reason better than humans could, like make a plan and carry it out.
Like, do people not know what the word "general" means? It means not limited to any subset of capabilities, so that means it can teach itself to do anything that can be learned. Like start a business. AI today can't really learn from its experiences at all.
The truth is, we have had AGI for years now. We even have artificial super intelligence - we have software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the people on that list are vanishingly few.
AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.
The Turing test pits a human against a machine, each trying to convince a human questioner that the other is the machine. If the machine knows how humans generally behave, for a proper test, the human contestant should know how the machine behaves. I think that this YouTube channel clearly shows that none of today's models pass the Turing test: https://www.youtube.com/@FatherPhi
> Maybe we need to start thinking less about building tests for definitively calling an LLM AGI and instead deciding when we can't tell humans aren't LLMs for declaring AGI is here.
If you've never read the original paper [1], I recommend that you do so. We're long past the point where some human can't determine if X was done by man or machine.
People thought Eliza was alive too in the 60s. AGI is not determined by how ignorant, uninformed humans view a technology they don't understand. That is the single dumbest criterion you could come up with for defining it.
Regarding shifting goalposts, you are suggesting the goalposts are being moved further away, but it's the exact opposite: the goalposts are being moved closer and closer. Someone from the 50s would have expected artificial intelligence to be something recognisable as essentially equivalent to human intelligence, just in a machine. Artificial intelligence in old sci-fi looked nothing like Claude Code. The definition has since been watered down again and again and again and again so that anything and everything a computer does is artificial intelligence. We might as well call a calculator AGI at this point.
The goal post keeps moving because LLM hypeists keep saying LLMs are "close" to AGI (or even are, already). Any reasonably intelligent individual that knows anything about LLMs obviously rejects those claims, but the rest of the world doesn't.
An AGI would not have problems reading an analog clock. Or rather, it would not have a problem realizing it had a problem reading it, and would try to learn how to do it.
An AGI is not whatever (sophisticated) statistical model is hot this week.
Sure, in the 80s, after interacting with CC one time you would have called it 'alive'. After interacting with it for 5-10 minutes, you would clearly see that it is as far from AGI as something as mundane as a C compiler is.
They redefined AGI to be an economic thing, so they can continue making up their stories. All that talk is really just business; no real science in the room there.
It's not a great definition but it's also not a terrible one either.
For an AI system to be able to do all or even most of the jobs in an economy, it has to be well-rounded in a way it still isn't today, meaning: reliability, planning, long-term memory, physical-world manipulation, etc. A system that can do all of that well enough to do the jobs of doctors, programmers, and plumbers is generally intelligent in my view.
It makes sense though. Humans are coherent to the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
Eschatology (/ˌɛskəˈtɒlədʒi/; from Ancient Greek ἔσχατος (éskhatos) 'last' and -logy) concerns expectations of the end of present age, human history, or the world itself.
In case anyone else got vocabulary skill checked like me.
It feels like they have to say/believe it because it's kind of the only thing that can justify the costs being poured into it and the prices it will eventually need to charge (barring major optimizations) to actually make money on users.
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
> like it was some scientifically qualifiable thing
OpenAI and Microsoft do (did?) have a quantifiable definition of AGI, it’s just a stupid one that is hard to take seriously and get behind scientifically.
> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke, if you feel your LLM is sentient, talk to a doctor)
What do you mean we were "supposed to have AGI last summer"?
People obviously have really strong opinions on AI and the hype around investments into these companies but it feels like this is giving people a pass on really low quality discourse.
This source [1] from this time last year says even lab leaders' most bullish estimate was 2027.
It’s insane to me how yesterday someone posted an example of ChatGPT Pro one-shotting an Erdos problem after 90 minutes of thinking and today you’re saying that AGI is a fairy tale.
It's not one-shot. Other people had attempted the same problem w/ the same AI & failed. You're confused about terms so you redefine them to make your version of the fairy tale real.
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.
I saw a founder make decisions based on what OpenAI and Claude were recommending all the time. I think all leaders, founders, etc.
will converge on the same decisions, ideas, and features. I think the form factor of AGI is probably not what we expect it to be. AGI is probably here, we just don't know it or acknowledge it.
> Do the investments make sense if AGI is not less than 10 years away?
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed mostly at retail investors and philosopher podcasters rather than institutional capital.
Investors are typically people with surplus money to invest. Progress cannot be made without trial and error. So fleecing of investors for the greater good of humanity is something I shall allow.
It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so to give a talk on AI. They very confidently stated that AGI had already been "solved": we know exactly how to do it, the only problem is the hardware. But they estimated that would come in about ten years...
When I realized that sama isn't that much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than to a real possibility.
You can read the leaked emails from the Musk lawsuit.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
He’s a glorified portfolio manager (questionable how good he actually is given the results vs Anthropic and how quickly they closed the valuation gap with far less money invested) + expert hype man to raise money for risky projects.
I’m most likely going to be downvoted, but Tofutti Cuties are absolutely delicious vegan ice cream bars. And i’d consume one in celebration of your accomplishment.
> some scientifically qualifiable thing that is certain to happen any time now
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans won’t be able to fully automate any desired action in the future?
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-jesus will come save us all.
A few years ago most people here would have said the same thing about an AI doing most of their programming. Now people here are saying it about AGI. It's a ridiculous inability to extrapolate.
Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (1972 What Computers Can't Do). All this demonstrates is that we need to be careful with various claims about computer intelligence.
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.
I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much, much more than what we have, and I don't know if they are ever going to get there, and I'm not sure what's even there at this point and what will justify their investments.
If we take that statement as fact then I don't believe we are even close to an LLM being sufficiently complex enough.
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI and without starting from scratch down an alternate path it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we have barely started.
If this progress, focus, and these resources don't lead to AGI, despite us already seeing a system that was unimaginable 6 years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics, that's also CRAZY.
If I'm reading you right, your opinion is essentially: "If building bigger and bigger statistical next word predictors won't lead to artificial general intelligence, we will never see artificial general intelligence"
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
> And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics
Their progress is almost nought. Humanoids are stupid creations that are not good at anything in the real world. I'll give it to the machine dogs, at least they can reach corners we cannot.
Not sure if you're being sincere or sarcastic but some of us have lived through several AI winters now. And the fact that such a phenomenon exists is because of this terrible amount of hype the topic gets whenever any progress is made.
It’s an agreement between a public company and a highly scrutinized private company. Several of the provisions will change what happens in the marketplace, which everyone will see.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).
Opinions are my own.
I think the biggest winner of this might be Google. Virtually all the frontier AI labs use TPU. The only one that doesn't use TPU is OpenAI due to the exclusive deal with Microsoft. Given the newly launched Gen 8 TPU this month, it's likely OpenAI will contemplate using TPU too.
Many labs use TPUs, but not exclusively. Most labs need more compute than they can get, and if there's TPU capacity, they'll adapt their systems to be able to run partially on TPUs.
Even Google doesn't only use TPUs.
Why is AMD not more popular, then, if labs are so flexible about giving up CUDA?
And almost by happenstance Apple. Turns out they have a great platform for inference and torched almost nothing comparatively on Siri. The Apple/Gemini deal is interesting, Google continues to demonstrate their willingness to degrade their experience on Apple to try and force people to switch.
If you do the math (I did), in 2 years, open source models that you can run on a future MacBook Pro will be as capable as the frontier cloud models are today. Memory bandwidth is growing rapidly, as is the die area dedicated to the neural cores. And all the while, we have the silicon getting more power efficient and increasingly dense (as it always does). These hardware improvements are coming along as the open source models improve through research advancements. And while the cloud models will always be better (because they can make use of as much power as they want to - up in the cloud), what matters to most of us is whether a model can do a meaningful share of knowledge work for us. At the same time, energy consumption to run cloud infrastructure is out-pacing the creation of new energy supply, which is a problem not easily solved. I believe scarcity of energy will increasingly drive frontier labs toward power efficiency, which necessarily implies that the Pareto frontier of performance between cloud and local execution will narrow.
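To make the "do the math" part concrete: decode speed for a bandwidth-bound local model is roughly memory bandwidth divided by the bytes of active weights streamed per generated token. A rough sketch, with all numbers illustrative assumptions rather than measurements:

    # tokens/sec ~= bandwidth / bytes of active weights per token,
    # since each generated token streams the active weights once.
    def decode_tokens_per_sec(bandwidth_gb_s, active_params_billion, bytes_per_param):
        bytes_per_token = active_params_billion * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # Hypothetical laptop: ~400 GB/s, ~30B active params, 4-bit weights.
    print(decode_tokens_per_sec(400, 30, 0.5))    # ~27 tokens/sec
    # Double the bandwidth and halve the bytes per weight: ~4x faster.
    print(decode_tokens_per_sec(800, 30, 0.25))   # ~107 tokens/sec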
They also degrade their own direct services with little warning or thought put into change management, so, to be fair, Apple may be getting the same quality of service as the rest of us.
Indeed. I'm wondering if Apple's "missing the train" with AI ended up being a blessing for them. Not only in the Google deal, but there are also a lot of people doing interesting stuff locally.
Apple is basically in the same boat as AMD and Intel. They have a weak, raster-focused GPU architecture that doesn't scale to 100B+ inference workloads and especially struggles with large context prefill. TPUs smoke them on inference, and Nvidia hardware is far and away more efficient for training.
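To make the prefill point concrete: prompt processing costs roughly 2 x active-parameters FLOPs per token (ignoring the quadratic attention term), so prefill is bounded by compute throughput, while decode is bounded by memory bandwidth. A rough sketch with illustrative numbers:

    # Prefill time ~= 2 * active_params * prompt_tokens / usable FLOPs.
    def prefill_seconds(prompt_tokens, active_params_billion, usable_tflops):
        flops = 2 * active_params_billion * 1e9 * prompt_tokens
        return flops / (usable_tflops * 1e12)

    # Hypothetical 30B-active model on a 128k-token prompt:
    print(prefill_seconds(128_000, 30, usable_tflops=30))    # ~256 s at 30 TFLOPs
    print(prefill_seconds(128_000, 30, usable_tflops=500))   # ~15 s at 500 TFLOPs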
I wish Google would launch Mac Mini-like devices running their consumer-grade TPUs for local inference. I get that they don't want it to eat into their GCP margins, but it would still get them into consumer desktops that Pixel Books could never penetrate (Chromebooks don't count and may well become obsolete soon due to the MacBook Neo).
Had written a blog post on the same a few days back, if anyone's interested in reading (hardly a 5 minute read): Can Google Win the AI Hardware Race Through TPUs?
https://google-ai-race.pagey.site/
Hello, your link says "~20 min read" which seems to be the case!
> Microsoft will no longer pay a revenue share to OpenAI.
> Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.
How is this helping OpenAI?
OpenAI uses GCP. I don't know if they use TPUs.
https://www.reuters.com/business/retail-consumer/openai-taps...
> The only one that doesn't use TPU is OpenAI
For inference? This is from July 2025: OpenAI tests Google TPUs amid rising inference cost concerns, https://www.networkworld.com/article/4015386/openai-tests-go... / https://archive.vn/zhKc4
> ... due to the exclusive deal with Microsoft
This exclusivity went away in Oct 2025 (except for 'API' workloads).
https://blogs.microsoft.com/blog/2025/10/28/the-next-chapter... / https://archive.vn/1eF0V
Some on this forum will be working for companies with conflicts of interest on the topic, and if an employee's words were construed to be the opinions of the company, that could be bad for that person.
> Who's else would they be?
Their employer? They may work at a related company and be required to say this.
At this point that phrase is an attempt at status signaling.
It's to cover their ass in the event someone makes a stink and quotes them as if it's a company opinion.
The tech companies train their employees to say this in their social media guidance and training.
It's trivial to figure out that OP likely works for Google.
> Opinions are my own.
That is a bold claim!
"There is no free will." - Dr. Robert Sapolsky
I heard a lot of rumors that Google is cooking. And that is what will win the AI game.
In the recent Dwarkesh Podcast episode Jensen Huang (Nvidia) said that virtually nobody but Anthropic uses TPUs. How does that add up?
I am not sure in what context Jensen said that. But Midjourney uses TPUs. Apple uses TPUs. There are no other frontier labs that use them, but Google + Anthropic is 2 out of 3 frontier labs, so.....
You could reasonably say that "a majority of frontier labs use TPUs to train and serve their models."
> How does that add up?
He's been saying whatever is good for Nvidia for years now without any regard for truth or reason. He's one of the least trustworthy voices in the space.
You're asking why a businessman would downplay the use of a competing product line?
Who is the other frontier lab other than Anthropic, OpenAI, and Google? I thought they were ahead of everyone else.
He forgot one other big company that uses TPUs besides Anthropic...
The only reason anyone uses a TPU is because they couldn't get the best GPUs.
Okay? I'm not sure where you're going with this.
Google's TPUs have obvious advantages for inference and are competitive for training.
You think the company that just gave 40B to Anthropic is the winner? Interesting.
That deal is a win-win for Google. If they develop a better coding model than Anthropic and beat them at coding, then they win. If they don’t, they still win by making a ton of money from Anthropic long term.
You think the company that just gave 40B to Anthropic isn’t the winner? Interesting.
This agreement feels so friendly towards OpenAI that it's not obvious to me why Microsoft accepted this. I guess Microsoft just realized that the previous agreement was kneecapping OpenAI so much that the investment was at risk, especially with serious competition now coming from Anthropic?
Microsoft is a major shareholder of OpenAI, they don't want their investment to go to 0. You don't just take a loss on a multiple-digit billion investment.
I think you’re right about this deal. But it’s kind of funny to think back and realize that Microsoft actually has just written off multi-billion-dollar deals, several times in fact.
OpenAI found a way to circumvent the exclusivity. The deal was poorly defined by Microsoft. OpenAI had started selling a service on AWS that had a stateful component to it, not purely an API. Obviously Microsoft didn’t like that and confronted Altman, and this is the settlement of that confrontation: OpenAI doesn’t need to do workarounds, Microsoft won’t sue to enforce exclusivity, and Microsoft doesn’t have to pay revenue share to OpenAI. AWS is a much bigger market, so OpenAI doesn’t care.
Probably more that they are compute constrained. In his latest post, Ben Thompson talks about how Microsoft had to use their own infrastructure and displace outside users in the process, so this is probably to free up compute.
I think it's this. They sell a crap ton of b2b inference through Azure and I'm sure this competes with resources needed for training.
1- Getting OpenAI's models in Azure with no license fee is pretty nice. 2- Microsoft owns ~15-27% of OpenAI, if the agreement was hurting OpenAI more than it was helping Microsoft, seems reasonable to change the terms.
> Microsoft will no longer pay a revenue share to OpenAI.
I feel this looks like a nice thing to have, given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this doesn't end up as a money printing press, as long as OpenAI keeps delivering good models.
OpenAI was also threatening to accuse "Microsoft of anticompetitive behavior during their partnership," an "effort [which] could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign" [1].
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
1 reply →
Does this mean Microsoft gets OpenAI's models for "free" without having to pay them a dime until 2032?
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
2 replies →
Does anyone expect azure quality to improve? Has it improved at all in the last 3 years? Does leadership at MS think it needs to improve?
I doubt it
14 replies →
This is probably a delayed outgrowth of the negotiations last year, where Microsoft started trading weird revenue shares and exclusivity for 27% of the company.
What aspects of the deal do you think kneecapped OpenAI the most?
I think MS wants OpenAI to fail so it can absorb it
MS put in $10B for 50%, if I remember correctly. OpenAI is worth many multiples of that.
12 replies →
$250B committed to Azure helps, especially when some of that is your own investment coming back.
Microsoft and OpenAI quietly killed the AGI clause. The provision that decided what happens when OpenAI builds human-level intelligence, gone. Six months ago that was the most important sentence in tech. Now it's a footnote in a revenue restructuring. Tells you everything about where the AGI conversation actually is.
Thanks ChatGPT
This gives OpenAI the ability to go to AWS instead of being exclusively on Azure. I guess Azure really is hanging on by a thread.
https://news.ycombinator.com/item?id=47616242
Confirmed by Andy Jassy just now https://www.linkedin.com/posts/andy-jassy-8b1615_very-intere...
And Azure still doesn't support IPv6, looking at the GitHub[1].
[1] https://github.com/orgs/community/discussions/10539
Perhaps they should use OpenAI models to figure out how to rollout IPv6.
6 replies →
I was under the impression that as long as GitHub doesn't support IPv6 it is a sign that they still haven't finished their migration to Azure. Azure supports IPv6 just fine.
1 reply →
Well, you see, they just can't find a checkbox for IPv6 support in the IIS GUI on their ingress servers.
lol GitHub doesn't run on Azure at MSFT
They still run their own platform.
2 replies →
OpenAI's thirst for compute probably can't be satisfied by one cloud provider, if at all.
But OpenAI had announced a shift towards b2b and enterprise. It makes sense for their models to be available on the different cloud providers.
Isn't this expected if OpenAI models are going to be listed on AWS GovCloud as a part of the Anthropic / Hegseth fall-out?
What? I thought Azure would always have the Sharepoint/Office/Active Directory cash cow.
Their engineers have been working tirelessly to make Sharepoint/Office/Active Directory as terrible as they possibly can be while still technically functional, all while continuing to raise prices. I've seen many small businesses start to choose Google Workspace over them; the cracks have formed and are large enough that Microsoft is no longer in a position where every business just goes with Office because that's what everyone uses.
3 replies →
Nadella had OpenAI by the short and curlies early on. But all I've seen from him in the last couple of years is continuously acquiescing to OpenAI's demands. I wonder why he's so weak and doesn't exert more control over the situation? At one point Microsoft owned 49% of OpenAI but now it's down to 27%?
Everything is personal preference, and perhaps I am more fiscally conservative because I grew up in poverty.
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has for all intents and purposes been achieved based on the valuation, and at that valuation: OpenAI has to completely crush all competition... basically just to meet its current valuations.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half. Then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxxed profit!'; but ultimately it did its job, it was an effective donation/investment that had reasonably maximal returns.
(that said, I do not believe in crypto as an investment opportunity, it's merely the hand I was dealt by it being donated).
Microsoft didn't sell anything. OpenAI created more shares and sold those to investors, so Microsoft's stake is getting diluted.
And Microsoft only paid $10B for that stake in the most recognizable AI brand in the world. They don't need to "hedge their bets"; it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
5 replies →
I don’t understand the “record high” point. How did you decide when a “record high” had been reached in a volatile market? Because at $1 the record high might be $2 until it reaches $3 a week or month later. How did you determine where to slice on “record highs”?
Genuine question because I feel like I’m maybe missing something!
1 reply →
It's not hype, the demand for inference has grown more this year than expected.
3 replies →
They haven't sold anything; they've been diluted.
1 reply →
It’s not more hype than product, it has found a market (making many billions in revenue), and it’s not valued at trillions. So wrong on all counts.
4 replies →
They had to negotiate away the non-profit structure of OpenAI. Sam used that as a marketing and recruiting tool, but it had outlived that and was only a problem from then on.
For OAI to be a purely capitalist venture, they had to rip that out. But since the non-profit owned control of the company, it had to get something for giving up those rights. This led to a huge negotiation and MSFT ended up with 27% of a company that doesn’t get kneecapped by an ethical board.
In reality, though, the board of both the non-profit and the for profit are nearly identical and beholden to Sam, post–failed coup.
> Nadella had OpenAI by the short and curlies early on
Looks like Nadella is slowly realizing that it is his short and curlies that are in the vice grip in the "If you owe the bank $100 vs $100M" sense?
If Sam continues doing Sam things, MS might get 0% of OpenAI if Satya insists on the previous contract. Either by closing up OpenAI and opening up OpaenAI and/or by MS suing it out of existence. It’s all about what MS can get out of it. If they can get 27% of something rather than nothing, they’re better off.
Why would they acquire more when the company is still not making a profit? To be left holding a bigger bag?
A wise man from Google wrote in an internal memo something to the tune of: "We do not have any moat, and neither does anyone else."
Deepseek v4 is good enough, really really good given the price it is offered at.
PS: Just to be clear - even the most expensive AI models are unreliable, will make stupid mistakes, and their code output MUST be reviewed carefully, so Deepseek v4 is not any different either; it too is just a random token generator based on token frequency distributions with no real thought process, like all other models such as Claude Opus etc.
I don't think LLMs are that great at creating, however much they've improved; I need to stay in the driver's seat and really understand what's happening. There's not that much leverage in eliminating typing.
However, for reviewing, I want the most intelligent model I can get. I want it to really think the shit out of my changes.
I've just spent two weeks debugging what turned out to be a bad SQLite query plan (missing a reliable repro). Not one of the many agents, nor GPT-Pro, thought to check this. I guess SQL query planner issues are a hole in their reviewing training data. Maybe Mythos will check such things.
I’m a little conflicted on this, as I see a slippery slope here. LLMs in their current state (e.g., Opus-4.7) are really good in planning and one-shot codegen, which I believe is their primary use case. So they do provide enough leverage in that regard.
With this new workflow, however, we should, uncompromisingly, steer the entire code review process. The danger here, the "slippery slope," is that we're constantly craving more intelligent models so we can somehow outsource the review to them as well. We may be subconsciously engineering ourselves into obsolescence.
5 replies →
Deepseek v4, Qwen 3.6 Plus/Max, GLM 5+ are all pretty solid for most work.
Don't forget the Kimi 2.6 as well!
> just a random token generator based on token frequency distributions with no real thought process
I'm not smart enough to reduce LLMs and the entire AI effort to such simple terms, but I am smart enough to see the emergence of a new kind of intelligence, even when it threatens the very foundations of the industry I work for.
It's an illusion of intelligence. Just like when a non-technical person saw a TV for the first time: he thought those people must be living inside that box.
He didn't know about the 40,000-volt electron gun constantly bombarding the phosphor, leaving a glow for a few milliseconds until the next pass.
He thought those people lived inside that wooden box; there was no other explanation.
14 replies →
> emergence of a new kind of intelligence
Curious about your definition of these terms.
Just because you are impressed by the capabilities of some tech (and rightfully so), doesn't mean it's intelligent.
The first time I realized what recursion can do (like solving the Towers of Hanoi in a few lines of code), I thought it was magic. But that doesn't make it "the emergence of a new kind of intelligence".
10 replies →
Not really on topic anymore, but…
I keep wondering when this discussion comes up… If I take an apple and paint it like an orange, it’s clearly not an orange. But how much would I have to change the apple for people to accept that it’s an orange?
This discussion keeps coming up in all aspects of society, like (artificial) diamonds and other, more polarizing topics.
It’s weird and it’s a weird discussion to have, since everyone seems to choose their own thresholds arbitrarily.
2 replies →
No you aren’t, clearly.
I agree. Data and userbase are still the moats.
Once a new model or a technique is invented, it’s just a matter of time until it becomes a free importable library.
I went and tried to debug a script. I gave deepseek 4 pro and Claude the same prompt; they both made the exact same decisions, which led to the exact same issue and me telling them it's still not working, with context, over a dozen times.
Over a dozen times they both gave the same answer - not word for word, but the exact same reasoning.
The difference is that deepseek did it at 1/40th of the price (API).
To be honest deepseek V4 pro is 75% off currently, but still, we're speaking of something like $3 vs $20.
Fully agree, I only pay the minimum for frontier models to get DeepSeek v4 output reviewed. I don't see this changing either because we have reached a level of good enough at this point.
> Deepseek v4 is good enough, really really good given the price it is offered at.
Do they have monthly subscriptions, or are they restricted to paying just per token? It seems to be the latter for now: https://api-docs.deepseek.com/quick_start/pricing/
Really good prices admittedly, but having predictable subscriptions is nice too!
It's indeed the latter. Psychologically harder for me than a $20/mo sub but still a better value for the money. I'm finding myself spending closer to $40-$60 a month w/ openrouter without a forced token break.
Edit: it looks like it's 75% off right now which is really an incredible deal for such a high caliber frontier model.
1 reply →
You can just input your $X per month/week/whatever yourself as API credits
You make your own subscription. If you want to pay $20/month then put $20 into your account. When you use it up, wait till the next month (or buy more).
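For anyone wiring this up, here's a minimal sketch of the pay-as-you-go flow, assuming the OpenAI-compatible endpoint DeepSeek documents (treat the model name as a placeholder for whatever the current chat model is called):

  # Sketch: DeepSeek's API is OpenAI-compatible, so the standard openai
  # client works if pointed at their base URL. You pre-fund credits and
  # each call draws them down per token - the "subscription" is just
  # whatever you choose to top up each month.
  from openai import OpenAI

  client = OpenAI(
      base_url="https://api.deepseek.com",
      api_key="sk-...",  # key on the prepaid account
  )
  resp = client.chat.completions.create(
      model="deepseek-chat",  # placeholder; use the current model id
      messages=[{"role": "user", "content": "Review this diff for bugs: ..."}],
  )
  print(resp.choices[0].message.content)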
1 reply →
[flagged]
2 replies →
PS: Just to be clear - even the most expensive humans are unreliable, will make stupid mistakes, and their output MUST be reviewed carefully, so you're not any different either. You're just a random next-thought generator based on neuron firing distributions with no real thought process, trained on a few billion years of evolution like all other humans.
But once a human learns a function, their errors are more predictable. And they can predict their own error before an operation and escalate or seek outside review/advice.
E.g., ask any model "which class of problems and domains do you have a high error rate in?"
Looks like you either have not worked with any human or with an LLM; otherwise arriving at such a conclusion is damn impossible.
The humans I did work with were very, very bright. No software developer in my career ever needed more than a paragraph in a JIRA ticket for the problem statement, and they figured out domains that were not even theirs to begin with, without making any mistakes - not only identifying edge cases but sometimes actually improving the domain processes by suggesting what was wasteful and what could be done differently.
9 replies →
I'm still not sure what people who declare that they equate human cognition with large language models think they are contributing to the conversation when they do so.
Never mind the fact that they are literally able to introspect human cognition and presumably find non-verbal and non-linear cognition modes.
1 reply →
Humans can be held accountable. States have not yet shown the will to hold anyone accountable for LLM failures.
6 replies →
As fallible as they may be, I've never had a next-thought generator recommend me glue as a pizza ingredient.
3 replies →
Amusing and directionally correct, but as random next-thought generators connected to a conscious hypervisor with individual agency,* humanity still has a pretty major leg up on the competition.
*For some definitions of individual agency. Incompatibilists not included.
Equating human thought to matrix multiplication is insulting to me, you, and humanity.
I hate that I agree with you. But there's a difference between whether AI is as powerful as some say, and whether it's good for humanity. A cursory review of human history shows that some revolutionary technologies make life as a human better (fire, writing, medicine) and others make it worse (weapons, drugs, processed foods). While we adapt to the commoditization of our skills, we should also be questioning whether the technologies being rolled out right now are going to do more harm than good, and we should be organizing around causes that optimize for quality of life as a human. If we don't push for that, then the only thing we're optimizing for is wealth consolidation.
Errr... No. Please take this bullshit propaganda to a billionaires twitter feed.
Don't they have the moat of being able to test their models on billions of people and gather feedback?
Can Deepseek answer probing questions about Winnie the Pooh?
What are you using LLMs for? To learn about world politics? Oh boy, do I have news for you…
3 replies →
I can't even make American AIs say no-no words. All AIs are lobotomized drones.
Do you often find yourself asking your Chinese employees what they think about Winnie the Pooh?
Is it subject to CCP censorship? Maybe.
8 replies →
Yeah, I specifically asked it about it. It seemed less censored than Gemini, back when it appeared and the latter was quite useless.
It understands everything in thinking mode and will break down its rule system for adhering to Chinese regulation.
So if you or anyone passing by was curious: yes, you can get accurate output about the Chinese head of state and political and critical messages about him, China, and the party.
Its final answer will not play along.
If you want an unfiltered answer on that topic, just triage it to a western model; if you want unfiltered answers on Israel's domestic and foreign policy, triage back to an eastern model. You know the rules for each system, and so does an LLM.
What a crock of bs. A brain is "just" electrochemistry and a novel is "just" arrangements of letters. The question isn't the substrate, it's what structure emerges on top of it. Anthropic's own interpretability work has surfaced internal features that look like learned concepts, planning, and something resembling goal-directed reasoning. Calling the outputs random is wrong in a specific way, the distribution is extraordinarily structured.
AI will never.... Until it does.
This is just starting to feel like desperation, making this claim that SOTA LLMs are random token generators with absolutely no possibility of anything above that. Keep shouting into the wind though.
"Deepseek v4 is good enough, really really good given the price it is offered at."
Kimi, MiMo, and GLM 5.1 all score higher and are cheaper.
They all came out before DeepSeek v4. I think you're pattern-matching on last year's discourse.
(I haven't seen the other replies yet, but I assume they explain the PS, which amounts to "quality doesn't matter anyway"; that still doesn't address the fact it's more expensive and worse.)
We can't rule out a new innovation that makes frontier models more relevant than deepseek in 6 months. Things evolve so fast.
Equally, you can't rule out innovation that makes DeepSeek more relevant than American models.
4 replies →
>[LLMs are just] random token generator based on token frequency distributions with no real thought
... and who knows if we, humans, are not merely that.
As a former corporate restructuring lawyer… this kind of stuff indicates the cash-strapped scramble of the end days.
After they just raised 122 billion dollars?
At those numbers it's all a silly game. How much of that was paid to shareholders rather than the business so they can cash out? How much of that is vendors buying future revenue? What liquidation preference is that at?
From what has been reported it's clearly not as simple as raising 122 billion. Some folks called it "scraping the barrel", supposedly Anthropic has surpassed them on the secondary market, etc.
Seems more like OpenAI is planning to IPO and that would not have been possible within the previous arrangement, and Microsoft knows that.
Am I crazy, or was this press release fully rewritten in the past 10 minutes? The current version is around half the length of the old one, which did not frame it as a "simplification" "grounded in flexibility" but as a deeper partnership. It also had word salad about AGI, and said Azure retained exclusivity for API products but not other products, which the new statement seems to contradict.
What was I looking at?
I noticed the exact same thing. I read the original, went back to read it again and it’s completely changed.
I think a stickied comment about this would be due. No idea if it's possible to call in @dang via at-name?
2 replies →
The in-house team or the marketing team swooped in last minute, it appears.
It's extraordinary how much standards have slipped. Completely rewriting a major press release that's already been sent out, while pretending it's the same document, would have been a major corporate scandal just 15 years ago.
If anyone has the original release still up and can post it somewhere that would be grand.
It is rewritten on every refresh depending on the reader's mood, personality, etc., so they're most receptive to it.
Obviously not, but we might not be far off from that being a reality.
I don’t know. I couldn’t get past the first paragraph because it seemed like complete slop.
They forgot the "hey ChatGPT, rewrite this to have better impact on the company stock" before submitting it
Microsoft won the first round; now it's lagging far behind. The CEO needs to go; you have to try hard to ruin a play this badly.
Ah, so a familiar position for them, then!
For the last year or so it has started to look like Nadella is worried about his future. If these big plays don't pay off, he is out.
What could the CEO have done?
Not hired Suleyman? Build his own research lab?
Satya made moves early on with OpenAI that should be studied in business classes for all the right reasons.
He also made moves later on that will be studied for all the wrong reasons.
Maybe not bragged "we made them dance"?
That gloating aged poorly.
true he is just the ceo
An interesting side effect of this is that Google Cloud may now be the only hype scaler that can resell all 3 of the labs' models? Maybe I'm misinterpreting this, but that would be a notable development, and I don't see why Google would allow Gemini to be resold through any of the other cloud providers.
Might really increase the utility of those GCP credits.
Might not be good for Gemini long term if Anthropic and OpenAI can and will sell in every cloud provider they can find but businesses can only use Gemini via Google Cloud.
Good for Google Cloud, bad for Gemini = ??? for Google
Except Gemini might end up being far cheaper per token due to the infrastructure advantage
3 replies →
How is it good for Gemini that it's not available on two out of three major cloud platforms?
2 replies →
"hype scaler" indeed!
That will likely mean the end of Gemini models...
https://www.uberbin.net/archivos/estrategias/microsoft-opena...
Elon once said OpenAI would eat Microsoft alive.
Microslop killed itself
Partners with OpenAI, then builds 4 products that compete with each other, runs out of compute despite owning datacenters and having infinite cash, then deploys it all in a way that makes people hate them (Copilot).
And now they are out of chips.
That's always the motto with Microslop: buy what's good, established, and liked by everyone, then turn it to shit.
History repeats itself; this company should be dismantled.
This strikes me as a pullback by Microsoft. Coupled with some of the other news coming out of Microsoft it appears they are hoping to have "good enough" AI in their products. I think Microsoft knows they can win a lot of business customers by bundling with Office 365.
Watch them make a deal with Anthropic.
It is possible! Anthropic is probably more in-line with the way Microsoft thinks about AI.
Wait, I thought OpenAI had to pay Microsoft until AGI was achieved or something? Am I misremembering? Is that a different thing?
Per WSJ, previously, they both had revenue sharing agreements. MSFT will no longer send any revenue to OpenAI. OpenAI will still send revenue to MSFT until 2030 (with new caps)
My understanding was that that was in relation to IP licensing. Microsoft got access to anything OpenAI built unless OpenAI declared they had developed AGI. This new article apparently unlinks revenue sharing from technology progress, but it's unclear to me whether it changes the situation regarding IP if OpenAI achieves (or claims to have achieved) AGI.
[dead]
The disparity in coverage on this new deal is fascinating. It feels like the narrative a particular outlet is going with depends entirely on which side leaked to them first.
Just some of the games sama is playing.
Inevitable, really... the deal made sense when OpenAI needed capital and Microsoft needed an AI story, but that has since changed. OpenAI is now valuable enough to act on its own, and keeping Microsoft as a privileged partner doesn't make much sense anymore...
Related: GitHub has paused new signups for Copilot.
> Starting April 20, 2026, new sign-ups for Copilot Pro, Copilot Pro+, and student plans are temporarily paused.
From: https://docs.github.com/en/copilot/concepts/billing/billing-...
What does this mean that Microsoft will no longer pay revenue to OpenAI? How did the original deal work?
Wonder if this means Microsoft is actually going to be deploying Claude Code internally for usage?
That might help fix some of the bugs in Teams... :)
It's unclear. That was never disclosed. It's similarly unclear what it means that they will no longer pay revenue share to OpenAI. Do they get the models for free now? How does OpenAI make money from the models hosted on Azure if not via revenue share?
They were paying them 20% of the revenue from the hosted OpenAI products I believe?
Does this mean they will host OpenAI products but not pay them? Or does it mean they are paying them in some other way?
4 replies →
It's kind of shocking, given financial transparency requirements, that Microsoft gets away with not disclosing any details of this agreement (or the one it is replacing) to its shareholders. We know there's a cap on the revenue share from OpenAI to Microsoft, but we have no idea what that cap is (nor whether it's higher, lower, or unchanged from the prior agreement).
We have no idea what it means to be the "primary cloud provider" and have the products made available "first on Azure". Does MSFT have new models exclusively for days, weeks, months, or years?
Both facts and more details from the agreement are, quite frankly, highly relevant for judging whether this is a net positive, negative, or neutral for MSFT. It's unbelievable that the SEC doesn't force MSFT to publish at least an economic summary of the deal.
It's American Business as usual. Personally I'm miffed at how little data Apple needs to provide about product categories, and especially about how much they've burnt on the car program. If they shared any data about that at all, some of the leadership might end up having to take responsibility for mismanagement…
This quote from Matt Levine in 2023 feels relevant: https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
> And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
This sounds like an issue where the hyperscalers are acknowledging that the new Foundation model firms may in fact be worth more than they are. Anthropic looks increasingly likely to exceed AWS revenue next year, and OpenAI will likely do the same with Azure.
3 years ago a Foundation model seemed like a feature of a hyperscaler; now hyperscalers look like part of the supply chain.
I think both got taken by surprise. Last year the talk was that AI was a bubble, demand was soft, pilot projects were failing, etc. Model providers still believed, but thought they had a long ramp-up period to build out their own datacenters. Then in late Autumn/Winter, something happened. Model capability reached a threshold and demand exploded, then just kept exploding. Model firms are scrambling to find any compute capacity they can, which means striking whatever deals they can with hyperscalers. So the question is whether model providers can get enough compute without having to effectively sell themselves to the hyperscalers.
The original "AGI" agreement was always a bit suspect and open to wild interpretations.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic can work with anyone they like but OpenAI couldn't.
It also restricted Microsoft from "partnering" with anyone else. Wouldn't be surprised if we see another news like Amazon, Alphabet investing in Anthropic.
I don't think Microsoft ever had that restriction. They partnered with everyone already.
https://blogs.microsoft.com/blog/2025/11/18/microsoft-nvidia...
https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-av...
https://ai.azure.com/
5 replies →
I think it was a lot less restrictive, as far as I understood, the only limit was Microsoft not being allowed to launch competing Microsoft-developed LLMs.
That's a pretty good swap if you're Microsoft. Exclusivity was already unenforceable in practice, and they were going to have to either sue their biggest AI partner or let it slide. Instead they got the AGI escape hatch closed and a revenue cap that at least makes the payments predictable.
It's unclear which elements of this new deal are binding versus promises with OpenAI characteristics. "Microsoft Corp. will publish fiscal year 2026 third-quarter financial results after the close of the market on Wednesday, April 29, 2026" [1]; I'd wait for that before jumping to conclusions.
[1] https://news.microsoft.com/source/2026/04/08/microsoft-annou...
Hopefully this means OpenAI won't exclusively distribute the Codex app through Microsoft's DRM system.
Kagi Translate was kind enough to turn this from LinkedIn Speak to English:
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
This was actually really helpful. I feel like it should be done for all PR speak.
It's better than the original, but still off.
"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
7 replies →
For reference: https://translate.kagi.com/?from=LinkedIn+speak&to=en
Thank you for this!
That's Kagi? Cool, I'll check it out more!
This is somehow even less helpful than the og article.
Do you also do weddings?
OpenAI's logo is actually a depiction of their financial connections.
https://archive.ph/5lTPy
Doesn’t work
Original source afaik here:
https://blogs.microsoft.com/blog/2026/04/27/the-next-phase-o...
So, silly question, does this mean I will be able to get OpenAI models via Bedrock soon?
Yes, https://x.com/ajassy/status/2048806022253609115
(Andy Jassy) "Very interesting announcement from OpenAI this morning. We’re excited to make OpenAI's models available directly to customers on Bedrock in the coming weeks, alongside the upcoming Stateful Runtime Environment. With this, builders will have even more choice to pick the right model for the right job. More details at our AWS event in San Francisco tomorrow."
Likely, and via Vertex on GCP (or whatever they are calling it this year).
Which also means, if you are a big boring AWS or GCP shop, and have a spend commitment with either as part of a long term partnership, it will count towards that. And, you won't likely have to commit to a spend with OpenAI if you want the EU data residency for instance. And likely a bit more transparency with infra provisioning and reserved capacity vs. OpenAI. All substantial improvements over the current ways to use OpenAI in real production.
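If the models do land on Bedrock, the call path would presumably look like any other Bedrock chat model. A rough sketch using boto3's Converse API; the model ID here is a made-up placeholder, since nothing has been announced:

  # Sketch only: "openai.gpt-5:0" is hypothetical; a real ID would come
  # with the launch. Converse is Bedrock's standard chat interface, and
  # the spend lands on your existing AWS bill and commitments.
  import boto3

  bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
  response = bedrock.converse(
      modelId="openai.gpt-5:0",  # hypothetical placeholder
      messages=[{"role": "user", "content": [{"text": "Summarize this DPA."}]}],
      inferenceConfig={"maxTokens": 512},
  )
  print(response["output"]["message"]["content"][0]["text"])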
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
Azure is effectively OpenAI's personal compute cluster at this scale.
What fraction of Azure compute does OpenAI represent? (Does the $250bn commitment have a time period? Is it legally binding?)
Azure did $75B last quarter.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
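Rough arithmetic, assuming the 10-year placeholder term and even pacing: $250B / 40 quarters ≈ $6.25B per quarter, which is under 10% of a $75B Azure quarter. Big, but not "personal cluster" big.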
I wonder how this figure was settled. Is it based on consumer pricing? Can't Microsoft and OpenAI just make a number up, aside from a minimum to cover operating costs? When is the number just a marketing ploy to make it seem huge, important and inevitable (and too big to fail)?
I used both Copilot and Kiro (Copilot: Sonnet 1, Opus 3; Kiro: Sonnet 1.3, Opus 2.2).
IMHO a lot of people will switch to Kiro and/or DeepSeek; it looks like AWS has done inference best. Google is another big player: it has models and also a cloud. But my 2 cents are on AWS.
Biggest upside of this is I expect OpenAI models to be available on Bedrock, which is huge for not having to go back to all your customers with data protection agreements.
Isn't that an "API product"? I read this assuming the whole point of the renegotiation was to let OpenAI sell raw inference via Bedrock, but that still seems to be blocked, except for selling to the US Government.
> OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.
This seems impossible.
I think they updated the article since you grabbed this line.
Amazon CEO says that these models are coming to Bedrock though: https://x.com/ajassy/status/2048806022253609115
This just validates why building multi-model routing is the future. If even Microsoft couldn't lock down OpenAI with $13B, enterprise customers definitely shouldn't lock themselves into a single ecosystem. The orchestration layer is about to get so valuable.
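To make it concrete, a toy sketch of such a routing layer (the provider list and policy are invented for illustration; both endpoints happen to be OpenAI-compatible, which is true of OpenAI and DeepSeek but not of every vendor):

  # Toy sketch of an orchestration/routing layer. A real router would
  # weigh cost, latency, eval scores, data residency, etc.
  from openai import OpenAI

  PROVIDERS = {
      "openai": OpenAI(),  # reads OPENAI_API_KEY from the environment
      "deepseek": OpenAI(base_url="https://api.deepseek.com", api_key="sk-..."),
  }
  MODELS = {"openai": "gpt-5", "deepseek": "deepseek-chat"}  # placeholders

  def route(task: str) -> str:
      # Deliberately dumb policy: cheap bulk work goes to the budget
      # provider, everything else to the default.
      return "deepseek" if task == "bulk" else "openai"

  def complete(task: str, prompt: str) -> str:
      name = route(task)
      resp = PROVIDERS[name].chat.completions.create(
          model=MODELS[name],
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content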
I assume this is part of why Github Copilot is going to usage billing. The cheap/free models in Copilot were OpenAI models. e.g. the GPT-based Raptor Mini, which was counted toward usage limits at a 0 multiplier, so basically unlimited usage for Pro and Pro+.
Glad to see AI is doing great. Still waiting for my 64 GB of DDR5 RAM for 200 dollars.
Really interesting. Why would Microsoft have done this deal? I'm a bit lost. Sure they get to not pay a revenue share _to_ OpenAI but surely that's limited to just OpenAI products which is probably a rounding error? Losing exclusivity seems like a big issue for them?
Interesting timing when one also considers that the Musk vs OpenAI trial is set to get underway.
https://www.dw.com/en/musk-vs-openai-trial-to-get-underway/a...
As time goes on, the value of the model will go down and the value of the tools will go up.
Have Copilot sales brought anything to the coffers? Is Altman the winner here again?
Good news for OpenAI; Microsoft is the main blocker of innovation in the tech industry!
Pursue "new opportunities"? Microslop is dumping OpenAI and wishes it well in its new endeavors.
I read this as the other way. OpenAI was desperate to dump Microsoft.
> OpenAI was desperate to dump Microsoft
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
[1] https://www.reuters.com/technology/microsoft-weighs-legal-ac...
I linked this in another comment but Azure has problems and OpenAI is tired of waiting.
https://news.ycombinator.com/item?id=47616242
In retrospect all those OAI announcements are gonna look so cringe.
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
> They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
3 replies →
Basically it seems that they haven't yet found a way to make money from their models to keep the lights on...
So AWS can finally offer OpenAI models, and not only the OSS versions.
Hopefully they put ChatGPT on Bedrock now.
So we can't use OpenAI on MS now?
Interesting perspective. Would love to see more discussion.
This is exactly the kind of content I come to HN for.
"Advancing Our Amazing Bet" type post
The jig is up!
sounds like divesting behind a bit of nice-sounding scaffolding
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.
The circular economy section really is shocking: OpenAI committing to buying $250 billion of Azure services, while MSFT's stake is clarified as $132 billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
Dennis: I think we made every single one of our Paddy's Dollars back, buddy.
Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.
Dennis: That's right.
Mac: How much fresh cash did we make?
Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.
Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.
Dennis: Right. That is assuming, of course, that they will come back here and drink.
Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.
Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.
Mac: Okay...
Dennis: How does this work, Mac?
Mac: The money keeps moving in a circle.
Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?
Mac: I don't know. I thought you knew.
Great scene
You forgot the best line: "I don't know how the US economy works, much less some kind of self-sustaining one".
Alright my theory:
OpenAI has public models that are pretty 'meh': better than Grok and the Chinese models, but worse than Google's and Anthropic's. They still cost a ton to run because OpenAI offers them for free/at a loss.
However, these people are giving away their data, and Microsoft knows that data is going to be worthwhile. They just don't want to pay for the electricity for it.
Small nitpick: the models probably make some money on actual inference. Might not be a massive amount, but hard to see them not having a positive contribution margin purely on inference.
What's losing OpenAI money is paying for the whole of R&D, including training and staff. Microsoft doesn't pay that, so they get the money making part of AI without the associated costs.
Does this mean AGI has been reached according to their mutually agreed definition?
I think AWS will seize the opportunity.
Why do I see Bloomberg links so often when this site won't even let you read an article without a sub? Do you not have better things to spend money on?
Looks like MS is shafting OpenAI.
"We want to sell surveillance services to the US gov. MSFT was hesitant so we gave ourselves room to do it without them."
Extremely hard to believe that MSFT would have any hesitancy about working with the US government.
IM BURSTING INTO TEARS UNDER MY BLANKET
Two evils walk away. Well, is that good or bad?
I fear that for the end user we'll still see more OpenAI/Microslop spam. I see that daily on YouTube: tons of AI-generated fakes, in particular with that addictive swipe-down design (OK, OK, YouTube is Google, but Google is also big on the AI slop train).
It's insane how they talk about AGI, like it was some scientifically qualifiable thing that is certain to happen any time now. When I become the javelin Olympic Champion, I will buy a vegan ice cream for everyone with an HN account.
I think we keep changing the goalposts on AGI. If you gave me CC in the 80's I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person in most conversations). Now every time it gets better we push that definition further, widen every crack into a chasm, and declare that it isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.
Maybe we need to start thinking less about building tests for definitively calling an LLM AGI and instead deciding when we can't tell humans aren't LLMs for declaring AGI is here.
> I think we keep changing the goalposts on AGI
Isn't that exactly what you would expect to happen as we learn more about the nature and inner workings of intelligence and refine our expectations?
There's no reason to rest our case with the Turing test.
I hear the "shifting goalposts" riposte a lot, but then it would be very unexciting to freeze our ambitions.
At least in an academic sense, what LLMs aren't is just as interesting as what they are.
6 replies →
I don't think the goalposts have been shifted for AGI, or for the definition of AGI used by these corporations. It's just that they broke it down into stages so they can claim AGI has been achieved. It was always a model or system that surpasses human capabilities at most tasks - i.e., being able to replace a human worker. The big companies broke it down into AGI stage 1, stage 2, etc. to be able to say they achieved AGI.
The Turing Test/Imitation Game is not a good benchmark for AGI. It is a linguistics test only. Many chatbots even before LLMs can pass the Turing Test to a certain degree.
Regardless, the goalpost hasn't shifted. Replacing human workforce is the ultimate end goal. That's why there's investors. The investors are not pouring billions to pass the Turing Test.
2 replies →
Turing himself argued that trying to measure if a computer is intelligent is a fool's errand because it is so difficult to pin down definitions. He proposed what we call the "Turing test" as a knowable, measurable alternative. The first paragraph of his paper reads:
> I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
Many people who want to argue about AGI and its relation to the Turing test would do well to read Turing's own arguments.
4 replies →
I don't think so... I think most of the sci-fi I grew up reading presented AGI that could reason better than humans could, like make a plan and carry it out.
Like, do people not know what the word "general" means? It means not limited to any subset of capabilities - so it can teach itself to do anything that can be learned. Like start a business. AI today can't really learn from its experiences at all.
Related: https://en.wikipedia.org/wiki/AI_effect
The truth is, we have had AGI for years now. We even have artificial superintelligence - we have software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the list of such people is vanishingly small.
AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.
2 replies →
The Turing test pits a human against a machine, each trying to convince a human questioner that the other is the machine. If the machine knows how humans generally behave, for a proper test, the human contestant should know how the machine behaves. I think that this YouTube channel clearly shows that none of today's models pass the Turing test: https://www.youtube.com/@FatherPhi
> Maybe we need to start thinking less about building tests for definitively calling an LLM AGI and instead deciding when we can't tell humans aren't LLMs for declaring AGI is here.
If you've never read the original paper [1], I recommend that you do so. We're long past the point where a human can reliably determine if X was done by man or machine.
[1]: https://courses.cs.umbc.edu/471/papers/turing.pdf
People thought Eliza was alive too in the 60s. AGI is not determined by how ignorant, uninformed humans view a technology they don't understand. That is the single dumbest criterion you could come up with for defining it.
Regarding shifting goalposts, you are suggesting the goalposts are being moved further away, but it's the exact opposite. The goalposts are being moved closer and closer. Someone from the 50s would have had the expectation that artificial intelligence ise something recognisable as essentially equivalent to human intelligence, just in a machine. Artificial intelligence in old sci-fi looked nothing like Claude Code. The definition has since been watered down again and again and again and again so that anything and everything a computer does is artificial intelligence. We might as well call a calculator AGI at this point.
The goal post keeps moving because LLM hypeists keep saying LLMs are "close" to AGI (or even are, already). Any reasonably intelligent individual that knows anything about LLMs obviously rejects those claims, but the rest of the world doesn't.
An AGI would not have problems reading an analog clock. Or rather, it would not have a problem realizing it had a problem reading it, and would try to learn how to do it.
An AGI is not whatever (sophisticated) statistical model is hot this week.
Just my take.
2 replies →
Sure, in the 80s, after interacting with CC one time you would call it 'alive'. After having interacted with it for 5-10 minutes you would clearly see that it is as far from AGI as something as mundane as a C compiler is.
By that measure Eliza might pass the Turing test too. It just shows it's far from being a thought-terminating argument by itself.
Maybe moving the goalposts is how we find the definition?
They redefined AGI to be an economical thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.
It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long term memory, physical world manipulation etc. A system that can do all of that well enough so it can do the jobs of doctors, programmers and plumbers is generally intelligent in my view.
6 replies →
> They redefined AGI to be an economical thing
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
23 replies →
It makes sense though. Humans are valuable to the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
5 replies →
Please reveal the “scientific” definition of AGI.
2 replies →
It’s pretty much a religious eschatology at this point
> eschatology
From Wikipedia
Eschatology (/ˌɛskəˈtɒlədʒi/; from Ancient Greek ἔσχατος (éskhatos) 'last' and -logy) concerns expectations of the end of the present age, human history, or the world itself.
In case anyone else got vocabulary-skill-checked like me.
1 reply →
Progress is generally salami slicing, just like escalation in geopolitics. Not a step function.
Russian Invasion - Salami Tactics | Yes Prime Minister
https://www.youtube.com/watch?v=yg-UqIIvang
1 reply →
It feels like they have to say/believe it, because it's kind of the only thing that can justify the costs being poured into it, and the prices it will eventually need to charge (barring major optimizations) to actually make money on users.
This. Someone take Silicon Valley's Adderall away.
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
ATH?
2 replies →
> like it was some scientifically qualifiable thing
OpenAI and Microsoft do (did?) have a quantifiable definition of AGI, it’s just a stupid one that is hard to take seriously and get behind scientifically.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
I bet they were laughing their asses off when they came up with that. This is nonsensical.
1 reply →
We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke; if you feel your LLM is sentient, talk to a doctor).
What do you mean we were "supposed to have AGI last summer"?
People obviously have really strong opinions on AI and the hype around investments into these companies, but it feels like this is giving people a pass on really low-quality discourse.
This source [1] from this time last year says even lab leaders' most bullish estimate was 2027.
[1]. https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
ARM actually built AGI last month. Spoiler: it's a datacenter CPU.
Talk to a doctor? In this economy? I've got ChatGPT to talk to. Wait hang on.
It’s insane to me how yesterday someone posted an example of ChatGPT Pro one-shotting an Erdos problem after 90 minutes of thinking and today you’re saying that AGI is a fairy tale.
It's not one-shot. Other people had attempted the same problem w/ the same AI & failed. You're confused about terms so you redefine them to make your version of the fairy tale real.
5 replies →
Show me a graph of your javelin skill doubling every six months and I'll start asking myself if you'll be the next champion
I could easily make that graph a reality and sustain that pace for a couple years, considering I'm starting from 0 javelin skill.
3 replies →
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.
I do not envy the stress the partnerships, strat ops and infra teams must be perpetually dealing with at OpenAI & Anthropic.
I saw a founder make decisions based on what OpenAI/Claude was recommending all the time. I think all leaders, founders, etc. will converge on the same decisions, ideas, features, etc. I think the form factor of AGI is probably not what we expect it to be. AGI is probably here; we just don't know it or acknowledge it.
Do the investments make sense if AGI is not less than 10 years away?
> Do the investments make sense if AGI is not less than 10 years away?
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
7 replies →
Best way to achieve AGI: Redefine AGI.
1 reply →
The investments don't make sense.
HN signup page about to get the hug of death
The continued fleecing of investors.
Investors are typically people with surplus money to invest. Progress cannot be made without trial and error. So fleecing of investors for the greater good of humanity is something I shall allow.
7 replies →
Thank you, I just created an account and looking forward to my ice cream.
but, is the world ready for your win? I'm very afraid your win might shake the world too much! THINK ABOUT IT!
I think this might be similar to how we switched to cars when we were using horses.
Make mine p p p p p p vicodin
At this point, AGI is either here, or perpetually two years away, depending on your definition.
Full Self-Driving 2.0
It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so giving a talk on AI. They very confidently stated that AGI had already been "solved", we know exactly how to do it, only problem is the hardware. But they estimated that would come in about ten years...
5 replies →
When I realized that sama isn't that much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility.
You can read the leaked emails from the Musk lawsuit.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
Grand delusion, perhaps.
7 replies →
He’s a glorified portfolio manager (questionable how good he actually is given the results vs Anthropic and how quickly they closed the valuation gap with far less money invested) + expert hype man to raise money for risky projects.
1 reply →
I’m most likely going to be downvoted, but Tofutti Cuties are absolutely delicious vegan ice cream bars. And i’d consume one in celebration of your accomplishment.
> some scientifically qualifiable thing that is certain to happen any time now
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans will be able to fully automate any desired action in the future?
> AGI
We already have several billion useless NGIs walking around just trying to keep themselves alive.
Are we sure adding more GIs is gonna help?
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-Jesus will come save us all.
[dead]
[dead]
[dead]
A few years ago most people here would have said the same thing about an AI doing most of their programming. Now people here are saying it about AGI. It's a ridiculous inability to extrapolate.
Where do I sign up?
> some scientifically qualifiable thing that is certain to happen any time now.
If you had presented GPT 5.5 to me 2 years ago, I would have called it AGI.
Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (What Computers Can't Do, 1972). All this demonstrates is that we need to be careful with various claims about computer intelligence.
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding it can't possibly be AGI, so our definition of AGI must have been wrong.
If you didn't call GPT 3.5 AGI I do not believe you when you claim you would have called 5.5 AGI.
I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point, or what will justify their investments.
... until you actually, like, use it and find out all the limitations it has.
GPT 4 was 3 years ago... since then it's been iterative enhancement.
And I've been told my job (litigation attorney) is about to be replaced for over 3 years now, has yet to come close.
If you present ELIZA to people some will think it is AGI today.
There is a reason so many scams happen with technology. It is too easy to fool people.
Any sufficiently complex LLM is indistinguishable from AGI
> Any sufficiently complex LLM is indistinguishable from AGI
Isn't this a tautology? We've de facto defined AGI as "a sufficiently complex LLM."
If we take that statement as fact, then I don't believe we are even close to a sufficiently complex LLM.
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI, and without starting from scratch down an alternate path it may never happen.
LLMs seem to me like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
Some might be missing the reference: https://en.wikipedia.org/wiki/Clarke's_three_laws
We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we've barely started.
If all this progress, focus, and resources doesn't lead to AGI, despite our already seeing a system that was unimaginable six years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics, that's also CRAZY.
If I'm reading you right, your opinion is essentially: "If building bigger and bigger statistical next word predictors won't lead to artificial general intelligence, we will never see artificial general intelligence"
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
> And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics
Their progress is almost nought. Humanoids are stupid creations that are not good at anything in the real world. I'll give it to the machine dogs, at least they can reach corners we cannot.
Not sure if you're being sincere or sarcastic but some of us have lived through several AI winters now. And the fact that such a phenomenon exists is because of this terrible amount of hype the topic gets whenever any progress is made.
> Progress is huge and fast
Is it? We've already scaled up the data input and LLMs in general; the only thing making them advance at all right now is adding processing power.
Same thing happened with self-driving cars. Oh and cryptocurrencies.
OpenAI post: https://news.ycombinator.com/item?id=47921262
Tried to delete this submission in favor of that one, but it was too late.
Usually we prefer the best third-party article to corporate press releases (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...) - I've put a link to the latter in the top text above.
Impossible to take any of this seriously when it constantly refers to AGI.
Especially when OpenAI's definition of AGI is only in financial terms (when it becomes profitable), which can be easily manipulated.
Stop fucking linking paywalls ffs
Why is this being made public?
It’s an agreement between a public company and a highly scrutinized private company. Several of the provisions will change what happens in the marketplace, which everyone will see.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).
Also it's about OpenAI going public.
Might have something to do with the MSFT quarterly report tomorrow