People can correct me if I'm wrong, but I think the core logic behind OpenAI's valuation was essentially that AI would work like search. Google had the best search engine, it became a centre of gravity that sucked everything in, and suddenly network effects meant it was the centre of the universe. There seem to be two big problems with that though. The first is that for search, queries are both demand for the product and a way of making the product better. The second is that Google was genuinely the best product for a very long time.
Maybe point (1) was unclear at some point, but I think it's mostly clear today that that's not happening. Training the model is mostly distinct from inference.
Point (2) is really funny - because sure, at some point OpenAI was the best, and then Sam Altman blew the place up and spawned a whole host of competitors who could replicate and eventually surpass OpenAI's state of the art.
It now looks like AI is a death march. You must spend billions of dollars to have the best model or you won't be able to sell inference. But even if you do, a whole host of better funded competitors are going to beat you within months, so your inference charges had better pay off extremely quickly. When the gap between models starts to shrink, distribution becomes king, and OpenAI can't compete in that field either.
Google can do that. Meta can do that. MSFT probably can do that. Amazon can do that. OpenAI cannot. They do not have the cash to do it.
Gemma 4, in my view, is good enough to do things similar to Gemini 2.5 Flash: if I point it at code and ask for help, and there is a problem with the code, it'll answer correctly in terms of suggestions. But it's not great at using all tools, or at one-shotting things that require a lot of context or "expert knowledge".
If, a couple more iterations of this from now, say gemma6 is as good as the current Opus and runs completely locally on a Mac, I won't really bother with the cloud models.
I agree. At first I was really turned off by the Gemma 4 line of models because they didn’t function with coding agents as well as the qwen3.5 line of models. However, I found that for other use cases Gemma 4 was very good.
EDIT: I just saw this: “Ollama 0.20.6 is here with improved Gemma 4 tool calling!” I will rerun my tests after breakfast.
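For anyone rerunning the same kind of test, here's roughly what mine looks like - a minimal sketch assuming the ollama Python client (0.4+); "gemma4" is a placeholder tag for whatever model you actually pulled:

    # Minimal tool-calling smoke test. Assumes the ollama Python client
    # (0.4+); "gemma4" is a placeholder tag, not a confirmed model name.
    import ollama

    def get_weather(city: str) -> str:
        """Dummy tool: we only care whether the model emits a tool call."""
        return f"sunny in {city}"

    response = ollama.chat(
        model="gemma4",
        messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
        tools=[get_weather],  # the client builds the JSON schema from the signature
    )

    # If tool calling works, this is a list of structured calls rather than None.
    print(response.message.tool_calls)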
The economy is, more or less, a competition.
If someone gets a really great axe and is happy with it, that's great for them.
But then, other people will be on bulldozers.
They can say they are happy with the axe, but then they are not in the competition at that point.
I think the article was wondering how many billion dollar bulldozers the world needs. My local hardware store sells a variety of axes. I myself am a happy axe user. I even replace them.
Similar vibes to "640k ought to be enough for anybody".
I think the difference is that with LLMs, in a lot of cases you do see some diminishing returns.
I won't deny that the latest Claude models are fantastic at just one shotting loads of problems. But we have an internal proxy to a load of models running on Vertex AI and I accidentally started using Opus/Sonnet 4 instead of 4.6. I genuinely didn't know until I checked my configuration.
AI models will get to the point where, for 99% of problems, something like Gemma is gonna work great for people. Pair it up with an agentic harness on the device that lets it open apps and click buttons and we're done.
I still can't fathom that we're in 2026 in the AI boom and I still can't ask Gemini to turn shuffle mode on in Spotify. I don't think model intelligence is as much of an issue as people think it is.
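The harness part doesn't even need to be fancy. A rough sketch of the loop I mean, where the tool names and the llm client are hypothetical stand-ins, not any real platform API:

    # Sketch of an on-device agentic harness: the model picks device actions
    # until it declares the task done. Tools and client are hypothetical.
    import json

    TOOLS = {
        "open_app": lambda name: f"opened {name}",
        "tap": lambda label: f"tapped {label}",
    }

    def run(goal: str, llm) -> str:
        messages = [{"role": "user", "content": goal}]
        for _ in range(10):  # hard cap so a confused model can't loop forever
            # Assumed contract: the model replies with JSON like
            # {"tool": "open_app", "args": {"name": "Spotify"}} or {"done": "..."}
            action = json.loads(llm.chat(messages))
            if "done" in action:
                return action["done"]
            result = TOOLS[action["tool"]](**action["args"])
            messages.append({"role": "tool", "content": result})
        return "gave up"

    # e.g. run("Turn shuffle mode on in Spotify", llm=some_local_model)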
Look at the whole history of computing. How many times has the pendulum swung from thin to fat clients and back?
I don't think it's even mildly controversial to say that there will be an inflection point where local models get Good Enough and this iteration of the pendulum shall swing to fat clients again.
Assuming improvements in LLMs follow a sigmoid curve, even if the cloud models are always slightly ahead in terms of raw performance it won't make much of a difference to most people, most of the time.
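(For concreteness, by "sigmoid" I just mean the usual logistic shape:

    % Capability over time, saturating at some ceiling L:
    f(t) = \frac{L}{1 + e^{-k\,(t - t_0)}}
    % past the midpoint t_0 each year buys less, so a fixed
    % cloud-vs-local lag translates into a shrinking quality gap.

)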
The local models have their own advantages (privacy, no as-a-service model) that, for many people and orgs, will offset a small performance advantage. And, of course, you can always fall back on the cloud models should you hit something particularly chewy.
(All IMO - we're all just guessing. For example, good marketing or an as-yet-undiscovered network effect of cloud LLMs might distort this landscape).
More than "a 3 year old laptop is fine" - my thinkpad is nearly 10 years old. I upgraded it to 32GB of RAM and have replaced the battery a couple of times, but it's absolutely fine apart from that.
If AI which was leading edge in 2023 can run on a 2026 laptop, then presumably AI which is leading edge in 2026 will run on a 2029 laptop. Given that 2023's AI was world-changing, that capacity is now on today's laptops.
Either AI grows exponentially, in which case it doesn't matter as all work will be done by AI by 2035, or it plateaus in say 2032, in which case by 2035 those models will run on a typical laptop.
Yep, and to be honest we don't really need local models for intensive tasks. At least not yet. You can use openrouter (and others) to consume a wide variety of open models which are capable of using tools in an agentic workflow, close to the SOTA models. These are essentially commodities: many providers, each serving the same model and competing with each other on uptime, throughput, and price. At some point we will be able to run them on commodity hardware, but for now the fact that we have competition between providers is enough to ensure that rug pulls aren't possible.
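To make the commodity point concrete, a hedged sketch - the model slug below is just an example of an open model that multiple providers serve, not a recommendation:

    # OpenRouter exposes an OpenAI-compatible endpoint; the same request can
    # be served by whichever provider currently wins on uptime/throughput/price.
    import os
    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "qwen/qwen3-coder",  # example open-model slug
            "messages": [{"role": "user", "content": "Review this function for bugs: ..."}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])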
Plus having Gemma on my device for general chat ensures I will always have a privacy-respecting offline oracle which fulfils all of the non-programming tasks I could ever want. We are already at the point where the moat for these hyperscalers has basically dissolved for the general public's use case.
If I was OpenAI or Anthropic I would be shitting my pants right now and trying every unethical dark pattern in the book to lock in my customers. And they are trying hard. It won't work. And I won't shed a single tear for them.
> if I point it code and ask for help and there is a problem with the code it’ll answer correctly in terms of suggestions
Could I ask how you do that? I installed openclaw and set it to use Gemma 4, but it didn't act in an agent mode at all: it only responded in the chat window while doing nothing, and didn't read any files or do anything like what you described (though I see you do mention that it's not great at using all tools). What are you using exactly?
Local models seem somewhere between 9 and 24 months behind. I'm not saying I won't be impressed with what online models will be able to do in two years, but I'm pretty satisfied with the prediction that I won't really need them in a couple of years.
But that difference, atm, is the difference between it being OK on its own with a team of subagents given good enough feedback / review mechanisms, and having to babysit it prompt by prompt.
By the time gemma6 lets you do the above, the proprietary models will supposedly already be on the next step change. It just depends on whether you need to ride the bleeding edge, but especially because it's "intelligence", there's an obvious advantage in using the best version, and it's easy to hype it up and generate fomo.
> But that difference atm is the difference between it being OK on its own with a team of subagents given good enough feedback
Do people actually build meaningful things like that?
It's basically impossible to leave any AI agent unsupervised, even with an amazing harness (which is incredibly hard to build). The code slowly rots and drifts over time if not fully reviewed and refactored constantly.
Even if teams of agents working almost fully autonomously were reliable from a functional perspective (they would build a functional product), the end product would have ever-increasing structural chaos over time.
I'd be happy to be proven wrong.
When that happens, you'll have fomo from not using opus 5.x. The numbers that they showed for Mythos show that the frontier is still steadily moving (and maybe even at a faster pace than before)
I would be surprised by that behavior from even 10% of people doing real AI-usable work. Very few people buy a new motherboard or CPU or graphics card every 3 months.
Even now, just because the latest Anthropic model is super great doesn't mean people aren't using other models. Not everyone is subscribed to only the best.
There is a cognitive ceiling for what you can do with smaller models. Animals with simpler neural pathways often outperform whatever we think they're capable of, but there's no substitute for scale. I don't think you'll ever get a 4B or 8B model equivalent to Opus 4.6. Maybe just for coding tasks, but certainly not Opus' breadth.
The only thing that we are sure can't be highly compressed is knowledge, because you can only fit so much information in a given entropy budget without losing fidelity.
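Back-of-envelope version of that budget:

    % An n-parameter model at b bits per parameter stores at most n \cdot b bits.
    % For an 8B model quantized to 8 bits:
    8 \times 10^{9}\,\text{params} \times 8\,\tfrac{\text{bits}}{\text{param}}
      = 6.4 \times 10^{10}\,\text{bits} \approx 8\,\text{GB}
    % and the usable knowledge capacity is lower still, since many of those
    % bits are spent on language and reasoning rather than facts.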
The minimal size limits of reasoning abilities are not clear at all. It could be that you don't need all that many parameters. In which case the door is open for small focused models to converge to parity with larger models in reasoning ability.
If that happens we may end up with people using small local models most of the time, and only calling out to large models when they actually need the extra knowledge.
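That split is easy to prototype even today. A sketch, with `local` and `cloud` as hypothetical chat clients sharing one interface:

    # "Local first, big model only for missing knowledge" routing sketch.
    def answer(question: str, local, cloud) -> str:
        draft = local.chat(
            "Answer the question, or reply with exactly UNSURE "
            f"if you lack the facts.\n\n{question}"
        )
        if draft.strip() == "UNSURE":
            # Escalate only when the small model lacks the knowledge,
            # so the expensive call is the exception, not the rule.
            return cloud.chat(question)
        return draft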
Except you don't want knowledge in the model, and most of that "size" comes from "encoded knowledge", i.e. overfitting. The goal should be to only have language handling in the model, and the knowledge in a database you can actually update, analyze etc. It's just really hard to do.
"World models" (for cars) maybe make sense for self-driving, but they are also just a crude workaround to have a physics simulation to push understanding of physics. Though, unlike most topics, basic physics tends not to change randomly, and it's based on observation of reality, so it probably can work.
Law, health advice, programming stuff etc., on the other hand, change all the time and are all based on what humans wrote about them. Which in some areas (e.g. law or health) is very commonly outdated, wrong, or at least incomplete in a dangerous way. And programming changes all the time.
Having this separation of language processing and knowledge sources is ... hard; language is messy and often interleaves with information.
But this is most likely achievable with smaller models. Actually it might even be easier with a small model. (Though whether the necessary knowledge bases are small enough to run on a Mac is another topic...)
And this should be the goal of AI companies, as it's the only long-term sustainable approach as far as I can tell.
I say should because it may not be: if they solve it that way and someone manages to clone their success, then they lose all their moat for specialized areas, as people can create knowledge bases for those areas with know-how OpenAI simply doesn't have access to. (Which would be a preferable outcome, as it means actual competition and a potentially fair working market.)
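The closest existing approximation of that separation is retrieval: facts live in an updatable store and the model is only asked to do the language work. A sketch, where `store.search` and `llm.chat` are hypothetical stand-ins (and the weights still smuggle in knowledge, which is exactly the hard part):

    # Language-handling model + external, updatable knowledge base.
    def answer_from_kb(question: str, store, llm) -> str:
        # The knowledge can be updated, audited, and analyzed independently...
        facts = store.search(question, top_k=5)
        notes = "\n".join(f"- {fact}" for fact in facts)
        # ...while the model only turns retrieved notes into an answer.
        return llm.chat(
            f"Using ONLY the notes below, answer the question.\n"
            f"Notes:\n{notes}\n\nQuestion: {question}"
        )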
I think you are underestimating the strength a small model can get from tool use. There may be no substitute for scale, but that scale can live outside of the model and be queried using tools.
In the worst case a smaller model could use a tool that involves a bigger model to do something.
This is the classic apple approach - wait to understand what the thing is capable of doing (aka let others make sunk investments), envision a solution that is way better than the competition and then architect a path to building a leapfrog product that builds a large lead.
Pretty much it. That said, they did try to appease the markets by announcing 'Apple Intelligence' so they didn't appear to be behind everyone.
They did do the smart thing of not throwing too much capital behind it. Once the hype crumbles, they will be able to do something amazing with this tech. That will be a few years off but probably worth the wait.
For consumers, AI has anti-hype right now. It's off-putting to see consumer products slapped with a hundred AI labels. I see people talk about how you can turn off all of Apple Intelligence with one toggle rather than hundreds on Samsung.
Firefox is also marketing how easy it is to disable AI.
Yeah, exactly - the Apple Intelligence thing was pure BS to shut up the people who kept saying Apple was going to get disrupted by missing out.
Apple seems to follow the values that Steve laid out. Tim isn’t a visionary but he seems to follow the principles associated with being disciplined with cash quite well. They haven’t done any stupid acquisitions either. Quite the contrast with OAI.
The Vision Pro was a Development Kit; Just like the first generation Apple Watch. It's not meant for the consumers, it's meant for the developers among the consumers.
We will see if they ever release a new VisionOS device, but it's not the first time they did that; see also the Apple Watch.
When have they done that since the first iPhone in 2007? The watch maybe? Though not sure that's "leapfrog" better than anyone else's smartwatch, but I don't have one so maybe I'm wrong.
They certainly announced they were going to. I've yet to meet someone who actually used that integration. Like many of these things, it seems to have been a sop to the investors who were accusing apple of ignoring the AI wave
Will this strategy work every time? Maybe for AI it will (the market is competitive and Apple just purchases the best model for its consumers).
But this approach may not work in other areas: e.g. building electric batteries, wireless modems, electric cars, solar cell technology, quantum computing etc.
Essentially Apple got lucky with AI, but it needs to keep investing in cutting-edge technology in the various broad areas it operates in and not let others get too far ahead!
It works often enough for the company to be wildly successful. They can simply cut their losses and withdraw from industries where it hasn't, such as EVs.
I think their M chips are a good example. They ran on Intel for so long, then did the impossible of changing architecture on the Mac, without much transition pain.
Obviously that was built upon years of iPhone experience, but it shows they can lag behind, buy from other vendors, and still win when it becomes worth it to them.
They (Apple) bought out Intel's wireless modem business and are using those modems instead of Qualcomm's chips. IIRC, they aren't best in class when it comes to raw throughput, but quite good in terms of throughput vs power consumption.
But Apple doesn't just try to do everything. They do the things they think they can do very well.
Why would they try to build electric batteries, wireless modems, electric cars, solar cells, or quantum computers, if their R&D hadn't already determined that they would likely be able to do so Very Well?
It's not like any of those are really in their primary lines of business anyway.
> wait to understand what the thing is capable of doing
My parents use Android to ask “What are the 5 biggest towers in Chicago” or “Remove the people on my picture” while apparently iPhone is only capable of doing “Hey Siri start the Chronometer / There is no contact named Chronometer in your phone”.
My iPhone is lagging a ridiculous 10 years behind. It’s just that I don’t trust Google with my credit card.
It’s even more superpowered than previous implementations of this strategy.
When they made the iPhone, iPod, and Apple Watch they had no specific hardware advantage over competitors. Especially with early iPhone and iPod: no moat at all, make a better product with better marketing and you’ll beat Apple.
Now? Good luck getting any kind of reasonably priced laptop or phone that can run local AI as well as the iPhone/MacBook. It doesn’t matter that Apple Intelligence sucks right now, what matters is that every request made to Gemini is losing money and possibly always will.
This is especially true in 2026 where Windows laptops are climbing in price while MacBooks stay the same.
The best part is that it’ll all run on your device, instead of siphoning off your data to the provider. Local first AI.
I think the creatives will also turn their seething hatred of AI around for Apple AI, because Apple uses more ethical training data and it feels more like they own their AI: no one's charging them a subscription fee to use it and then using their private data for training.
Have you talked to an artist like a musician, an illustrator or a web designer about AI? It's ripping off their work without credit and making them unemployable.
Apple aren’t in the business of building chatbots to impress investors (other than some WWDC2024 vaporware they’d rather not talk about any more). They’re in the business of consumer hardware.
Consumers want iPhones and (if Apple are right) some form of AR glasses in the next decade. That’s their focus. There’s a huge amount of machine learning and inference that’s required to get those to work. But it’s under the hood and computed locally. Hence their chips. I don’t see what Apple have to gain by building a competitor to what OpenAI has to offer.
~25% of Apple's revenue came from services in FY25 (and 50% from iPhone, ~25% from other hardware). They made $415B in that year, so ~$100B from services alone!
Services revenue is mostly just Apple's 30% cut of App Store sales. This means every time a user upgrades to a paid ChatGPT or Claude account on their phone, Apple makes more money than they could with a self-deployed model.
No one uses iMessage in my country. Yet iPhones are sought after. Some of us just really like iPhones for the experience - not everything is a conspiracy. People can have different tastes and are more free to choose than people on HN like to believe.
What I don't get about Apple: when everyone else was giving up on yet another VR attempt and moving into AI, they decided AI wasn't worth it and that it was the right time for a me-too VR headset.
So: no VR, given the price and lack of developer support, and a late arrival into AI.
I think of it like a technology checkpoint. Make sure you got as far as everyone else when they gave up, so when the next innovation in that space comes along you can start back up on even footing.
You want to have your own pathway to production that dodges competitors’ patents, is somewhat defensible itself, maybe a brand, etc.
It is the same pattern: late on VR, late on AI. Those two technologies have a pricing problem. I would guess that Apple is working to create the conditions to make them cheap enough to sell to everyone.
I think it was more that the experience was pretty much there. Hardware takes a loooong time to mature, even more if it's a new style or package. I'm assuming they were prototyping this in 2015-18.
Also, Apple knows that AR glasses, if done right, and not turned into a cesspool of perverts (i.e. Google Glass), will be a massive platform. However it's going to take at least another 5 years to get something usable. So if it's possible, I expect Apple to come out with something just after Meta either gives up or has a string of failures.
When using Siri recently it really struck me how much worse it feels after using ChatGPT. It struggles to understand what I say correctly and you have to give commands in more of a 'computer-friendly' form.
I hope they can at least fix this, as I really only use it as a hands-free system while driving.
I don't like companies noisily forcing their newest features on me, constantly shipping new things to see what sticks, so that you can't trust whether a feature advertised one week will even be there the next.
However, I have even less patience for companies forcing paid-for third-party ads down my throat on a paid product. Slack at least doesn't sell my eyeballs. Facebook, Twitter, Google's ads are worse to me than new feature dialogues.
Which brings me to Apple. I pay for a $1k+ device, and yet the App Store's first result is always a sponsored bit of spam, adware, or sometimes even malware (like the fake Ledger wallet on iOS that was a sponsored result for a crypto stealer). On my other devices, I can at least choose not to use ad-ridden BS (on Android you can use F-Droid and Aurora Store; on Linux my package manager has no ads), but on iOS it's harder to avoid.
Apple hasn't sunk to Google levels in terms of ads, but they've crossed a line.
I agree. The App Store is really horrible. Why is it that when I'm searching for a first-party or a very, very popular app, the first result and many of the other results are weird, scammy, malware-like things? I don't particularly care about the stupid homepage ads though; I think that's just because I have "personalize app store recommendations" turned off.
Search inside Settings (both macOS and iOS) was also really, really stupid for a long while. Why are you taking me to some random accessibility toggle when I'm looking for "displays"? But I checked just now and it's good.
I get it but... well I think of App Store as... a store. I don't have to go there.
I'm actually pretty disappointed in the lack of discovery available in the App Store, but I rarely go there. I'm fine with advertising being there. I wish it was better but I'm not offended that there is paid promotion in a store.
I haven't noticed this at all, and I wonder if you're mistaking curation for advertising? When I open up the App Store I get a panel titled "games we love" and a listing of indie games that are clearly not paid-for ads. The ads in search are visibly marked as ads, and while I don't particularly like ads in general, they are pretty easy to avoid.
Apple keeps nagging me to upgrade to godawful Tahoe. Every time there’s a system update (which includes Safari, Safari TP, CLT etc. updates) Tahoe is always default checked. Even when I specifically click on a Sequoia point update, the Tahoe update is always checked instead of that point release. This has way more destructive potential than “try our new AI feature” in apps.
To add insult to injury, the one AI feature that I may want to evaluate—Claude Code integration in Xcode—is gated behind Tahoe upgrade, even though it has absolutely no reason to do so, given that every other IDE integrates AI features just fine on any recent OS.
Edit: Oh and I’m not getting bombarded in Slack at all, maybe because my company doesn’t pay for any of the AI stuff there. Last time I got a banner or something like that was months ago.
Nvidia restricts gamer cards in data centers through licensing, eventually they will probably release a cheaper consumer AI card to corner the local AI market that can't be used in data centers if they feel too much of a threat from Apple.
Imagine a future where Nvidia sells the exact same product at completely different prices, cheap for those using local models, and expensive for those deploying proprietary models in data centers.
> Nvidia restricts gamer cards in data centers through licensing
So does Intel; so do a lot of companies.
But the processor is only half of the equation: memory volume, type, and bandwidth are also a big factor in cost. Sure, consumer GPUs are cheaper, but they have less memory and (often) less bandwidth. The proc might be the same, or binned, but that's only part of the price.
[WSJ] sources expect.. first units in H1 2026, with GTC as the most likely unveiling stage.. NPU reportedly exceeds both Intel and AMD’s current neural processing units.. If the integrated GPU delivers RTX 5070-class performance in a thin laptop form factor, it would eliminate the need for a separate GPU die, fundamentally changing how gaming laptops are designed.
If they can get Valve/Steam on board for an OS that handles most games well, that could in fact be huge - if the price point is a bit lower initially but with plenty of unified RAM (both for AI and for games).
That said, gaming laptops' cooling issues are so often around the GPU, so it would also require a seasoned manufacturer to get it right.
Any field with abstraction becomes susceptible to AI disruption. In fact, AI susceptibility is proportional to the amount of abstraction: the more abstraction, the more AI will displace people (my observation). This turns the millennia-old model upside down. Traditionally, more abstraction required more schooling and experience and was rewarded with more financial rewards. Until robots and world models become safe, affordable and ubiquitous, the financial apex of careers will be those that are abstraction-resistant (technicians, EMTs, trades, etc.) and those protected by regulation and the regulators (politicians, CEOs).
Why is Nvidia so central to LLMs? Because they embraced ML a decade ago. Apple did as well; machine learning is central to so many things in the iPhone. It's not so surprising, then, that a strong showing in ML sets you up well for LLMs.
Thing is, Apple never considered racing against LLM runners. Apple's success comes from human-centered design, it is not trying to launch a me-too product just because it increases their stock price.
iPod was not the first MP3 player.
iPhone was not even 3G at launch -- in the middle of 3G marketing craze.
They sure got lucky that unified memory is well-suited to running AI, but they just focused on having cost- and energy-efficient computing power. They've had glasses in their sights for the last 10 years (when was Magic Leap's first product?) and these chips have been developed with that in mind. But not only the chips: nothing was forcing Apple to spend the extra money on blazing-fast SSDs - but they did.
So yes, Apple is a hardware company. All the services it sells run on their hardware. They've just designed their hardware to support their users' workflows, ignoring distractions.
With that said, LLMs make GPU + memory bandwidth fun again. NVidia can't do it alone, Intel can't do it alone, but Apple positioned itself for it. It reminds me how everyone was surprised when they introduced 64-bit ARM for everyone: very few people understood what they were doing.
Tbh there are NVidia GPUs that beat Apple's perf 2x or 3x, but those are desktop or server chips consuming 10x the power. Now all Apple needs to do is keep delivering performance out of Apple Silicon at good prices and the best energy efficiency. Local LLMs make sense when you need them immediately, anywhere, privately - hence you need energy efficiency.
Honestly, I think Apple hasn't jumped deep into AI for two big reasons:
1) Apple is not a data company.
2) Apple hasn't found a compelling, intuitive, and most of all, consistent, user experience for AI yet.
Regarding point 2: I haven't seen anyone share a hands-down improved UX for a user-driven product outside of something that is a variation of a chatbot. Even the main AI players can't advertise anything more than "have AI plan your vacation".
Put proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, allow Siri LLM to access those (and sprinkle some magic security dust over it).
Boom, you have an agent in the phone capable of doing all the stuff you can do with the apps. Which means pretty much everything in our life.
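Mechanically, "expose functionality as functions" is just tool registration. A toy sketch of the shape (not Apple's actual App Intents API, just the idea):

    # Toy sketch: apps register typed functions; the assistant's LLM picks
    # one and the OS dispatches it. Not Apple's real API.
    from typing import Callable

    REGISTRY: dict[str, Callable] = {}

    def app_function(name: str):
        """Decorator an app could use to expose one capability to Siri."""
        def wrap(fn: Callable) -> Callable:
            REGISTRY[name] = fn
            return fn
        return wrap

    @app_function("spotify.set_shuffle")
    def set_shuffle(enabled: bool) -> str:
        return f"shuffle {'on' if enabled else 'off'}"

    def dispatch(name: str, **args) -> str:
        # The "magic security dust" would live here: permission prompts,
        # sandboxing, and a per-app allowlist.
        return REGISTRY[name](**args)

    print(dispatch("spotify.set_shuffle", enabled=True))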
I'm pretty sure most people didn't notice any kind of inconsistency. I myself have a hard time figuring out what's going on. I'm so focused on doing the work with the computer that I don't have the time to notice what's "wrong" with the OS. Which makes me wonder if the whole thing is blown out of proportion.
> Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed.
As far as I remember Apple basically got forced into opening the platform to 3rd party developers. Not by regulation but by public pressure. It wasn't their initial intention to allow it.
I don't think I have unique insight on this, but the common belief is they are desperately trying to reach AGI, or at least have some halo model that will allow them to rise over the other companies. The problem is they have a hilariously large monthly burn paying for compute. If they don't produce something, they are in trouble if investors stop offering capital.
Apple is almost 2 years out from their announcement of Apple Intelligence. It has barely delivered on any of the hype. New Siri was delayed and barely mentioned in the last WWDC; none of the features are released in China.
In other news, people keep buying iPhones, and Apple just had its best quarter ever in China. AAPL is up 24% from last year.
i dont even care about apple intelligence. stays off, not sure anyone really cares about it who is also interested in what this ai shenanigans is about on a local device. i think people keep conflating apple intelligence with all these convos about how macs are kinda dope for joe consumer wanting to tinker with llms.
that's the other part of the story that matters, not apple intelligence. this writeup tries to touch on that: apple is uniquely positioned to do really well in this arena if/when local llms become commodities that can do really impressive stuff. we're getting there a lot faster than we thought - someone had a trillion-parameter qwen3.5 model going on his 128gb macbook, and now people are thinking of more creative ways to swap out what's in memory as needed.
Indeed, a lot of the people that bought iPhones are now buying Macs with a binned version of the chip they already bought. So much so that Apple is in danger of running out of them.
there are always three elements in the equation of a business model:
1. marginal cost
2. marginal revenue
3. value created
for llm providers, i always believe the key is to focus on high value problems such as coding or knowledge work, because of the high marginal cost of having new customers - the tokens burnt - and the low marginal revenue if the problem is not valuable enough. in this sense no llm provider can scale like previous social media platforms without taking huge losses. and no meaningful user stickiness can be built unless you have users' data. and there is no meaningful business model unless people are willing to pay a high price for the problem you solve, in the same way as paying for a saas.
i am really not optimistic about the llm providers other than anthropic. it seems that the rest are just burning money, and for what? there is no clear path to monetization.
and when local llms are powerful enough, they will soon be obsolete given the cost and the unsustainable business model. at the end of the day, i do agree that it is the consumer hardware providers that can win this game.
I am super bullish on Google; they are my best bet to earn from models. Mostly because they are vertically integrated (other revenue streams) and open to providing services to other companies (the Apple deal).
one day people will realize that Tim Cook is one of the best killer CEOs.
by now he has more hits than Steve Jobs. his precision, and his ability to manage risk - maybe due to his supply chain background - have made Apple into the killer it is today.
if we were in the age of robber barons he would've been up there with them.
The whole premise is that if you don't get to AGI first then you lose.
The idea is that Anthropic with AGI could build a better version of Apple, or whatever it wants.
This was the conversation like 1 year ago. What has changed?
Nothing changed; it's new ground, and we are searching it with a searchlight. From some vantage points our view of things may feel quite complete, even insightful. Then we look at it differently and feel lost. It's a process we are in together.
That's also the year they released on-chip acceleration for certain things, so they probably started working on that tech a year or two earlier? Not as accidental as assumed.
> Pure strategy, luck, or a bit of both? I keep going back and forth on this, honestly, and I still don’t know if this was Apple’s strategy all along, or they didn’t feel in the position to make a bet and are just flowing as the events unfold maximising their optionality.
Maximizing the available options is in fact a "strategy", and often a winning one when it comes to technology. I would love to be reminded of a list of tech innovators who were first and still the best.
I think the article is missing a whole aspect on how Apple is ensuring to not face actual competition while they're "playing it safe":
Even if the investment is overblown, there is market demand for the services offered in the AI industry. In a competitive playing field with equal opportunities, Apple would be affected by not participating. But they are establishing their digital market concept again, where they hinder a level playing field for Apple users.
Like they did with the App Store (where Apple owns the marketplace but also competes in it), they are setting themselves up as the "the bank always wins" gatekeeper for AI services in the Apple ecosystem, by making "Apple Intelligence" an ecosystem orchestration layer (and thus themselves the gatekeeper).
1. They made a deal with OpenAI to close Apple's competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS I'm sure OpenAI has to pay MORE than that. (also supported by the fact that OpenAI doesn't allow users to upgrade to the 200USD PRO tier using this path, but only the 20USD Plus tier) [1]
2. Apple's integration is set up to collect data from this AI digital market they created: their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by "Apple Intelligence & Siri" and that "your request is analyzed to determine whether ChatGPT might have useful results" [2]. This architecture requires(!) them to not only collect and analyze data about the type of requests, but also gives them first-right-to-refuse on all tasks (a sketch of this routing shape follows the list below).
3. Developers are "encouraged" to integrate Apple Intelligence right into their apps [3]. This will have AI tasks first evaluated by Apple.
4. Apple has confirmed that they are interested to enable other AI-providers using the same path [4]
--> Apple will be the gatekeeper to decide whether they can fulfill a task by themselves or offer the user to hand it off to a 3rd party service provider.
--> Apple will be in control of the "Neural Engine" on the device, and I expect them to use it to run inference models they created based on statistics of step#2 above
--> I expect that AI orchestration, including training those models and distributing/maintaining them on the devices, will be a significant part of Apple's AI strategy. This could cover a lot of text and image processing and already significantly reduce their datacenter cost for cloud-based AI services. For the remaining, more compute-intensive AI services, they will be able to closely monitor (via step #2 above) when it will be most economical to in-source a service instead of "just" getting revenue share for it (via step #1 above).
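To illustrate the gatekeeping shape from step #2 (purely illustrative, not Apple code):

    # First-right-to-refuse routing: fulfil locally when possible, otherwise
    # hand off for a revenue share - and log everything to decide what to
    # in-source next. Illustrative only.
    stats: dict[str, int] = {}

    def route(request: str, on_device_models: list, providers: list) -> str:
        category = "image" if "image" in request.lower() else "text"  # toy classifier
        stats[category] = stats.get(category, 0) + 1  # step #2: free market research
        for model in on_device_models:
            if model.can_handle(request):
                return model.run(request)  # Apple keeps the task (and the data)
        return providers[0].run(request)  # step #1: revenue share on the hand-off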
So the juggernaut Apple is making sure to get the reward from those taking the risk. I don't see the US doing much about this anti-competitive practice so far, but at least in the EU this strategy has been identified and is being scrutinized.
How do you rate Vision Pro? It was not the first one, but it was certainly the best one. Total dud though, while Meta Ray Bans are selling like hot cakes (irrespective of what you think of the company)
It's the same everywhere: great fundamentals pay off. It's true of martial arts, dance, and absolutely about software platforms. You just have to trust that process and invest in it, which Apple does (although frustratingly not enough!).
> Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
Well... no. The Stargate expansion was cancelled, but the originally planned 1.2 GW (!) datacenter is going ahead:
> The main site is located in Abilene, Texas, where an initial expansion phase with a capacity of 1.2 GW is being built on a campus spanning over 1,000 acres (approximately 400 hectares). Construction costs for this phase amount to around $15 billion. While two buildings have already been completed and put into operation, work is underway on further construction phases, the so-called Longhorn and Hamby sections. Satellite data confirms active construction activity, and completion of the last planned building is projected to take until 2029.
> The Stargate story, however, is also a story of fading ambitions. In March 2026, Bloomberg reported that Oracle and OpenAI had abandoned their original expansion plans for the Abilene campus. Instead of expanding to 2 GW, they would stick with the planned 1.2 GW for this location. OpenAI stated that it preferred to build the additional capacity at other locations. Microsoft then took over the planning of two additional AI factory buildings in the immediate vicinity of the OpenAI campus, which the data center provider Crusoe will build for Microsoft. This effectively creates two adjacent AI megacampus locations in Abilene, sharing an industrial infrastructure. The original partnership dynamics between OpenAI and SoftBank proved problematic: media reports described disagreements over site selection and energy sources as points of contention.
Apple's reality distortion field is really, really strong. People love to claim Apple is playing 4D chess, when in reality Apple has certain strengths, and AI is anything but one of them.
Which is why they were completely caught off guard with the botched rollout of Apple Intelligence. Even when they were playing to their strengths, things have not gone well for them (Apple Vision Pro). Liquid Glass has had a mixed reception, and that's often explained away as "Apple is setting up a world for Spatial Computing by unifying the design language" - and when the lead designer was fired it was "thank God Alan Dye is gone, he was bad for Apple anyway".
This seems mistaken to me. The core idea is that LLMs are commoditizing and that the UI (Siri in this case) is what users will stick with.
But... what's the argument that the bulk of "AI value" in the coming decade is going to be... Siri Queries?! That seems ridiculous on its face.
You don't code with Siri, you don't coordinate automated workforces with Siri, you don't use Siri to replace your customer service department, you don't use Siri to build your documentation collation system. You don't implement your auto-kill weaponry system in Siri. And Siri isn't going to be the face of SkyNet and the death of human society.
Siri is what you use to get your iPhone to do random stuff. And it's great. But ... the world is a whole lot bigger than that.
> Won't be surprised for the re-introduction of Xserve again but for AI.
This means Apple is gonna spend a lot of money standing up data centers (CapEx). And the article in question is essentially saying that Apple is smart not to spend any money.
It sounds like there's a bit of wishful thinking going on: whatever Apple is doing is 4D chess. Apple not spending any money - that's genius. Apple re-introducing Xserve racks - genius.
For the love of all that's holy - folks please stop using AI to publish smart sounding texts. While you may think you are "polishing" your text, you are just disrespecting your readers. Write in your own words.
But why do I feel like the quality of Apple's software has declined sharply in recent years? The Liquid Glass design feels very unpolished and not well thought out almost everywhere… it seems like even Apple can't resist falling victim to AI slop.
I don’t think it’s AI slop. Even before modern generative AI, I’ve noticed a decline in Apple’s software quality.
Rather, I feel that Apple has forgotten its roots. The Mac was “the computer for the rest of us,” and there were usability guidelines backed by research. What made the Mac stand out against Windows during a time when Windows had 95%+ marketshare was the Mac’s ease of use. The Mac really stood out in the 2000s, with Panther and Tiger being compelling alternatives to Windows XP.
I think Apple is less perfectionistic about its software than it was 15-20 years ago. I don’t know what caused this change, but I have a few hunches:
0. There’s no Steve Jobs.
1. When the competition is Windows and Android, and there are no other commercial competitors, there's a temptation to be just marginally better than Windows/Android rather than the absolute best. Windows' shooting itself in the foot doesn't help matters.
2. The amazing performance and energy efficiency of Apple Silicon is carrying the Mac.
3. Many of the people who shaped the culture of Apple’s software from the 1980s to the 2000s are retired or have even passed away. Additionally, there are not a lot of young software developers who have heard of people like Larry Tesler, Bill Atkinson, Bruce Tognazzini, Don Norman, and other people who shaped Apple’s UI/UX principles.
4. Speaking of Bruce Tognazzini and Don Norman, I am reminded of this 2015 article (https://www.fastcompany.com/3053406/how-apple-is-giving-desi...) where they criticized Apple’s design as being focused on form over function. It’s only gotten worse since 2015. The saving grace for Apple is that the rest of the industry has gone even further in reducing usability.
I think what it will take for Apple to readopt its perfectionism is if competition forced it to.
I agree that there is a decline in usability. If you took a Mac from those early days, it is still very usable and everything is where you'd expect it to be. In recent years this has changed and the general iOS-ification of the OS has occurred. I have avoided upgrading to Tahoe due to seeing how awful my wife's iPhone looks now. It looks like a children's toy.
Apple will just drip feed locally running models that enable minor conveniences. They will probably drop the Apple Intelligence label later and just have things with their own names like "magic eraser".
Apple has had Siri for well over a decade without any meaningful movement. If you think Apple is suddenly going to get better, that's just wishful thinking. Apple has neither the expertise nor the capability to do any of that. They'd have demonstrated it with Siri long ago.
What Apple does is build beautiful hardware. The software has been a shambles for a really long time.
I like how we are acting like this market is so novel and emergent revering the luck of some while lamenting the failures of others when it was all "roadmapped" a decade ago. It's like watching a Shaanxi shadow puppet show with artificial folk lore about the origins of the industry. I hate reality television!
That’s a problem.
For the others anyway.
> similar vibes as "640k ought to be enough for anybody"
Well, you can do a lot with 640k… if you try. We have 16GB in base machines and very few people know how to try anymore.
The world has moved on, that code-golf time is now spent on ad algorithms or whatever.
Escaping the constraint delivered a different future than anticipated.
> it’s not great at using all tools
Glad it wasn't just me - I was impressed with the quality of Gemma 4; it just couldn't write the changes to file 9/10 times when using it with opencode.
https://huggingface.co/google/gemma-4-31B-it/commit/e51e7dcd...
There was an update to tool calling 3 days ago. I haven't tested it myself but hope it helps.
> it just couldn't write the changes to file 9/10 times when using it with opencode
You might want to give this a try, it dramatically improves Edit tool accuracy without changing the model: https://blog.can.ac/2026/02/12/the-harness-problem/
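(The usual harness-side trick is tolerant matching - something like the sketch below, though not necessarily the linked post's exact method:)

    # Whitespace-tolerant SEARCH/REPLACE: indentation drift in the model's
    # SEARCH block no longer makes the edit fail to apply.
    import re

    def apply_edit(source: str, search: str, replace: str) -> str:
        # Any whitespace run in SEARCH matches any whitespace run in the file.
        pattern = r"\s+".join(re.escape(token) for token in search.split())
        match = re.search(pattern, source)
        if match is None:
            raise ValueError("edit did not apply")
        return source[: match.start()] + replace + source[match.end() :]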
> I installed openclaw and set it to use Gemma 4 but it didn't act in an agent mode at all
I had the same issues. I had to tell it to use subagents explicitly, and instead of saying "set a cron", say "set an openclaw cron".
I generally do like the model, it’s not a great agent though.
It’s good for summarization tasks, small tool use, and has pretty good world knowledge, though it does hallucinate.
> I'm pretty satisfied with the prediction that I won't really need them in a couple of years
We still aren't going to be putting 200GB of RAM in a phone in a couple of years to run those local models.
> This is the classic apple approach - wait to understand what the thing is capable of doing
Apple waited on smartphones?
I thought the original iPhone was basically first.
Do you count BlackBerry and Palm Pilot as Apple waiting to see?
Quietly they are doing things on-device. The OCR + copy/paste is genuine goodness - modestly functional.
That feature has been around since iOS 16 (2022) though - no relationship to Apple Intelligence (2024) or the current LLM hype (2023 onwards).
That's also literally years behind the competition. https://www.androidpolice.com/2018/05/09/android-ps-new-rece...
Yeah, they nailed that with the Newton, the Apple Pippin, and the Apple Vision Pro.
The Vision Pro was a Development Kit; Just like the first generation Apple Watch. It's not meant for the consumers, it's meant for the developers among the consumers.
We will see if they ever release a new VisionOS device, but it's not the first time they did that; see also the Apple Watch.
1 reply →
Apple learned to hang back from plowing the unsold Lisas into a landfill.
How amazing is that Apple car
4 replies →
The Vision Pro is the best AR/VR product ever created.
2 replies →
When have they done that since the first iPhone in 2007? The watch, maybe? Though I'm not sure that's "leapfrog" better than anyone else's smartwatch; I don't have one, so maybe I'm wrong.
Their own chips, vertically integrating.
- AirPods
- Apple Watch
- AirTag
Those are a few that come to mind. All do multi-billions in revenue per year.
4 replies →
Didn't they rush to integrate ChatGPT into their OS back in 2024? Reality doesn't seem to align with your description.
They certainly announced they were going to. I've yet to meet someone who actually used that integration. Like many of these things, it seems to have been a sop to the investors who were accusing Apple of ignoring the AI wave.
Will this strategy work every time? Maybe for AI it will (the market is competitive and Apple just purchases the best model for its consumers).
But this approach may not work in other areas: e.g. building electric batteries, wireless modems, electric cars, solar cell technology, quantum computing, etc.
Essentially Apple got lucky with AI, but it needs to keep investing in cutting-edge technology in the various broad areas it operates in and not let others get too far ahead!
It works often enough for the company to be wildly successful. They can simply cut their losses and withdraw from industries where it hasn't, such as EVs.
I think their M chips are a good example. They ran on Intel for so long, then pulled off the seemingly impossible feat of changing the Mac's architecture, without much transition pain.
Obviously that was built upon years of iPhone experience, but it shows they can lag behind, buy from other vendors, and still win when it becomes worth it to them.
5 replies →
>wireless modems
They (Apple) bought out Intel's wireless modem business and are using those chips instead of Qualcomm's. IIRC they aren't best in class when it comes to raw throughput, but they're quite good in terms of throughput vs. power consumption.
But Apple doesn't just try to do everything.
They do the things they think they can do very well.
Why would they try to build electric batteries, wireless modems, electric cars, solar cells, or quantum computers, if their R&D hadn't already determined that they would likely be able to do so Very Well?
It's not like any of those are really in their primary lines of business anyway.
> wait to understand what the thing is capable of doing
My parents use Android to ask “What are the 5 biggest towers in Chicago” or “Remove the people on my picture” while apparently iPhone is only capable of doing “Hey Siri start the Chronometer / There is no contact named Chronometer in your phone”.
My iPhone is lagging a ridiculous 10 years behind. It’s just that I don’t trust Google with my credit card.
These are software/cloud features. You can install Gemini on an iPhone if you want to talk about towers in Chicago.
The only reason to care about it being OS-integrated is to interact with functions of the OS, which Siri does fine.
3 replies →
I want the reverse version of this: if Apple can promise me to 'lag behind' for another ten years, I'll buy my first Apple device in ten years.
Siri is one step below that for me. It still doesn't understand my accent; I feel like its voice recognition hasn't improved since 2010...
"10 years behind" would be an improvement for Siri. It's actively broken much of the time in a way that Google Assistant or Alexa never has been.
1 reply →
It’s even more superpowered than previous implementations of this strategy.
When they made the iPhone, iPod, and Apple Watch they had no specific hardware advantage over competitors. Especially with early iPhone and iPod: no moat at all, make a better product with better marketing and you’ll beat Apple.
Now? Good luck getting any kind of reasonably priced laptop or phone that can run local AI as well as the iPhone/MacBook. It doesn’t matter that Apple Intelligence sucks right now, what matters is that every request made to Gemini is losing money and possibly always will.
This is especially true in 2026 where Windows laptops are climbing in price while MacBooks stay the same.
How do you know Gemini is losing money on inference?
12 replies →
Apple's advantage was that they did everything in-house and had the marketing and distribution capabilities. And now you've got the ecosystem lock-in.
In hindsight it’s obvious why they pulled it off - nobody else could do it. They all had pieces missing.
The best part is that it’ll all run on your device, instead of siphoning off your data to the provider. Local first AI.
I think the creatives will also turn their seething hatred of AI around for Apple AI, because Apple uses more ethical training data and it feels more like they own their AI: no one's charging them a subscription fee to use it and then using their private data for training.
Why do you think "creatives" have a "seething hatred of AI"?
Have you talked to an artist like a musician, an illustrator or a web designer about AI? It's ripping off their work without credit and making them unemployable.
1 reply →
Apple aren’t in the business of building chatbots to impress investors (other than some WWDC2024 vaporware they’d rather not talk about any more). They’re in the business of consumer hardware.
Consumers want iPhones and (if Apple are right) some form of AR glasses in the next decade. That’s their focus. There’s a huge amount of machine learning and inference that’s required to get those to work. But it’s under the hood and computed locally. Hence their chips. I don’t see what Apple have to gain by building a competitor to what OpenAI has to offer.
~25% of Apple's revenue came from services in FY25 (and 50% from iPhone, ~25% from other hardware). They made $415B in that year, so ~$100B from services alone!
Services revenue is mostly just the 30% cut of App Store sales. This means every time a user buys a pro subscription for ChatGPT or Claude on their phone, Apple makes more money than it could with a self-deployed model.
2 replies →
> (if Apple are right) some form of AR glasses in the next decade.
Pretty sure this is just a hedge or simple research project and not a main bet.
Consumers don't necessarily want iPhone. They don't want to be excluded from iMessage, which is a completely different motivation.
Yeah, that just doesn't pass the simplest sniff test. I barely use iMessage, and yet I'm an iPhone user. Basically everyone around me is the same.
4 replies →
That is a very US-centric opinion.
In other parts of the globe, iPhone users are mostly using WhatsApp or Line and couldn't care less about iMessage.
1 reply →
US-centric view, which I believe to be wrong. The UK is predominantly WhatsApp, and the bulk of handsets sold are still iPhones.
Income correlates much more tightly than messaging platform. Break those market shares down by phone price and the scales tip even harder.
1 reply →
I doubt 80% of iPhone users would be able to tell you whether iMessage was on or not.
They might say that some people's messages are green, but not much more.
That must be an American thing, because I guarantee you it doesn't mean anything to the rest of the world.
2 replies →
iMessage is AFAIK only really a big thing in the US.
11 replies →
No one uses iMessage in my country. Yet iPhones are sought after. Some of us just really like iPhones for the experience - not everything is a conspiracy. People can have different tastes and are more free to choose than people on HN like to believe.
1 reply →
What I don't get about Apple is that when everyone else was giving up on yet another VR attempt and moving into AI, they decided AI wasn't worth it and it was the right time for a me-too VR headset.
So no VR, given the price and lack of developer support, and a late arrival into AI.
I think of it like a technology checkpoint. Make sure you got as far as everyone else when they gave up, so when the next innovation in that space comes along you can start back up on even footing.
You want to have your own pathway to production that dodges competitors’ patents, is somewhat defensible itself, maybe a brand, etc.
It is the same pattern: late on VR, late on AI. Both technologies have a pricing problem. I would guess that Apple is working to create the conditions that make these technologies cheap enough to sell to everyone.
For everyone that can afford Apple, that is.
They do have the mobile phone market duopoly advantage though, far from the '90s mistakes that almost closed up shop.
> it was the right time for a me too VR headset.
I think it was more that the experience was pretty much there. Hardware takes a loooong time to mature, even more so if it's a new style or package. I'm assuming they were prototyping this in 2015-18.
Also, Apple knows that AR glasses, if done right and not turned into a cesspool of perverts (i.e. Google Glass), will be a massive platform. However, it's going to take at least another 5 years to get something usable. So if it's possible, I expect Apple to come out with something just after Meta either gives up or has a string of failures.
When using Siri recently it really struck me how much worse it feels after using ChatGPT. It struggles to understand what I say correctly and you have to give commands in more of a 'computer-friendly' form.
I hope they can at least fix this, as I really only use it as a hands-free system while driving.
I've had it turned off since Sequoia, and this I truly appreciate. It hasn't nagged me once to turn it or Siri on, and it isn't mandatory.
In comparison, when I open up Jira or Slack I am always greeted with multiple new dialogues pointing at some new AI bullshit. We hates it, precious.
I don't like companies noisily forcing their newest features on me, constantly shipping new things to see what sticks, so you can't trust whether a feature advertised one week will even be there the next.
However, I have even less patience for companies forcing paid-for third-party ads down my throat on a paid product. Slack at least doesn't sell my eyeballs. Facebook, Twitter, Google's ads are worse to me than new feature dialogues.
Which brings me to Apple. I pay for a $1k+ device, and yet the App Store's first result is always a sponsored bit of spam, adware, or sometimes even malware (like the fake Ledger wallet on iOS, which was a sponsored result for a crypto stealer). On my other devices I can at least choose to avoid ad-ridden BS (on Android you can use F-Droid and Aurora Store; on Linux my package manager has no ads), but on iOS it's harder to avoid.
Apple hasn't sunk to Google levels in terms of ads, but they've crossed a line.
I agree. The App Store is really horrible. Why is it that when I'm searching for a first-party or very popular app, the first result and many of the others are weird, scammy, malware-like things? I don't particularly care about the stupid homepage ads though; I think that's just because I have "personalize app store recommendations" turned off.
Search inside Settings (both Mac and iOS) was also really, really stupid for a long while. Why are you taking me to some random accessibility toggle when I'm looking for "displays"? But I checked just now and it's good.
It's best to avoid App Store and look for apps on Google (with ad blocker).
I get it but... well I think of App Store as... a store. I don't have to go there.
I'm actually pretty disappointed in the lack of discovery available in the App Store, but I rarely go there. I'm fine with advertising being there. I wish it was better but I'm not offended that there is paid promotion in a store.
11 replies →
I haven't noticed this at all, and I wonder if you're mistaking curation for advertising? When I open up the App Store I get a panel titled "Games We Love" and a listing of indie games that are clearly not paid ads. The ads in search are visibly marked as ads, and while I don't particularly like ads in general, they are pretty easy to avoid.
3 replies →
Apple keeps nagging me to upgrade to godawful Tahoe. Every time there’s a system update (which includes Safari, Safari TP, CLT etc. updates) Tahoe is always default checked. Even when I specifically click on a Sequoia point update, the Tahoe update is always checked instead of that point release. This has way more destructive potential than “try our new AI feature” in apps.
To add insult to injury, the one AI feature that I may want to evaluate—Claude Code integration in Xcode—is gated behind Tahoe upgrade, even though it has absolutely no reason to do so, given that every other IDE integrates AI features just fine on any recent OS.
Edit: Oh and I’m not getting bombarded in Slack at all, maybe because my company doesn’t pay for any of the AI stuff there. Last time I got a banner or something like that was months ago.
Nvidia restricts gamer cards in data centers through licensing. If they feel too much of a threat from Apple, they will probably release a cheaper consumer AI card that can't be used in data centers, to corner the local AI market.
Imagine a future where Nvidia sells the exact same product at completely different prices: cheap for those running local models, and expensive for those deploying proprietary models in data centers.
> Nvidia restricts gamer cards in data centers through licensing
So does Intel; so do a lot of companies.
But the processor is only half of the equation: memory volume, type, and bandwidth are also a big factor in cost. Sure, consumer GPUs are cheaper, but they have less memory and (often) less bandwidth. The processor might be the same, or binned, but that's only part of the price.
Nvidia-Mediatek Arm laptops will compete with Qualcomm and Apple, https://www.forbes.com/sites/jonmarkman/2026/03/16/the-arm-i...
If they can get Valve/Steam on board for an OS that handles most games well, that could in fact be huge, especially if the price point is a bit lower initially but with plenty of unified RAM (for AI but also games).
That said, gaming laptops' cooling issues are so often around the GPU that it'd also require a seasoned manufacturer to get it right.
There’s long been professional segmentation for GPUs, long before people started running AI models on them
Having your cake and eating it too. Consumer goodwill and printing money.
Any field with abstraction becomes susceptible to AI disruption. In fact, AI susceptibility is proportional to the amount of abstraction: the more abstraction, the more AI will displace people (my observation). This turns the millennia-old model upside down. Traditionally, more abstraction required more schooling and experience and was rewarded with greater financial rewards. Until robots and world models become safe, affordable, and ubiquitous, the financial apex of careers will be those that are abstraction-resistant (technicians, EMTs, trades, etc.) and those protected by regulation and the regulators (politicians, CEOs).
Why is Nvidia so central to LLMs? Because they embraced ML a decade ago. Apple did as well; machine learning is central to so many things in the iPhone. It's not so surprising, then, that a strong showing in ML sets you up well for LLMs.
Thing is, Apple never considered racing the LLM vendors. Apple's success comes from human-centered design; it doesn't launch a me-too product just because it would boost the stock price. The iPod was not the first MP3 player. The iPhone wasn't even 3G at launch, in the middle of the 3G marketing craze.
They sure got lucky that unified memory is well-suited to running AI, but they just focused on having cost- and energy-efficient computing power. They've had glasses in their sights for the last 10 years (when was Magic Leap's first product?) and these chips have been developed with that in mind. And not only the chips: nothing was forcing Apple to spend the extra money on blazing-fast SSDs, but they did.
So yes, Apple is a hardware company. All the services it sells run on their hardware. They've just designed their hardware to support their users' workflows, ignoring distractions.
With that said, LLMs make GPU + memory bandwidth fun again. Nvidia can't do it alone, Intel can't do it alone, but Apple positioned itself for it. It reminds me how surprised everyone was when they introduced 64-bit ARM for everyone: very few people understood what they were doing.
Tbh, there are Nvidia GPUs that beat Apple's performance 2x or 3x, but those are desktop or server chips consuming 10x the power. Now all Apple needs to do is keep delivering Apple Silicon performance at good prices with the best energy efficiency. Local LLMs make sense when you need them immediately, anywhere, privately - hence you need energy efficiency.
My capex is even less than Apple's: I can ship to users' Apple hardware, and I can't access iPhone users' photos either... so really I'm the winner.
Using the author’s logic, it is Google then that will lead.
Unlike Apple, they have even more devices in the field PLUS they have strong models PLUS Apple uses Google models.
Google is an advertisement company at the end of the day and that's a conflict of interest with user privacy.
> Apple uses Google models
Source?
The article itself? lol
Apple's accidental moat now is absorbing the AI-driven rise in hardware prices into their margins and just expanding the Mac user base.
Maybe they thought an investment in a product with lots of substitutes & high capital requirements wasn't very attractive.
Honestly, I think part of the reason Apple hasn't jumped deep into AI is due to two big reasons:
1) Apple is not a data company.
2) Apple hasn't found a compelling, intuitive, and most of all, consistent, user experience for AI yet.
Regarding point 2: I haven't seen anyone share a hands-down improved UX for a user-driven product outside of some variation of a chatbot. Even the main AI players can't advertise anything more than "have AI plan your vacation".
Put a proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, and allow the Siri LLM to access those (with some magic security dust sprinkled over it).
Boom, you have an agent in the phone capable of doing all the stuff you can do with the apps. Which means pretty much everything in our life.
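A minimal sketch of what that function-exposure idea could look like; this is not Apple's actual App Intents API, just invented names showing the shape of a registry the assistant's LLM could use as its tool list:

```python
from typing import Callable

# Hypothetical registry mapping exposed app capabilities to (handler, schema).
REGISTRY: dict[str, tuple[Callable, dict]] = {}

def expose(name: str, description: str, params: dict):
    """Mark an app function as callable by the on-device assistant."""
    def wrap(fn):
        REGISTRY[name] = (fn, {"name": name, "description": description,
                               "parameters": params})
        return fn
    return wrap

@expose("timers.start", "Start a countdown timer.",
        {"type": "object",
         "properties": {"minutes": {"type": "number"}},
         "required": ["minutes"]})
def start_timer(minutes: float) -> str:
    return f"timer set for {minutes} min"  # stub; a real app would act here

def dispatch(tool_name: str, args: dict) -> str:
    # The assistant hands REGISTRY's schemas to its LLM as tools, then routes
    # whatever tool call comes back to the owning app.
    fn, _ = REGISTRY[tool_name]
    return fn(**args)

print(dispatch("timers.start", {"minutes": 10}))
```

The security dust is the hard part, of course: deciding which apps may expose what, and what the model is allowed to invoke unprompted.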
As for consistency, Apple's latest UI shows they don't give a damn any more.
I'm pretty sure most people didn't notice any kind of inconsistency. I myself have a hard time figuring out what's going on. I'm so focused on doing the work with the computer that I don't have the time to notice what's "wrong" with the OS. Which makes me wonder if the whole thing is blown out of proportion.
> Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed.
As far as I remember Apple basically got forced into opening the platform to 3rd party developers. Not by regulation but by public pressure. It wasn't their initial intention to allow it.
> I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions
I find this intriguing.. Does anyone here have enough insight to speculate more?
It's probably one of the biggest headlines right now. OpenAI has about $96 billion in debt and they don't have a revenue-generating product yet.
I might be wrong, but shouldn't you have said profit-generating? I pay them $20 a month, so they have at least $20 of revenue.
1) Put data on an X/Y chart. 2) Find ruler and pencil. 3) Draw line.
Do this and you will make all kinds of fun predictions.
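The ruler-and-pencil method, automated (numbers entirely made up for illustration):

```python
import numpy as np

quarters = np.array([1, 2, 3, 4])               # made-up history
cash_burn_bn = np.array([4.0, 6.5, 9.0, 11.5])  # made-up quarterly burn, $B

slope, intercept = np.polyfit(quarters, cash_burn_bn, 1)          # the ruler
print(f"Q8 burn, per the ruler: ${slope * 8 + intercept:.1f}B")   # the fun prediction
```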
I don't think I have unique insight on this, but the common belief is they are desperately trying to reach AGI, or at least have some halo model that will let them rise over the other companies. The problem is they have a hilariously large monthly burn paying for compute. If they don't produce something, they are in trouble once investors stop offering capital.
Apple is almost 2 years out from their announcement of Apple Intelligence. It has barely delivered on any of the hype. New Siri was delayed and barely mentioned in the last WWDC; none of the features are released in China.
In other news, people keep buying iPhones, and Apple just had its best quarter ever in China. AAPL is up 24% from last year.
I don't even care about Apple Intelligence. It stays off, and I'm not sure anyone who's interested in what this AI shenanigans is about on a local device really cares about it either. I think people keep conflating Apple Intelligence with all these convos about how Macs are kinda dope for joe consumer wanting to tinker with LLMs.
That's the other part of the story that matters, not Apple Intelligence. This writeup tries to touch on that: Apple is uniquely positioned to do really well in this arena if/when local LLMs become commodities that can do really impressive stuff. We're getting there a lot faster than we thought; someone had a trillion-parameter Qwen3.5 model going on his 128GB MacBook, and now people are thinking of more creative ways to swap out what's in memory as needed.
A lot of the people that bought iPhones are now buying Macs as well.
Indeed, a lot of the people that bought iPhones are now buying Macs with a binned version of the chip they already bought. So much so that Apple is in danger of running out of them.
It's almost like people don't actually want LLMs all over their core tools...
There are always three elements in the equations of a business model: 1. marginal cost, 2. marginal revenue, 3. value created.
For LLM providers, I always believe the key is to focus on high-value problems such as coding or knowledge work, because of the high marginal cost of new customers (the tokens burnt) and the low marginal revenue if the problem is not valuable enough. In this sense, no LLM provider can scale like the previous social media platforms without taking huge losses. No meaningful user stickiness can be built unless you have users' data, and there is no meaningful business model unless people are willing to pay a high price for the problem you solve, the same way they pay for a SaaS.
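A toy version of that marginal-cost squeeze, with entirely hypothetical numbers:

```python
# Hypothetical unit economics for an LLM subscription; every figure is invented.
subscription = 20.00          # $/user/month
tokens_per_user = 5_000_000   # tokens a heavy user burns per month
cost_per_m_tokens = 3.00      # $ of compute per million tokens served

marginal_cost = tokens_per_user / 1e6 * cost_per_m_tokens  # $15.00
margin = subscription - marginal_cost                      # $5.00 before any fixed costs
print(f"compute ${marginal_cost:.2f}, margin ${margin:.2f} per user/month")
```

Unlike a social network, every additional active user burns real compute, so growth doesn't automatically improve the economics.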
I am really not optimistic about the LLM providers other than Anthropic. It seems the rest are just burning money, and for what? There is no clear path to monetization.
And when local LLMs are powerful enough, they will soon be obsolete given the cost and the unsustainable business model. At the end of the day, I do agree that it is the consumer hardware providers that can win this game.
I am super bullish on Google; they are my best bet to earn from models. Mostly because they are vertically integrated (other revenue streams) + open to providing services to other companies (the Apple deal).
What I think was a wasted opportunity was not bringing the Xserve back, as one of the few end-to-end solutions out there at scale.
One day people will realize that Tim Cook is one of the best killer CEOs.
By now he has more hits than Steve Jobs. His precision and ability to manage risk, maybe due to his supply chain background, have made Apple into the killer it is today.
If we were in the age of the robber barons, he would've been up there with them.
The whole premise is that if you don't get to AGI first then you lose. The idea is that Anthropic, with AGI, could build a better version of Apple, or whatever it wants.
This was the conversation like 1 year ago. What has changed?
Nothing changed; it's new ground, and we are searching it with a searchlight. From some vantage points our view may feel quite complete, even insightful. Then we look at it differently and feel lost. It's a process we are in together.
If you actually got to AGI, why would you rent it out?
I just realized that next year Apple's Neural Engine will be 10 years old, just like the "NPUs will change AI forever!" puff pieces.
Here's to another 10 years of scuffed Metal Compute Shaders, I guess.
So Apple’s AI acceleration and memory architecture is accidental, but nvidia’s is not?
Nvidia has research papers on accelerating Machine Learning as far back as 2014: https://research.nvidia.com/publications?f%5B0%5D=research_a...
Apple's website from 2017 https://machinelearning.apple.com/research?page=1&sort=oldes...
That's also the year they released on-chip acceleration for certain things, so they probably started working on that tech a year or two earlier? Not as accidental as assumed.
maybe “The Only Way to Win is Not to Play”
In the larger scheme of things, the great winner will be open source, as we'll simply use AI to recreate the entire MacOS ecosystem :)
If AI coding does go anywhere and stays affordable, this would be a great outcome.
I think AI needs to greatly accelerate open hardware design and make advanced manufacturing more accessible to really make a dent.
User facing software is not the limiting factor in AI assisted replacement of Apple products.
Apple is just waiting for all the slop to inevitably crash to see what actually works
> Pure strategy, luck, or a bit of both? I keep going back and forth on this, honestly, and I still don’t know if this was Apple’s strategy all along, or they didn’t feel in the position to make a bet and are just flowing as the events unfold maximising their optionality.
Maximizing the available options is in fact a "strategy", and often a winning one when it comes to technology. I would love to be reminded of a list of tech innovators who were first and still the best.
Anyway, hasn't this always been Apple's strategy?
I think the article is missing a whole aspect of how Apple is making sure it faces no actual competition while "playing it safe":
Even if the investment is overblown, there is market demand for the services offered in the AI industry. In a competitive playing field with equal opportunities, Apple would be affected by not participating. But they are once again establishing their digital-market concept, hindering a level playing field for Apple users.
Like they did with the App Store (where Apple owns the marketplace but also competes in it), they are setting themselves up as the "the bank always wins" gatekeeper for AI services in the Apple ecosystem, by making "Apple Intelligence" an ecosystem orchestration layer (and thus themselves the gatekeeper).
1. They made a deal with OpenAI to close Apple's competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS I'm sure OpenAI has to pay MORE than that. (also supported by the fact that OpenAI doesn't allow users to upgrade to the 200USD PRO tier using this path, but only the 20USD Plus tier) [1]
2. Apple's integration is set up to collect data from this AI digital market they created: Their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by "Apple Intelligence & Siri" and "your request is analyzed to determine whether ChatGPT might have useful results" [2]. This architecture requires(!) them to not only collect and analyze data about the type of requests, but also gives them first-right-to-refuse for all tasks.
3. Developers are "encouraged" to integrate Apple Intelligence right into their apps [3]. This will have AI tasks first evaluated by Apple.
4. Apple has confirmed that they are interested to enable other AI-providers using the same path [4]
--> Apple will be the gatekeeper to decide whether they can fulfill a task by themselves or offer the user to hand it off to a 3rd party service provider.
--> Apple will be in control of the "Neural Engine" on the device, and I expect them to use it to run inference models they created based on statistics of step#2 above
--> I expect that AI orchestration, including training those models and distributing/maintaining them on devices, will be a significant part of Apple's AI strategy. This could cover a lot of text and image processing and already significantly reduce their datacenter costs for cloud-based AI services. For the remaining, more compute-intensive AI services, they will be able to closely monitor (via step #2 above) when it will be most economical to in-source a service instead of "just" taking a revenue share for it (via step #1 above).
So the juggernaut Apple is making sure to get the reward from those taking the risk. I don't see the US doing much about this anti-competitive practice so far, but at least in the EU this strategy has been identified and is being scrutinized.
[1] https://help.openai.com/en/articles/7905739-chatgpt-ios-app-...
[2] https://www.apple.com/legal/privacy/data/en/chatgpt-extensio...
[3] https://developer.apple.com/apple-intelligence/
[4] https://9to5mac.com/2024/06/10/craig-federighi-says-apple-ho...
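A crude sketch of what that first-right-to-refuse routing (step #2 above) could look like; every name and threshold here is invented:

```python
# Invented sketch of an on-device "first right to refuse" router.
LOCAL_CAPABILITIES = {"summarize", "rewrite", "extract"}  # tasks the platform model keeps

def log_for_statistics(task_type: str, tokens: int) -> None:
    pass  # stand-in for the request analysis described in the legal text

def route(task_type: str, estimated_tokens: int) -> str:
    # This telemetry is what would let the platform owner decide which
    # third-party workloads are worth in-sourcing later.
    log_for_statistics(task_type, estimated_tokens)
    if task_type in LOCAL_CAPABILITIES and estimated_tokens < 2_000:
        return "on_device"    # handled by the platform's own model, zero payout
    return "third_party"      # handed off to ChatGPT/etc., minus a revenue share

print(route("summarize", 800), route("plan_vacation", 12_000))
```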
That’s actually by design. Apple never jumps on the tech hype bandwagon.
They wait until the dust settles before making their well-thought-out moves.
Every time they’ve jumped the hype train too quickly it hasn’t worked out, like Siri for example.
How do you rate Vision Pro? It was not the first one, but it was certainly the best one. Total dud though, while Meta Ray Bans are selling like hot cakes (irrespective of what you think of the company)
It's the same everywhere: great fundamentals pay off. It's true of martial arts, dance, and absolutely about software platforms. You just have to trust that process and invest in it, which Apple does (although frustratingly not enough!).
> Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
Well... no. The Stargate expansion was cancelled; the originally planned 1.2 GW (!) datacenter is going ahead:
> The main site is located in Abilene, Texas, where an initial expansion phase with a capacity of 1.2 GW is being built on a campus spanning over 1,000 acres (approximately 400 hectares). Construction costs for this phase amount to around $15 billion. While two buildings have already been completed and put into operation, work is underway on further construction phases, the so-called Longhorn and Hamby sections. Satellite data confirms active construction activity, and completion of the last planned building is projected to take until 2029.
> The Stargate story, however, is also a story of fading ambitions. In March 2026, Bloomberg reported that Oracle and OpenAI had abandoned their original expansion plans for the Abilene campus. Instead of expanding to 2 GW, they would stick with the planned 1.2 GW for this location. OpenAI stated that it preferred to build the additional capacity at other locations. Microsoft then took over the planning of two additional AI factory buildings in the immediate vicinity of the OpenAI campus, which the data center provider Crusoe will build for Microsoft. This effectively creates two adjacent AI megacampus locations in Abilene, sharing an industrial infrastructure. The original partnership dynamics between OpenAI and SoftBank proved problematic: media reports described disagreements over site selection and energy sources as points of contention.
https://xpert.digital/en/digitale-ruestungsspirale/
> Micron’s stock crashed. [the link included an image of dropping to $320]
Micron’s stock is back to $420 today
> One analysis found a max-plan subscriber consuming $27,000 worth of compute with their $200 Max subscription.
Actually, no. They'd miscalculated and consumed $2700 worth of tokens.
The same place that checked that claim also points out:
> In fact, Anthropic’s own data suggests the average Claude Code developer uses about $6 per day in API-equivalent compute.
https://www.financialexpress.com/life/technology-why-is-clau...
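Back-of-the-envelope with the figures quoted above:

```python
max_plan = 200      # $/month Max subscription
corrected = 2_700   # corrected compute figure for the outlier subscriber, $
avg_per_day = 6     # Anthropic's stated average Claude Code usage, $/day

print(corrected / max_plan)  # ~13.5x the subscription price: heavy, but not 135x
print(avg_per_day * 30)      # ~$180/month for the average user, under the $200 price
```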
I like Apple's chips, but why do we put up with crappy analysis like this?
Apple's reality distortion field is really, really strong. People love to claim Apple is playing 4D chess, when in reality Apple has certain strengths, and AI is not one of them.
Which is why they were completely caught off guard with the botched rollout of Apple Intelligence. Even when they were playing to their strengths, things have not gone well for them (Apple Vision Pro). Liquid Glass has had a mixed reception, which is often explained away as "Apple is setting up a world for spatial computing by unifying the design language", and when the lead designer was fired it was "Thank God Alan Dye is gone, he was bad for Apple anyway".
So essentially, Apple can do no wrong.
This seems mistaken to me. The core idea is that LLMs are commoditizing and that the UI (Siri in this case) is what users will stick with.
But... what's the argument that the bulk of "AI value" in the coming decade is going to be... Siri Queries?! That seems ridiculous on its face.
You don't code with Siri, you don't coordinate automated workforces with Siri, you don't use Siri to replace your customer service department, you don't use Siri to build your documentation collation system. You don't implement your auto-kill weaponry system in Siri. And Siri isn't going to be the face of SkyNet and the death of human society.
Siri is what you use to get your iPhone to do random stuff. And it's great. But ... the world is a whole lot bigger than that.
Apple never competed in the "AI race" in the first place, because they already knew they were already at the finish line.
This was really unsurprising [0].
[0] https://news.ycombinator.com/item?id=40278371
Your linked comment argues the opposite.
> Won't be surprised for the re-introduction of Xserve again but for AI.
This means Apple is going to spend a lot of money standing up data centers (CapEx). And the article in question is essentially saying that Apple is smart not to spend any money.
It sounds like there's a bit of wishful thinking here: whatever Apple is doing is 4D chess. Apple not spending any money? That's genius. Apple re-introducing Xserve racks? Genius.
> This is an obvious moat for Apple who can offer a cheaper alternative for training, inference AI server farms.
According to Bloomberg, Apple's inference server farms are a flop: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...
Go a little bit deeper than what the media directly wants you to think.
For the love of all that's holy - folks please stop using AI to publish smart sounding texts. While you may think you are "polishing" your text, you are just disrespecting your readers. Write in your own words.
But why do I feel like the quality of Apple's software has declined sharply in recent years? The Liquid Glass design feels unpolished and not well thought out almost everywhere... it seems like even Apple can't resist falling victim to AI slop.
I don’t think it’s AI slop. Even before modern generative AI, I’ve noticed a decline in Apple’s software quality.
Rather, I feel that Apple has forgotten its roots. The Mac was “the computer for the rest of us,” and there were usability guidelines backed by research. What made the Mac stand out against Windows during a time when Windows had 95%+ marketshare was the Mac’s ease of use. The Mac really stood out in the 2000s, with Panther and Tiger being compelling alternatives to Windows XP.
I think Apple is less perfectionistic about its software than it was 15-20 years ago. I don’t know what caused this change, but I have a few hunches:
0. There’s no Steve Jobs.
1. When the competition is Windows and Android, and there are no other commercial competitors, there's a temptation to be just marginally better than Windows/Android rather than the absolute best. Windows shooting itself in the foot doesn't help matters.
2. The amazing performance and energy efficiency of Apple Silicon is carrying the Mac.
3. Many of the people who shaped the culture of Apple’s software from the 1980s to the 2000s are retired or have even passed away. Additionally, there are not a lot of young software developers who have heard of people like Larry Tesler, Bill Atkinson, Bruce Tognazzini, Don Norman, and other people who shaped Apple’s UI/UX principles.
4. Speaking of Bruce Tognazzini and Don Norman, I am reminded of this 2015 article (https://www.fastcompany.com/3053406/how-apple-is-giving-desi...) where they criticized Apple’s design as being focused on form over function. It’s only gotten worse since 2015. The saving grace for Apple is that the rest of the industry has gone even further in reducing usability.
I think what it will take for Apple to readopt its perfectionism is if competition forced it to.
I agree that there is a decline in usability. If you took a Mac from those early days, it is still very usable and everything is where you'd expect it to be. In recent years this has changed and the general iOS-ification of the OS has occurred. I have avoided upgrading to Tahoe due to seeing how awful my wife's iPhone looks now. It looks like a children's toy.
Software quality decline has been a recognised trend long before LLMs took the limelight. Apple included.
Don't worry, when Apple introduces it, it'll be revolutionary and 10% thinner.
Apple will just drip feed locally running models that enable minor conveniences. They will probably drop the Apple Intelligence label later and just have things with their own names like "magic eraser".
Apple has had Siri for over a decade without any meaningful movement. If you think Apple is suddenly going to get better, that's just wishful thinking. Apple has neither the expertise nor the capability to do any of that; they'd have demonstrated it with Siri long ago.
What Apple does is build beautiful hardware. The software has been a shambles for a really long time.
I like how we are acting like this market is so novel and emergent revering the luck of some while lamenting the failures of others when it was all "roadmapped" a decade ago. It's like watching a Shaanxi shadow puppet show with artificial folk lore about the origins of the industry. I hate reality television!