Comment by tiahura
1 day ago
Deep breath. There’s no sense in trying to outcompete Google at burning cash. They’ve got time to wait until the tech starts to commoditize and there’s a large, profitable market to be had.
Or, Apple's just so bad at this they're fumbling the bag. Billions in cash on hand each quarter, but they don't have the balls that Zuck has to pay unreasonable money. They have their own hardware like Google does, but are talking about Perplexity??? They have all the data but can't seem to get an LLM that can set an alarm and be a chatbot at the same time?
Sometimes companies just don't do well enough.
> Billions in cash on hand each quarter but don’t have the balls that zuck has to pay unreasonable money
It remains to be seen whether this was a smart move, or just throwing money at the wall to see what sticks.
The difference is it’s a move. Actually doing something rather than putting out internal PR.
Zuck tried and flailed with the metaverse. That was a huge waste, but he can afford it and fortune favours the brave.
> "They have all data but can’t seem to get an llm that can set an alarm and be a chatbot at the same time?"
This is actually one of the hardest frontier problems. The "general purpose" assistant is among the hardest technical challenges with LLMs (or any kind of NLP).
I think people are so easily snowed by LLMs' apparent linguistic fluency that they impute it to capability. This couldn't be further from the truth.
In reality, an LLM presented with a vast array of tools has extremely poor reliability, so if you want a thing that can order delivery and remember your shopping list and remind you of your flight and play music, you're radically exceeding the capabilities of current models. There's a reason successful uses of agentic LLMs (anything that isn't demoware/vaporware) tend toward narrow-domain use cases.
There's a reason Google hasn't done it either, and indeed no one else has: neither Anthropic nor OpenAI has a general-purpose assistant (defined as one that can execute an indefinite number of arbitrary tools on your behalf, as opposed to merely conversing with you).
You split the tasks up across sub-agents. This is something my company builds on top of LangGraph.
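The sub-agent idea can be sketched in plain Python (no LangGraph specifics assumed; the tool names, the keyword router, and the `handle` function are all hypothetical). The point is that no single model ever sees the full tool catalog; a router classifies the request and hands it to a narrow-domain agent with only a few relevant tools:

```python
# Hypothetical sketch of the sub-agent pattern: a router picks a
# narrow-domain agent, and each agent only sees the handful of tools
# relevant to its domain. Real systems would use an LLM call for the
# routing step; a keyword match stands in for it here.

def route(request: str) -> str:
    """Toy router: classify the request into a sub-agent's domain."""
    if "alarm" in request or "remind" in request:
        return "scheduling"
    if "play" in request or "music" in request:
        return "media"
    return "chat"

# Each sub-agent's tool list is tiny, which is what keeps tool-calling
# reliability acceptable.
SUB_AGENT_TOOLS = {
    "scheduling": ["set_alarm", "create_reminder"],
    "media": ["play_track", "queue_playlist"],
    "chat": [],  # plain conversation, no tools
}

def handle(request: str) -> tuple[str, list[str]]:
    """Return the chosen sub-agent and the tools it is allowed to call."""
    agent = route(request)
    return agent, SUB_AGENT_TOOLS[agent]
```

This is only the routing skeleton; the hard part the parent comment describes (reliable tool execution inside each domain) still has to be solved per sub-agent.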
> They have all data but can’t seem to get an llm that can set an alarm and be a chatbot at the same time?
This does seem like an embarrassing fail, but even Google hasn't finished replacing Assistant with Gemini. Some functionality has also been lost (maybe temporarily) in the process.
They are not talking about Perplexity; the endless rumor mill talks about Perplexity. The same rumor mill that has had them buying everything from Disney to Porsche to Nike for decades.
Undercut the competitors by charging less. Apple can afford to run its product at a loss.
They don't really have much time to wait. They could be forced to allow default voice assistants and access to private APIs by the DOJ antitrust case, the App Store Freedom Act, or the Open App Markets Act; if any of those come through, OpenAI and Gemini will quickly end up entrenched.
Isn't a larger concern that Tim "Services" Cook failed to skate where the puck was headed on this one? 15 years ago the Mac had Nvidia drivers, OpenCL support and a considerable stake in professional HPC. Today's Macs have none of that.
Every business has to make tradeoffs, it's just hard to imagine that any of these decisions were truly worthwhile with the benefit of hindsight. After the botched launch of Vision Pro, Apple has to prove their worth to the wider consumer market again.
Apple Silicon Macs are great for running LLMs. The unified memory and the memory bandwidth of the Max and Ultra processors are very useful for doing inference locally.
I imagine they recognize the need for increased memory bandwidth and will bump that up significantly -- they are well positioned to do so.
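Why bandwidth is the lever here can be shown with a rough back-of-envelope calculation: during decoding, every generated token requires streaming essentially all model weights through memory once, so memory bandwidth caps tokens per second. A minimal sketch (the chip and model numbers below are approximate, not measured):

```python
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: each token reads all weights once,
    so throughput is at most bandwidth divided by weight size."""
    return bandwidth_gb_s / model_size_gb

# Rough figures: an M2 Ultra has ~800 GB/s of memory bandwidth, and a
# 70B-parameter model at 4-bit quantization is roughly 35 GB of weights.
ceiling = est_tokens_per_sec(800, 35)  # roughly 23 tokens/sec ceiling
```

Under this simple model, doubling bandwidth doubles the decode ceiling, which is why a bandwidth bump would matter more for local inference than extra compute.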
Great news, but entirely lost on commercial hyperscalers and much of the PC market. Apple's recalcitrance towards supporting Nvidia drivers basically killed their last shot at real-world rackmount deployment of Apple Silicon. Now you can go buy an ARM Grace CPU that does the same thing, but cheaper and with better software support.
> Isn't a larger concern that Tim "Services" Cook failed to skate where the puck was headed on this one?
Doesn't somebody (not named Nvidia) need to make a serious profit on AI before we can say that Tim Cook failed?
OpenAI and Anthropic aren't anywhere close. Meta? Google? The only one I can think of might be Microsoft but they still refuse to break out AI revenue and expenses in the earnings reports. That isn't a good sign.
I certainly don't think that profit would be required. Many of the massive tech companies that exist today went through long periods where they focused on growth and brand, not profits, even post-IPO.
I won't pretend to know exactly how the AI landscape will look in the future, but at this point it's pretty clear that there's going to be massive revenue going to the sector, and Moore's law will continue to crank.
I see what you're saying, though. In particular, these first-generation gigawatt data centers might be black holes of an investment, considering that in the not-too-distant future AI compute will be fully commoditized and 10x cheaper.
Their X/OpenGL support has also been in stasis for 10 years or more. There's not enough money in taking over for SGI to move their needle.
Macs are basically a dead business. The key is somehow creating the AI equivalent of an App Store or something
Dead business?
The Mac is something like 30 billion in revenue per year, and 10 billion in profit.
The entire "generative AI" "industry" is struggling to reach 30 billion in revenue even with their creative accounting (my free Perplexity that comes with Revolut is somehow counted at full price, even though I never paid anything, and I'm sure Revolut doesn't pay full price), and gross profit is deep in the negative.
Don't abandon Intel Macs, then; call them Mac AI systems with NVIDIA chips and sell them for more than the Apple Silicon Macs.
No one would buy slower, hotter computers for more money. Most people who own Apple computers today are extremely satisfied with Apple Silicon, and AI enthusiasts are an increasingly large slice of those people (since there really isn't anything else, and getting a 3090/4090/5090 is still hard and expensive).