
Comment by someguyiguess

8 hours ago

Apple has not flopped on AI as you say. They are just focused on privacy and are likely waiting for the time when local models become efficient enough to run on iPhones (which is quickly becoming a reality).

Google could probably train models for orders of magnitude less money as you say, but they aren't. They are not capable of creating high quality models like OpenAI and Anthropic are. Their company is just too disorganized and chaotic.

Anecdotally, I don't know a single person who uses Gemini on purpose.

> They are just focused on privacy and are likely waiting for the time when local models become efficient enough to run on iPhones

How does that make any sense?

iPhones may be able to run local model inference, but Apple still can't train anything if they don't have any data.

> They are just focused on privacy and are likely waiting for the time when local models become efficient enough to run on iPhones (which is quickly becoming a reality).

This is such revisionist history. They were not strategically waiting. They tried, really really hard. The entire iPhone 16 Pro was built around AI. Heck, they even (re)named it Apple Intelligence.

Remember, this was around the same time Microsoft launched Copilot (RIP), Google launched Gemini, OpenAI launched ChatGPT, etc.

They had to walk it back hard because it was a flop. They might end up accidentally successful because they are a company with multiple strengths, but don't think of it as though they were sitting AI out.

> They are just focused on privacy

Is that why they rushed out AI summaries to play catch-up, then backpedaled when the feature blew up in customers' faces and people misrepresented in false headlines threatened to sue?

I use Gemini on purpose all the time. It can start timers for me, add calendar entries without my having to type them out, convert emails into calendar events or reminders, etc. I'd use it even more if it had access to more of my phone.

The "waiting for local LLMs" came up re: Apple and IMHO that's too passive for company where if someone else has a better AI assistant, it's going to be a huge problem.

What if somebody cracks the problem of splitting inference between local and remote? What if someone else manages to modularize learning so your local LLM doesn't need to have been trained on how to compute integrals? Obviously we can't dissect a current LLM and say "we can remove these weights because they do math," but there's no guarantee there isn't an architecture that would allow for that.

Apple could also be training an LLM Siri 2.0 that knows enough to do the things you want. Setting alarms, sending messages, etc. Apple would have all the information on what the major use cases are and where Siri is currently failing. They can increase Siri's capabilities as local LLM inference improves.
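The local/remote split described above can be sketched in a few lines. This is purely illustrative, not anything Apple or Google actually ships: a hypothetical on-device router handles well-understood intents (timers, messages) with cheap pattern matching, and everything else falls back to a larger remote model. All names and patterns here are made up for illustration.

```python
import re

# Hypothetical local intent patterns an on-device assistant might recognize.
# Anything that does not match is deferred to a remote model.
LOCAL_INTENTS = [
    (re.compile(r"set (?:a )?timer for (\d+) minutes?"), "start_timer"),
    (re.compile(r"send (?:a )?message to (\w+)"), "send_message"),
]

def route(utterance: str) -> tuple[str, str]:
    """Return (handler, intent): 'local' for recognized simple intents,
    'remote' as a fallback for open-ended queries."""
    text = utterance.lower().strip()
    for pattern, intent in LOCAL_INTENTS:
        if pattern.search(text):
            return ("local", intent)
    return ("remote", "general_query")
```

As local inference improves, the sketch suggests how capability could grow incrementally: more intents migrate from the remote bucket to the local one without changing the overall architecture.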

As for Google creating high-quality models, I personally believe models are going to be commoditized. I don't believe a single company will have a model "moat" strong enough to sustain itself as a trillion-dollar company. I base this on two reasons:

1. At the end of the day, it's just software and software is infinitely reproducible and distributable. I mean we already saw one significant Anthropic leak this year; and

2. China is going to make sure we're not all dependent on one US tech company who "owns" AI. DeepSeek was just the first shot across the bow for that. It's going to be too important to China's national security for that not to happen.

And OpenAI's entire funding is predicated on the opposite happening: a durable moat, with OpenAI "winning".