Comment by hollerith

7 months ago

I thank Tim Cook for this information. Until today I did not know the extent of Apple's commitment to, or interest in, doing frontier AI research.

I was leaning towards buying a Mac, but now I won't because I do what (little) I can to slow down AI.

Switching to Windows would also clearly be encouraging the AI juggernaut, so I will stay with Linux.

I understand your sentiment, but AI is the new internet -- despite the hype it's not going away.

The ability to have a true personal AI agent that you actually own would be quite empowering. Of all the industry players, I'd put Apple as the least bad option to have that happen with.

  • >Out of all the industry players I'd put Apple as the least bad option

    To be the least bad option, Apple would need to publish either a plan for keeping an AI under control so that it stays under control even if it undergoes a sharp increase in cognitive capability (e.g., during training), or a plan to prevent an AI's capability from ever rising to a level that requires such control.

    I haven't seen anything out of Apple suggesting that Apple's leaders understand that a plan of the first kind or the second kind is necessary.

    Most people who have written about the topic in detail put Anthropic as the least bad option because out of all the groups with competitive offerings, their leadership has written in the most detail about the need for a plan and about their particular (completely inadequate IMHO) plan.

    I myself put Google as the least bad option -- the slightly less awful option, to be precise -- with large uncertainty, because Google wasn't pushing capabilities hard until OpenAI and Anthropic put it in a situation in which it either had to start pushing hard or risk falling so far behind that it could never catch up. In particular, Google risked being unable to build a competitive offering because it couldn't attract users, and without users it couldn't collect enough of the usage data from LLMs and generative AIs that a competitive offering requires. While it was the leading lab, Google was proceeding slowly, and at least one industry insider credibly claims that the slowness was deliberately chosen to reduce the probability of an AI catastrophe. Consequently, I use Gemini as my LLM service.

    I must stress that no one has an adequate plan for avoiding an AI catastrophe while continuing to push capabilities, and IMHO no one is likely to devise one in time, so it would be great if no one did any more frontier AI research at all until humanity itself becomes more cognitively capable.

    • Of the companies mentioned, I believe Apple is the only one that does not provide its own chatbot. If they aren't opening an interface for open-ended interaction with their AI tools, I think your concern is much less relevant. I'm curious whether you'd disagree, though.

You might enjoy the Aussie saying “pissing into the wind”.

  • It is pissing in the wind, but at least I'm not contributing to the catastrophic outcome by cooperating or doing business with Apple.

    It's not my fault that the reality in which humanity finds itself turned out to be more dangerous than almost anyone suspected. My only moral obligation is to do what I can to make the future turn out okay, even though what I can do is very, very little.

> because I do what (little) I can to slow down AI.

I think you're focusing on the wrong things. AI can be used in harmful ways, but not because it's outsmarting human beings, despite all the cult-like hype. In fact, these systems don't need to be actually competent for the rich to take advantage of the tech in destructive ways; companies just need to convince the public that the systems are competent enough to have an excuse to cut jobs. Even if AI does a poorer job, it won't matter if consumers don't have alternatives, which is unfortunately the case in many situations. We face a much bigger threat from data breaches in vibe-coded apps than from conscious robots manipulating humans through the Matrix.

Just look at Google support. It's a bunch of mindless robots that can kick you out of their platform on a whim. Their "dispute process" is another robot that passive-aggressively ragebaits you. [1][2] They're incompetent, yet they help one of the richest companies in the world save money.

Also, let's not forget that Google's AI flagged multiple desperate parents for sharing medical pics of their kids with their doctors. Only when the media contacted the company did a human being get involved, and then only to falsely accuse the parents of being pedos. [3] People were harmed, and not because of competence.

An even greater concern is the ability of LLMs to mass-produce spam or troll content with minimal effort. It's a major threat to democracies all around the globe, and it turns out demagogues don't need a superintelligence to misuse the tech and cause harm.

There are plenty of real concerns about AI beyond the perpetually "just around the corner" superintelligence. What we need is a push for stronger regulatory protections for workers, consumers, and constituents, not a boycott of MacBooks because of AI.

[1]: https://news.ycombinator.com/item?id=32538805