
Comment by badloginagain

2 years ago

Arguments:

1. The danger of AI is that it confuses humans into trusting it as a friend rather than as a service.

2. The corporations running those services are incentivized to capitalize on that confusion.

3. Government is obligated to regulate the corporations running AI services, not necessarily AI itself.

---

As a counter, you could reframe point 2 as:

The corporations running those services are incentivized to make/host competitive AI products.

---

This is from Ben's take on Stratechery: ( https://stratechery.com/2023/openais-misalignment-and-micros... )

1. The capital cost of AI is only feasible for FAANG-level players.

2. For Microsoft et al., "winning" means being the de facto host for AI products: owning the marketplace that AI services run on.

3. Humans are only going to provide monthly recurring revenue to products that provide value.

---

Jippity is not my friend; it's a tool I use to do knowledge work faster. Google Photos isn't trying to trick me; it's providing a magic eraser so I keep buying Pixel phones.

High inference cost means MSFT charges a high tax through Azure.

That high cost means services running AI inference are going to require a ton of revenue in a highly competitive market.
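
A rough back-of-envelope sketch of that squeeze (all figures here are hypothetical placeholders invented for illustration, not actual Azure or subscription pricing):

    # Back-of-envelope math for the inference-cost squeeze described above.
    # Every number below is a hypothetical assumption, not real pricing data.
    cost_per_query = 0.01          # assumed inference cost paid to the cloud host, USD
    queries_per_user_month = 600   # assumed usage, roughly 20 queries a day
    subscription_price = 20.00     # assumed monthly subscription, USD

    inference_bill = cost_per_query * queries_per_user_month     # $6.00
    gross_margin = subscription_price - inference_bill           # $14.00
    host_share = inference_bill / subscription_price             # 0.30

    print(f"Inference bill per user per month: ${inference_bill:.2f}")
    print(f"Margin left before payroll, support, acquisition: ${gross_margin:.2f}")
    print(f"Share of revenue going straight to the host: {host_share:.0%}")

With those placeholder numbers, 30% of revenue goes to the host before any other cost, which is the "tax" low-value services can't afford.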

Value-add services will outcompete scams/low-value services.

Note that scams, at least for now, will be rarer due to the high cost of inference; don't assume that will always be the case.

  • The real danger I see is targeted AI scams against high-value targets.

    How much money would someone be willing to spend to capture Elon entering his eX-Twitter password?