Comment by ckastner

3 days ago

I can understand the appeal; being able to be "present" without the time cost can mean (possibly significantly more) presence at the same cost. This could be very attractive especially to those managing personal relations, like sales representatives.

But I'm surprised that the risks seem to be so underestimated.

Once this clone exists, what happens if it gets out into the wild? Imagine everyone having full access to what is effectively a digital model of your personality. Imagine your competition putting your own model to use against you.

And the better the approximation of this model, the worse the damage to yourself.

> being able to be "present" without the time cost can mean (possibly significantly more) presence at the same cost.

This is magical thinking. "Presence" and "time cost" are inextricably linked. You can't have one without the other.

When you use AI to decouple them, you're telling your audience/colleagues/attendees that you want them to listen to you but not the other way around.

  • > This is magical thinking.

    But it was helpful to me!

    Reading it I mean. The commenter putting into words why exactly someone would think that this would be a good idea.

    Of course, you're 110% right that it isn't, but it's still nice that HN provides some subtitles for those that are out of the loop and out of substances in their bloodstream.

Very ironic for the billionaire to be openly replacing himself with AI. I suppose he believes his job is easy enough that an LLM can do it, so we definitely don't need him.

  • Yes, exactly. Anyone training a model to replace themselves is replacing themselves -- with something that can run 24/7 and can easily scale. And the better the model, the easier the replacement.

    Hence why I'm so surprised that MZ, of all people, is arguing in this direction.

    I would think that the potential for malicious abuse alone should have scared him off of this.

> Imagine your competition putting your own model to use against you.

I imagine that this is part of the original plan. “Okay, we wasted 80 billion dollars on VR, and that hurts. But if we can somehow convince all of our competitors to also waste 80 billion dollars each, then it’ll even out. How can we trick our competitors into thinking more like Zuckerberg?”

The real risk is when shareholders realize an LLM can do the CEO's job.

  • But you still get a lot of "shareholder responsibility" comments. Imagine a company that dumps sewage into a river (be that literal or metaphorical). Internet people come around to tell you this is the nature of capitalism: shareholder structure means (increasing?) return on investment is critical, and so CEOs have to spend all their waking hours juggling this.

    Am I arguing against this? I don't know - I'm not an economist. But I would like to point out there is such a thing as shareholder fraud, and the Venn diagram between "sacrifice quality to please shareholders" and "deceiving shareholders" has to be one big intersecting circle, you know? Especially when the guy (Zuckerberg, with dual-class shares) can't ever be fired.