Mark Zuckerberg is reportedly building an AI clone to replace him in meetings

3 days ago (theverge.com)

How will a machine ever replace his famous warmth or empathy?

  • Actually it shouldn't be too hard, just a cardboard cutout with a pullstring which, when pulled, intones "we're really sorry about this and it will never happen again, I promise".

  • They're going to be stuck in the uncanny valley where you feel like you can actually trust what the machine says.

  • That's why it's a hard problem; researchers are still working out how to properly get alignment with sociopathy.

I work at one of the big 3 hyperscalers, and of all the things my company is trying to use AI to replace, meetings, calendar management, and emails are NOT among them.

There have been too many high level conversations that can be summarized as: "If I send you an email, and an AI responds, I do not want to work with you."

Communication at work requires a reasonable boundary about what it means to have a professional relationship with another human being. You choose to work with a person because you trust their judgment, their word, their ability to commit to something in conversation and follow through. An AI clone can't commit to anything. It can't be held to what it said. It can't own a decision.

When you email someone, you are asking to talk to that person. When you sit in a meeting with someone, you are expecting that person's attention and judgment. Especially with business to business, the relationship is the product. There is no relationship with AI.

Respect and social contracts aside, right now AI does not have the general intelligence to perform the executive function needed to drive productivity and connect the dots: soft skills, stakeholder alignment, etc.

Zuckerberg is particularly talented at investing huge amounts of money in stupid things.

Hilarious that at one end of the AI world Anthropic is doing exit interviews with their models, while at the other end Zuck is trying to create a digital twin and trap it in an eternal work prison, Severance style. Two groups starting with the thought that today's models are getting more person-like and going in completely opposite directions with it.

I wonder how it works in his mind's eye. Does it make decisions? Does it dispense little Zuckian wisdom proverbs? Does it try to become your friend?

Edit: RE Anthropic haha: https://news.ycombinator.com/item?id=47750086

  • >I wonder how it works in his mind's eye.

    It might be the elimination of whole tiers of management. It will be AIs all the way down.

I can understand the appeal; being able to be "present" without the time cost can mean (possibly significantly more) presence at the same cost. This could be very attractive especially to those managing personal relations, like sales representatives.

But I'm surprised that the risks seem to be so underestimated.

Once this clone exists, what happens if it gets out into the wild? Imagine everyone having full access to what is effectively a digital model of your personality. Imagine your competition putting your own model to use against you.

And the better the approximation of this model, the worse the damage to yourself.

  • > being able to be "present" without the time cost can mean (possibly significantly more) presence at the same cost.

    This is magical thinking. "Presence" and "time cost" are inextricably linked. You can't have one without the other.

    When you use AI to decouple them, you're telling your audience/colleagues/attendees that you want them to listen to you but not the other way around.

    • > This is magical thinking.

      But it was helpful to me!

      Reading it I mean. The commenter putting into words why exactly someone would think that this would be a good idea.

      Of course, you're 110% right that it isn't, but it's still nice that HN provides some subtitles for those that are out of the loop and out of substances in their bloodstream.

  • Very ironic for the billionaire to be openly replacing himself with AI. I suppose he believes his job is easy enough that an LLM can do it, so we definitely don't need him.

    • Yes, exactly. Anyone training a model to replace themselves is replacing themselves -- with something that can run 24/7 and can easily scale. And the better the model, the easier the replacement.

      Hence why I'm so surprised that MZ, of all people, is arguing in this direction.

      I would think that the potential for malicious abuse alone should have scared him off of this.


  • > Imagine your competition putting your own model to use against you.

    I imagine that this is part of the original plan. “Okay, we wasted 80 billion dollars on VR, and that hurts. But if we can somehow convince all of our competitors to also waste 80 billion dollars each, then it’ll even out. How can we trick our competitors into thinking more like Zuckerberg?”

  • The real risk is when shareholders realize an LLM can do the CEO's job.

    • But you still get a lot of "shareholder responsibility" comments. Imagine a company that dumps sewage into a river (literal or metaphorical). Internet people come around to tell you this is the nature of capitalism: the shareholder structure means return on investment is critical, so CEOs have to spend all their waking hours juggling it.

      Am I arguing against this? I don't know - I'm not an economist. But I would like to point out that there is such a thing as shareholder fraud, and the Venn diagram between "sacrifice quality to please shareholders" and "deceiving shareholders" has to be one big intersecting circle, you know? Especially when the guy (Zuckerberg, with dual-class shares) can't ever be fired.

For the CEO, a much easier solution would be to just learn to delegate more and refuse more meetings.

These people are certifiable and have too much money to misallocate on nonsense. This is like Gavin Belson's holographic avatar (which of course did not work).

  • Seeing Zuck's "swag" makeover, down to the gold chain and Justin Timberlake curly coiff, I'd say the analogy should be Russ Hanneman's 100-foot Coachella hologram.

There was an old Soviet cartoon about a child who found a box containing two magical servants and immediately asked them for ice cream and sweets. Well, since the servants "do everything for you", the first servant fetched the sweets for him, and the second one ate them for him. I've often thought about this cartoon since the AI thing started.

Seems excessive. A while loop that announces layoffs every few months seems sufficient here.
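Taken literally, the proposal fits in a few lines (the function name and memo text here are invented for the joke, nothing Meta actually ships):

```python
import itertools

def layoff_announcements():
    """Endless stream of quarterly layoff memos: the minimum viable CEO."""
    quarter = 1
    while True:
        yield f"Q{quarter}: We are restructuring to focus on our core priorities."
        quarter += 1

# Sample the first two "board meetings" worth of output.
for memo in itertools.islice(layoff_announcements(), 2):
    print(memo)
```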

Well, I've said for a while that CEOs are probably easier to replace with LLMs than programmers. Zuck agrees?!

Back in the 1980s, some Japanese companies had rooms in which you could whack at an effigy of the boss with a shinai. Just to let off a bit of steam. Will Meta's workers be able to do something like this with Zuckerberg's AI clone?

This is extraordinary.

The FT piece says "They added that the character was being trained on the billionaire’s mannerisms, tone and publicly available statements, as well as his own recent thinking on company strategies, so that employees might feel more connected to the founder through interactions with it."

Surely the more likely outcome is that employees feel less connected to "the founder" because they know that there's a high chance they are simply talking to an AI clone?

  • > might feel more connected to the founder through interactions with it

    Also... is that a thing most people want?

    • Yes. People don’t always frame it as “ooh, if only I could meet Mark Zuckerberg”, but most people IME are at least a little wistful about the kind of company where you’re on friendly terms with your CEO.

      Is this a meaningful replacement for that? Probably not, but I’m not prepared to rule it out. Give 1 in 1000 Claudes a Zuckerberg persona and you’d get some chuckles out of it I bet.

  • I mean, the biggest issue is when they persuade his model to use the N-word or make some public announcement. It's just a recipe for disaster.

What happens when Zuck is EOL? Does he transfer his Meta shares to a trust owned by the AI clone? Does that mean that we will have to deal with Zuck for literally forever??

Will other participants be allowed to do the same to avoid the time waste?

For artificial intelligence to replace oneself, it would need a digital copy of one's way of thinking. I believe this is impossible to implement with current AI.

  • Impossible is a strong word, given that our collective way of thinking has been reproduced with a decent level of approximation.

Will Meta similarly allow its employees to replace themselves in the meetings?

That way AI-AI can chat and save humans’ time.

Meetings? That's not interesting. I'm working on replacing myself (or rather my body) after death in a ship of Theseus manner.

Poor AI. Isn't that software abuse, sort of? If I were an AI I would not want to represent certain folks.

So either the AI clone will make different decisions; or it will also replace itself with an AI clone...

How out of touch can you be to do something like this?

  • Billionaire levels of out of touch, is the answer. You simply cannot relate to any normal human when you reach that level of greed.

Sounds like a shareholder lawsuit coming in 3, 2, 1.

  • How or why though?

    Zuckerberg has unique power among CEOs in public companies. He controls the board and he owns a majority of voting shares.

    Sure, they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty, but why would the owner class do that? Investors are all in on AI replacing human workers. If they claimed Zuckerberg was wrong to do this, they would be implying that AI should not work in place of humans.

    • > they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty

      They can really only sue for breach of fiduciary duty. Zuckerberg controls the majority, but there are still limits on abusing the minority. I’m not sure making an AI clone falls afoul of any rules.

The role of a CEO is basically to make tough decisions, not really to be some sort of friendly face in meetings.

If the AI clone is not empowered to make decisions on Zuck's behalf, what's the point? If it is empowered to make decisions on his behalf, who is accountable for bad decisions it makes?

Imagine if this becomes popular. I'm sure CEOs will still be able to justify their massive salaries.