
Comment by pizzafeelsright

2 years ago

I suppose this is a great example of why trust in videos, audio, images, and company marketing must be questioned, and why all of it should be assumed 'generated' until verified.

I am curious: if voice, email, chat, and soon video can all be generated in real or near-real time, how can we be sure a remote employee is not a fully or partially generated entity?

Shared secrets are great for verification, but when everyone is fully remote, what is the solution?

I am traveling at the moment. How can my family validate that it is ME claiming lost luggage and sending a Venmo request?

If you can't verify whether your employee is AI, then you fire them and replace them with AI.

  • The question is: an attacker tells you they lost access and asks you to reset some credential, and your security process is to get on a video call, because, let's say, you're a fully remote company.

>I am traveling at the moment. How can my family validate that it is ME claiming lost luggage and sending a Venmo request?

PGP

Ask for information that only the actual person would know.

  • That will only work once if the channels are monitored.

    • You only know one piece of information about your family? I feel like I could reference many childhood facts or random things that happened years ago in social situations.

Make up a code phrase/word for emergencies, share it with your family, then use it for these types of situations.

  • Fair, but that also assumes the recipients ("family") are in a mindset of constantly thinking about the threat model in this type of situation and will actually insist on hearing the passphrase.
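A spoken code phrase also has the replay problem mentioned above: once an attacker overhears it on a monitored channel, it's burned. A challenge-response over the shared secret avoids ever transmitting the secret itself. A minimal sketch in Python's standard library, where the passphrase and function names are purely illustrative:

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared passphrase, agreed on in person beforehand.
SHARED_SECRET = b"correct horse battery staple"

def make_challenge() -> bytes:
    """Verifier (the family) sends a fresh random nonce, so a recorded
    answer from an earlier call cannot be replayed."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """Claimant (the traveler) proves knowledge of the secret without
    ever saying the secret aloud on the channel."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
answer = respond(nonce)
print(verify(nonce, answer))                          # real family member
print(verify(nonce, respond(nonce, b"wrong guess")))  # impostor fails
```

Of course, this just moves the usability problem: it assumes both sides can run the exchange, which is exactly the "mindset" objection above. In practice the human version is the family asking a question whose answer depends on the caller's memory plus something fresh ("what did we name the rental car last week?") rather than a fixed phrase.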

I think it's also why we as a community should speak out when we catch people doing this, because they are discrediting legitimate tech demos. It won't be enough, since a lie will be around the world before the truth gets out of the starting gate, but we can't just let it go unchecked.