
Comment by AlecSchueler

1 day ago

But if we're talking about optionally giving it access to your email, PayPal etc and a "YOLO-outlook on permissions to use your creds" then the VM itself doesn't matter so much as what it can access off site.

Bastion hosts.

You don't give it your "prod email", you give it a secondary email you created specifically for it.

You don't give it your "prod Paypal", you create a secondary paypal (perhaps a paypal account registered using the same email as the secondary email you gave it).

You don't give it your "prod bank checking account", you spin up a new checking with Discover.com (or any other online back that takes <5min to create a new checking account). With online banking it is fairly straightforward to set up fully-sandboxed financial accounts. You can, for example, set up one-way flows from your "prod checking account" to your "bastion checking account." Where prod can push/pull cash to the bastion checking, but the bastion cannot push/pull (or even see) the prod checking acct. The "permissions" logic that supports this is handled by the Nacha network (which governs how ACH transfers can flow). Banks cannot... ignore the permissions... they quickly (immediately) lose their ability to legally operate as a bank if they do...

Now then, I'm not trying to handwave away the serious challenges associated with this technology. There's also the threat of reputational risk, etc., since it is operating as your agent -- heck, potentially even legal risk if things get into the realm of "oops, this thing accidentally committed financial fraud."

I'm simply saying that the principle of least privilege applies to online accounts just as it does to everything else.

  • Isn't the value proposition "it can read your email and then automatically do things"? If it can't read your email and can't actually do things automatically... what's the point?

    • Yes -- that's definitely the value prop. But it's not binary, all or nothing.

      AI automation is about trust (honestly, same as human delegation).

      You give it access to a little bit of data, just enough to do a basic useful thing or two, then you give it a bit of responsibility.

      Then as you build confidence and trust, you give it a little more access and allow it to take on a little more responsibility. Naturally, if it blows up in your face, you dial back access and responsibility quickly.

      As an analogy, folks drive their cars on the highway at 65-85+ mph. The fatality rate goes up roughly exponentially with speed, and anything above 60 mph is considerably more deadly than ~30 mph.

      We're all so confident that a wheel won't randomly fall off because we've built so much trust in the quality of modern automobiles. But it does happen (I had a friend in high school whose wheel popped off on a 45 mph road -- naturally, he was going 50-55, IIRC).

      In the early 1900s, people would have thought you had a death wish to drive that fast. 25-30 mph was normal then -- the automobiles of the time just weren't developed enough to be trusted at higher speeds.

      My previous comment was about the fact that it is possible to build this sandboxing/bastion layer with live web accounts, allowing fine-grained control over how much data you want to expose to the AI.

    • Set up automatic forwards. If I were to do this, I'd forward all the emails from my kids' activities to its email. (A rough sketch of one way to do that is below.)
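      A minimal sketch of what such an automatic forward could look like, assuming a mail provider with standard IMAP/SMTP access (most providers also let you do this with a built-in filter/forwarding rule, which is simpler). The hostnames, addresses, and sender filter below are placeholders, not real services.

      ```python
      # Sketch: forward unread mail matching a narrow filter from the "prod"
      # inbox to the agent's secondary ("bastion") inbox.
      import email
      import imaplib
      import smtplib

      IMAP_HOST = "imap.example.com"          # prod mailbox's IMAP server (placeholder)
      SMTP_HOST = "smtp.example.com"          # prod mailbox's SMTP server (placeholder)
      PROD_USER = "me@example.com"
      PROD_PASS = "app-specific-password"     # prefer an app password over your real one
      AGENT_ADDR = "agent-inbox@example.com"  # the secondary mailbox the agent reads
      SENDER_FILTER = "activities@school.example"  # only this sender gets forwarded


      def forward_matching() -> None:
          imap = imaplib.IMAP4_SSL(IMAP_HOST)
          imap.login(PROD_USER, PROD_PASS)
          imap.select("INBOX")

          # Only messages matching the narrow filter ever leave the prod inbox.
          _, data = imap.search(None, "UNSEEN", f'FROM "{SENDER_FILTER}"')
          msg_ids = data[0].split()

          with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
              smtp.login(PROD_USER, PROD_PASS)
              for msg_id in msg_ids:
                  _, raw = imap.fetch(msg_id, "(RFC822)")
                  msg = email.message_from_bytes(raw[0][1])
                  # Re-address the copy to the bastion inbox and send it on.
                  del msg["To"]
                  msg["To"] = AGENT_ADDR
                  smtp.sendmail(PROD_USER, [AGENT_ADDR], msg.as_bytes())

          imap.logout()


      if __name__ == "__main__":
          forward_matching()
      ```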