
Comment by YeGoblynQueenne

3 days ago

I think this is absolute madness. I disabled most of Windows' scheduled tasks because I don't want automation messing up my system, and now I'm supposed to let LLM agents go wild on my data?

That's just insane. Insanity.

Edit: I mean, it's hard to believe that people who consider themselves tech savvy (as I assume most HN users do; I mean, it's "Hacker" News) are fine with that sort of thing. What is a personal computer? A machine that someone else administers and that you just log in to look at what they did? What's happening to computer nerds?

Bath salts. Ever seen an alpha-PVP user with eyes bulging out of their sockets, sitting through the night in front of what is basically a random string generator, sending you snippets of its output and firehosing you with monologues about how they're right on the verge of discovering an epically groundbreaking correlation in it?

That is what's happening to nerds right now. Some next-level, mind-boggling, psychosis-inducing shit has to be behind it.

Either this or a completely different substance: AI propaganda.

What's it got to do with being a nerd? It's just a matter of risk aversion.

Personally I don't give a shit, and it's cool having this thing set up at home and being able to run whatever I want through text messages.

And it's not that hard to just run it in docker if you're so worried

  • > And it's not that hard to just run it in docker if you're so worried

    There is risk of damage to one's local machine and data, as well as reputational risk if it has access to outside services. Imagine your socials filled with hate, à la Microsoft Tay, because it was red-pilled.

    Though given the current cultural winds perhaps that could be seen as a positive?

The computer nerds understand how to isolate this stuff to mitigate the risk. I'm not in on openclaw just yet, but I do know it's got isolation options to run in a VM. I'm curious to see how they handle controls on “write” operations to everyday life.

I could see something like having a very isolated process that can, for example, send email, which the claw can invoke, but the isolated process has sanity controls such as human intervention or whitelists. And this isolated process could be LLM-driven also (so it could make more sophisticated decisions about “is this ok”) but never exposed to untrusted input.
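
The shim idea described above could be sketched roughly like this. This is a toy illustration with made-up names (`ALLOWED_RECIPIENTS`, `send_email_shim`, etc., are assumptions, not any real API): the agent never touches SMTP directly; it hands a request to a gatekeeper that applies a static whitelist and falls back to human approval for anything off-list.

```python
# Toy sketch of a "shim" in front of a write operation (sending email).
# The agent can only invoke send_email_shim; the shim owns the credentials
# and decides, by static rule or human intervention, whether to proceed.

ALLOWED_RECIPIENTS = {"me@example.com", "family@example.com"}  # static rule

def approve_email(to: str, subject: str, body: str) -> bool:
    """Decide whether the agent's proposed email may be sent."""
    if to in ALLOWED_RECIPIENTS:
        return True  # known-safe recipient: no human needed
    # Fall back to human intervention for anything off the whitelist.
    answer = input(f"Agent wants to email {to!r} ({subject!r}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def send_email_shim(to: str, subject: str, body: str) -> bool:
    """The only entry point the agent is given for sending email."""
    if not approve_email(to, subject, body):
        print("Blocked:", to)
        return False
    # real_send(to, subject, body)  # the only code holding SMTP credentials
    print("Sent:", to)
    return True
```

The key property is that untrusted LLM output only ever reaches the shim as data; the decision logic (whitelist, or the human at the prompt) never executes anything the agent wrote.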

  • I don’t understand how “running it in a VM” or a Docker image prevents the majority of problems. It’s an agent interacting with your bank, your calendar, your email, your home security system, and every subscription you have: DoorDash, Spotify, Netflix, etc. Maybe your BTC wallet.

    What protection is offered by running it in a Docker container? OK, it won’t overwrite local files. Is that the major concern?

    • Read my second paragraph.

      It’s a matter of giving the system shims instead of direct access to “write” ops. Those shims have controls in place. Their only job is to examine the context and decide whether the (email|purchase|etc.) is acceptable, whether by static rules, human intervention, or, if you’re really getting spicy, a separate-LLM-model-that-isn’t-polluted-by-untrusted-data.

      Edit: I actually wrote such a thing over the weekend as a toy PoC. It uses the LLM to generate a list of proposed operations, then you use a separate tool to iterate through them and approve/reject/skip each one. The only thing the LLM can do is suggest things from a modest set of capabilities with a fairly locked-down schema. Even if I were to automate the approvals, it’s far from able to run amok.
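
A minimal sketch of what such an approval queue might look like (all names here are invented for illustration and are not taken from the commenter's actual PoC): the LLM may only emit operations matching a small fixed schema, and a separate review pass decides the fate of each one.

```python
# Toy sketch of the approval-queue pattern: the LLM proposes operations
# from a locked-down vocabulary; nothing executes until a separate
# reviewer (human or rule-based) approves each proposal.

from dataclasses import dataclass

ALLOWED_OPS = {"send_email", "add_calendar_event", "create_reminder"}

@dataclass
class ProposedOp:
    kind: str   # must be one of ALLOWED_OPS
    args: dict  # operation parameters, schema-checked before review

def validate(op: ProposedOp) -> bool:
    """Drop anything outside the locked-down schema."""
    return op.kind in ALLOWED_OPS and all(isinstance(k, str) for k in op.args)

def review(ops, decide):
    """Iterate over proposals; `decide(op)` returns 'approve', 'reject', or 'skip'."""
    approved = []
    for op in ops:
        if not validate(op):
            continue  # schema violation: never even shown to the reviewer
        if decide(op) == "approve":
            approved.append(op)
    return approved
```

For example, `review(ops, lambda op: "approve")` on a list containing a `send_email` proposal and a disallowed `rm_rf` proposal would keep only the former, since the schema check runs before any decision is made.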

The idea that the majority of computer nerds are any more security conscious than the average normie has long been dispelled.

They run everything as root, they curl scripts into shells, they npx typos, and they give random internet apps "permission to act on your behalf" on repos millions of people depend on.

> and now I'm supposed to let LLM agents go wild on my data?

Who is forcing you to do that?

The people you are amazed by know their own minds and understand the risks.

  • > and understand the risks

    I'm very unconvinced this is true. Ignorance causes overconfidence.

> That's just insane. Insanity.

I feel the same way! Just watching on in horror lol