Comment by moron4hire

7 hours ago

The idea of physically separating the agent harness from the artifacts you want the harness to work on seems to be the wrong answer to the right problem, and a symptom of a worrying pattern in all the agentic workflow work I see in the wild: laziness about how the harness and the tools it has available are developed. Specifically, there seems to be a desire to build agent tools in an incredibly naive and simplistic way that ignores access control.

My company is not a "sophisticated" software development organization. We're 3000 people where 2000 do nothing with AI, 900 are naively jumping on every AI bandwagon that comes along (or rather, parroting whatever they read on the Internet to complain about how hard their job is, despite it not having materially changed in the last 15 years), 90 are capable of implementing anything involving AI beyond "just use Claude," and 10 have the experience to really scrutinize what is going on. And our work is of a nature where scrutinizing the exact process by which results occurred, and what data they came from, is essential. There are regulations and compliance issues that could land people in prison if we don't, and the results are eventually proved to involve inappropriate data. What does that mean? I'll just say we primarily work for DoD.

I have very long experience with managers asking for the moon and not listening when the engineering staff raises red flags. They ask us, "why can't we just do X?" Where X is whatever they've recently read about in whatever MBA-targeted publication that was bought and paid for by the service provider profiting off of X, with no skin in the game regarding the nuance, because the relevant regulations are written to scapegoat the person in the chair bashing the keys and not the person making the decisions and holding the former person's job over their head. The "why can't we just do X" is not an in-good-faith question; it's a statement that you need to shut up about your concerns and "just do X."

But out of desperation/malicious compliance, I've started developing an agentic harness that can "just get AI to do it" for the data sources on which we work. And I've noticed two things: A) agent harnesses are not that hard to write (honestly, anyone with basic programming competency should be able to do it), and B) they can only ever work on what you give them. I suppose the last point should be obvious, but I've had enough conversations with folks who expect magic to know it is not actually obvious.
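To make point A concrete: a harness is essentially a loop that asks the model for the next action, executes it through tools the harness controls, and feeds the result back. Here's a minimal sketch; the model call and the tool are deliberately stubbed out (no real vendor API is assumed), since the point is the loop shape, not any particular provider.

```python
import json

def call_model(messages):
    # Stand-in for a real LLM call. This stub "requests" one tool call,
    # then answers once it sees a tool result in the transcript.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_files", "args": {"path": "."}}
    return {"answer": "done"}

# The harness owns this table. The model can only name a tool; it cannot
# conjure capabilities that aren't registered here.
TOOLS = {
    "list_files": lambda path: ["report.txt"],  # stub tool
}

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # The harness, not the model, executes the tool call.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step limit reached"

print(run_agent("summarize the project"))  # → done
```

Everything interesting happens in the `TOOLS` table: the model only ever works on what that table exposes, which is point B.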

And that's where I get into "extant agent work is lazy." The agent harness I've developed is incapable of accessing data its operator should not have. If you are cleared to see only a subset of the universe of data, then running this harness cannot possibly give you access to more than your clearance. I'm not trying to brag here, because this was not a difficult guarantee to make. In developing the harness and giving it tools to do work, I just implemented the same access controls I would have for a human user accessing an API to the same data. The only thing different about my approach is that I didn't use an off-the-shelf harness with tools developed by others. I just wasn't lazy about my job.

My key stakeholder was skeptical that I was able to do this, mostly because he has subconsciously intuited that our organization is not very sophisticated at developing software. He doesn't understand that employing AI isn't magic, and I think that is the case for a lot of the people who use AI the most here. They see products like Claude go to work and think there is some kind of special sauce that requires development by a "frontier" AI firm to actualize. But the truth is, the more you develop agentic AI capability, the less AI you are actually employing. The AI eventually becomes just an orchestrator of tools that perform work by not-AI means. If you are lazy, you lean on naive tool implementations that let the AI do whatever "it wants." And that's where you get into trouble. But if you show up to your job and are diligent about implementing those tools, there is no possible way the AI can screw you over, because you never gave it unrestricted access to `curl` or `rm -rf`.
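The "no unrestricted `rm -rf`" point boils down to exposing narrow, validated operations instead of a shell. A sketch of what that diligence looks like for a file-reading tool (the factory pattern and names here are illustrative, not from any particular framework):

```python
from pathlib import Path

def make_read_tool(workspace: Path):
    # Factory: the tool is closed over the one directory it may touch.
    root = workspace.resolve()

    def read_file(relative_path: str) -> str:
        # Resolve the target and refuse anything that escapes the sandbox
        # (e.g. "../../etc/passwd"). The model only ever supplies a string;
        # the boundary check runs in the harness, outside its influence.
        target = (root / relative_path).resolve()
        if not target.is_relative_to(root):
            raise PermissionError(f"{relative_path} escapes the workspace")
        return target.read_text()

    return read_file
```

The model can ask for any path it likes; the worst outcome is a `PermissionError` fed back into the transcript, not a file it shouldn't see.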

This is why, even if AI does become a permanent fixture of software development (still not convinced, even after all this experience), you're still not getting rid of us software engineers, despite how much you hate us. You still need us to protect your data, and nothing about AI has changed the equation that ends in "data is king." If anything, it's more important than ever.

Edit: I'm specifically developing a multi-user agent, accessed via a Web application over a shared database. Row-level access control is baked into every tool and I can do this with little effort because dependency injection Is A Thing. Thus, the parameters of access control never even reach the AI.
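That dependency-injection point can be sketched in a few lines: the harness constructs each tool with the authenticated user's identity baked in, so the model-facing signature contains no access-control parameters at all. The schema and names below are illustrative, assuming a simple `records` table with an `owner` column.

```python
import sqlite3

def make_search_tool(conn, user_id):
    # The harness injects `user_id` at construction time, from the Web
    # app's authenticated session. It never appears in the tool signature
    # the model sees, so the model cannot vary it.
    def search_records(keyword: str) -> list:
        rows = conn.execute(
            # Row-level filter lives in the query itself.
            "SELECT title FROM records WHERE owner = ? AND title LIKE ?",
            (user_id, f"%{keyword}%"),
        ).fetchall()
        return [r[0] for r in rows]
    return search_records

# Demo over an in-memory database with two users' rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (owner INTEGER, title TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(1, "alpha report"), (2, "beta report")])
search = make_search_tool(conn, user_id=1)
print(search("report"))  # → ['alpha report']
```

No matter what keyword the model invents, user 2's rows are unreachable through this tool, for the same reason they'd be unreachable through an equivalent API handed to a human.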