Comment by dpedu
3 days ago
Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that doesn't do what I want, despite being capable? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.
If I were interviewing people for the position of personal assistant, I would probably find the resume entry "willing to grind up babies for food" to be a negative mark. You?
I'm not about to run OpenClaw, but I suspect similar capabilities will gradually creep in without anyone really noticing. Soon Claude Code will be able to do many of the same things. ("Run python to add two numbers? Sure, that's safe, run whatever python you want.") Given that it is now representing me in the world, yes, I would not only like some guardrails, I would also like some confidence that the company making those guardrails actually gives a sh*t and isn't just doing their best to fill in a checkbox. But maybe that's just me.
Cars have seatbelts and other safety measures.
Reasonable countries have gun control laws.
The list of things that need to be restricted or legislated goes on.
Is this a serious question?
Seatbelts don't block me from getting to my destination, even if I don't use them.
Ok sure. I made an imperfect comparison, clearly my fault.
The big companies with reputations to protect will keep the guardrails in place. I don't think there's huge market pressure to remove them, since for the majority of uses the guardrails are fine. Pentagon excluded. And there can be serious fallout that makes them lose other customers when the public thinks the guardrails aren't enough (see Grok). But I'm sure open-source models will exist without the guardrails.
I personally would love it if AI would say "Sorry Dave (or Pete), I'm afraid I can't spy on Americans for you," and I'd happily pay higher taxes to force the Pentagon to use that AI.
I am 100% sure that AI with guardrails will become the dominant models as they become more widely adopted, and the bigger issue you should be concerned with is whether you can even tell what those guardrails are.
You can't, and that is the danger. These tools are one of many for driving "right-think" at scale, against the user's knowledge and wishes.