The problem is, out of ten companies who take this approach, nine will indeed destroy themselves and one will end up with a trillion-dollar market cap. It will outcompete hundreds of companies who stuck with more conservative approaches. Everybody will want to emulate company #10, because "it obviously works."
I don't see any stabilizing influences on the horizon, given how much cash is sloshing around in the economy looking for a place to land. Things are going to get weird, stupid, and chaotic, not necessarily in that order.
One does not even need OpenClaw to achieve this outcome: https://x.com/lifeof_jer/status/2048103471019434248
Yeeeehaaaaa, the vibes shall never end!
On a more serious note, they were mostly f*cked by their PaaS provider, imo. Claude will always do dumb shit, especially if you tell it not to do something... by doing so you generally increase the likelihood of it doing exactly that.
It's even obvious why, if you think about it: the pattern of "you had one job, but you failed" or "only this can't happen, and it happened!" and all its other forms is all over literature, online content, etc.
But their PaaS provider not scoping permissions properly is the root cause, all things considered. While Claude caused the issue this time, something else would have set it off eventually.
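To make the "scope the permissions, don't rely on the prompt" point concrete, here's a minimal Python sketch. The tool names and registry are invented for illustration, not taken from the incident or any particular provider:

    # Hypothetical sketch: enforce "the agent must not touch production" in the
    # platform's permission layer instead of asking the model nicely in the prompt.

    def read_logs(service: str) -> str:
        return f"(last 100 log lines for {service})"

    def query_staging_db(sql: str) -> str:
        return f"(staging result for: {sql})"

    # Tools this agent is allowed to call; note there is nothing destructive here.
    TOOL_REGISTRY = {"read_logs": read_logs, "query_staging_db": query_staging_db}

    def dispatch_tool_call(tool_name: str, args: dict):
        """Execute a model-requested tool call only if it is registered."""
        if tool_name not in TOOL_REGISTRY:
            # The model can still *ask* for a destructive action; the platform
            # refuses, regardless of how the prompt was worded.
            raise PermissionError(f"tool '{tool_name}' is not available to this agent")
        return TOOL_REGISTRY[tool_name](**args)

    # Fails safely even if the model ignores every "do not" in its instructions:
    # dispatch_tool_call("delete_production_database", {"confirm": True})

The point is that "don't do X" lives in code the model can't talk its way around, not in a prompt it might.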
I absolutely agree with you.
Also, some folks seem to be forgetting the virtues of boring, time-tested platforms & technologies in their rush to embrace the new & shiny & vibe-***ed. & also forgetting to thoroughly read documentation. It’s not terribly surprising to me that an “AI-first” infrastructure company might make these sorts of questionable design decisions.
Sounds like a pretty efficient self-correcting mechanism.
I'm not sure what the problem is there.
The problem is that destruction isn't contained to the company. If an AI agent exposes all company data and that includes PII or health information, that could have an impact on a large number of people.
PII breaches have been pretty consistently a problem for the last several decades, predating modern LLMs.
So that is a structural problem with their data and security management and operations, totally independent of the architecture for doing large scale token inference.
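To illustrate "independent of the inference architecture": the same least-privilege and redaction discipline applies whether the thing reading the data is an LLM agent, a cron job, or an intern's script. A hypothetical Python sketch, with field names and the SSN pattern chosen purely for illustration:

    # Hypothetical sketch: strip sensitive fields before any record leaves the
    # data layer, no matter what kind of consumer asked for it.
    import re

    SENSITIVE_COLUMNS = {"ssn", "dob", "diagnosis"}  # never handed downstream

    def project_row(row: dict) -> dict:
        """Return only the non-sensitive fields of a record."""
        return {k: v for k, v in row.items() if k not in SENSITIVE_COLUMNS}

    def redact_free_text(text: str) -> str:
        """Crude example: mask anything shaped like a US SSN."""
        return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

    record = {"name": "A. Patient", "ssn": "123-45-6789",
              "diagnosis": "example", "note": "SSN 123-45-6789 on file"}
    safe = project_row(record)
    safe["note"] = redact_free_text(safe["note"])
    print(safe)  # {'name': 'A. Patient', 'note': 'SSN [REDACTED] on file'}

If that layer doesn't exist, swapping the LLM out for something else doesn't make the PII problem go away.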
Normalisation of deviance is the problem: https://en.wikipedia.org/wiki/Normalization_of_deviance
Remember that these models are getting better; this means they get trusted with increasingly important things by the time an error explodes in someone's face.
It would be very bad if the thing which explodes is something you value which was handed off to an AI by someone who incorrectly thought it safe.
AI companies which don't openly report that their AI can make mistakes are being dishonest, and that dishonesty would make this normalization of deviance even more prevalent than it already is.
That's not a technical/AI problem in any sense; it's a social problem of organizing and coordinating control structures.
Further, it's only a problem to the extent that the downsides or risks are not accounted for, which again… is a social problem, not a technological one.
This isn't a problem for organizations with well-aligned incentives across their workflows.
A well-organized company with solid incentives is not going to diminish its own capacity by prematurely deploying a technology that isn't actually capable of delivering improvements.
The issue is that 99% of the organizations people deal with have incentives entirely orthogonal to theirs. People then attribute the pain of dealing with those organizations to the technology rather than to the misaligned incentives.