Comment by FrasiertheLion
9 hours ago
AI has normalized single 9's of availability, even for non-AI companies such as GitHub that have had to rapidly adapt to AI-driven scaleups in usage patterns. Understandably so, because GPU capacity is pre-allocated months to years in advance, in large discrete chunks, to either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep spades of reserve capacity, especially these days, when supply chains are already under great strain and chip production is becoming the bottleneck. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some cases), all the new people would be disappointed and it would damage trust.
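For a sense of scale, the gap between one nine and three is enormous; a back-of-envelope Python sketch, using nothing but the textbook definitions (no actual SLA figures implied):

    # Allowed downtime per year for N nines of availability.
    # Standard arithmetic only; no provider's actual SLA is implied.
    HOURS_PER_YEAR = 24 * 365

    for nines in range(1, 5):
        availability = 1 - 10 ** -nines        # 0.9, 0.99, 0.999, 0.9999
        downtime_hours = HOURS_PER_YEAR * (1 - availability)
        print(f"{nines} nine(s): {availability:.2%} up, "
              f"~{downtime_hours:.1f} h of downtime per year")

    # 1 nine  -> ~876 h/year (well over a month of outage)
    # 3 nines -> ~8.8 h/year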
Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.
That's supposing the autonomous kill chain needs more than one 9. There are wars going on right now with less than 20% targeting accuracy.
> AI has normalized single 9's of availability, ...
FWIW I use AI daily to help me code...
And apparently the output of LLMs is normalizing single 9's too, which may or may not be sufficient.
Between all the security SNAFUs, the performance issues, the gigantic amount of kitchen-sink boilerplate generated (which will require maintenance, and that has always been the killer), and now the uptime issues, I'm realizing we all need to use more of our brains, not less, to use these AI tools. And that's not even counting the times when the generated code simply doesn't do what it should.
For a start, if you don't know jack shit about infra, it looks like you're already in for a whole world of hurt: when that agent rm -rf's your entire Git repo and FUBARs your OS because you had no idea how to compartmentalize it, you'll feel bad. Same once all your secrets end up publicly exposed.
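To be concrete about what "compartmentalize" could mean at the crudest level: confine whatever the agent wants to run to a throwaway directory, strip its environment, and refuse the obviously destructive stuff. A rough Python sketch, assuming a hypothetical run_agent_command wrapper (the name and the blocklist are illustrative, not any actual tool's API); a real setup would use a container or VM:

    import shlex
    import subprocess
    from pathlib import Path

    # Hypothetical guard around shell commands proposed by an agent:
    # run them in a throwaway sandbox, without inherited env vars
    # (so API keys and other secrets don't leak), and refuse a few
    # obviously destructive commands outright.
    SANDBOX = Path("/tmp/agent-sandbox")
    BLOCKED = {"rm", "sudo", "dd", "mkfs", "chmod", "chown"}

    def run_agent_command(command: str) -> subprocess.CompletedProcess:
        SANDBOX.mkdir(parents=True, exist_ok=True)
        argv = shlex.split(command)
        if not argv or argv[0] in BLOCKED:
            raise PermissionError(f"refusing to run: {command!r}")
        return subprocess.run(argv, cwd=SANDBOX, env={},
                              capture_output=True, text=True, timeout=60)

It won't stop a determined footgun, but at least the default blast radius isn't your home directory and a shell history full of tokens.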
It looks like you won't just need strong coding fundamentals anymore: you'll also need to be at ease with the entire stack. Learning to be a "prompt engineer" definitely sounds like the easy part. Trivial, even.
"It's fine, everyone does it"
There's probably a curve of diminishing returns on how much effort you throw at improving uptime, and chasing it also directly drives the degree of overengineering around it.
I'm not saying that should excuse straight-up bad engineering practices, but I'd rather have them iterate on the core product than pursue near-perfect uptime (and maybe even make their Electron app more usable: switching conversations shouldn't sometimes take 2-4 seconds when those should be stored locally, and there should be some bare-minimum indicator that something is happening, instead of "Let me write a plan" followed by nothing that distinguishes progress from a silently dropped connection).
Sorry about the usability rant, but my point is that I'd expect medical systems and planes to have amazing uptime, whereas I wouldn't be so demanding of most other, lower-stakes things. The context I haven't mentioned so far is that I've seen whole systems get developed poorly because the architecture was overengineered and crippled the ability to iterate, sometimes in anticipation of scale when a simpler but better-developed architecture would have sufficed!
Ofc there's a difference between occasionally having to wait in a queue for a request to be serviced, or having a few requests dropped here and there and needing to retry them, vs. your system having a cascading failure it can't automatically recover from and that brings it down for hours. Not having enough cards feels like it should result in the former, not the latter.
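The former is also the case a client can absorb on its own with a bounded retry; a rough Python sketch (the 429/503 handling is generic overload signalling, not any particular API's documented contract):

    import random
    import time

    import requests

    # Retry on overload responses with exponential backoff plus jitter.
    # Queueing or shedding under load is recoverable client-side;
    # a cascading failure that takes the service down for hours is not.
    def post_with_retry(url: str, payload: dict, attempts: int = 5):
        for attempt in range(attempts):
            resp = requests.post(url, json=payload, timeout=30)
            if resp.status_code not in (429, 503):   # not overloaded: done
                return resp
            if attempt < attempts - 1:
                time.sleep(2 ** attempt + random.random())
        return resp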
I kind of agree. The AI train depends more on having a cute user interface than on actually being reliable.