Comment by TedDallas
18 hours ago
Per Anthropic’s RCA linked in the Ops post about the September 2025 issues:
“… To state it plainly: We never reduce model quality due to demand, time of day, or server load. …”
So according to Anthropic, they are not tweaking quality settings due to demand.
And according to Google, they always delete data if requested.
And according to Meta, they always give you ALL the data they have on you when requested.
>And according to Google, they always delete data if requested.
However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.
What would you like?
An SLA-style contractually binding agreement.
That's about model quality. Nothing about output quality.
I guess I just don't know how to square that with my actual experiences then.
I've seen sporadic drops in reasoning ability that made it feel like January 2025, not 2026 ... it's been inconsistent.
LLMs sample the next token from a conditional probability distribution; the hope is that dumb sequences are less probable, but they will still happen naturally from time to time.
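For the curious, here's a toy sketch of that sampling step (everything here is made up for illustration: a four-token vocabulary and hand-picked logits, where a real model has tens of thousands of tokens):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in vocabulary and logits for a single next-token decision.
    vocab = ["the", "a", "yes", "no"]
    logits = np.array([3.0, 2.5, 0.5, 0.2])

    def sample_next_token(logits, temperature=1.0):
        # Temperature rescales logits before softmax; higher values
        # flatten the distribution and surface unlikely tokens more often.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    # Same model, same settings, 10,000 draws: the low-probability
    # tokens still get picked sometimes. No load-based tuning needed.
    counts = np.bincount(
        [sample_next_token(logits) for _ in range(10_000)],
        minlength=len(vocab),
    )
    for token, n in zip(vocab, counts):
        print(f"{token!r}: {n}")

Run it a few times and the unlikely tokens still show up; that's the "dumb sequences happen naturally" part.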
Funny how those probabilities consistently degrade at 2pm UK time, when all the Americans come online...
It's more like the choice between "the" and "a" than "yes" and "no".
I wouldn't doubt that these companies would deliberately degrade performance to manage load, but it's also true that humans are notoriously terrible at identifying random distributions, even with something as simple as a coin flip. It's very possible that what you view as degradation is just "bad RNG".
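A quick simulation of just the coin-flip point (unrelated to any actual model, purely to show how streaky fair randomness looks):

    import random

    random.seed(42)

    def longest_run(n_flips):
        # Flip a fair coin n_flips times; return the longest streak
        # of identical outcomes (heads or tails).
        flips = [random.random() < 0.5 for _ in range(n_flips)]
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    # Across 1,000 sessions of 100 flips each, a streak of six or
    # more identical results is the norm, not the exception.
    trials = [longest_run(100) for _ in range(1_000)]
    print("median longest streak:", sorted(trials)[len(trials) // 2])
    print("share with a streak >= 6:", sum(t >= 6 for t in trials) / len(trials))

If six "bad" responses in a row feels like proof of throttling, note that fair randomness produces runs like that all the time.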
yep stochastic fantastic
these things are by definition hard to reason about
That's what is called an "overly specific denial". It sounds more palatable to say "we deployed a newly quantized model of Opus, and here are cherry-picked benchmarks to show it's the same", and even that they don't announce publicly.