Comment by samusiam

4 hours ago

I'd argue that whether you want to include it depends on how that degradation manifests.

Consider two scenarios: (1) degradation leads to requests being routed behind the scenes to a different server with subtly different performance characteristics, all unbeknownst to the user; (2) degradation leads to the model refusing a request and returning an "overloaded" message.

In the first case, you absolutely want to include it, because that's exactly the kind of opaque performance change you'd want signal on. In the second case, an automated test harness might record a failure, but in the real world the user will simply wait and retry once the server is under less load (a retry-aware harness, sketched below, would behave the same way). Maybe you don't include that one, because it's misleading to report degraded performance when the benchmark will be read as measuring the model's intelligence.
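To make the second scenario concrete, here's a minimal sketch of a retry policy a harness could adopt; the response shape, function names, and retry parameters are all hypothetical, not any particular benchmark's actual API:

```python
import time

def call_with_retry(send_request, prompt, max_retries=5, base_delay=1.0):
    """Retry on "overloaded" responses instead of recording a failure,
    mirroring what a patient user would do.

    send_request is any callable returning a dict shaped like
    {"status": "ok", "text": ...} or {"status": "overloaded"} (assumed shape).
    """
    for attempt in range(max_retries):
        response = send_request(prompt)
        if response.get("status") != "overloaded":
            # A genuine answer (or a genuine failure) -- score this.
            return response
        # Server is shedding load; back off and retry like a human user would.
        time.sleep(base_delay * 2 ** attempt)
    # Only count it against the model if it never recovers.
    return {"status": "error", "reason": "still overloaded after retries"}
```

Under a policy like this, transient overload drops out of the score entirely, while the silent rerouting in scenario (1) still shows up in the results, which is exactly the distinction between the two cases.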