I think there's a pattern: it always gets nerfed in the few weeks before a new model launches. Probably because they're throwing a bunch of compute at the new model.
Yeah, maybe that, but at least let us know about it, or have dynamic limits? Nerfing breaks trust.
Though I'm not sure they actually nerf it intentionally. I haven't heard it from any credible source; I did experience it in my own workflow, though.
Bad news: John Google told me they already quantized it right after the benchmarks were done, and it sucks now.
I miss when Gemini 3.1 was good. :(
What are you talking about?