Comment by eaf7e281
17 hours ago
I think they changed the quantization to save compute on their new model. That might be why the benchmark scores look good while real-world performance is much worse. I wonder whether they tested the model internally and just didn't find anything wrong with the new setup.
I canceled my subscription and switched to Codex, but it's not as good. I'm tired of Anthropic changing things all the time. I use Claude because it doesn't redirect you to a different model like OpenAI does. But now it seems like both companies are doing the same thing in different ways.
Claude is worse: they don't tell you when your experience has degraded, and they don't even let you fall back to a weaker model when you run out of usage.
I mean, OpenAI does the same, even worse: they swap the model outright, like GPT 5.4 down to the -mini variant.
Anthropic, for now at least, only seems to change the quantization of the model.
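For anyone unfamiliar with what "changing the quantization" would mean in practice: it's storing the weights at lower numeric precision, which cuts memory and compute but adds rounding error. Here's a minimal sketch of symmetric int8 quantization (the scheme and numbers are purely illustrative; nothing here is confirmed about what Anthropic actually does):

```python
import numpy as np

# Illustrative weights, roughly the scale you'd see in a trained layer.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=1000).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127
quantized = np.round(weights / scale).astype(np.int8)

# Dequantize and measure the round-trip error the model now carries.
dequantized = quantized.astype(np.float32) * scale
max_err = float(np.abs(weights - dequantized).max())

print(f"storage: {weights.nbytes} B -> {quantized.nbytes} B (4x smaller)")
print(f"max round-trip error: {max_err:.6f} (bounded by scale/2 = {scale/2:.6f})")
```

The 4x storage savings is exactly why a provider might quietly do this at inference time, and the rounding error is exactly why users might notice quality drop even while benchmark averages stay close.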