Comment by sergiopreira
9 hours ago
An interesting question is whether the new tokenizer is better at something measurable or just denser. A denser tokenizer with worse alignment to semantic boundaries costs you twice: a higher bill and worse reasoning. A denser tokenizer that actually carves at the joints of the model's latent space pays for itself in quality. Nobody outside Anthropic can answer which it is without their eval suite, so the rug-pull read is fair but premature. Perhaps the real tell will be whether 4.7 beats 4.6 on the same dollar budget, on the benchmarks you care about, not on the per-token ones Anthropic publishes.
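To make the "same dollar budget" comparison concrete, here's a minimal sketch: every number below (token counts, per-million-token prices) is a made-up placeholder, not real pricing. The point is just that the fair unit is tasks per dollar, not price per token.

```python
# Equal-dollar comparison of two hypothetical models whose tokenizers
# produce different token counts for the same corpus of tasks.

def cost_per_task(tokens_per_task: float, price_per_mtok: float) -> float:
    """Dollar cost to run one task at a given per-million-token price."""
    return tokens_per_task * price_per_mtok / 1_000_000

# Hypothetical numbers: the *same* prompt corpus measured under each tokenizer.
old = cost_per_task(tokens_per_task=1200, price_per_mtok=3.0)  # older model
new = cost_per_task(tokens_per_task=900, price_per_mtok=4.5)   # denser tokenizer, higher price

# Equal-dollar budget: how many tasks does each model run for $1?
tasks_old = 1 / old
tasks_new = 1 / new

print(f"old: ${old:.5f}/task, {tasks_old:.0f} tasks per dollar")
print(f"new: ${new:.5f}/task, {tasks_new:.0f} tasks per dollar")
```

If the denser tokenizer also carries a price hike, as here, it can end up running fewer tasks per dollar despite using fewer tokens, so the quality lift has to cover the gap.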