
Comment by wobfan

10 hours ago

Usually A/B testing happens only on the surface: when you actually subscribe, you get the "better" of the possible terms.

Like, they're just advertising different terms to test how many people would still click and likely start the subscription process; after the click, everyone is routed back to the usual terms. Changing the whole payment flow, account model, and permissions in the backend just for a quick test is usually too much work.

But yes, basically, if you're in group B rather than A, and B has objectively worse terms than A, then you're just unlucky. That's the essence of A/B tests, though. Basically every company runs them all the time, because it's the most straightforward and simple way to test new terms or designs.

I don't think anyone is questioning the idea of A/B testing in general, but we are confused about how it works in practice when you're advertising a service. Can you give me an example of this happening elsewhere? Or are you saying it's so normalized that there would be no record or news articles about it, and this is only newsworthy because of "AI"?

I have never heard of a company intentionally advertising fewer features to only some of its customers to see if they'll still bite.

And we still have no actual confirmation that Claude is giving people who fall into that 2% of testers the full service, as opposed to the advertised one.