Comment by phildeschaine
13 years ago
OK, in A/B testing that same 0% design, you're showing it 50% of the time.
You seem to be saying, "I'll A/B test it just for a little while, then weed out the 0% one, but in the case of this new algorithm, I'll let it run for a long time." That's not exactly fair. Not to mention, both approaches would let you clearly see that the 0% option sucks.
But the only way this testing method is superior (at least as explained in the article) is that it automatically adjusts itself. If you're going in and adjusting manually, it sounds like this is — at best — precisely as reliable as A/B testing and subject to the same critique the OP levels at A/B testing.
"But the only way this testing method is superior (at least as explained in the article) is that it automatically adjusts itself."
That's actually very useful for me, though, especially if a site has a lot of tests or I'm running tests for a multitude of clients. It means I have to babysit the tests less frequently.
It will adjust itself in the sense that, while the test is still enabled and running (a suboptimal state for your product), it automatically shifts traffic toward the better-performing option, so your product performs better than it would under plain old A/B (see the sketch below).
At some point you still step in, decide based on the data (especially if you detect degenerate cases), and move on to the next experiment.
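For anyone who hasn't read the article: "adjusts itself" just means the algorithm keeps track of observed conversion rates and routes most traffic to whichever option is currently winning, while still occasionally exploring the others. Below is a rough Python sketch of an epsilon-greedy bandit along those lines; the EpsilonGreedy class, the option names, and the conversion rates are all made up for illustration, not taken from the article.

```python
import random

class EpsilonGreedy:
    """Toy epsilon-greedy bandit; illustrative only."""

    def __init__(self, options, epsilon=0.1):
        self.epsilon = epsilon                  # fraction of traffic spent exploring
        self.shows = {o: 0 for o in options}    # impressions per option
        self.wins = {o: 0 for o in options}     # conversions per option

    def rate(self, option):
        # Observed conversion rate; unseen options start at 1.0 so each gets tried.
        return self.wins[option] / self.shows[option] if self.shows[option] else 1.0

    def choose(self):
        # Explore a random option epsilon of the time, otherwise show the current best.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        return max(self.shows, key=self.rate)

    def record(self, option, converted):
        self.shows[option] += 1
        if converted:
            self.wins[option] += 1

# Made-up conversion rates, including a hopeless "0%" design.
true_rates = {"orange": 0.05, "green": 0.03, "broken": 0.00}
bandit = EpsilonGreedy(true_rates)
for _ in range(10000):
    option = bandit.choose()
    bandit.record(option, random.random() < true_rates[option])
print(bandit.shows)   # the broken variant ends up with only a sliver of the traffic
```

Because the losing option quickly stops getting more than epsilon's worth of traffic, the cost of leaving the test running is far lower than a 50/50 split, which is the whole point of the self-adjustment.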