Statistically insignificant means you didn't prove anything by usual standards. I do agree that it's not a waste, as knowing that you have a 70% chance that you're going in the right direction is better than nothing. The 2 sigma crowd can be both too pessimistic and not pessimistic enough.
Statistical significance depends on what you're trying to prove. Ruling out substantial harm is a lot easier than figuring out which option is simply better, depending on how much harm counts as substantial.
If the error bar for some change runs from -4 percent to +6 percent, the change may or may not be better, but it's safe to switch to.
To prove that a change isn't harmful is still a hypothesis.
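A rough sketch of that "-4% to +6%" reasoning, assuming a simple two-proportion z interval for the lift of variant B over variant A (the counts and the -1% harm floor are invented for illustration):

```python
from math import sqrt

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% CI for the conversion lift of B over A, as a fraction (0.02 = 2%)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference of two independent proportions.
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    d = p_b - p_a
    return d - z * se, d + z * se

lo, hi = diff_ci(500, 10_000, 510, 10_000)

# The interval straddles zero, so we haven't proven B is better -- but if
# the lower bound clears our (hypothetical) "substantial harm" floor of -1%,
# the switch is safe in the sense discussed above.
safe_to_switch = lo > -0.01
```

The point is that "not harmful by more than X" is its own hypothesis with its own, usually much easier, power requirement than "B beats A".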
You can still enshittify something by degrees this way.
I think the disconnect here is between people who treat A/B testing as something you try once a month, and someplace like Amazon, where you do it all the time with hundreds of employees poking at things.