Most A/B Tests Are a Waste of Time
I’d rather run no A/B tests than A/B test every product feature.
Somewhere along the line in tech, we went from saying, “I’m not sure—let’s A/B test it to find out” to “Did you A/B test that before launching it?”
There’s a huge difference between these two statements.
In the first, A/B testing is used to validate a specific hypothesis.
In the second, it’s used to justify building a feature under a veil of objectivity.
Not everything needs to be A/B tested. It’s not a substitute for learning from customers or for developing real product intuition.
A/B testing should be used sparingly, and only after you’ve generated a hypothesis. If any of the following conditions holds, you shouldn’t A/B test:
You have sufficient conviction from customer insights
The feature is too small to move the needle either way
A/B testing is going to tell you something not worth knowing
You don’t have a sufficient sample to have confidence in the outcome (a quick power calculation, sketched after this list, will tell you)
You need to move fast and you’re testing something easily reversible (this is true most of the time for companies big and small)
The thing you’d test is not something that a scientific test can properly measure (e.g., your brand)
You’re making strategic choices at the global level rather than optimizing locally
You already know what the answer will be
Your product is highly interdependent, so a change in one place shifts behavior elsewhere and muddies the result
The change you intend to make is not reversible
The primary goal is to cover yourself rather than to learn
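
On the sample-size condition: run the arithmetic before you run the experiment. Here’s a minimal sketch, assuming Python with the statsmodels package; the baseline rate, expected lift, and thresholds are illustrative stand-ins, not recommendations.

```python
# Back-of-the-envelope sample-size check before committing to a test.
# Assumes the statsmodels package; every number below is illustrative.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # current conversion rate
expected = 0.11   # the lift you'd need to see to call the test a win

# Cohen's h for two proportions (ordered so the effect size is positive)
effect = proportion_effectsize(expected, baseline)

n_per_variant = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,             # 5% false-positive rate
    power=0.80,             # 80% chance of detecting a real lift
    alternative="two-sided",
)
print(f"~{n_per_variant:,.0f} users per variant")  # roughly 7,400
```

If the number that comes out exceeds the traffic you can realistically push through the test in a few weeks, you’re in the insufficient-sample bucket above: skip the test and lean on customer conversations instead.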
The list of reasons not to A/B test is much longer than the list of reasons to A/B test.
So when should you A/B test?
A/B test when you have something important to learn and none of the above conditions are true.
Here’s an ideal scenario for when A/B testing makes sense.
You talk to customers and develop a set of testable hypotheses based on your best diagnosis of the problems surfaced in their feedback.
From there, ask yourself: does this actually need to be A/B tested?
If you have sufficient conviction from the user feedback, and the change is reversible, you can just ship it. You don’t need to A/B test. So the ideal scenario is not having to A/B test.
Only when you have two equally plausible hypotheses should you A/B test. When the outcome is a genuine coin flip, shipping blindly teaches you nothing; resolving that kind of uncertainty is precisely where A/B testing belongs.
One last issue with A/B testing is that it keeps you thinking small. No one has ever optimized their company into something bigger and bolder by A/B testing. It’s usually done defensively.
Large companies usually have the resources to optimize everything, so A/B testing can be helpful. And given that big companies have something to lose, being defensive can be a good strategy.
If I were a small company, however, I’d avoid A/B testing entirely.
This doesn’t mean you should ignore data. Quite the opposite. It just means you rely on a different type of data—real conversations with customers.
