Booking.com runs around 27,000 experiments every year. Google, Facebook, and Amazon each run more than 10,000 online controlled experiments annually, according to Wikipedia.
Of course, these figures reflect activity across entire organizations, not individual teams. The right number of tests for your group depends heavily on your industry, your department’s role, and your goals.
Take two extremes:
- A marketing team running paid ads might test multiple ad creatives every week. They have high traffic, quick feedback loops, and low switching costs between variants.
- A product team, by contrast, might run a single checkout flow experiment for several weeks or even months, because user actions inside the product occur less often and changes carry a higher risk.
So how many experiments are enough?
Your Traffic Sets Your Ceiling
Your available traffic determines how many statistically reliable tests you can run. If your traffic is modest (say, around 80,000 monthly visitors), you might only be able to run about 15 meaningful tests per year without sacrificing accuracy.
As your traffic grows, so does your testing capacity. That’s how companies like Amazon and Booking.com can run thousands of experiments: they have the audience volume to support it.
Generally speaking, each test needs just enough traffic to reliably detect your minimum detectable effect (MDE), so if you want to run more tests, you need to aim for changes with a larger potential impact.
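To make that concrete, here’s a rough back-of-the-envelope sketch using Python and statsmodels. Every input here (the baseline conversion rate, the relative MDE, the significance level and power, and the traffic figure) is an illustrative assumption, not a number from any specific product:

```python
# Back-of-the-envelope testing capacity for a 50/50 A/B test.
# All inputs below are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.05        # assumed baseline conversion rate (5%)
relative_mde = 0.10         # smallest lift worth detecting (10% relative)
alpha, power = 0.05, 0.80   # conventional significance level and power
monthly_visitors = 80_000   # assumed traffic, split evenly across two variants

target_rate = baseline_rate * (1 + relative_mde)
effect_size = proportion_effectsize(target_rate, baseline_rate)  # Cohen's h

# Visitors needed in EACH variant to detect the MDE at the chosen alpha/power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
visitors_per_test = 2 * n_per_variant

tests_per_year = (monthly_visitors * 12) / visitors_per_test
print(f"~{visitors_per_test:,.0f} visitors per test "
      f"=> roughly {tests_per_year:.0f} sequential tests per year")
```

With these assumptions the math lands right around the ~15 tests a year mentioned above. Change any input, especially the MDE, and the ceiling moves with it.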
When You Can Only Run a Few Tests a Year…
If you only get a few chances to run A/B tests each year, every single one has to count.
Sure, you should only run a test when the value of what you’ll learn is worth the cost. But moving slowly means you’re not learning what to do next, and that delay can set you back even more.
And cost isn’t just the hours your team spends setting up tests and digging through results. It’s also the ideas you can’t try while you’re tied up with one test, and the sales you risk losing if your new idea flops while it’s running.
So… how do you move faster when you only get a few shots at testing and you want your next call to be a winner?
One approach is to shave down the time you spend waiting on results so you can squeeze in an extra test or two. The most effective way to do that is to test bigger, bolder changes where you can justify a higher MDE: the larger the effect you’re aiming for, the less traffic each test needs. The drawback is that if the change doesn’t move the needle as much as you hoped, you’re more likely to end up with an inconclusive result.
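To see how much speed a bigger swing buys you, here’s the same back-of-the-envelope math looped over a few relative MDEs, reusing the illustrative assumptions from the earlier sketch (5% baseline conversion, 80,000 visitors per month):

```python
# How the relative MDE you target changes how long each 50/50 test must run.
# Same illustrative assumptions as the earlier sketch.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.05
weekly_visitors = 80_000 * 12 / 52  # rough weekly traffic

for relative_mde in (0.05, 0.10, 0.15, 0.20):
    effect = proportion_effectsize(baseline_rate * (1 + relative_mde), baseline_rate)
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80
    )
    weeks = 2 * n_per_variant / weekly_visitors
    # In practice you'd still run at least one full week to cover traffic cycles.
    print(f"relative MDE {relative_mde:.0%}: ~{weeks:.1f} weeks per test")
```

Halving the MDE roughly quadruples the traffic each test needs, which is why small tweaks eat your calendar while bold changes free it up.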
One trick to help ease this downside is to borrow what competitors have already learned. Not by copying their work word for word (that’s lazy and rarely fits your audience anyway), but by treating their experiments as free research. They’re paying for the traffic and running the tests; you’re just watching, figuring out what works, and reshaping the ideas to fit your product, brand, and customers. It’s like getting bonus test slots without actually running them yourself.
Conclusion
There's no optimal number of tests you should run per year. It depends on your traffic, your goals, and your team's capacity. But the more you test, the more you learn, and the more you can grow.
Countless product teams are chasing the big wins they see from industry leaders, but the truth is those wins are built on years of disciplined experimentation, not one-off sparks of genius. The companies you admire have spent years learning the subtle art of running enough reliable tests to separate real signals from noise.
You already have the ability to run experiments as rigorously as the giants. You just need to respect your traffic limits, plan your tests strategically, and commit to the practice.