Run faster A/B tests with this little-known Optimizely trick

What if I told you you could run tests up to 2X faster using a standard Optimizely feature hardly anyone uses?

The concept isn't difficult, but so many people get it wrong that it easily costs marketers and product owners millions of dollars a year in lost revenue. I've made the mistake myself, so I want to share it with you so you don't have to learn it the hard way!

Let's say you're a product manager at Widgets.ly, a SaaS app designed to help business owners make widgets faster and more easily. As it stands, 15% of visitors to your landing page click a "start my free trial" CTA button, which takes them to a dedicated sign-up page. From there, about 50% actually fill out the form to create a free trial, so your conversion rate from landing page to sign-up is 15% × 50% = 7.5%. Not bad, but you'd like it to be higher.

A hypothetical SaaS company landing page. 15% of visitors click the red CTA button, and of those, 50% actually sign up.

Enter the Sign-Up Modal Split Test

You and your design team brainstorm ways to reduce the perceived friction of the sign-up experience. Rather than taking visitors to a separate page to sign up, you decide to test letting visitors sign up via a modal right on the landing page. You think this new experience could lift sign-ups by 10%, which would boost the conversion rate of visitors who reach the sign-up form from 50% to 55%.

After showing your developer some wireframes, she suggests using Optimizely to override the CTA button's anchor behaviour so that it triggers a modal for visitors who are bucketed into the modal variant. She sets up the experiment to activate automatically as soon as the page loads. Simple enough.
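Roughly, the variant code might look like the sketch below. The #cta-signup selector and the showSignupModal() helper are hypothetical names for illustration, not part of Optimizely:

```typescript
// Variant code, run automatically at page load for visitors bucketed
// into the modal variant. The selector and helper are hypothetical;
// your real modal renderer would live in your app's own codebase.
declare function showSignupModal(): void;

const cta = document.querySelector<HTMLAnchorElement>('#cta-signup');
cta?.addEventListener('click', (event) => {
  event.preventDefault(); // don't navigate to the sign-up page
  showSignupModal();      // open the sign-up modal in place
});
```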

The Problem

Your boss is wondering how long this test needs to run before you can declare a winner, assuming you get the 10% lift you're aiming for. You scurry over to your handy sample size calculator and start running the numbers. Since your baseline conversion rate is 7.5% (15% of landing page visitors click the CTA, and half of them complete the sign-up), you'll get an improved conversion rate of 8.25% if things work out the way you're expecting. Looks like you'll need 20,000 visitors per branch to get a significant result. You're going to split visitors 50/50 into both variants, so assuming your landing page gets 1,000 visitors a day (500 per branch), you'll need to run your test for 40 days.
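If you're curious what the calculator is doing under the hood, here's a rough sketch of the standard normal-approximation formula for comparing two proportions at 95% confidence and 80% power. Your calculator's exact output will vary slightly with its assumptions:

```typescript
// Back-of-the-envelope per-branch sample size for a two-proportion
// test, using the normal approximation (alpha = 0.05 two-sided,
// power = 0.80). A sanity check, not a replacement for your calculator.
function sampleSizePerBranch(p1: number, p2: number): number {
  const zAlpha = 1.96; // z-score for 95% confidence, two-sided
  const zBeta = 0.84;  // z-score for 80% power
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / (p2 - p1) ** 2);
}

console.log(sampleSizePerBranch(0.075, 0.0825)); // ~20,000 per branch
console.log(Math.ceil(20000 / 500));             // 40 days at 500/branch/day
```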

That's a long time to wait for results! If all your tests take this long, you'll only be able to run a handful of tests a year. No bueno.

The problem with simply changing the button behaviour for everyone who lands on the page is that only a fraction (15%) of landing page visitors ever click the CTA. Running the experiment for everyone means the 85% of visitors who never click the CTA still get bucketed into your sample. These people never even get a chance to experience your new design, so they convert identically (at 0%) in both branches. That's a ton of dead weight diluting the effect you're trying to detect, and noisy tests take way longer to run.

The Solution

The solution is to activate your experiment only once people click your CTA button. Evan Miller calls this lazy assignment, and it's a hugely important concept in A/B testing. To do this, you'll need to switch your experiment from automatic activation to manual activation. You'll probably need help from your developer to set this up, but a few hours of extra up-front dev time will save you weeks of test runtime.
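As a rough sketch, with Optimizely Classic you'd set the experiment's activation mode to Manual in the editor, then call the activation API yourself when the CTA is clicked. The selector and experiment ID below are placeholders, and newer versions of Optimizely expose activation differently, so check the docs for your version:

```typescript
// Manual activation sketch using Optimizely Classic's activation API.
// The experiment must be set to "Manual" activation in Optimizely;
// the #cta-signup selector and experiment ID are placeholders.
const EXPERIMENT_ID = 1234567890;

document.querySelector('#cta-signup')?.addEventListener('click', () => {
  const w = window as any; // the Optimizely snippet defines window.optimizely
  w.optimizely = w.optimizely || [];
  // The visitor is bucketed (and variant code runs) only on click.
  w.optimizely.push(['activate', EXPERIMENT_ID]);
});
```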

By activating the experiment only when visitors click the CTA, you get a much better signal-to-noise ratio. Instead of trying to move your baseline from 7.5% to 8.25%, you're aiming to move it from 50% to 55%. Plugging that into your sample size calculator, you see you now need only 1,500 visitors per side, rather than the 20,000 you needed before. The catch, of course, is that you'll also get less traffic into your test. So does it actually save you time?

Since you get 1,000 visitors a day and 15% of them click your CTA, manual activation sends 150 visitors a day into the test, or 75 per side. So to reach your required 1,500 visitors per side, you'll need to run the test for 20 days. That's 2X faster than with automatic activation. Not bad, considering your traffic hasn't changed at all; you've just changed who gets counted.
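Running the same back-of-the-envelope calculation from earlier bears this out:

```typescript
// Same hypothetical helper as before, with the new baseline.
console.log(sampleSizePerBranch(0.50, 0.55)); // ~1,500 per branch
// 1,000 visitors/day x 15% click rate = 150 entrants/day, 75 per side.
console.log(Math.ceil(1500 / 75));            // 20 days
```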

Why it Matters

One reason this matters is that you'll avoid throwing away perfectly good variants just because you didn't want to wait so long for results to come in. If you had run the test the slow way, chances are you'd have given up halfway through, slapping the dubious "inconclusive results" label on your test before you'd gathered enough traffic (i.e., committed a type II error, for the stats geeks out there). I'm sure plenty of well-meaning optimizers have had winning tests come back insignificant for this reason alone.

But more importantly, getting faster results is critical to building a successful A/B testing program. Sean Ellis calls this "high tempo testing", and if you haven't seen his video on the subject, it's a must-watch. One of the single best predictors of an optimization program's success is test velocity, so anything that lets you make decisions faster will pay big dividends in the growth of your product!

Mike Fiorillo

Toronto

Mike Fiorillo is a growth marketing and customer acquisition consultant based in Toronto, Canada. He is the lead optimizer at swiftCRO, a conversion rate optimization consultancy.

Skills:
SEO, CRO, Google Analytics, Optimizely

Interests:
Tech, Espresso, Design, Travel, Photography, Software, Minimalism