Most common A/B testing ideas that don't work

Everyone who has run at least one A/B test (or is planning to) should understand which ideas actually work and which only look like they might. Sometimes that's hard to tell.

A lot of ideas look great on paper. They should work and boost website conversion, but they don't. Why? I asked experts about the most common A/B testing ideas that don't actually work. Here is what they said.

Brian Massey

Conversion Sciences, The Conversion Scientist™


Let’s define “ideas that don’t work”. Something won’t work for any of the following reasons:

– An idea tests as inconclusive or reduces revenue or leads.

– A test changes more than one variable so we don’t know what influenced the visitors.

– A test is not statistically valid due to small sample size.

– A test changes something that doesn’t deliver insights.

– A test shows a “win” but doesn’t deliver extra revenue or leads when rolled out.

No idea always fails, just as no idea always succeeds. Given the list of failure states, the things to avoid are pretty straightforward.

Inconclusive tests happen.

However, it's crazy to test an idea that an executive dreamed up with no research or ranking. The best things to test have ample evidence in analytics that there is a problem. Research and rank your hypotheses for expected ROI. Corollary: don't test something you've seen on your competitors' websites without research.

Testing a complete redesign.

You can often increase the conversion rate of a landing page by redesigning it, but changing it step by step will tell you what helps and what hurts.

Editor's note: About two weeks ago Yandex (the Russian search engine) completely redesigned kinopoisk.ru, the biggest movie website in Russia (a Russian IMDb clone). The result was horrible: tons of negative comments from users. Someone even created a website with a single question, "Do you like the new design of kinopoisk.ru?", and 97% of respondents said "No".

The project's CMO was forced to leave. The core of the development team quit because they didn't agree with the company's terms. Yandex had to roll back to the old version.

So think twice before testing a complete redesign.

Testing on pages with little traffic.

If you can’t get a statistically significant sample before two months have passed, you’re probably testing on the wrong page.

Editor's note: Low traffic is not the only thing to watch. If you have huge traffic but an almost-zero conversion rate, it means you have the wrong audience.
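To get a feel for the numbers Brian mentions, here is a rough back-of-the-envelope sketch using the standard two-proportion approximation. It is not a tool from the article, and the baseline rate, uplift and weekly traffic figures are made up:

```python
# Rough visitors-per-variant estimate for a conversion A/B test
# (standard two-proportion z-test approximation; all numbers are hypothetical).
from statistics import NormalDist

def visitors_per_variant(baseline, relative_uplift, alpha=0.05, power=0.80):
    """Approximate visitors needed in EACH variant to detect the uplift."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 95% confidence (two-sided)
    z_beta = NormalDist().inv_cdf(power)           # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

n = visitors_per_variant(baseline=0.03, relative_uplift=0.20)  # 3.0% -> 3.6%
print(f"~{n:,.0f} visitors per variant")                       # roughly 13,900

# If the page only gets ~1,000 visits a week, the test needs roughly 28 weeks,
# far past the two-month cutoff, so it's probably the wrong page to test.
weeks = 2 * n / 1_000
print(f"At 1,000 visits/week: ~{weeks:.0f} weeks")
```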

Testing something that offers little insight.

Testing a new font or various button colors may deliver a win, but it won't deliver any insights about the audience to inform your next set of tests.

Editor's note: Before testing you need a hypothesis: "If I change X, then Y will happen and I will get more traffic/conversions/registrations," and so on.

Testing the wrong audience.

If you don’t segment correctly, you’ll find you’re testing an audience that doesn’t exist on your site naturally. Be sure your test audience looks like your website audience.

“Testing requires “rigorous creativity.” Don’t get creative without the rigor.”

John Ekman

Chief Conversionista, Conversionista!


You test where you don’t have enough traffic.

Before you do anything else, make sure that the place where you are about to start testing has enough visitors to produce significant results in your lifetime. We've often come across the best-laid plans and designs of online marketers, only to find that they are nowhere near the traffic they need for that kind of test.

Editor’s note: The same issue was noticed by Brian Massey, so it definitely makes sense.

You test where you have a lot of traffic.

The home page has a special and warm place in the minds of web designers. It's the departure point and the template page for the rest of the website. And it gets a lot of traffic. Nevertheless, benchmark studies, for e-commerce sites in particular, show that homepage tests yield low uplifts, since that page is not an important step in the visitor's purchase decision-making process.

Editor's note: For a SaaS business, in most cases the homepage is the first point of contact with the user.

You test too many things in one test.

You're a fast-paced, results-driven marketer. You want to make bold moves. So you swing for the fences and test the headline AND the hero shot AND the button copy, all in one single test. You wait for the results to come in and they show (drum roll) – nothing. No uplift. Now what do you do? Since you changed many things at once, you have no way of knowing whether you had a killer headline and a lousy image, or the other way around. You're back to square one.

You don’t have a structured discovery process

Testing is easy. Finding out what is test-worthy is more difficult. Too many testers lack a structured discovery process for finding out what matters, and they lack a process to prioritize and plan their testing program. So they end up doing shotgun testing: spraying their tests all over the place, hoping that something sticks.

Editor's note: Create more than one hypothesis and prioritize them from 1 to 5, so you know what to test today and what to test in two weeks.

You call it too soon

You have a test that shows a 90% chance of winning. Why wait for a few more percentage points? 90 is pretty close to 100, right? Let's reel that fish in and start monetizing your uplift. What many miss is that the baseline is 50% – that's what a dart-throwing monkey would achieve, on average. 90% sounds great compared to 0%, and close to 100%. But never forget that calling it at 90% means that 1 in 10 of your tests will fail.
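John's 1-in-10 figure is easy to check with a quick simulation. The sketch below (hypothetical traffic numbers, plain Python) runs many tests in which both variants are truly identical and counts how often a 90% confidence threshold still declares a "winner":

```python
# Simulate A/B tests where both variants are identical and count how often a
# 90% one-sided confidence threshold declares a winner anyway.
# Traffic numbers are made up; the point is the ~10% false-win rate.
import random
from statistics import NormalDist

random.seed(1)
TRUE_RATE = 0.05     # both variants really convert at 5%
VISITORS = 2_000     # per variant, per test
THRESHOLD = 0.90     # "90% chance of winning"
RUNS = 1_000

def confidence_b_beats_a(conv_a, conv_b, n):
    """One-sided confidence that B beats A (pooled two-proportion z-test)."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return 0.5
    z = (conv_b / n - conv_a / n) / se
    return NormalDist().cdf(z)

false_wins = sum(
    confidence_b_beats_a(
        sum(random.random() < TRUE_RATE for _ in range(VISITORS)),
        sum(random.random() < TRUE_RATE for _ in range(VISITORS)),
        VISITORS,
    ) >= THRESHOLD
    for _ in range(RUNS)
)
print(f"'Winners' with zero real uplift: {false_wins / RUNS:.0%}")  # about 10%
```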

Justin Christianson

Co-founder and President, Conversion Fanatics


A few of the common mistakes people make when A/B testing:

Calling tests too early

If you don't run tests long enough to collect sufficient traffic and conversions, you can end up with what is called a false positive: you assume one version is the winner, but once enough conversions have accumulated on the variations it may turn out to be the losing one.

If you call a test after only 10 conversions, even one additional conversion can swing the results dramatically.
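A quick bit of arithmetic shows how violently small conversion counts can move. The visitor and conversion numbers below are hypothetical:

```python
# Why 10 conversions is not enough: one extra conversion looks like a big
# uplift, while the uncertainty on the rate itself is several times larger.
# (Hypothetical counts: 200 visitors per variant.)
a_conversions, b_conversions, visitors = 10, 11, 200

rate_a = a_conversions / visitors                 # 5.0%
rate_b = b_conversions / visitors                 # 5.5%
print(f"Apparent uplift: {(rate_b - rate_a) / rate_a:.0%}")   # +10%

# 95% margin of error on variant A's rate at this sample size:
se = (rate_a * (1 - rate_a) / visitors) ** 0.5
print(f"Rate A = {rate_a:.1%} +/- {1.96 * se:.1%}")           # ~5.0% +/- 3.0%
```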

Letting tests run too long

There are times when you have to call a test before a winner is determined, such as when the results keep bouncing back and forth and you don't have a clear winner (or even a prospective one) despite plenty of traffic and conversions on the variations.

In that case it may be time to analyze the results, cut the test, and move on to something else.

Assuming

Never assume something is going to work or not going to work. Assumptions lead to lost conversions and missed growth opportunities. Always test your assumptions, and use real data from your analytics to build your hypotheses.

Not running A/A tests

People typically jump straight into A/B testing, only to be left disappointed or with conflicting tracking results. If you are unsure about your setup, run an A/A test to confirm that your results can be trusted.
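As a sketch of what "confirming your results" can look like: feed the raw counts from an A/A run into a chi-squared test and check that the two identical arms don't differ significantly. The counts below are invented; a significant difference would point at the bucketing or tracking, not at the page.

```python
# Minimal A/A sanity check: both arms served the SAME page, so any
# statistically significant difference points at the split or the tracking.
# Counts are hypothetical.
from scipy.stats import chi2_contingency

visitors_a, conversions_a = 10_482, 512
visitors_b, conversions_b = 10_391, 548

table = [
    [conversions_a, visitors_a - conversions_a],
    [conversions_b, visitors_b - conversions_b],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Identical variants look different -- check bucketing and tracking.")
else:
    print("No significant difference -- the testing setup looks sane.")
```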

Not testing consistently

We come across companies all the time that test once in a while but don't have an active split-testing process. To gain and keep a competitive advantage, you must test often and constantly strive for better, not just once in a while or when you get around to it. Make a test plan and build it into your marketing strategy.

Keith Hagen

Vice president, ConversionIQ


Here is my answer, based on the 1,000+ tests we ran over the past year (we don't tend to run tests we think aren't going to work, but here is what I know).

Any test not based on Real and Validated Insights.

When a test is suggested by someone "important", it needs to be put into the context of how much insight the idea is based on and how it can be validated. Testing programs are easily derailed, and valuable time and resources are wasted (money lost – all profit) when things that don't matter get tested. Start with team-wide buy-in that all tests will be prioritized on Potential and Difficulty, with tests based on validated insights given priority. Assign a score to each test idea based on factors everyone agrees on, and consider calculating an "Opportunity Cost" that tracks how long an idea waited between being identified and being implemented (so over the long term, wins that get delayed have a "lost revenue" metric assigned to them, reinforcing the overall system of prioritizing properly).
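To make that concrete, here is one minimal way such a backlog score could be kept in code. The fields, weights and dollar figures are assumptions for illustration, not Keith's actual system:

```python
# Sketch of a test-idea backlog scored on Potential vs. Difficulty, with a
# bonus for validated insight and a rough "opportunity cost" for delayed wins.
# All weights, fields and figures are assumed, not Keith's actual formula.
from dataclasses import dataclass
from datetime import date

@dataclass
class TestIdea:
    name: str
    potential: int            # 1 (low) .. 5 (high expected impact)
    difficulty: int           # 1 (easy) .. 5 (hard to implement)
    validated_insight: bool   # backed by analytics or user research?
    identified_on: date
    est_monthly_value: float  # rough revenue impact if it wins (assumed)

    def priority(self) -> float:
        score = self.potential / self.difficulty
        return score * (2.0 if self.validated_insight else 1.0)

    def opportunity_cost(self, today: date) -> float:
        """Revenue notionally lost while a winning idea sits in the backlog."""
        months_waiting = (today - self.identified_on).days / 30
        return months_waiting * self.est_monthly_value

backlog = [
    TestIdea("Clarify shipping costs earlier", 4, 2, True, date(2015, 6, 1), 3_000),
    TestIdea("Executive's new hero image", 2, 1, False, date(2015, 8, 1), 500),
]
today = date(2015, 9, 1)
for idea in sorted(backlog, key=lambda i: i.priority(), reverse=True):
    print(f"{idea.name}: priority {idea.priority():.1f}, "
          f"delayed value ~${idea.opportunity_cost(today):,.0f}")
```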

Optimizing the Checkout because it has a high rate of abandonment

The experience before checkout often has more to do with why people abandon checkout than anything in the checkout itself. Often the checkout is just the straw that broke the camel's back (we call it the Straw Effect). Likewise, checkouts might look stellar (and untouchable) because only the most determined shoppers power through a poor experience to get there (we call this the Weeded Effect). Either way, testing the checkout will do nothing.

Increasing the presence of Site Search

Often, site search has an impressive conversion rate compared to the site's overall conversion rate. On the surface it seems logical to push more people into it, thinking it will generate more sales. But on most eCommerce sites, site search is a sign that navigation has failed, and its conversion rate reflects the most determined shoppers who are willing to try it. Driving more people into site search mostly increases the amount of less-qualified traffic in it, and at best it is a partial, short-term fix for the bigger problem: the site has navigation issues.

Button color

I have never seen button color by itself increase conversion rates. It's contrast, not color, that typically explains the lift. Sure, color is a factor in contrast, but so are button size, position, surrounding white space, hue, texture, visual elements and more.

Conclusion

Conducting an A/B test is a big deal. It demands time, effort and real data.

One does not simply run an A/B test and get significant results. It's a consistent, ongoing process.

But it's a good way to save the money you would otherwise spend on attracting new visitors.