Wednesday, August 19, 2015

10 Reasons Your Latest CRO Campaign Failed (And What to Do About It)

A news article pops up in your Feedly account proclaiming that conversion rate optimization (CRO) is the “be-all and end-all” of online business success these days.

Excited, you pop over to Google Analytics’ Content Experiments tool and launch your first A/B test, confident that the changes you’ve made are going to result in major bottom-line gains for your website. You wait…

And you wait…

And you wait some more.

Finally, the test is done, and the results are… inconclusive. You don’t have a statistically significant winner, and you don’t have any appreciable lift to show for your efforts. What gives? Should you give up on CRO altogether?

Of course not. Your failure to drive measurable results could come down to one of the following easily fixable mistakes:

Mistake #1 – You Don’t Know What You’re Testing For

I can’t tell you how many times I’ve seen marketers get excited about the potential of CRO and testing to improve a site’s bottom line.

So what do they do?

They go out, find an article titled “The 10 Split Tests You Have to Run (Unless You’re a Total Loser)” and put the first suggestion on the list into practice.

I suppose that’s better than doing absolutely nothing. But it’s not going to generate much in the way of worthwhile data for you.

Suppose you run a small wealth management firm. The lifeblood of your business is the referrals you generate through your website’s lead capture form. Now, what happens if the first split test you decide to try involves adding images of faces to your homepage to reduce bounce rate?

Is reducing your homepage bounce rate a bad thing? No.

But is it doing much to impact your business’s bottom line? The answer, again, is no.

That’s why it’s so important to know what you’re testing for. By taking the time to understand the different types of CRO campaigns and split tests you can run – in addition to matching these strategies to your business model – you’ll increase the likelihood of your future efforts actually moving the needle for your company.

Mistake #2 – You Don’t Have Enough Traffic for Testing

That said, even if your goals and campaign objectives are aligned, you still might not be ready for testing. If your website traffic is low, generating statistically significant results becomes much harder.

To understand why, imagine that you’ve asked ten of your friends to choose between your control page and the variation you’ve created. Would you be confident that the results of your poll would hold true in the world at large? Wouldn’t you be more confident if you had polled ten thousand people instead?

The larger your sample size, the more confidently you’ll be able to identify trends – and that’s where more traffic comes in handy.

Of course, all is not lost if your traffic resembles a steady stream more than a raging river. As the image below, published on the VWO blog, demonstrates, sites with less traffic can still get conclusive results – if they’re willing to wait longer for them:

[Image: correlation between website traffic and test duration (source: VWO blog)]
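
How long is “longer”? If you want a ballpark before you start, the standard two-proportion sample size formula will give you one. Here’s a quick Python sketch (the 3% baseline conversion rate and 20% target lift are hypothetical numbers, and I’m assuming the usual 95% confidence and 80% statistical power):

```python
import math

def sample_size_per_variation(baseline_rate, relative_lift,
                              z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variation to detect the given
    relative lift at ~95% confidence and ~80% statistical power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Hypothetical: 3% baseline conversion rate, hoping to detect a 20% lift
print(sample_size_per_variation(0.03, 0.20))  # ~13,900 visitors per variation
```

Divide that figure by your daily traffic per variation and you’ll know roughly how many days – or months – your test will need to run.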

Mistake #3 – You’re Optimizing for Traffic Before Conversions

Now, having said that, you might be thinking that I’d encourage webmasters with low traffic to run out and get more visitors before launching their CRO campaigns. But as you’ll see, there are issues there as well.

Before you start with any true CRO effort, your conversion funnel should be relatively well-established. Think about it… Would you prefer to have a website that converts 10 out of every 100 visitors, or one that converts 10 out of every 10,000 visitors?

In the second example, you’ve got more traffic, but that’s not necessarily a good thing because your conversion rate is so low that those extra visitors aren’t helping you make more money. Fixing your conversion funnel before you throw traffic at your site will ensure you get the maximum value out of the visitors you do acquire.
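
To put numbers on that comparison, assume (hypothetically) that each conversion is worth $50:

```python
# Both sites convert 10 visitors; assume each conversion is worth $50
revenue_per_visitor_a = (10 / 100) * 50     # 10% conversion rate  -> $5.00
revenue_per_visitor_b = (10 / 10_000) * 50  # 0.1% conversion rate -> $0.05
print(revenue_per_visitor_a, revenue_per_visitor_b)
```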

Mistake #4 – You Set Up Your Tests Incorrectly

In an interview with Content Verve, Craig Sullivan of Optimise or Die shared how an unexpected test result taught him the importance of making sure his tests were set up correctly.

The test – which involved significantly different A and B variations – was showing no real difference in performance between the two creatives. With changes that dramatic, Sullivan knew something in the code must be affecting the test’s results. In the end, it turned out that a coding error meant visitors were being exposed to both variations – the test never remembered which version they’d seen previously.

In Sullivan’s words:

“When it comes to split testing, the most dangerous mistakes are the ones you don’t realise you’re making.”
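
One common safeguard against exactly this kind of bug is deterministic bucketing: hash a stable visitor ID so that returning visitors always land in the same variation. Here’s a minimal Python sketch of the idea (the names are hypothetical – this isn’t Sullivan’s actual fix):

```python
import hashlib

def assign_variation(visitor_id, experiment, variations):
    """Deterministically bucket a visitor so repeat visits always see
    the same variation (guards against the 'saw both versions' bug)."""
    key = f"{experiment}:{visitor_id}".encode("utf-8")
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(variations)
    return variations[bucket]

# The same visitor lands in the same bucket on every visit
print(assign_variation("visitor-123", "homepage-test", ["control", "variation-b"]))
```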

Mistake #5 – You Blindly Followed CRO Best Practices

A lot of people think that they can circumvent the split testing process entirely by just applying the CRO wisdom other publishers have discovered from their campaigns.

The problem with this line of thinking is that there’s no “one size fits all” set of recommendations that’s going to apply equally well to all websites. Angus Lynch shows how damaging this failure can be in a blog post for Rooster.

Lynch profiles Compare Courses, an Australian education website that mistakenly followed the advice to move all of its calls-to-action “above the fold.” Here’s what the original page looked like:

[Image: Compare Courses original page]

And here’s the variation:

[Image: Compare Courses test variation]

Source for both images

Even with the addition of testimonials and social proof, the test page saw a 53.87% decrease in “Send Enquiry” conversions.

If that doesn’t make clear how important it is to base your tests on your own performance goals and data, I don’t know what will!

Mistake #6 – You’re Focused on the Wrong Metrics

I hinted at this earlier with my example of the wealth management firm, but the idea of tracking the right metrics deserves special mention here.

Ideally, if you’re running split tests, you’re doing so because you want to achieve something. You aren’t just testing for the fun of it – so what kinds of results do you want to see at the end of your campaign?

Keep in mind also that “conversions” doesn’t just mean sales. Our sample wealth management firm was tracking lead generation form completions, but your campaign might be based around:

  • Social shares
  • Email newsletter subscriptions
  • Video views
  • PDF downloads
  • Contact form completions
  • …Or any number of other target actions

There’s no “right” type of conversion to track. What’s important is that you actually make an effort to track metrics, and that the metrics you choose to track are those that matter most to your business’s operations.
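
Whatever conversion you settle on, the mechanics are the same: record each target action against the variation that produced it, and tally each goal separately. A toy sketch with made-up event data:

```python
from collections import Counter

# Hypothetical event log: (variation shown, conversion action taken)
events = [
    ("control", "newsletter_signup"),
    ("variation_b", "pdf_download"),
    ("variation_b", "newsletter_signup"),
    ("control", "contact_form"),
]

# Tally each goal separately so you judge the test on the metric
# that matters most to your business
for (variation, goal), count in sorted(Counter(events).items()):
    print(f"{variation:<12} {goal:<18} {count}")
```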

Mistake #7 – You Stop Your Tests Too Soon

Peep Laja, writing for ConversionXL, demonstrates why sample size is so important:

[Image: early split test results (ConversionXL)]

The results above come from a split test performed by one of Laja’s clients, captured just days after the test’s launch. With just over 100 visitors per variation, it would seem that the winner was clear.

But despite the temptation to call the test early, Laja persisted. Here’s what happened after each variation had received more than 600 visitors:

[Image: the same test after 600+ visitors per variation (ConversionXL)]

Source for both images

In this instance, calling the test after just 200 visitors would have led to an incorrect conclusion, potentially costing the client money.

For best results, Laja recommends waiting for roughly 100 conversions per variation (if not 200-400) and for the test to proclaim a winner with at least 95% confidence.
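
If you’d rather verify that 95% confidence figure yourself than take a dashboard’s word for it, a two-proportion z-test is one standard way to do it. A quick Python sketch with hypothetical numbers:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 600 visitors per variation, 48 vs. 66 conversions
z, p = two_proportion_z_test(48, 600, 66, 600)
print(f"z = {z:.2f}, p = {p:.3f}")  # need p < 0.05 for 95% confidence
```

Notice that even an apparent 37% lift fails the 95% bar with these numbers (p ≈ 0.08) – which is exactly why calling a test early is so risky.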

Mistake #8 – You Don’t Account for False Positives

Now, just to make things more complicated, consider that even if you have enough conversions to declare a winner, you could still be facing false positives if you’ve included too many variations in your test.

According to Isaac Rothstein of Infinite Conversions:

“A false positive is when a test result indicates that a condition is true when it is not, usually due to an assumption that has been made from the results. False positives typically occur when a high number of variations are tested.”

Imagine that you’re testing eight different versions of a single web page (Google’s a great example of this, having once tested 41 different shades of blue to see which option customers preferred). At the end of this exercise, can you really be sure that one of them is a conclusive winner? What if none of your variations is actually the right choice?

In all cases, watch out for assumptions about the data you’ve gathered. Test for different variations, but also test to be sure the conclusions you’ve drawn are based on fact, not opinion.
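
The math behind Rothstein’s warning is easy to demonstrate. If each variation is compared against the control at the usual 0.05 significance level, the odds of at least one fluke “winner” climb quickly as variations pile up (a simple sketch, assuming independent comparisons):

```python
# Chance of at least one fluke "winner" at the 0.05 significance level,
# assuming each variation is independently compared against the control
alpha = 0.05
for k in (1, 3, 8, 41):
    family_wise = 1 - (1 - alpha) ** k
    bonferroni = alpha / k  # a common fix: tighten the per-test threshold
    print(f"{k:>2} variations: {family_wise:5.1%} false-positive risk; "
          f"corrected threshold p < {bonferroni:.4f}")
```

At Google’s 41 variations, you’d see a false “winner” nearly 88% of the time without correcting for multiple comparisons.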

Mistake #9 – You’re Testing Small Tweaks Instead of Major Changes

Search Google for “split test ideas,” and you’ll come across endless lists recommending such simple tweaks as “change your button color” or “use action words in your headlines.”

And those things are great, don’t get me wrong.

But if your site needs major changes, these kinds of small, limited tests aren’t going to get you there.

Marketer extraordinaire Neil Patel is one advocate for making big changes before minor swaps, saying:

“The biggest conversion boosts are going to come from drastic changes. So if you really want to move your conversion rates, don’t focus on small changes. Instead, focus on drastic changes as they are the ones that boost your revenue.”

On his QuickSprout blog, Patel shares an example of how he put this principle into practice on his Crazy Egg website, where changing the homepage into a long sales letter led to huge wins. Only after this major change was complete did Patel go back and test individual calls-to-action, button colors and more.

[Image: the redesigned Crazy Egg homepage]

Mistake #10 – Your Tests Aren’t Timed to Your Sales Cycles

In many retail environments, both online and offline, sales cycles are short, lasting as little as a few hours or a few days between the recognition of a need and the purchase decision.

But what if your company sells bigger-ticket items that come with longer sales funnels? What if your buyers are only making purchase decisions once every few years, making them less likely to take any conversion actions (including form completions, file downloads and more) in the interim?

Ultimately, the length of time your split tests run should be a function of your traffic and your conversion rates. However, if your completed test ran for less time than your average sales cycle lasts, consider that the data you’ve generated may not give you a complete picture of the way your particular buyers interact with your website.
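
A quick sanity check: estimate your test’s likely duration up front and compare it to your sales cycle. A rough Python sketch with hypothetical numbers:

```python
def estimated_duration_days(visitors_needed_per_variation, variations,
                            daily_visitors):
    """Rough test length given required sample size and site traffic."""
    return visitors_needed_per_variation * variations / daily_visitors

# Hypothetical: 14,000 visitors per variation, 2 variations, 1,200 visitors/day
days = estimated_duration_days(14_000, 2, 1_200)
sales_cycle_days = 45  # hypothetical average for a big-ticket purchase
print(f"~{days:.0f}-day test vs. a {sales_cycle_days}-day sales cycle")
```

If the test window is shorter than the cycle, as it is here, plan to run longer or interpret the results with extra caution.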

At the end of the day, CRO is a powerful tool for improving your website’s results, but it comes with a pretty significant learning curve. If your campaign results have been lackluster so far, one of the ten reasons described above could be to blame.

Don’t give up. Instead, look for ways to improve your testing and campaign protocols. With time and continual effort, split testing and other CRO techniques can be used to increase conversions and move the needle for your company.

Have you made any other CRO mistakes that deserve a spot on this list? If so, leave a comment below describing your experiences.

About the Author: Alex Bashinsky is the co-founder of Picreel, an online marketing software program that converts bounce traffic into revenue. He’s passionate about helping businesses improve their conversion rates and, in his down time, enjoys reading and playing the guitar. Get in touch with Alex at @abashinsky or check out Picreel.com.
