New Feature Highlight: Split Testing Tool Will Help Your Emails Perform Even Better

Split testing (also known as A/B testing) is a great way to test new hypotheses about what works and what doesn’t in your emails. Let’s imagine a situation (totally fictional, of course, never ever happened to me) where your boss/partner/colleague thinks it would be fun to put a cute picture of a puppy as the main image of an email, the objective being to grab the reader’s attention and draw them through the email and down to the call to action. You, being the savvy marketer you are, have your doubts. But instead of getting into an argument, or worse, leaving it to chance only to be left with the nagging doubt that your first instinct was right, you say, “I know! Let’s split test it.”

How the Split Testing Tool Works

The new Split Testing tool in LeadMachine takes a subset of your audience and splits it into two groups. One group gets the non-puppy version and the other group gets the puppy version. Whichever version performs better is then automatically sent out to the rest of your audience.

So you don’t have to worry about making the wrong decision and regretting it only after you’ve sent the email to your entire customer base. Instead, just let the data make the decision.
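If you like to think in code, here’s a minimal sketch of those mechanics, assuming a hypothetical subscriber list and a 20% test group. The function and variable names are ours for illustration, not LeadMachine’s:

```python
import random

def run_split_test(subscribers, test_fraction=0.2, seed=42):
    """Illustrative only: carve out a test group, split it into A/B halves,
    and leave the remainder to receive the winning version later."""
    rng = random.Random(seed)
    shuffled = subscribers[:]
    rng.shuffle(shuffled)

    test_size = int(len(shuffled) * test_fraction)
    test_group, remainder = shuffled[:test_size], shuffled[test_size:]

    # Half of the test group gets version A (no puppy), half gets version B (puppy).
    midpoint = test_size // 2
    return test_group[:midpoint], test_group[midpoint:], remainder

# Whichever of group_a / group_b performs better is what the tool
# would then send to everyone in `remainder`.
group_a, group_b, remainder = run_split_test(
    [f"user{i}@example.com" for i in range(1000)]
)
print(len(group_a), len(group_b), len(remainder))  # 100 100 800
```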

[Image: pug-picture-email]

Getting Started with the Split Testing Tool

  1. Choose What to Test

Split testing options are almost infinite. This is the perfect time to try out different copy, test new images, rephrase calls to action, or test an entirely new version of an email. Even changing a single word can affect the response your email receives!

It’s good to go into the split test with one specific question in mind, like “Do pictures of puppies capture attention better than pictures of our product?” or “Does a green call to action get more clicks than a red one?” A question like this helps you create test versions that answer something specific and maximizes the efficiency of your test, rather than just testing things at random. Split testing takes the “gut feeling” out of these decisions and gives you real results you can act on.

When you do your first split test, try starting with one email template and call it Split A. Then create a copy of that template, make one small change, and call it Split B. If you’re looking for something easy to test, we suggest subject lines; they’re arguably the most important element of your email, since they’re the first thing the customer sees. If you’re looking for more things to split test, check out this list of 150 split test ideas.

In this example, we are testing two new email templates against each other. On your ‘Send Email’ tab, choose the SPLIT TEST button, then choose the two templates that you want to pit against each other.

[Image: split-test-step1]

  2. Choose the Winning Metric

In the Split Testing tool, you can set how you want to choose your winner: by opens, clicks, or clicks-per-open ratio. Here’s a quick breakdown of each metric:

Opens: If you were testing a subject line or sender name, you would probably choose open rate as the success indicator.

Clicks: If you were testing a new color for a call-to-action button, you’d want to know which color got more clicks, so that would be your success indicator.

Clicks per Open Ratio: This metric is useful for determining how well the messaging in your email drives people who have opened it to go one step further and click. Let’s say you wanted to test a different way of communicating with your customers, for example, adding urgency to an offer with a live countdown timer. You’d want to see whether the email with the timer drove more of the people who opened it to click than the email without the timer. The clicks-per-open ratio would be the metric to use.
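To see how the three metrics relate to one another, here’s a quick worked example using made-up numbers rather than anything from a real report:

```python
# Hypothetical numbers for one split, purely to show how the metrics are derived.
delivered = 1000   # emails delivered in this split
opens = 250        # unique opens
clicks = 60        # unique clicks

open_rate = opens / delivered        # 0.25 -> 25% of recipients opened
click_rate = clicks / delivered      # 0.06 -> 6% of recipients clicked
clicks_per_open = clicks / opens     # 0.24 -> 24% of openers clicked

print(f"Opens: {open_rate:.0%}, Clicks: {click_rate:.0%}, Clicks per open: {clicks_per_open:.0%}")
```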

[Image: split-test-step2]

  3. Set the Schedule of the Test

[Image: split-test-step3]

How long you let the results roll in before choosing a winner is essential to correctly interpreting the split test. Declare a winner too soon and you may see the results change after the test is over.

Look at the past behavior of your audience and when they’ve opened emails in relation to when they were sent: you’ll usually notice a sharp spike around the time the email goes out, and then opens taper off. Try ending your test sometime after this taper-off period.
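One rough way to find that taper-off point, if you can export open timestamps from a past campaign, is to see how many hours it takes for most of the opens to arrive. The data below is invented and the 90% threshold is just an assumption you can adjust:

```python
import math
from datetime import datetime, timedelta

# Invented example data: when a past email went out and when each open happened.
send_time = datetime(2024, 3, 5, 9, 0)
open_times = [send_time + timedelta(minutes=m) for m in
              (5, 12, 20, 35, 50, 90, 130, 200, 400, 900, 1300, 2200)]

def hours_to_reach(open_times, send_time, threshold=0.9):
    """Hours after the send by which `threshold` of all opens had arrived."""
    delays = sorted((t - send_time).total_seconds() / 3600 for t in open_times)
    cutoff = math.ceil(threshold * len(delays)) - 1
    return delays[cutoff]

print(f"~90% of opens arrived within {hours_to_reach(open_times, send_time):.1f} hours")
```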

Another idea would be to declare the winner after a 24-hour period. This gives you enough time to interpret the results and lets you send the winning email at the same time of day you sent the test email, so you won’t inadvertently add time of day as a testing variable.

  4. Choose Your Test Audience

[Image: split-test-step4]

You’ll want to choose a sizable group that will give you statistically significant results. With the Split Testing tool, you can choose the percentage of your audience that you want to test. The right percentage depends on the size of your list (now is the time to dust off that college stats course). I always find it handy to use a tool that helps with finding the right size, like this calculator.
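If you’d rather sanity-check the numbers yourself, the classic two-proportion sample-size formula gives a rough back-of-the-envelope answer. Everything here (the baseline open rate, the lift you care about, the list size) is an assumed scenario, not a recommendation:

```python
import math

def sample_size_per_split(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed in EACH split to detect a change from
    rate p1 to rate p2 (two-sided 95% confidence, ~80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumed scenario: 20% baseline open rate, hoping the puppy lifts it to 25%.
n = sample_size_per_split(0.20, 0.25)
list_size = 20000
test_percentage = 2 * n / list_size * 100
print(f"{n} subscribers per split (~{test_percentage:.0f}% of a {list_size}-person list)")
```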

  5. Start the Test

Once you’ve got these elements selected in the Split Testing tool, you’re ready to schedule your email to send. You go, girl!

  6. Check Out Your Email Stats

Now for the most important part – seeing the results! LeadMachine makes the outcome of your split test easy to interpret with the Split Testing Report.

[Image: split-test-step5]

Keep track of what worked well and what could be improved. These insights can be useful when creating future campaigns.

What kind of tests do you think you’ll try out? Let us know in the comments!