July 4, 2014
You've always been able to conduct split tests with Messenger, but in our latest update we've added features to streamline the process. There's also now a simple way to compare your split tests in Reports. Read on for more details.
Before we get started it might be good to explain what a split test actually is. In essence it's very simple: take two small subsets of your campaign's targets and send each subset a slightly different version of the campaign. You then compare the click, open or call-through rate of each to see which is more effective, and send the winning version to the remainder of the targets. This data-driven approach can give a dramatic uplift in conversions, as well as hard data to inform your decisions.
Setting up, tracking and ultimately using each version has just gotten a lot easier: the 'send' menu now has a new 'split test' item that allows you to manage your versions. The new split test page lets you set up a different subject line for each version, and you can change these at any time up until the versions are sent.
Speaking of which, if you have any unsent items on the split test page then the schedule page will change to tell you how many unsent versions there are, and the batch size you enter there will be used for each of them. So for instance if you have 5000 targets in your campaign, and two versions, entering '750' for the batch size will send 1500 of them an email: 750 get the first version and a different 750 get the second. Any filters you have applied to the selected lists will still take effect, of course.
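The batch-size arithmetic above can be sketched in a few lines of Python. This is purely illustrative; the function name and signature are our own invention, not part of Messenger.

```python
def split_test_send_count(total_targets, batch_size, versions):
    """Return (emails sent during the split test, targets remaining).

    Each version goes to its own batch of `batch_size` targets,
    so the test consumes batch_size * versions targets in total.
    """
    sent = batch_size * versions
    if sent > total_targets:
        raise ValueError("batch size is too large for this campaign")
    return sent, total_targets - sent

# The example from the text: 5000 targets, two versions, batch size 750.
sent, remaining = split_test_send_count(5000, 750, 2)
# 750 get the first version and a different 750 get the second,
# leaving 3500 targets for the final send of the winning version.
```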
Just like a live campaign send, and unlike test messages, the sending of the versions must be scheduled (even if this is 'immediately'). You should schedule your split test to go out at the same time of day, and day of the week, as your final campaign send will.
On the submit page, the submit button will remind you that you are sending a split test rather than the whole campaign; otherwise, sending a split test is the same as sending a normal campaign. If you've configured submission reports, for instance, you will get one for each version, and you'll see a set of summary statistics for each version on screen once the job has been submitted.
Of course, there's no point sending split tests with no way to compare them, so if a campaign was split tested there'll be a new menu item on the campaign's tab in Reports that allows you to compare the versions. The open/click 'by version' reports are still available, and in the CRM tab you can use the filter to really dig down into those results if you want. Once you've chosen the winning version, you can head back to Messenger's split testing page, easily apply the chosen version to the main campaign, then send it as normal to the remainder of the targets.
One slight wrinkle that may trip you up is the minimum size you need to get a meaningful result from a split test. If you only email a small number of people, or a small fraction of your users, then even a large difference between versions could just be coincidence. This is why we require you to have at least one thousand people in each version (500 for Newsletters, where click rates differ). As you must have at least two versions to compare, that means you need at least three thousand (1,500 for Newsletters) people in the lists assigned to the campaign (after filtering), so that enough people are left who didn't receive a test version for the final send to be worthwhile.
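The minimum-size rule can be written out as a small Python sketch. The per-version figures (1,000 for ordinary campaigns, 500 for Newsletters) come from the text; the function itself is a hypothetical illustration, not Messenger's actual implementation.

```python
def min_targets_needed(versions, per_version=1000):
    """Minimum list size (after filtering) for a worthwhile split test.

    Each version needs `per_version` recipients, plus at least the
    same number again left over to receive the winning version.
    Pass per_version=500 for Newsletters.
    """
    return per_version * (versions + 1)

min_targets_needed(2)       # a standard two-version test needs 3000 people
min_targets_needed(2, 500)  # a two-version Newsletter test needs 1500
```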
Things get more complicated if you want to test more than one change, such as two different subjects and two different messages, or to compare differing send times. We're interested in hearing from you if you've found yourself doing things like that in the past, so please contact us so we can make a future version of the split testing page even more useful for you.
Happy split testing!
Posted by Tom Chiverton