Very rarely does a marketer, sales rep, or copywriter find the perfect version of copy on the first try. Marketing and outbound sales are more of a science experiment: sometimes what you try works, sometimes it fails miserably, but you are never really done tweaking, experimenting, and trying new things. Just when you think you’ve found the best possible version of copy for your target audience, the structure or subject lines have grown old and tired (and are being used by EVERYONE), or you’re reaching out to prospects who have heard from you before, so you have to find a new way to say the same thing in different words.
It’s a never-ending cycle of try this, look at the data, tweak that, look at the data, lather, rinse, repeat. And all the while, you’re expected to be getting immediate results.
It can be daunting for even the most experienced copywriter, and that’s exactly why A/B testing is so important. Through trial and error, you can not only show clients exactly why a version of copy was underperforming and how you intend to fix it, but also find ways to improve or change even the seemingly best version of an email you’re sending.
Here are some best practices for accurate A/B testing:
Data. Data. Data.
It’s impossible to know what to test, what to change, and what is or isn’t working without enough data. When our company switched to a new marketing automation tool earlier this year, our A/B testing capabilities became more limited and we had to get creative with how we tracked our data. It was a learning experience, but the two main things we learned were that you need enough data to make a decision, and that certain kinds of data can signal what needs to be tested.
When launching new campaigns, there is almost always a warm-up period to make sure domains are set up correctly and nothing is going to spam. The downside to this period is that lower-than-normal volume goes out. Without volume, the open and reply rates on different versions of copy can be skewed, and a test will take longer to return enough meaningful data to tell you which version won. There were instances where clients wanted to change all of the copy immediately to get results faster, but we had to explain that without enough time and volume to test, we could be throwing out perfectly good copy that just needed time to generate enough data.
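To put a number on “enough data”: a standard two-proportion z-test shows why a gap that looks big at warm-up volume often isn’t a real winner yet. This is a minimal sketch with hypothetical send counts and open rates, not output from our tool:

```python
# Two-proportion z-test: is version B's open rate really better than
# version A's, or is the gap just noise at this volume?
from math import sqrt, erf

def z_test_two_proportions(opens_a, sends_a, opens_b, sends_b):
    """Return (z, one-sided p-value) for H0: rate_b <= rate_a."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided
    return z, p_value

# Warm-up week (hypothetical): 24% vs 36% looks huge, but p ~ 0.095,
# so you can't call a winner yet.
print(z_test_two_proportions(12, 50, 18, 50))

# Same rates at full volume: now the difference is real (p ~ 0.00002).
print(z_test_two_proportions(120, 500, 180, 500))
```

The open rates are identical in both calls; only the volume changes the verdict, which is exactly why ripping out copy mid-warm-up is premature.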
Once you have enough data in on your copy versions, it’s then a matter of asking, “What is my data trying to tell me?” If your open rates are below 20%, it can mean one of two things: either you’re landing in spam, or your subject lines suck. Going off that data, you can determine what kind of tests to run and what needs to change. If the open rate is decent but your reply rate is low, it can mean the body of your email needs to be tweaked, or that not enough touches have gone out yet to earn a response.
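That triage can be written down as a simple decision rule. Here’s a hedged sketch of the logic above; the 20% open-rate line is the rule of thumb from this post, while the reply-rate floor and touch count are hypothetical placeholders you’d tune to your own campaigns:

```python
# Sketch of the diagnostic triage described above. The 20% open-rate
# threshold is our rule of thumb; the 2% reply floor and 3-touch
# minimum are hypothetical stand-ins, not universal benchmarks.
def next_test(open_rate, reply_rate, touches_sent):
    if open_rate < 0.20:
        return "Check spam placement first; if you're inboxing, test subject lines."
    if reply_rate < 0.02:
        if touches_sent < 3:
            return "Hold off -- let the rest of the sequence go out first."
        return "Opens are fine: test the email body and call-to-action."
    return "Copy is working; move on to smaller variables like send time."

print(next_test(open_rate=0.15, reply_rate=0.01, touches_sent=4))
# -> "Check spam placement first; if you're inboxing, test subject lines."
```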
What to test first
We have a very set order of what we test when copy isn’t performing:
| Key Testing Variables (In Order) | What to Test |
| --- | --- |
| Subject Line | Length, personalization, trigger words, word order |
| First Sentence | First 5-7 words impact open rate |
| Email Copy | Template, value proposition, spacing |
| Call-to-Action | Wording, placement |
| Time & Day | Mornings, evenings, weekends |
This gives us a clear order for what to tweak. If we change the call-to-action first, but it’s the first sentence that’s turning prospects off, our test will fail and we won’t have done anything to fix the underperforming copy.
Don’t run too many tests at once
Similar to the first point, you need enough data to make a concrete decision about what works and what doesn’t. If you’re testing 3 different subject lines across 4 different versions of copy, it becomes incredibly difficult to determine which subject line and which copy version performed best. The data gets muddled, and the test takes longer to show what’s working. I try to stick to testing a maximum of 2 subject lines and 2 versions of copy at a time, but prefer one test a week if possible to get the most data back.
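The math behind the muddle is just multiplication: every variable you add splits your sends across more combinations. A quick sketch assuming a hypothetical 1,000 sends per week:

```python
# Why multivariate tests stall: each extra variable multiplies the
# number of cells your (hypothetical) weekly volume gets split across.
weekly_sends = 1000

for subject_lines, copy_versions in [(3, 4), (2, 2), (2, 1)]:
    cells = subject_lines * copy_versions
    per_cell = weekly_sends // cells
    print(f"{subject_lines} subject lines x {copy_versions} copy versions "
          f"= {cells} cells, ~{per_cell} sends each")

# 3 x 4 -> 12 cells, ~83 sends each: weeks away from a significant read.
# 2 x 2 -> 4 cells, ~250 sends each: a readable result within a week or two.
```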
Be creative
Sometimes it just comes down to one word that convinces someone to open your email or reply. If your subject line is too generic, swap in a less common synonym that makes your email stand out. Every prospect gets emails with subject lines like “Increase sales for {Company}”, but it’s not every day that Jim in Sales gets an email titled “Oodles of new business for {Company}”. Is it wacky? Yes. Is Jim more likely to click on it? You bet.
If you feel your copy is too convoluted, try rewriting it as if you were explaining the product or service to your 90-year-old grandma. The result will be a more concise, easy-to-read, straight-to-the-point email that’s more likely to get a response.
These are just some of the ways that you can A/B test your campaigns to make sure you are constantly improving your outreach. Remember, it’s an experiment. Sometimes you’re going to write a shitty subject line. But you aren’t going to know it’s shitty or find one that works better without a little testing first.