Seven common problems that derail A/B/n email testing success

These crop up most often in my work with clients. Solutions to some of these challenges will require a total mindset change. For others, just learning the proper way to set up tests can resolve many of your current issues. That's the good part about testing. For every problem, there's a way to correct it. Every time you solve a problem via testing, you take another step toward putting your email program on the right path.
- Testing without a hypothesis: Many email marketers pick up the rudiments of testing by using the tools their ESPs give them, mainly for setting up basic A/B split tests on simple features such as subject lines. However, this ad hoc, one-off approach is like learning to drive a car without knowing how to read a map. You can turn the car on just fine. But you need map skills to plan out a journey that will get you where you want to go with the fewest traffic jams and detours. A hypothesis is that map: it states what you expect to change, why you expect it to change and how you will measure the result.
- Using the wrong conversion calculation: This relates to the customer's journey and the test's objective. When you do a standard A/B split test on a website landing page, you often use "transactions/web sessions" as your conversion calculation to see how well the page is converting. This makes sense because you don't know the path your customers took to get to the page, so you focus on this particular part of the journey and ignore everything that happened before it. With email, though, you do know where the journey began, so a session-based denominator hides the email's own contribution; a calculation based on emails delivered measures the part of the journey your test actually influenced (see the worked example after this list).
- Measuring success with the wrong metrics: A workable testing plan needs relevant metrics to measure success accurately. The wrong metrics can inflate or deflate your results. This, in turn, can mislead you into optimizing for the losing variant instead of the winner.
- Testing without statistical significance: If your testing results are statistically significant, it means that the differences between testing groups (the control group, which was unchanged, and the group that received a variation, such as a different call to action or subject line) didn't happen because of chance, error or uncounted events. If you declare a winner before your results reach significance, you may be acting on random noise rather than a real difference (see the sketch after this list for one way to check).
- Stopping with one test: The philosopher Heraclitus said, "No man ever steps in the same river twice, for it's not the same river and he's not the same man." The same is true for your email campaigns. Your subscriber base is always gaining new subscribers and losing old ones, and customers don't react the same way every time to every campaign. A campaign that worked well one time might fall flat the next.
- Testing only one element in a campaign: Subject-line testing is ubiquitous, mainly because many email platforms build A/B subject line split testing into their tools. That's a great start, but it gives you only part of the picture and is often misleading. A winning subject line that's measured on the open rate doesn't always predict a goal-achieving campaign.
- Not using what you learned to make email better: We don't test to see what happens in a single campaign or to satisfy curiosity. We test to find out how our programs are working and what will improve them - now and over the long term. We test to determine whether we are spending money on things that help us achieve our goals. We test to discover trends and shifts in our audience that we can apply across other marketing channels - because our email audience is our customer population in microcosm. Don't let your test results languish in your email platform or in a team notebook.
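To make the conversion-calculation point concrete, here is a minimal sketch in Python comparing the web-style calculation ("transactions/web sessions") with an email-oriented one that uses emails delivered as the denominator. The variable names and figures are hypothetical, purely for illustration, and not drawn from any specific platform.

```python
# Minimal sketch: two ways to calculate conversion for the same email test.
# All figures are hypothetical, for illustration only.

def conversion_rate(conversions: int, denominator: int) -> float:
    """Return conversions as a percentage of the chosen denominator."""
    return 100.0 * conversions / denominator

emails_delivered = 50_000  # emails delivered for this variant (hypothetical)
site_sessions = 4_000      # sessions the email drove to the landing page (hypothetical)
transactions = 200         # orders attributed to the email (hypothetical)

# Web-style calculation: transactions / web sessions.
# Measures only how well the landing page converts the traffic it received.
print(f"Landing-page view: {conversion_rate(transactions, site_sessions):.1f}% of sessions")

# Email-oriented calculation: transactions / emails delivered.
# Credits (or penalizes) everything the email itself did to start the journey.
print(f"Email view: {conversion_rate(transactions, emails_delivered):.2f}% of emails delivered")
```

The two numbers answer different questions; picking the denominator that matches your test's objective is the point of the bullet above.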
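And here is a minimal sketch of one common way to check statistical significance for a simple A/B split: a two-proportion z-test on conversion counts from the control and the variant. This is an assumption about method, not necessarily what your ESP runs under the hood, and the counts are hypothetical.

```python
# Minimal sketch: two-proportion z-test for an A/B email split.
# Assumes two independent, randomly assigned groups; counts are hypothetical.

from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control (A) vs. variant (B) with hypothetical conversion counts.
p_value = two_proportion_z_test(conv_a=180, n_a=25_000, conv_b=240, n_b=25_000)
print(f"p-value: {p_value:.4f}")
print("Significant at the 5% level" if p_value < 0.05 else "Not significant - keep testing")
```

If the p-value stays above your chosen threshold (5% is a common convention), the honest conclusion is "no winner yet," not "pick whichever number looks higher."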