If you’re not testing, you’re missing out

Should your campaign take a humorous or serious approach? Should it be imagery-heavy, or rely more on its messaging? Will a new campaign direction resonate with the target audience? 

Whether it’s creator versus creator, account team versus production, or client versus agency, it’s not uncommon to disagree about which subjective creative concept will work best for a particular campaign. At cohort.digital, we’re believers in setting disagreements aside and letting the data speak for itself.

why should I test?

While we’re poking a bit of fun at ourselves and the creative industry, there’s good reason for debating the merits of differing creative approaches. Clients and agencies can spend significant time, budget, and resources on the development of campaign materials, and often there’s more than one on-brand approach. It’s natural to want to put your best foot forward and ensure that the chosen direction will have maximum impact with your target audience.

You can use digital to test the response to a soft launch of a new concept, test multiple variables within a single execution, A/B test multiple executions, and more. Whether you’re beta testing or testing within an active campaign, there’s rich data to be gathered. You can learn a lot about the customer’s response and use that information to enhance your campaigns. In a beta-testing situation, you may gather valuable insights you can apply to the official campaign launch. When testing multiple variables or executions, the results allow you to make real-time updates instead of waiting until the budget is spent to optimize toward growth and improved performance. All of this learning can be used in future iterations of the same campaign, or as you launch new ones.

where should I start?

Start by reviewing your key performance indicators (KPIs). What is the action you want people to take when they see your ad in the wild, and how are you measuring success? Let’s say you’re measuring click-through rate (CTR) from an ad to your website. If you’re running this on your own, we recommend using the benchmarks provided in-platform. If you’ve hired an agency to place media for you, ask for an estimate based on your budget and campaign length.

If you met or exceeded your estimates, chances are you’re on the right track. Congrats, you just might be a creative genius! If you fell short of the desired metrics, it could be that the creative isn’t resonating with your target audience, but it’s important to understand that there are multiple factors at play. In this example, CTR measures the effectiveness of the ad at getting people to your website, but that’s only the first step. They still have to complete the on-site action, your ultimate KPI. At the end of the day, a great ad can’t do all of the conversion work for a less-than-stellar landing page. So keep in mind that testing may include evaluating other factors that can affect your campaign performance.
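To make the two-step funnel concrete, here’s a minimal sketch of the math involved. The numbers are entirely hypothetical, not platform benchmarks: CTR is clicks divided by impressions, and the on-site conversion rate is the share of those clickers who complete your ultimate KPI.

```python
def ctr(clicks, impressions):
    """Click-through rate: share of impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

# Hypothetical campaign numbers, for illustration only
impressions = 50_000
clicks = 600
ad_ctr = ctr(clicks, impressions)       # 0.012, i.e. 1.2%

# The click is only step one; the on-site action is the ultimate KPI.
conversions = 30
conversion_rate = conversions / clicks  # 0.05, i.e. 5% of visitors convert

print(f"CTR: {ad_ctr:.2%}")
print(f"On-site conversion: {conversion_rate:.2%}")
```

A strong CTR with a weak on-site conversion rate is often the “great ad, less-than-stellar landing page” situation described above, which is why it helps to track both numbers rather than CTR alone.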

testing multiple variables within one execution

Often the purpose of testing a single-ad execution is to compare multiple variables supporting that ad. This can be useful if you want to gather information on different sizes, headlines, copy, or calls to action. Running one ad and testing multiple copy or size variations can tell you more about the written words people respond to, or how the ad’s size on screen affects response. Running one ad execution can also provide some information about whether your campaign creative resonates and performs well in general. Though, as mentioned above, there are other variables that may contribute to campaign performance.

testing multiple ad executions

Testing multiple executions can be useful whether you’re designating a small budget specifically for the purposes of testing, or running a campaign in the wild and testing as you go. You can learn a lot about what your audience responds to with at least 2-3 different ads, though we recommend 3-5 in ideal circumstances.

In this form of testing, it’s important to test just one variable at a time. (See related: Best Practices for Digital Creative.) Introducing multiple changes in any given round of testing makes it difficult to pinpoint which one made the difference in delivery. So we recommend designating a test pathway and sticking to it. For example, in the graphic below you can see that we suggest first testing the image, then the CTA, and so on. Testing this way allows you to identify from round to round, or week to week, which elements are performing best and to optimize toward your highest performers.
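One practical question in each round is whether the “winning” variant really won or just got lucky. A common (though not the only) way to check is a two-proportion z-test on the two variants’ CTRs. This is a simplified sketch with hypothetical numbers, not a substitute for your platform’s own reporting:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Z-score for the difference between two click-through rates."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

def p_value(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical round one: image variant A vs. image variant B
z = two_proportion_z(clicks_a=120, imps_a=10_000, clicks_b=90, imps_b=10_000)
print(f"z = {z:.2f}, p = {p_value(z):.3f}")
# A small p-value (commonly < 0.05) suggests the winner isn't just noise,
# so you can promote that variant and move to the next round (e.g. CTA).
```

Because only one variable changed in the round, a significant result can be attributed to that variable, which is exactly why the one-change-per-round pathway matters.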

factors to consider when testing various platforms

The image above outlines a suggested pathway for testing creative elements of a standard in-feed Facebook/Instagram ad campaign. We chose this example because marketers have grown fairly accustomed to the set-up and performance metrics of this type of campaign. Available metrics vary from platform to platform, and may be more or less relevant depending on your goals. For instance, a YouTube campaign might test three different videos in week one, the headline in week two, the CTA in week three, and ad targeting in week four. On each platform, ask yourself which metric gives you the data to verify which ad variation came out on top. Most often we’re looking for engagement and conversion metrics like:

  • Likes
  • Shares
  • Comments
  • Video view percentages
  • Earned impressions
  • Downloads
  • Form submissions
  • Tickets sold
  • Return on Ad Spend (ROAS)

Start with one, then simply remix and repeat.

Testing not only puts an end to the ‘which creative is better’ argument, but when implemented as part of your active campaigns it can give you valuable insights to use in future marketing.

Go forth. Go digital.