With Halloween upon us, you could say we’ve officially started the holiday shopping season, when retailers shift their holiday marketing campaigns into high gear to capture their share of consumers’ attention and spending. Marketers have devoted a lot of time and attention to developing the campaigns we’ll see over the next ten weeks. But have they given the same attention to how they will measure campaign success?

Earlier this year we touched on the subject of campaign measurement, and more specifically sales lift analysis. In that post we reviewed the basic parameters to consider when putting together a sales lift study. Today we’re going to dig deeper into the finer details of how ShopAdvisor helps brands and retailers measure campaign success through sales lift analysis.

Evaluating the results is at the core of our sales report offerings, combined with contextual insight, takeaways, and recommendations to supplement the numbers. Traditional figures such as CTR (click-through rate), AOV (average order value), and CPC (cost per click) show us how well a given creative or overall campaign performed, and we break the numbers down further into segments to show how it resonated across different regions and demographics. Pulling those numbers together gives a quick and accurate first glance at a campaign’s effectiveness.
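To make those figures concrete, here’s a minimal sketch in Python of how the standard metrics can be derived from aggregated campaign data and broken out by region. The table, column names, and numbers are illustrative assumptions, not ShopAdvisor’s actual reporting pipeline or schema.

```python
import pandas as pd

# Hypothetical campaign log, aggregated per placement.
# Column names and figures are illustrative only.
data = pd.DataFrame({
    "region":      ["Northeast", "Northeast", "Midwest", "Midwest"],
    "impressions": [120_000, 95_000, 80_000, 60_000],
    "clicks":      [1_800, 1_300, 900, 650],
    "spend":       [3_600.0, 2_600.0, 1_900.0, 1_400.0],   # ad cost in dollars
    "orders":      [210, 160, 95, 70],
    "revenue":     [9_450.0, 7_040.0, 3_800.0, 2_940.0],
})

# Roll up by region, then derive the standard campaign metrics.
by_region = data.groupby("region").sum(numeric_only=True)
by_region["CTR"] = by_region["clicks"] / by_region["impressions"]   # click-through rate
by_region["CPC"] = by_region["spend"] / by_region["clicks"]         # cost per click
by_region["AOV"] = by_region["revenue"] / by_region["orders"]       # average order value

print(by_region[["CTR", "CPC", "AOV"]].round(3))
```

The same roll-up can be repeated for any segment (demographic, store format, daypart) to see where a creative resonated and where it fell flat.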

But what really gets people’s attention is the sales lift analysis. Measuring sales lift provides a deeper understanding of how well a campaign drove consumers to buy a product at a given retailer. It’s a tangible measure of sales success that goes beyond clicks on ads that may or may not translate into consumer spending. In the simplest terms, sales lift compares sales within a group of predetermined test stores with those of a group of control stores. The test stores sit in zip codes where we send out the ad impressions, while the control stores receive none of the campaign ads. We measure the difference in dollar sales between those two groups, then compare that gap with the same measure over the same period one year prior.
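To make the arithmetic concrete, here’s a minimal sketch of that test-versus-control, year-over-year comparison. The dollar figures and the exact way the lift percentage is expressed are illustrative assumptions, not numbers or formulas from an actual ShopAdvisor study.

```python
# Illustrative numbers only; a real study would use store-level sales feeds.

# Campaign period sales (dollars)
test_sales_now = 182_000.0      # stores in zip codes that received impressions
control_sales_now = 150_000.0   # matched stores that saw no campaign ads

# Same store groups, same calendar window, one year earlier (no campaign ran then)
test_sales_prior = 160_000.0
control_sales_prior = 152_000.0

# Difference between test and control, this year versus last year
gap_now = test_sales_now - control_sales_now        # 32,000
gap_prior = test_sales_prior - control_sales_prior  #  8,000

incremental_sales = gap_now - gap_prior             # 24,000 attributed to the campaign
lift_pct = incremental_sales / control_sales_now * 100  # one simple way to express lift

print(f"Incremental sales: ${incremental_sales:,.0f}")
print(f"Sales lift: {lift_pct:.1f}% versus control")
```

The year-over-year step matters: it strips out any pre-existing gap between the test and control stores, so the remaining difference is more plausibly attributable to the campaign itself.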

Our newly published iPaper provides specific examples of sales lift analysis reports with differing characteristics. For example, we illustrate how store-level data versus syndicated regional data can reveal a drastic difference in sales lift for a campaign across different stores and geographies. In another report, where store-level data was available but we had limited contextual analysis and sales history, we show how to take a different approach: comparing campaign sales with a baseline immediately preceding the campaign, rather than year over year, as sketched below. This provided some comparative context on which to evaluate the effectiveness of the campaign.
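Here’s a similarly hedged sketch of that baseline approach: when no prior-year history is available, compare average weekly sales during the campaign window against a window immediately before launch. The weekly figures below are invented for illustration.

```python
# Weekly sales (dollars) for the same set of stores; numbers are illustrative.
baseline_weeks = [41_000.0, 39_500.0, 40_200.0, 40_800.0]  # weeks just before launch
campaign_weeks = [44_500.0, 46_000.0, 45_200.0, 47_100.0]  # weeks during the campaign

baseline_avg = sum(baseline_weeks) / len(baseline_weeks)
campaign_avg = sum(campaign_weeks) / len(campaign_weeks)

lift_pct = (campaign_avg - baseline_avg) / baseline_avg * 100
print(f"Average weekly sales lift vs. pre-campaign baseline: {lift_pct:.1f}%")
```

The trade-off is seasonality: a pre-campaign baseline can overstate or understate lift if demand naturally rises or falls between the two windows, which is exactly the kind of context we call out in the report.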

And as mentioned earlier, the final step is to layer contextual information on top of the figures and calculations. This helps shed light on why a campaign did or didn’t perform as expected, and it lets us make recommendations for a future study. For example, a frozen fruit campaign showed a modest sales lift. The contextual takeaway was that the campaign would likely have looked stronger if we had been able to compare sales against the same period the year before, rather than against the late winter months immediately preceding it, when frozen fruit sales tend to run higher than in summer, when fresh fruit is widely available.

There is no single answer that covers all situations, but as we’ve shown through a combination of case studies and in-depth explanations drawn from real-life examples, the bottom line is that intelligently measuring and contextualizing data is far more valuable than simply producing loads of raw data with no direction or method of refinement.
