What You Need to Know When Validating the Case for Loyalty

Aug 14, 2015


Marketers dread the moment that inevitably occurs in front of Senior Management when someone asks, “How much will revenue and profit increase if we implement a new loyalty program?” That’s because it’s much easier to quote results from something you did in the past than to project something entirely new. The safest course for knowing how much lift to promise is to recommend something that incrementally improves on a past initiative.

However, there is a distinct problem with the incremental approach to new marketing initiatives: Breakthrough loyalty programs that drive new profitable sales don’t repeat what’s been done in the past. Not even remotely.

If you want to launch something truly innovative, something that could change the course of your business, then you must make some assumptions about what the future could hold. That leaves a marketer very exposed when asked that inevitable question about return on investment. Anything you say can be countered with someone else’s opinion unless you have some form of external confirmation.

The best way to validate an assumption about sales growth is to pilot the program in market. Given the expense of pilot programs and the competitive risk of showing too much too soon, most companies opt for market research before committing resources.

The challenge is how to make that research validate the financial model. Oddly, this is where many marketers make the mistake of treating loyalty program research as satisfaction research. The survey questions focus on “Do you like the new value proposition?” or “Which of the following benefits make you feel most rewarded?” Attitudinal research can be dangerously misleading for a loyalty program because what customers like most is often something that won’t make them buy any more than they currently buy. And even if the research proves that customers would be much more satisfied with a new program, satisfaction is often poorly linked to actual sales lift.

Another frequent mistake is to ask about likelihood to enroll, and ONLY likelihood to enroll, in the interest of keeping the survey short. “Would you enroll?” is a critical question, but often 90% of respondents tell you they will join and then become those dreaded “one and done” customers who never buy anything incremental. What you need for your business case is what they’ll do AFTER they join.

The more applicable discipline is new product research. New product researchers typically construct a quantitative survey with questions laid out in three sequential stages:

  1. Establish a baseline – how much is the customer buying now?
  2. Introduce the new product.
  3. Quantify potential lift – how much more would the customer buy of the new product?

This kind of survey directly yields the lift number needed for the business case: you get an estimate of how often customers currently buy (X) and how often they say they will buy after seeing the new program concept (Y).
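To make that concrete, here is a minimal sketch in Python (not from the article; the respondent data and the deflation factor are illustrative assumptions) of turning the three survey stages into a single lift estimate for the business case:

    # Hypothetical survey responses: (baseline purchases per year, stated
    # purchases per year after seeing the new program concept).
    respondents = [
        (4, 6),
        (2, 2),
        (10, 14),
        (1, 3),
    ]

    # Assumed deflation factor: only part of the stated increase is expected
    # to materialize once the program is actually in market.
    DEFLATION = 0.6

    baseline_total = sum(before for before, _ in respondents)
    stated_total = sum(after for _, after in respondents)

    raw_lift = (stated_total - baseline_total) / baseline_total
    adjusted_lift = raw_lift * DEFLATION

    print(f"Raw stated lift: {raw_lift:.1%}")
    print(f"Deflated lift for the business case: {adjusted_lift:.1%}")

The deflation factor anticipates the first caveat below: stated intent rarely converts to actual purchases one for one.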

There are caveats and adjustments when the new “product” is a loyalty program:

  • Deflating results: Customers are notoriously bad at estimating how much they currently spend, and if they do like the concept you present, they are likely to overestimate how much more they’ll buy, so you’ll often need to deflate the projected lift (as with the deflation factor in the sketch above). Apply the same skepticism to how many respondents say they’ll switch brands, increase their visits, review a product, tweet a coupon, or buy additional items, all of which may be behaviors important for a profitable program. The deflation factors vary greatly by type of program, industry, and even the type of enrollment process chosen.
  • Account for context: The customer is seeing the concept in a survey, not in the context in which they will join and participate in the program. This factor is even more of an issue when the concept is groundbreaking, as Steve Jobs famously observed when asked why he didn’t do market research: “A lot of times, people don’t know what they want until you show it to them.” There are also categories, such as gaming, where there are excellent reasons why a customer won’t know how much they spend, so you have to create a context where you can ask about proxy behaviors the customer can reasonably estimate, while staying close enough to the behavior you need to quantify for your business case.
  • Look below the topline: Research is commonly fielded in a rush, so marketers look at topline results and run out of time to explore cross-tabulations for customer segments. However, you don’t want to find out later that your program only increased purchases from the least profitable customers, so be sure to look below the surface (see the sketch after this list). Also check that the respondents who say they’ll increase their purchases are the same customers who want to join the program.
  • When richer is not better: Marketers often need to test a richer program concept against a lower-payout concept. Showing two concepts side by side is the easiest type of research, but that automatically favors the richer benefit mix and understates the potential lift from a less expensive list of benefits. There are multiple techniques for reducing this bias, and they are well worth the investment.
  • Fatigue: Research is often used to break ties between secondary benefits or to size the impact of variations in the program terms – will more customers engage when points expire after 36 months instead of 12 months? While important, the questions tend to become complex and/or repetitive. This can fatigue the respondent and confound, or even reverse, the response to the primary program concept. Know when to stop or to run separate research.
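Continuing the illustration above (again with assumed data and field names), here is a minimal sketch of looking below the topline: it breaks the stated lift out by customer segment and checks how much of the lift comes from respondents who also intend to enroll.

    from collections import defaultdict

    # Hypothetical responses: (segment, baseline purchases, stated purchases
    # after seeing the concept, says they would enroll).
    respondents = [
        ("high_value", 10, 13, True),
        ("high_value", 8, 11, False),
        ("low_value", 2, 5, True),
        ("low_value", 1, 3, True),
    ]

    by_segment = defaultdict(lambda: {"base": 0, "stated": 0})
    enrolled_lift = 0
    total_lift = 0

    for segment, base, stated, will_enroll in respondents:
        by_segment[segment]["base"] += base
        by_segment[segment]["stated"] += stated
        total_lift += stated - base
        if will_enroll:
            enrolled_lift += stated - base

    # Stated lift by segment: is the growth coming from profitable customers?
    for segment, totals in by_segment.items():
        lift = (totals["stated"] - totals["base"]) / totals["base"]
        print(f"{segment}: stated lift {lift:.0%}")

    # Alignment check: how much of the stated lift comes from likely joiners?
    print(f"Share of stated lift from respondents who would enroll: "
          f"{enrolled_lift / total_lift:.0%}")

In this toy example most of the stated lift comes from the low-value segment, which is exactly the pattern the cross-tabulation warning above is meant to catch.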

Research should always be viewed as a guideline for validating assumptions in a business case – it is never a guarantee. There are good reasons for the disclaimers that all reputable market researchers use. Research results are more likely to be actionable when the survey questions are coordinated with the business case and ask about actual purchase behaviors as well as satisfaction and other attitudinal factors. Good research can help you recognize the program concept that may not have the most likes, but will in fact drive the most profitable behaviors.

And it pays to know that some of the most innovative programs, the ones that leapfrogged past the competition, were barely researched – there was no way to describe the new customer experience without real customers really experiencing it.