Over the past few weeks, I’ve been part of a group that picked the winners of Forrester’s Voice Of The Customer Awards for 2011. I can’t yet tell you the names of the three winners — those companies will be announced on June 21 at our Customer Experience Forum in New York, along with the other seven entrants that made up our top 10. But I can share some insight into what separated the winners from the contenders.

At one end of the spectrum, the clarity with which entrants described their programs didn’t create much differentiation. With very few exceptions, descriptions ranged from very clear to extremely clear to “please stop with the detail already, my eyes are starting to bleed” clear.

At the other end of the spectrum, the business benefits that companies derived from their voice of the customer (VoC) programs provided diamond-hard clarity as to which companies were great and which were just good.  

To understand why that is, consider the question in the awards submission form that asks about business benefits. It was worded exactly like this:

“How has this activity improved your organization's business results? Please be as specific as possible about business benefits like increased revenue, decreased cost, increased customer satisfaction, or decreased customer complaints. Please specify how you measure those benefits.”

The judges were looking for a response along the lines of:

  1. We heard these specific things from customers through our VoC program.
  2. As a result of what we heard, we made these specific changes.
  3. We then measured these specific results from those changes: this much in extra revenue, this much in cost savings, this big a rise in some metric that we correlate to economic benefit in this way. (There’s a sketch of that chain just below.)
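
To make that chain of evidence concrete, here’s a minimal sketch in Python. Every insight, action, and figure in it is a hypothetical placeholder; the point is only that each measured result stays attached to the specific change, and the specific customer insight, that produced it.

    # A minimal sketch of the insight -> action -> measured result chain.
    # All insights, actions, and figures below are hypothetical placeholders.
    voc_outcomes = [
        {
            "insight": "Customers abandon checkout when shipping costs appear late",
            "action": "Show shipping costs on the product page",
            "extra_revenue": 1_200_000,   # annual lift measured after the change
            "cost_savings": 0,
        },
        {
            "insight": "Billing questions drive a third of support calls",
            "action": "Redesign the billing statement",
            "extra_revenue": 0,
            "cost_savings": 450_000,      # fewer calls times cost per call, per year
        },
    ]

    # Total only the benefits that trace back to a specific VoC-driven change.
    total_benefit = sum(o["extra_revenue"] + o["cost_savings"] for o in voc_outcomes)
    print(f"Annual benefit tied to specific VoC-driven changes: ${total_benefit:,.0f}")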

And we did hear that exact kind of response from a few companies — firms that are heavily represented in our top 10, as it turns out. And from many other companies, we heard something that was good but considerably less compelling. That story went something like:

  1. Using VoC data and some economic modeling, we proved that a one-point increase in a key customer metric like NPS or ACSI (or some other measure) equates to this much extra revenue per customer per year.
  2. When we multiply that extra revenue across the number of customers who showed an increase in that key customer metric, we calculate this much extra business benefit. (That arithmetic is sketched just below.)
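
For contrast, here’s a minimal sketch of the second approach’s arithmetic, again with entirely hypothetical figures. Notice that nothing in it points to a specific change the VoC program drove.

    # A minimal sketch of the metric-to-revenue approach.
    # All figures below are hypothetical placeholders.
    revenue_per_point = 12.50          # modeled extra revenue per customer per year, per metric point
    average_point_increase = 4         # observed rise in the key metric (e.g., NPS)
    customers_with_increase = 250_000  # customers whose score went up

    # Benefit is estimated by multiplication, not traced to a specific change.
    estimated_benefit = revenue_per_point * average_point_increase * customers_with_increase
    print(f"Estimated annual benefit: ${estimated_benefit:,.0f}")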

So why is the first approach I outlined so much better than the second? It’s because the first approach draws a direct line from the VoC program to business benefit, whereas the second approach only shows how useful the VoC program is in proving that customer experience (or some loyalty metric) delivers business benefit.

In other words: The VoC program produces benefit versus the VoC program proves benefit. It’s that simple. If you were looking to defend the budget for your VoC program, which approach would you rather be armed with when going into this year’s budget process? Personally, I want the slide deck that shows that I’m a player and not just a scorekeeper.

Of course, either of those approaches beats a third way of trying to prove business benefit. We saw it a lot, and it went something like this:

  1. We listen to customers.
  2. We respond to customers.
  3. Here are some macro-level business results like a rise in overall revenue or EPS that we’d like to take full credit for.

As judges of these awards, we can’t take a leap of faith that “if we listen, profits will come.” And if you were an entrant, would you really want to claim that your VoC program was the only thing that made your company’s revenue rise this year? Is that something you’d say to your CEO? Because even though there’s good reason to believe that a great VoC program will contribute to business results, even VoC enthusiasts have to acknowledge that there are other factors in the mix like the economic climate, competitive environment, products, pricing, distribution . . . you get the picture.

With that in mind, I’d like to offer up one more observation and then a piece of advice.

The observation is that this year’s field of contenders was very strong. The entries were even stronger than last year’s, when we all agreed they were a big improvement over the previous year of the competition. That demonstrates real progress in terms of the impact that VoC programs have on customer experience improvement efforts in particular and on business results in general. And you know what’s really encouraging? We saw some programs that were in startup mode in 2010 and look like they’ll be serious contenders in 2012.

The implication is clear: Companies that want to win next year should not chase this year’s bar for excellence; they should shoot even higher.

That leads to my advice. If you’re running a VoC program, please take the time to think through how you’ll measure and prove your program’s impact on your business and your customers. Do it not because you want an award (although those are nice, too) but because it will crystallize your thinking about why you’re listening to customers and how the insight you gain can create benefits for your organization. If you take this advice, be sure to follow the chain of evidence from insight to action to results. That’s sure to win you supporters at your company and maybe even some extra budget so you can do even more good.