Showing posts with label checklists. Show all posts

Friday, November 28, 2008

Judging the Value of Marketing Data

Last week’s post on ranking demand generation vendors highlighted a fundamental challenge in marketing measurement: the data you want often isn’t available. So a great deal of marketing measurement comes down to deciding which of the available data best suits your needs, and ultimately whether that data is better than nothing.

It’s probably obvious why using bad data can be worse than doing nothing, but in case this is read by, say, a creature from Mars: we humans tend to assume others are telling the truth unless we have a specific reason to question them. This innate optimism is probably a good thing for society as a whole. But it also means we’ll use bad data to make decisions which we would approach more cautiously if we had no data at all.

But how do you judge a piece of data? Here is a list of criteria presented in my book The MPM Toolkit, due in late January.

· Existence. Ok, this is pretty basic, but the information does have to exist. Let’s avoid the deeper philosophical issues and just say that data exists if it is recorded somewhere, or can be derived from something that’s recorded. So the color of your customers’ eyes only exists as data if you’ve stored it on their records or can look it up somewhere else. If the data doesn’t exist, you may be able to capture it. Then you have to compare the cost of capturing it with its value. But that’s a topic for another day.

· Accessibility. Can you actually access the data? To get back to last week’s post, we’d love to know the revenue of each demand generation vendor. This data certainly exists in their accounting systems, but they haven’t shared it with us so we can’t use it. Again, it’s often possible to gain access to information if you’re willing to pay the price, and you must once more compare the price with the value. In fact, the price / value tradeoff will apply to every factor in this list, so I won’t bother to mention it from here on out.

· Coverage. What portion of the universe is covered by the data? In the case of demand generation vendors, the number of blog posts was a poor measure of market attention because the available sources clearly didn’t capture all the posts. In itself, this isn’t necessarily a fatal flaw, since a fair sample could still give a useful relative ranking. But we can’t judge whether the coverage was a fair sample because we don’t know why it was incomplete. This is a critical issue when assessing whether, or more precisely how, to use incomplete data. (In the demand generation case, the very small numbers of blog posts added another issue, which is that the statistical noise of a few random posts could distort the results. This is also something to consider, although hopefully most of your marketing data deals with larger quantities.)

· Accuracy. Data may not have been accurate to begin with or it may be outdated. Data can be inaccurate because someone purposely provided false information or because the collection mechanism is inherently flawed. Survey replies can have both problems: people lie for various reasons and they may not actually know the correct answers. Even seemingly objective data can be incorrect: a simple temperature reading may be inaccurate because the thermometer was miscalibrated, someone read it wrong, or the scale was Celsius rather than Fahrenheit. Errors can also be introduced after the data is captured, such as incorrect conversions (e.g., inflation adjustments used to create “constant dollar” values) or incorrect aggregation (e.g., customer value statistics that do not associate transactions with the correct customers). In our demand generation example, statistics on search volume were highly inaccurate because the counts for some terms included results that were clearly irrelevant. As with other factors listed here, you need to determine the level of accuracy that’s required for your specific purpose and assess whether the particular source is adequate.

· Consistency. Individually accurate items can be collectively incorrect. To continue with the thermometer example, readings from some stations may be in Celsius and others in Fahrenheit, or readings from a single station may have changed from Fahrenheit to Celsius over time. This particular difference would be obvious to anyone examining the data, although it could easily be overlooked in a large data set that combined information from many sources. Other inconsistencies are much more subtle, such as changes in wording of survey questions or the collection mechanism (e.g., media consumption diaries vs. automated “people meters”). As with coverage, it’s important to understand any bias introduced by these factors. In our demand generation analysis, Compete.com used several different techniques to measure Web traffic, and it appeared that these yielded inconsistent results for sites with different traffic levels.

· Timeliness. The primary issue with timeliness is how quickly data becomes available. In the past, it often took weeks or months to gather marketing information. Today, data in general moves much more quickly, although some information still takes months to assemble. There is a danger that quickly available data will overwhelm higher-quality data that appears later. For example, initial response rate to a promotion is immediately available, but the value of those responses can only be measured over time. Decisions based only on gross response often turn out to be incorrect once the later performance is included in the analysis. Still, timely data can be extremely important when it can lead to adjustments that improve results, such as moving funds from one promotion to another. Online marketing in particular often allows for such reactions because changes can be made in hours or minutes, rather than the weeks and months needed for traditional marketing programs.

I haven’t listed cost as a separate consideration only because there are often incremental investments that can be made to change a data element’s existence, accessibility, coverage, etc. Those investments would change its value as well. But you will ultimately still need to assess the total cost and value of a particular element, and then compare it with the cost and value of other elements that could serve a similar purpose. This assessment will often be fairly informal, as it was in last week’s blog post. But you still need to do it: while an unexamined life may or may not be worth living, unexamined marketing data will get you in trouble for sure.
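If you want to make this assessment a little more systematic, the criteria above can be turned into a simple scorecard. Here’s a minimal sketch in Python; the class, the 0–5 scale, and the minimum threshold are all my own illustrative choices, not anything from the book:

```python
from dataclasses import dataclass

@dataclass
class DataSourceScore:
    """Rate a candidate data source 0 (unusable) to 5 (excellent)
    on each of the six criteria discussed above."""
    existence: int
    accessibility: int
    coverage: int
    accuracy: int
    consistency: int
    timeliness: int

    def usable(self, minimum: int = 2) -> bool:
        """A source must clear the bar on every criterion: a near-zero
        on any one (e.g., the data barely exists) cannot be offset by
        high scores elsewhere."""
        return all(score >= minimum for score in
                   (self.existence, self.accessibility, self.coverage,
                    self.accuracy, self.consistency, self.timeliness))

# Example: blog-post counts from last week's vendor ranking -- the data
# exists and is accessible, but coverage and accuracy were weak.
blog_counts = DataSourceScore(existence=5, accessibility=4, coverage=1,
                              accuracy=2, consistency=3, timeliness=4)
print(blog_counts.usable())  # False: coverage falls below the bar
```

The point of the all-criteria minimum is exactly the one made above: bad data is worse than no data, so a single failing criterion should disqualify the source no matter how well it scores elsewhere.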

Friday, November 7, 2008

Cognos Papers Propose Sales and Marketing Metrics

I’ve always felt that defining a standard set of marketing measures is like prescribing medicine without first examining the patient. But people love those sorts of lists, and they offer a starting point for a more tailored analysis. So I guess they have some value.

Based on that somewhat crotchety premise, I’ll call your attention to a pair of papers from Cognos on “Delivering the reports, plans, & metrics Sales needs” and “Delivering reports, plans, and metrics for better Marketing” (idiosyncratic capitalization in the original). These are widely available on the Internet; you can find both easily with a search at IT Toolbox.

Since the whole point of standard measures is to be broadly applicable, I suppose it’s a compliment to say that the measures in these papers are reasonable if not particularly exciting. One point they do illustrate is the difference between marketing and sales, which are often conflated into a single entity but are in fact quite distinct. Let’s look at the metric categories for each:

- Sales: sales results; customer/product profitability; sales tactics; sales pipeline; and sales plan variance.

- Marketing: market opportunities; competitive positioning; product life cycle management; pricing; and demand generation.

It’s surely a cliché, but these measures suggest that marketing is strategic while sales is almost exclusively tactical. That’s a bit blunt but it sounds about right to me.

Given my admittedly parochial focus on demand generation these days (see www.raabguide.com), I couldn’t avoid noticing that Cognos gave demand generation just one of its five marketing slots. That seems a bit underweighted, given that it probably accounts for the bulk of most marketing budgets. But I do have to agree that strategically, marketing should be spending its time on those other topics too.

The papers list specific measures within each category. It’s going to be as boring to type these as you’ll find it to read them, but I guess it’s worth the trouble to have them readily available for future reference. So here goes:

Sales metrics:

Sales results
- new customer sales
- sales growth
- sales orders

Customer/product profitability
- average customer profit, lifetime profit and net profit
- net sales
- gross profit
- customer acquisition and retention cost
- sales revenue
- units sold

Sales tactics
- average selling price
- direct cost (of sales efforts)
- discount
- sales calls and sales rep days
- sales orders
- units quoted

Sales pipeline
- pipeline ratio (they don’t define this; I’m not sure what they mean. Maybe distribution by sales stage)
- pipeline revenue
- sales orders and conversions
- cancelled order count
- active and inactive customers
- inquiries
- new customers and lost business

Sales plan variance
- sales order variance
- sales plan variance
- sales growth rate variance
- units ordered and sold variance

You’ll notice a bit of overlap across groups, and I’m not sure why “Sales plan variance” is a separate area: I would expect to measure variances against plan for everything. The list is also missing a few common measures such as profit margin (which shows the net impact of decisions regarding product mix, pricing and discounts), actual vs. potential sales (hard to measure but critical), lead-to-customer conversion rates, and win ratios in competitive deals.

Marketing metrics:

Market opportunities
- company share
- market growth
- market revenue
- profit
- sales

Competitive positioning
- competitor growth
- competitor price change
- competitor share
- competitor sales
- market growth
- market revenue and profit
- sales

Product life cycle management
- new products developed
- new product growth, share, & profit
- new competitor product sales & growth
- market growth
- brand equity score
- new product share of revenue

Pricing
- price change
- sales
- price segment share and growth
- discount ($)
- discount spread (%)
- list price, net price, & average price
- price elasticity factor
- price segment sales and value

Demand generation
- marketing campaigns (#)
- marketing spend
- marketing spend per lead
- qualified leads (#)
- promotions ROI
- baseline and incremental sales

If these weren’t two separate papers, I’d say the author had gotten tired by the time she wrote this one. We see even more redundancy (sales appears in three of the five lists) and “brand equity score” sticks out like a moose at a Sarah Palin rally. (Now there’s a joke that will age quickly.) It’s interesting that the competitive measures provide some of the relative performance information that was lacking in the sales metrics, and that reporting on profit addresses to some degree my earlier question about margins. Is the author implicitly suggesting that sales shouldn’t be held accountable for such things? I disagree. On the other hand, measures of customer value or quality are all assigned to sales. I think marketing is primarily responsible for that one.

Well, that’s interesting: I hadn’t really planned to criticize these measures when I sat down to write this, but now that I look more closely, I do have some objections. It honestly doesn’t seem fair to be harsh, since any list can be criticized. Maybe I’m just crotchety after all. In any event, you can add this list to your personal inventory of metrics to consider for your own business. Maybe something in it will prove useful.

Tuesday, August 26, 2008

Measuring the Value of a Marketing Measurement Project - Part 2

The first post in this series explained why I might create a standard spreadsheet to measure the value of a marketing performance measurement (MPM) project. In a traditional consulting engagement, this measurement would be a custom analysis tailored specifically to the project at hand. In the self-service world, this is an unavailable luxury. I’m starting with value measurement because I’m incurably linear and the first question to answer about a project is what value the client hopes to receive. The answer drives everything else.

In creating a generic project value form, the trick is to define a set of categories that are specific enough to be useful yet broad enough to cover all the possible cases. My inner Platonist wants to start with a general value formula such as value=revenue – costs, and then subdivide each element. But how would you know if the subcomponents corresponded to the value drivers of actual MPM projects? It’s better to start with a sample of MPM projects, identify their value drivers, and then see if these can be joined as components of a single formula. (Philosophy majors everywhere will recognize the difference between deductive and inductive reasoning. But I digress.)

A reasonable list of typical MPM projects would include marketing mix models, brand value studies, response measurements, Web analytics, social media measurement, and operational process measurements. Except for the final category, these all help allocate marketing resources to the most effective use. In contrast, operational processes help the department perform its internal functions more efficiently. This distinction immediately suggests breaking the value formula into two primary components: value received and marketing operations.

Of course, value received is the more important of the two, particularly if the calculation includes non-overhead marketing costs such as advertising, discounts and channel promotions. One way to subdivide value is to consider that a typical marketing plan will be divided among customer acquisition, development and retention programs. Of these, acquisition and retention focus on number of customers, while development focuses on value per customer. It therefore makes sense to calculate value as the product of these factors (i.e., value= number of customers x value per customer). Since many companies are more product-oriented than customer-oriented, value per customer could further be divided into value per unit and units per customer. Value per unit, in turn, could be split into revenue per unit, product cost per unit (cost of goods sold, shipping, etc.), and marketing cost per unit.

A single value for “number of customers” doesn’t really capture the dynamic between acquisition and retention rates, so it too must be broken into pieces. The basic formula is number of customers = (existing customers + customers added – customers lost).

The result of all this is a value formula with the following elements:

net value = value received – marketing operations cost

value received = number of customers x units per customer x value per unit

number of customers = existing customers + customers added - customers lost
units per customer (possibly broken down by product mix)
value per unit = revenue per unit – product cost per unit – marketing cost per unit
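Strung together, the elements above make one small calculation. Here’s a sketch in Python; the function name and the sample numbers are purely illustrative:

```python
def net_value(existing_customers, customers_added, customers_lost,
              units_per_customer, revenue_per_unit,
              product_cost_per_unit, marketing_cost_per_unit,
              marketing_operations_cost):
    """net value = value received - marketing operations cost,
    built up from the component formulas above."""
    number_of_customers = existing_customers + customers_added - customers_lost
    value_per_unit = (revenue_per_unit - product_cost_per_unit
                      - marketing_cost_per_unit)
    value_received = number_of_customers * units_per_customer * value_per_unit
    return value_received - marketing_operations_cost

# Illustrative numbers only: 1,000 existing customers, 200 added, 100 lost,
# 3 units per customer, $15 value per unit ($50 - $30 - $5), $20,000 ops cost.
print(net_value(1000, 200, 100, 3, 50, 30, 5, 20000))  # 29500
```

Nothing here is sophisticated; the value of writing it down is that every MPM project benefit must ultimately flow through one of these eight inputs.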

Now let’s do a reality check against our list of MPM projects:

- marketing mix models include product mix, pricing, advertising, and channel promotions as their major components.

- product mix is covered by revenue per unit and/or units per customer.

- pricing is covered by revenue per unit and/or marketing cost per unit, depending on how you treat discounts, coupons, etc.

- advertising is covered by marketing cost per unit.

- channel promotions are covered by product cost per unit and/or marketing cost per unit.

- brand value studies measure the relation of consumer attitudes to behaviors such as trial, retention and consumption rates. These are covered by existing customers, customers added, customers lost, and possibly by units per customer. A more formal sales funnel could easily fit into this section of the formula if appropriate.

- response measurements are covered by customers added and marketing cost per unit.

- Web analytics projects encompass a range of objectives such as lower cost per order, improved conversion rates and higher revenue per visitor. These are covered respectively by marketing cost per unit, number of new customers, and a combination of units per customer and revenue per unit. Other objectives could also probably be covered by the formula components.

- social media measurements are like brand value measurements: they relate messages to attitudes to behaviors. They would also be covered by changes in customer numbers and units per customer.

- operational process measurements are covered directly by marketing operations cost.

So it looks like the proposed set of variables lines up reasonably well with the value drivers of typical MPM projects. This means that project planners should be able to define the expected benefits in terms of these variables without too many intermediate calculations. Once they’ve done this, the actual value calculation is purely mechanical.

The final set of inputs would look like this (with apologies for poor formatting):

inputs..................current value....expected value......change

Number of Customers:
existing customers........xxxx............xxxx...................xxxx
+ customers added........xxxx............xxxx...................xxxx
- customers lost.............xxxx............xxxx...................xxxx

Units per customer:......xxxx............xxxx...................xxxx

Value per Unit:
revenue per unit............xxxx............xxxx...................xxxx
- product cost per unit...xxxx............xxxx...................xxxx
- marketing cost/unit.....xxxx............xxxx...................xxxx

Marketing Ops Cost.......xxxx............xxxx...................xxxx

Based on those inputs, the final calculation is:

Value Received (= Number of Customers x Units per Customer x Value per Unit)
- Marketing Ops Cost
= Net Value
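To make the mechanics of the grid concrete, here’s a sketch of the same calculation run over hypothetical current and expected inputs (every number here is invented for illustration; real projects would plug in their own figures):

```python
# Hypothetical inputs in the same layout as the grid above:
# each entry is (current value, expected value).
inputs = {
    "existing_customers":      (10000, 10000),
    "customers_added":         (1500, 1800),
    "customers_lost":          (800, 700),
    "units_per_customer":      (2.0, 2.0),
    "revenue_per_unit":        (40.0, 40.0),
    "product_cost_per_unit":   (22.0, 22.0),
    "marketing_cost_per_unit": (6.0, 6.0),
    "marketing_ops_cost":      (50000.0, 55000.0),
}

def project_net_value(v):
    """Value Received - Marketing Ops Cost, per the calculation above."""
    customers = v["existing_customers"] + v["customers_added"] - v["customers_lost"]
    value_per_unit = (v["revenue_per_unit"] - v["product_cost_per_unit"]
                      - v["marketing_cost_per_unit"])
    return customers * v["units_per_customer"] * value_per_unit - v["marketing_ops_cost"]

current  = project_net_value({k: pair[0] for k, pair in inputs.items()})
expected = project_net_value({k: pair[1] for k, pair in inputs.items()})
print(f"current={current:.0f} expected={expected:.0f} change={expected - current:.0f}")
```

In this invented scenario the project adds 300 acquisitions and saves 100 defections at a cost of $5,000 in additional operations spending, and the “change” column is the number the project sponsor actually cares about.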

OK, so now we have a formula that calculates business value using inputs relevant to marketing measurement projects. But the real work is deciding what values to apply to those inputs. Part 3 of this sequence will talk about that.

Monday, April 21, 2008

MPM System Checklist

I’m planning a series of posts to review individual MPM reporting systems. Before doing that, it will help to set a framework of items to consider. I may expand this over time.

Inputs:

  • treatments given to individual customers across all channels (this refers primarily to marketing messages, but could also include operational messages such as bills and service interactions)

  • customer behavior history (purchases, refunds, service requests, etc.; this includes costs and revenues)

  • customer attributes (demographics, location, needs, etc.; these should be standardized and grouped into audience segments to make analysis easier)

  • treatment attributes (campaigns, products offered, pricing, positioning, media cost, etc. These should also be standardized and aggregated for analysis.)

  • external conditions (competitive advertising and promotions, economic conditions, weather, etc.)

Processes:

  • customer data integration (to associate all information related to individual customers. This is the clichéd 360 degree view.)

  • response attribution (to measure the influence of each treatment on subsequent customer behavior. This is a complex analytical process if done in depth; otherwise, it can be as simple – and simplistic – as directly matching responses to promotions.)

  • brand value attribution (to measure the influence of each treatment on brand value. This is an unholy marriage of two frighteningly obscure methodologies: response attribution with brand value measurement.)

  • predictive modeling and optimization (to recommend treatments during interactions and for outbound campaigns. This would also recommend optimized spending levels across channels based on business objectives and constraints.)

  • lifetime value calculations. (These include projections for audience segments and estimated incremental lifetime value impact of specific treatments.)
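To make the last bullet a bit more concrete, lifetime value projections of this kind are usually built from a retention rate, a margin per period, and a discount rate. Here’s a minimal sketch (the function and its numbers are my own illustration, not any particular vendor’s method):

```python
def lifetime_value(annual_margin, retention_rate, discount_rate, years=10):
    """Project discounted customer lifetime value: each year, the
    surviving fraction of the original customer contributes its margin,
    discounted back to the present."""
    ltv = 0.0
    for year in range(years):
        survival = retention_rate ** year            # fraction still active
        ltv += annual_margin * survival / (1 + discount_rate) ** year
    return ltv

# Illustrative numbers only: $100 annual margin, 80% retention, 10% discount.
print(round(lifetime_value(100, 0.8, 0.10, years=3), 2))  # 225.62
```

An MPM system would run a projection like this per audience segment, then estimate a treatment’s incremental impact as the difference in projected LTV with and without the treatment.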

Outputs:

  • campaign reports (Basic information includes quantities, costs, and responses. Financial information includes return on investment, cash flow and profits by period. The financial results are based on response attribution. Depending on how attribution is handled, reports may show only direct effects or show direct effects plus long-term impacts.)

  • treatment reports (Basic information shows how often different treatments are delivered. Advanced information shows the impact of treatments and treatment attributes on campaign response, customer value and brand value. Like campaign reports, treatment reports are based largely on response attribution.)

  • customer reports (Basic information includes segment counts, demographic profiles and behavior profiles. Trends show changes in segments and migration of customers among segments. Customer reports also include levels and changes in lifetime value.)

  • Brand value reports (These show current brand value and changes in brand value. They report both aggregate values and details for brand value components.)

Key considerations within each of these topics include:

  • the scope of data included (for example, which channels)

  • degree of detail (or “granularity” if you want to impress someone. For example, is customer information reported for individuals, audience segments or campaigns? What is the level of financial detail?)

  • data latency (how long does it take before new data becomes available to the system?)

  • implementation effort (time, skill levels, flexibility)

In addition, there are the standard considerations for any system:

  • delivery model (hosted vs. on-premise)

  • technical skills required to deploy and maintain

  • end-user skills required to benefit from the system

  • costs (hardware, software and services; implementation vs. on-going operation)

  • scalability (relative to your own data volumes and user counts)

  • vendor experience and stability

  • technologies used (databases, .NET vs. J2EE, ‘fat client’ vs. browser-based, etc.)

A word of caution: any list like this favors products with many features, even though some of those features may be poorly implemented. I consider MPM systems to be in an early stage of development, which means there will be a variety of specialist products (a.k.a. “point solutions” or “best of breed”) that do just a few things but do them very well. This will change over time as requirements for different functions become better understood and standard practices emerge. For now, it’s probably more important to make sure any given product meets your critical needs than to try to find a one-size-fits-all solution that does everything. It’s a safe bet that the major platform vendors (of enterprise systems like SAP or Oracle, or marketing systems like Unica, Aprimo, SAS or Teradata) will eventually provide comprehensive solutions, so anything you buy now will only last a few years at best.