Our friends at Aberdeen Group apply a highly standardized research process to technology issues. They field a survey that asks companies about their business performance and about the business processes, organization, knowledge management, technologies and performance measures related to a given technology. They then divide the companies into leaders (“best-in-class”), laggards and industry average based on their business performance, and compare the replies of the different groups. The not-quite-stated implication is that the differences in performance are caused by differences in the other factors. This is not necessarily correct (the ever-popular post hoc ergo propter hoc fallacy), and you could also wonder about the sample size (usually around 200) and how accurately people can answer such detailed questions. But so long as you don’t take the studies too seriously, they always give an interesting look at how firms at different maturity levels manage the technologies at hand.
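For readers who like to see the mechanics, here is a minimal sketch of that analysis pattern: tier companies by a performance measure, then compare practice adoption across tiers. Everything in it, including the tier cutoffs, is invented for illustration; Aberdeen's actual methodology is more elaborate.

```python
# A minimal sketch (in pandas) of the analysis style described above.
# The survey data, the performance metric and the tier cutoffs are all
# invented; Aberdeen's actual process is more elaborate.
import pandas as pd

# Hypothetical survey responses: one row per company.
df = pd.DataFrame({
    "lead_conversion_rate": [0.12, 0.31, 0.08, 0.22, 0.27, 0.05, 0.18, 0.35],
    "uses_lead_scoring":    [False, True, False, True, True, False, True, True],
})

# Tier companies by business performance: top 20% "best-in-class",
# bottom 30% "laggards", the rest "industry average".
pct = df["lead_conversion_rate"].rank(pct=True)
df["tier"] = pd.cut(pct, bins=[0, 0.3, 0.8, 1.0],
                    labels=["laggard", "average", "best-in-class"])

# Compare practice adoption across tiers. Note that this shows
# correlation only, not causation.
print(df.groupby("tier", observed=True)["uses_lead_scoring"].mean())
```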
It so happens that three of the Aberdeen studies have been sitting on my desk for some time, so I had a chance to look at them together. The topics were Lead Nurturing, Trigger Marketing and Cross-Channel Campaign Management. All are currently available for free although the sponsors may contact you in exchange.
Since the Aberdeen reports all follow a similar format, it’s easy to compare their contents. From the perspective of marketing performance measurement, they contain two elements of interest. These are the performance measures highlighted as distinguishing best-in-class companies, and the role of measurement among recommended strategic actions. Here’s a brief look at each of these in the three reports:
Lead Nurturing. The report highlighted number of qualified leads and lead-to-close ratio as critical performance measures, and found that 77% of best-in-class companies were tracking them. It also recommended tracking revenue associated with leads, although it found only 35% of best-in-class companies could do this. But otherwise, it didn’t see performance measurement as a central issue: the primary focus was on matching marketing messages to the prospect’s current stage in the buying cycle. Other important strategies were leveraging multiple channels, identifying the prospect’s buying cycle and needs, and using automated lead scoring to move customers through the cycle.
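For concreteness, here is how those two highlighted measures work out arithmetically, using invented funnel counts:

```python
# Back-of-the-envelope versions of the two highlighted measures,
# using invented funnel counts.
qualified_leads = 1200        # leads that passed qualification criteria
closed_deals = 90             # qualified leads that became customers
lead_revenue = 540_000.00     # revenue attributed to those deals

lead_to_close_ratio = closed_deals / qualified_leads         # 0.075, i.e. 7.5%
revenue_per_qualified_lead = lead_revenue / qualified_leads  # $450.00

print(f"Lead-to-close ratio: {lead_to_close_ratio:.1%}")
print(f"Revenue per qualified lead: ${revenue_per_qualified_lead:,.2f}")
```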
Trigger Marketing. This report did not identify particular marketing measures as critical, although it did say that having defined performance goals for trigger marketing programs is important. It reported the most common measure is change in response rates, used by 69% of all respondents. (The next most common measure, change in retention rates, was used by just 54%.) I take this as a sign of immaturity (among the respondents, not Aberdeen), since response rate is a primitive measure compared with profitability and return on marketing investment, which were used by 43% and 42% respectively. This is consistent with another finding: the most common strategic action is to “link trigger marketing activities to increased revenues and other business results” (32%). I interpret that as meaning people are just learning to make that linkage and are simply using response rate until they figure it out. It might be worth noting that the Aberdeen analyst highlighted digital dashboards as the next step for best-in-class companies wishing to do still better, although I didn’t see a particularly compelling case for selecting that over other possible activities. But I’m all in favor of dashboards, so I’m glad to see it.
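To make the gap between the measures concrete, here is a quick sketch with made-up campaign numbers; the margin and cost assumptions are placeholders, not benchmarks:

```python
# Contrast of the "primitive" and mature measures, with invented campaign
# figures. The margin and cost assumptions are placeholders, not benchmarks.
messages_sent = 50_000
responses = 1_750
incremental_revenue = 120_000.00  # revenue attributed to the trigger program
gross_margin_rate = 0.40
program_cost = 30_000.00

response_rate = responses / messages_sent                        # 3.5%
profit = incremental_revenue * gross_margin_rate - program_cost  # $18,000
romi = profit / program_cost                                     # 60%

print(f"Response rate: {response_rate:.1%}")
print(f"Program profit: ${profit:,.2f}")
print(f"Return on marketing investment: {romi:.0%}")
```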
Cross-Channel Campaign Management. Again, the report doesn’t specify particular performance measures. It does say that it’s important to optimize future campaigns based on past performance (pretty obvious) and highlights real-time tracking of results across channels (less obvious, and I’m not so sure I agree: immediate results may not in fact correlate with long-term profitability). This report did include segmentation and analytics as strategic actions. (I consider these part of performance measurement.) In particular, it stressed that best-in-class companies were focused on identifying their high-value customers and treating them uniquely. Most of the recommendations, however, were about building the infrastructure needed to coordinate marketing messages across channels, and then executing those coordinated campaigns.
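As a simple illustration of that high-value customers point, here is a sketch that flags the top customers by revenue; a real analysis would use profitability or lifetime value, and the cutoff is arbitrary:

```python
# A toy version of the "identify high-value customers" step: flag the top
# 20% of customers by revenue. A real analysis would use profitability or
# lifetime value, and the cutoff here is arbitrary.
customer_revenue = {
    "C001": 12_400, "C002": 310, "C003": 5_800,
    "C004": 47_000, "C005": 990, "C006": 8_250,
}

ranked = sorted(customer_revenue, key=customer_revenue.get, reverse=True)
cutoff = max(1, round(len(ranked) * 0.2))
high_value = set(ranked[:cutoff])

print("High-value customers:", high_value)  # {'C004'} with these numbers
```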
So where does this leave us? I don’t draw any grand lessons from these three reports, except to note that financial measures (i.e., customer profitability and return on investment) don’t play much of a role in any of them. Even that probably just confirms that such measures are not widely available, which we already knew. But it’s good to know that people are working on performance measurement and that Aberdeen is baking it into its research.
Thursday, December 11, 2008
Survey: Marketing Accountability Measures Remain Weak
Every year since 2005, the Association of National Advertisers and vendor MMA (Marketing Management Analytics) have joined forces to produce a survey on marketing accountability. Although the details change each year, the general results have been sadly consistent: marketers, finance executives and senior management are very unhappy with their marketing measurement capabilities.
In the 2008 study, released in July and just recapitulated in a new MMA white paper, only 23% of the marketers were satisfied with their metrics for marketing’s impact on sales, and just 19% were satisfied with metrics showing marketing impact on ROI and brand equity.
Furthermore, only 14% of the marketers felt their senior management had confidence in marketing’s forecasts of sales impact. And even this is probably optimistic: a separate MMA-funded study, also cited in the new white paper, found that only 10% of financial executives use marketing forecasts to help set the marketing budget.
The obvious question is why so little progress has been made. Marketers consistently rank performance measurement as their top priority (for example, see the CMO Council’s Marketing Outlook 2008 survey). Nor are marketers doing this out of the goodness of their hearts: they know that being able to show the impact of their expenditures is the best way to protect and grow their budgets. So marketers have every reason to work hard at developing performance measures that finance and senior management will accept.
And yet...when the ANA survey asked marketers to rank their accountability challenges, the top score (45%) went to “understanding the impact of changes in consumer attitudes and perceptions on sales”. This strikes me as odd, if the marketers’ ultimate goal is to understand the impact of marketing programs on sales. Measuring the impact of marketing programs and measuring the impact of customer attitudes are not the same thing.
Nor is this a simple fluke of the wording. A separate question showed the most common accountability investment was in “brand and customer equity models” (53%). These also measure the link between attitudes and sales.
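For readers unfamiliar with such models, here is a toy illustration of the attitude-to-sales link they estimate; the real models are far more sophisticated, and every number below is invented:

```python
# A toy illustration of the attitude-to-sales link such models estimate.
# Real brand and customer equity models are far more sophisticated, and
# every number here is invented.
import numpy as np

attitude_score = np.array([5.2, 6.1, 4.8, 7.0, 6.5, 5.9])  # e.g. survey index
unit_sales = np.array([410, 530, 380, 640, 570, 500])

# Ordinary least squares fit: sales = slope * attitude + intercept.
slope, intercept = np.polyfit(attitude_score, unit_sales, 1)
print(f"Estimated sales lift per attitude point: {slope:.0f} units")
```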
One explanation for the disconnect would be that marketers can already measure the relationship between marketing programs and consumer attitudes, so they can complete the analysis by adding the link between attitudes and sales. This seems a bit optimistic, especially since it assumes that marketers also understand the impact on sales of marketing programs that are not aimed at consumer attitudes, such as price and trade promotions.
A more plausible explanation would be that the link between attitudes and sales is the hardest thing to measure, so that’s where marketers put their effort. Or, maybe that relationship is the question that marketers find most intriguing because, well, that’s the sort of thing they care about. A cynic might suggest that marketers don’t want to measure the link between marketing programs and sales because they don’t want to know the answer. But even the cynic would acknowledge that marketers need a way to justify their budgets, so that can’t be it.
None of these answers really satisfies me, but let’s put this question aside. I think we can safely assume that marketers really do want to measure their performance. This leaves the question of why they haven’t made much progress in doing it.
One reason could be that they simply don’t know how. Marketing measurement is truly difficult, so that’s surely part of it.
Another possibility is that they know how, but lack the resources. Since good marketing measurement can be quite expensive, this is probably part of the problem as well. Remember that the resources involved will ultimately come from the corporate budget, so finance departments and senior management must also agree that marketing measurement is the best thing to spend them on. And, indeed, this doesn’t seem to be their priority. The white paper states that “the number of CEOs and CFOs championing marketing accountability programs within their firms remained negligible and unchanged from 2007.”
This is a pretty depressing conclusion, although to me it has the ring of truth. Fuss though they may, CEOs and CFOs are not willing to invest money to solve the problem. Indeed, our friend the cynic might argue that they are the ones with a motivation to avoid measurement, since it gives them more flexibility to allocate funds as they prefer.
The white paper doesn’t dwell on this. It just lists lack of senior management involvement as one of many obstacles. The authors then propose a four-step process for developing an accountability program:
- assess and benchmark existing capabilities and resources
- define an achievable future state, in terms of the business questions to answer and the resources required to answer them
- work with stakeholders to align metrics with corporate goals and key business questions
- establish a roadmap with a multi-year phased approach
There’s not much to argue with here. The paper also provides a reasonable list of success factors, including:
- realistic stakeholder expectations
- agreement on scope at the start of the project
- cross-functional team with clearly defined roles, responsibilities and communication points
- simple math and analytics
- integration of analytics for pricing, ROI, and brand analysis
Again, it’s all sound advice. Let’s hope you can get the resources to follow it.
Friday, December 5, 2008
TraceWorks' Headlight Integrates Online Measurement and Execution
I’ve been looking at an interesting product called Headlight from the Danish firm TraceWorks. Headlight is an online advertising management system, which means that it helps marketers to plan, execute and measure paid and unpaid Web advertising.
According to TraceWorks CEO Christian Dam, Headlight traces its origins to an earlier product, Statlynx, which measured the return on investment of search marketing campaigns. (This is why Headlight belongs on this blog.) The core technology of Headlight is still the ability to capture data sent by tags inserted in Web pages. These are used to track initial responses to a promotion and eventual conversion events. The conversion tracking is especially critical because it can capture revenue, which provides the basis for detailed return on investment calculations. (Setting this up does require help from your company's technology group; it is not something marketers can do for themselves.)
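To show why revenue capture matters, here is a generic sketch of how tag-reported conversion events could feed an ROI calculation; the event structure and field names are illustrative, not Headlight's actual tag payload:

```python
# A generic sketch of how tag-reported conversion events could feed an ROI
# calculation. The event structure and field names are illustrative; they
# are not Headlight's actual tag payload.
conversion_events = [
    {"campaign": "holiday_banner", "revenue": 89.00},
    {"campaign": "holiday_banner", "revenue": 142.50},
    {"campaign": "brand_search", "revenue": 65.00},
]
campaign_costs = {"holiday_banner": 150.00, "brand_search": 40.00}

# Roll up tag-reported revenue by campaign.
revenue_by_campaign = {}
for event in conversion_events:
    campaign = event["campaign"]
    revenue_by_campaign[campaign] = (
        revenue_by_campaign.get(campaign, 0.0) + event["revenue"])

# Compute return on investment per campaign.
for campaign, cost in campaign_costs.items():
    revenue = revenue_by_campaign.get(campaign, 0.0)
    roi = (revenue - cost) / cost
    print(f"{campaign}: revenue ${revenue:,.2f}, ROI {roi:.0%}")
```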
These tracking functions are now supplemented by others that let the system actually deliver banner ads, including both an ad serving capability and digital asset management of the ad contents. The system can also integrate with Google AdWords paid search campaigns, automatically sending tracking URLs to AdWords and using those URLs in its reports. It can also capture tracking URLs from email campaigns.
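Here is a sketch of what constructing such a tracking URL might look like; the parameter names are hypothetical, not Headlight's actual scheme:

```python
# A sketch of what constructing a tracking URL might look like. The
# parameter names are hypothetical, not Headlight's actual scheme.
from urllib.parse import urlencode

def build_tracking_url(landing_page, activity_id, source):
    """Append tracking parameters so the landing-page tag can attribute
    the visit back to a specific activity and traffic source."""
    params = urlencode({"hl_activity": activity_id, "hl_source": source})
    return f"{landing_page}?{params}"

print(build_tracking_url("https://example.com/offer", "act-1234", "adwords"))
# https://example.com/offer?hl_activity=act-1234&hl_source=adwords
```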
All this Web activity tracking may make Headlight sound like a Web analytics tool, but it’s quite different. The main distinction is that Headlight lets users set up and deliver ad campaigns, which is well outside the scope of Web analytics. Nor, on the other hand, does Headlight offer the detailed visitor behavior analysis of a Web analytics system.
The campaign management functions extend both to the planning that precedes execution and to the evaluation that follows it. The planning functions are not especially fancy but should be adequate: users can define activities (a term that Headlight uses more or less interchangeably with campaigns), give them start and end dates, and assign costs. The system can also distinguish between firm plans and drafts. TraceWorks expects to significantly expand workflow capabilities, including sub-tasks with assigned users, due dates and alerts of overdue items, in early 2009.
Evaluation functions are more extensive. Users can define both corporate goals (e.g., total number of conversions) and individual goals (related to specific metrics and activities) for specific users, and have the system generate reports that will compare these to actual results. Separate Key Performance Indicator (KPI) reports show selected actual results over time. In addition, something the vendor calls a “WhyChart” adds marketing activity dates to the KPI charts, so users can see the correlation between different marketing efforts and results. Summary reports can also show the volume of traffic generated by different sources.
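A goals-versus-actuals report of the kind described could be as simple as this sketch, with an invented goal structure and invented figures:

```python
# A minimal goals-versus-actuals report of the kind described above.
# The goal structure and the numbers are invented for illustration.
goals = {
    "total_conversions": {"target": 500, "actual": 430},    # corporate goal
    "newsletter_signups": {"target": 200, "actual": 240},   # individual goal
}

for metric, g in goals.items():
    attainment = g["actual"] / g["target"]
    status = "on track" if attainment >= 1.0 else "behind"
    print(f"{metric}: {g['actual']}/{g['target']} ({attainment:.0%}, {status})")
```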
The value of Headlight comes not only from the power of the individual features but also from the fact that they are tightly integrated. For example, the asset management portion of the system can show users the actual results for each asset in previous campaigns. This makes it much easier for marketers to pick the elements that work best and to make changes during campaigns when some items work better than others. The system can also be integrated with other products through a Web Service API that lets external systems call its functions for AdWords campaign management, conversion definition, activity setup, and reporting.
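I won't reproduce the API details here, so the endpoint, payload and authentication in this sketch are entirely hypothetical; it only suggests how an external system might register a new activity:

```python
# The endpoint, payload and authentication below are entirely hypothetical
# (placeholders, not TraceWorks' published API); this only suggests how an
# external system might register a new activity.
import json
import urllib.request

payload = json.dumps({"name": "spring_sale", "start": "2009-03-01",
                      "end": "2009-03-31", "cost": 2500.00}).encode()
request = urllib.request.Request(
    "https://api.example.com/headlight/activities",  # placeholder URL
    data=payload,
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer YOUR_API_TOKEN"},  # placeholder auth
    method="POST",
)
# Uncomment to send the request:
# with urllib.request.urlopen(request) as response:
#     print(response.read())
```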
Technology aside, I was quite impressed with the openness of TraceWorks as a company. The Web site provides substantial detail about the product, and includes a Wiki with what looks like fairly complete documentation. The vendor also offers a 14-day free trial of the system.
Pricing also seems quite reasonable. Headlight is offered as a hosted service, with fees ranging from $1,000 to $5,000 per month depending on Web traffic. According to Dam, the average fee is about $1,300 per month. Larger clients include ad agencies who use Headlight for their own clients.
Incidentally, the company Web site also includes an interesting benchmarking offer, which lets you enter information about your own company's online marketing and get back a report comparing you to industry peers. (Yes, I know a marketing information gathering tool when I see one.) At the moment, unfortunately, the company doesn't seem to have enough data gathered to report back results. Or maybe it just didn't like my answers.
TraceWorks released its original Statlynx product in 2003 and launched Headlight in early 2007. The system currently serves about 500 companies directly and through agencies.