Tuesday, September 29, 2009

eMarketer recently released a deeply researched report on Online Brand Measurement. Since it touched on several topics I’ve been pondering recently (see Web Analytics Is Dead… on my Customer Experience Matrix blog), I read it with particular care.
This is a long report (58 pages), so I won’t review it in detail. But here are the points that struck me as critical:
- Web measurement has largely focused on counting views and clicks, not measuring long-term brand impact. Counting is much easier, but it doesn’t capture the full value of any Web advertisement. One result has been that marketers overspend on search ads, which are great at generating immediate response, and underspend on Web display ads, which influence long-term behavior even if they don’t generate as many click-throughs.
- Media buyers want Web publishers to provide the equivalent of Gross Rating Points (GRPs), so they can effectively compare Web ad buys with purchases in other media. That’s okay as far as it goes, but it’s still just about counting, not about measuring the quality or impact of the impressions. As the paper points out, even engagement measures, such as time on site or mentions in social media, don’t necessarily equate to positive brand impact.
- Just about everyone agrees that the right way to measure brand impact is to tailor measurements to the goal of a particular marketing program. This may sound like a conflict with the desire for a standard GRP-like measure, but it really reflects the distinction between counting the audience and measuring impact. GRPs work fine for buying media but not for assessing results. Traditional media face precisely the same dichotomy, which is why marketing measurement is still a puzzle for them as well. And just as most offline brand measures are ultimately based on surveys and panels, I'd expect most online brand measures will be too.
- Meaningful impact measurement will integrate several data types, including online behaviors, visitor demographics, offline marketing activities and actual purchase behavior. These will come from a combination of direct online sources (i.e., traditional Web analytics), panel-based research and surveys (for audience and attitudinal information), and offline databases (for demographics and purchases). Ideally these would be meshed within marketing mix models and response attribution models that would estimate the incremental impact of each marketing program and allow optimization. But such sophisticated models won’t appear tomorrow. (A minimal sketch of what such a mix model involves appears after this list.)
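To make the mix-model idea concrete, here is a minimal sketch of the kind of model the report envisions: media spend is carried forward through time with an adstock transformation, then regressed against sales to estimate each channel’s incremental impact. Everything here (the channels, the decay rate, the synthetic data) is my own illustrative assumption, not anything drawn from the eMarketer report.

```python
# Illustrative marketing mix model sketch (my own toy example, not from the
# eMarketer report; channel names, decay rate and data are all invented).
import numpy as np

def adstock(spend, decay=0.7):
    """Carry a share of each week's spend forward into later weeks."""
    carried = np.zeros_like(spend, dtype=float)
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

rng = np.random.default_rng(0)
weeks = 104
search = rng.uniform(10, 50, weeks)   # weekly search spend ($000)
display = rng.uniform(10, 50, weeks)  # weekly display spend ($000)

# Synthetic "truth": search pays back immediately, display builds over time.
sales = 200 + 2.0 * search + 1.2 * adstock(display) + rng.normal(0, 10, weeks)

# Regress sales on the transformed spend to recover each channel's impact.
X = np.column_stack([np.ones(weeks), search, adstock(display)])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"base={coefs[0]:.0f}, search lift={coefs[1]:.2f}, display lift={coefs[2]:.2f}")
```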
To me, the final point above is the most important because it points to a “grand unification theory” of marketing measurement that combines the existing distinct disciplines and sources. The paper cites numerous current efforts, including:
- multimedia databases being created (separately) by panel-based measurement firms including comScore, Nielsen, Quantcast and TNS Media Compete;
- Datatran Media’s Aperture, which combines email and postal addresses with Acxiom household data, IXI financial data, MindSet Marketing healthcare data and NextAction retail data;
- a joint effort between Omniture and WPP’s Kantar Group that combines data from email, search, display ads and traditional media;
- another Nielsen project combining TV ad effectiveness information from Nielsen IAG with panel purchase data from Nielsen Homescan.
These all reinforce the claim I made in last week’s blog post that individual data will increasingly be combined with panel- and survey-based information to provide community-level insights that are actually more valuable than individual data alone.
Monday, July 6, 2009
CMO Council Study: Customer Loyalty Is Fleeting
The CMO Council and Catalina Marketing’s Pointer Media Network recently released a major study on consumer loyalty in packaged goods brands. The study, Losing Loyalty: The Consumer Defection Dilemma™, draws on Catalina’s vast loyalty card transaction database to analyze the individual buying patterns of more than 32 million consumers in 2007 and 2008 across 685 leading CPG brands.
The bottom line is that “loyal” consumers are not as reliable as most of us might have guessed. “For the average brand in this study, 52% of highly loyal consumers in 2007 either reduced loyalty or completely defected from the brand in 2008.” You can download the 12-page report for details.
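As an illustration of how such loyalty-card analysis works mechanically, the sketch below classifies each consumer by the brand’s share of their category purchases in consecutive years and computes the share who reduced loyalty or defected. The 70% loyalty threshold, the column names and the sample data are my own assumptions; the report does not publish Catalina’s exact definitions.

```python
# Illustrative loyalty-defection calculation (threshold and schema assumed,
# not Catalina's actual methodology).
import pandas as pd

# One row per consumer: the brand's share of their category purchases each year.
df = pd.DataFrame({
    "consumer_id": [1, 2, 3, 4, 5],
    "share_2007":  [0.85, 0.90, 0.75, 0.20, 0.95],
    "share_2008":  [0.80, 0.30, 0.00, 0.25, 0.72],
})

LOYAL = 0.70  # assumed cutoff for "highly loyal"

loyal_2007 = df[df["share_2007"] >= LOYAL]
defected = loyal_2007["share_2008"] == 0.0
reduced = (loyal_2007["share_2008"] < LOYAL) & ~defected

at_risk_rate = (defected | reduced).mean()
print(f"Of {len(loyal_2007)} loyal consumers, "
      f"{at_risk_rate:.0%} reduced loyalty or defected")
```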
Not surprisingly, the report proposes to use individualized targeting services like Pointer Media Network to reduce churn by making carefully selected offers to at-risk consumers. Although the recommendation is obviously self-serving, I do think it’s correct.
But it seems to me that the implications are more fundamental. In the eternal debate about brand value, finding that loyalty evaporates more quickly than expected makes it even harder to justify marketing programs that don’t bring about an immediate, measurable return.
I’ve seen arguments (sorry, I can’t recall where) that the traditional buying model of awareness – interest – trial – purchase doesn’t correspond to reality. The study’s results seem consistent with that position, in that they present consumer behavior as much less predictable than expected. This further reinforces the idea that investments with short-term results are more reliable than the long-term investments traditionally associated with brand building.
Pardon the cliché, but what we’re talking about here is a paradigm shift. If consumers don’t follow a predictable buying pattern, then brand value models based on such a pattern are not justifiable. Marketers need a fundamentally new framework to predict how their activities will affect consumer behavior. This framework may owe more to chaos theory than a linear process flow. I don’t know what the new model will look like, but recognizing that one is necessary is the first step towards creating it. If anybody out there has some candidates to offer, I’d love to hear about them.
Labels: brand value, marketing measurement
Monday, February 23, 2009
Vizu Measures the Brand Impact of Online Ads with Just One Question
I wrote last week about a general framework for measuring the marketing impact of social media. This proposed a general hierarchy of:
1. tracking mentions
2. identifying mentioners
3. measuring influence
4. understanding sentiment
5. measuring impact
As with all marketing measurement, the hardest task is the last one: measuring impact. This requires connecting the messages that people receive with their actual subsequent behavior, and hopefully establishing a causal relationship between the two. The fundamental problem is the separation between those two events: unless the message and purchase are part of the same interaction, you need some way to link the two events to the same person. A second problem is the difficulty of isolating the impact of a single event from all the other events that could influence someone’s behavior.
These problems are especially acute for brand advertising, which pretty much by definition is not connected with an immediate purchase. Brand advertisers have long dealt with this by imagining buyers moving through a sequence of stages before they make the actual purchase. A typical set of stages is awareness, interest, knowledge, trial (the first actual purchase) and regular use (repeat purchases).
Even though these stages exist only inside the customer’s head, they can be measured through surveys. So can more detailed attitudes towards a product such as feelings about value or specific attributes. For both types of measurement, marketers can define at least a loose connection between the survey results and eventual product purchases. Although the resulting predictions are far from precise, they offer a way to measure subtle factors, such as the impact of different advertising messages, that techniques based on actual purchases cannot.
The Internet is uniquely well suited for this type of survey-based analysis, since people can be asked the questions immediately after seeing an advertisement. One vendor that does this is Factor TG, which I wrote about last year (click here for the post). Another, which I mentioned last week, is Vizu.
What makes Vizu different from other online brand advertising surveys is that each Vizu survey asks just one question. The question itself changes with each survey, and is based on the specific goal for the particular campaign. Thus, one survey might ask about awareness, while another might ask about purchase intentions. Vizu asks its question to a small sample of people who saw an advertisement and also to a control group of people who were shown something else. It assumes that the difference in answers between the two groups is the result of seeing the advertisement itself.
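The exposed-versus-control comparison is a standard lift test, and the arithmetic is simple enough to sketch. The sample sizes and answer rates below are invented for illustration; this shows the generic technique, not Vizu’s actual computation.

```python
# Illustrative exposed-vs-control brand lift test (sample sizes and answer
# rates invented; this is the generic technique, not Vizu's methodology).
from math import sqrt
from scipy.stats import norm

def brand_lift(pos_exposed, n_exposed, pos_control, n_control):
    p1 = pos_exposed / n_exposed   # e.g., share answering "aware" after the ad
    p2 = pos_control / n_control   # share answering "aware" in the control group
    lift = p1 - p2
    # Two-proportion z-test: is the lift larger than sampling noise?
    pooled = (pos_exposed + pos_control) / (n_exposed + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control))
    z = lift / se
    p_value = 2 * norm.sf(abs(z))  # two-sided
    return lift, p_value

lift, p = brand_lift(pos_exposed=260, n_exposed=1000,
                     pos_control=200, n_control=1000)
print(f"lift = {lift:+.1%}, p = {p:.4f}")
```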
Although asking a single question may seem like a fairly trivial approach, it actually has some profound implications. The most important one is that it greatly increases response rate: Vizu founder Dan Beltramo told me participation can be upwards of 3 percent, compared with tenths or hundredths of a percent for longer traditional surveys.
This in turn means statistically significant survey results become available much sooner, giving marketers quick answers and letting them watch trends over relatively short time periods. It also provides significant results for much smaller ad campaigns or for panels within larger campaigns. This lets marketers compare results from different Web sites and for different versions of an ad, allowing them to fine-tune their media selections and messages in ways that traditional surveys cannot.
Another benefit of simplicity is lower costs. Vizu can charge just $5,000 to $10,000 per campaign, allowing marketers to use it on a regular basis rather than only for special projects. Vizu also has little impact on the performance of the Web sites running the surveys, reducing cost from the site owner's perspective.
The disadvantage of asking just one question is that you get just one answer. This prevents detailed analysis of results by audience segments, or exploration of how an ad affects multiple brand attributes. Vizu actually does provide a little information about the impact of frequency, drawn from cookies that track how often a given person has been exposed to a particular advertisement. Vizu also tracks where the person saw the ad, allowing some inferences about respondents based on the demographics of the host sites. Mostly, however, Vizu argues that a single answer is a good thing in itself because it keeps everyone involved focused on the ad campaign’s primary objective.
According to Beltramo, Vizu’s main customers are online ad networks and site publishers, who use the Vizu results as a way to show their accountability to ad agencies and brand advertisers. Some agencies and advertisers also contract with the firm directly.
What, you may be asking, has all this to do with social media measurement? Vizu’s approach applies not just to display advertising but also to social media projects such as downloadable widgets and micro sites.
Even though Vizu can’t fully bridge the measurement gap between exposure and actual purchases, it does offer more insights than simply counting downloads, clickthroughs or traffic. In a world where so little measurement is available, every improvement is welcome.
Labels: brand value, marketing measurement, social media
Sunday, February 1, 2009
Razorfish Study Measures Direct Response to Social Media
I’ve been spending more time than I should recently on Twitter (follow me at @draab). It provides a fascinating peek into the communal stream-of-consciousness, which would be pretty horrifying (“Britney…Brad…Jen…Obama…groceries…Britney…Britney…Britney”) if you couldn’t choose the people and search terms you follow. This filtering (which I do via a great product called Tweetdeck) turns Twitter into a very efficient source of information I wouldn’t see otherwise.
Naturally, my interest in Twitter also extends to how you measure its business value, and by extension the value of social media in general. Since the people I follow on Twitter are both marketers and Twitter users, they discuss this fairly often. One recent post (technically a “tweet” but the term seems so childish) pointed to a study, Social Media Measurement: Widgets and Applications, by interactive marketing agency Razorfish.
The study turns out to be a very brief and straightforward presentation of two projects, both involving creation of downloadable widgets. One was promoted largely through conventional media and the other through widget distribution service Gigya. For each project, we’re told the costs, number of visitors and/or downloads, how much time and money they spent, and the return on investment. The not-very-surprising findings were that people who spent more time also spent more money and, more broadly, that “social media may be used effectively as a way of engaging users and potential customers.” A less predictable and potentially more significant finding from the first project was that people who were referred by a friend downloaded more often and spent much more money than people who were attracted by the media. The numbers were: downloads, 23% vs. 8%; spend any money, 9% vs. 1%; and amount spent, $23.00 vs $3.14. But the study points out that the numbers were very small—only 216 individuals arrived at the landing page as a result of a friend’s email, vs. 41,599 from media sources. These figures are drawn only from the first project because the second project couldn’t be measured this way.
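The study’s caveat about small numbers can be made concrete. With only 216 referred visitors, the quoted rates carry wide error bars; the sketch below computes rough 95% confidence intervals using a normal approximation (the interval method is my choice, not Razorfish’s, while the counts and rates come from the study as quoted above).

```python
# 95% confidence intervals for the referred-visitor rates quoted above
# (normal approximation; interval method is my choice, not Razorfish's).
from math import sqrt

def ci95(rate, n):
    se = sqrt(rate * (1 - rate) / n)
    return rate - 1.96 * se, rate + 1.96 * se

n_referred = 216
for label, rate in [("downloaded", 0.23), ("spent money", 0.09)]:
    lo, hi = ci95(rate, n_referred)
    print(f"{label}: {rate:.0%} (95% CI {lo:.1%} to {hi:.1%})")
# downloaded: 23% (95% CI 17.4% to 28.6%)
# spent money: 9% (95% CI 5.2% to 12.8%)
```

Even at the low ends of those intervals, the referred group stays well above the media-sourced group’s 8% download rate and 1% spend rate, so the direction of the finding looks robust despite the small sample.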
From a marketing measurement standpoint, none of this seems to break any new ground. Visitors are tracked by their source URLs and subsequent behavior is tracked through cookies. The ROI is calculated on straight revenue (it really should be profit) and seems to include only immediate purchases. This is particularly problematic for the second project, which promoted a $399 product with very limited supply that sold out in one minute. (The study doesn’t say, but based on this award citation it seems to be a special edition Nike Air Jordan shoe.) Clearly the point of such Air Jordan promotions isn’t immediate revenue, but brand building at its hard-to-measure best. The real challenge of evaluating social media is measuring this type of indirect impact. This study makes no claim to do that, but I’ll keep my eyes out for others that do.
Labels: brand value, marketing measurement, social media
Wednesday, November 12, 2008
No Silver Bullets for Social Media Measurement
The editor of my forthcoming book on marketing measurement asked me to add something on social media, which led to several days of research. Although there are many smart and articulate people writing on the topic, the bottom line is, well, you can’t really measure the bottom line.
There are plenty of activity measures such as numbers of page views, comments and subscribers. Sometimes there are specific benefits such as reduced costs if technical questions are answered through a user forum instead of company staff. Sometimes you can compare behavior of social media participants vs. non-participants, although that raises a self-selection problem – obviously those people are more engaged to begin with.
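One standard, if partial, correction for that self-selection problem is propensity matching: compare each participant with non-participants who looked equally likely to participate beforehand. The sketch below is a generic illustration of the technique with invented data; none of the sources I reviewed proposed this specific approach.

```python
# Generic propensity-matching sketch for the self-selection problem
# (my illustration, not from the sources discussed; data is simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
engagement = rng.normal(0, 1, n)                 # prior engagement score
participates = rng.random(n) < 1 / (1 + np.exp(-engagement))  # self-selected
spend = 50 + 10 * engagement + 5 * participates + rng.normal(0, 5, n)

# Model the probability of participating from pre-existing traits.
X = engagement.reshape(-1, 1)
propensity = LogisticRegression().fit(X, participates).predict_proba(X)[:, 1]

# Match each participant to the non-participant with the closest propensity.
treated, control = propensity[participates], propensity[~participates]
nn = NearestNeighbors(n_neighbors=1).fit(control.reshape(-1, 1))
_, idx = nn.kneighbors(treated.reshape(-1, 1))

naive = spend[participates].mean() - spend[~participates].mean()
matched = spend[participates].mean() - spend[~participates][idx.ravel()].mean()
print(f"naive lift = {naive:.1f}, matched lift = {matched:.1f} (true effect = 5)")
```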
But measuring the impact of social media on attitudes in the population as a whole—that is, on brand value—is even harder than measuring the impact of traditional marketing and advertising methods because the audience size is so small. Measuring the impact of brand value on actual sales is already a problem; what you have with social media could be considered the brand value problem, squared.
In fact, the closest analogy is measuring the value of traditional public relations, which is notoriously difficult. Social media is more like a subset of public relations than anything else, although it feels odd to describe it that way because social media is so much larger and more complicated than traditional PR. Maybe we'll need to think of PR as a subset of social media.
The best advice I saw boiled down to setting targets for something measurable, and then watching whether you reach them. This is pretty much the best practice for measuring public relations and other marketing programs without a direct impact on sales. I guess there’s nothing surprising about this, although I was still a bit disappointed.
Still, as I say, there is plenty of interesting material available if you want to learn about concrete measurements and how people use them. Just about every hit on the first two pages of a Google search on “social media marketing measurement” was valuable. In particular, I kept tripping across Jeremiah Owyang, currently an analyst with Forrester Research, who has created many useful lists on his Web Strategy by Jeremiah blog. For example, the post Social Media FAQ #3: How Do I Measure ROI? provides a good overview of the subject. You can also search his category of Social Media Measurement. Another post I found helpful was What Is The ROI For Social Media? from Jason Falls’ Social Media Explorer blog.
Labels: brand value, data, marketing measurement
Wednesday, October 29, 2008
Is Measuring Brand Value Worth the Effort?
Hello, blogosphere! Did you miss me?
Probably not, but, whatever. Launch of the new Raab Guide to Demand Generation Systems (http://www.raabguide.com/) is largely complete, so I can now find some time for this blog. Also, the publisher of my MPM Toolkit book seems to have settled on a January publication date, so I need to pay more attention to this side of the industry.
I’ll ease back into this blog with a survey on brand value from the Association of National Advertisers (ANA) and Interbrand consultancy. The press release is available here.
Key findings of the survey were that 55% of senior marketers “lack a quantitative understanding of brand value” and that 64% said “brands do not influence decisions made at their organizations.” Bear in mind that these are ANA members, who tend to be large media consumers. If they can’t measure or use brand value, nobody can.
Taken together, these two figures mean that brands don’t influence decisions even at some companies which are able to measure their value. The survey explored this a bit, and found that at companies where brands lack influence, the most common reason (cited by 51%) was that “incentives do not support importance of brand”. In other words, if I interpret that correctly, people are not rewarded for increasing brand value—so they don’t work to do that, even if they do have the ability to measure it. The next most common reason, at 49%, was the more expected “inability to prove brand’s financial benefit”. Other answers ranked at 40% or below.
This wouldn’t matter to anyone who's not a brand valuation consultant, except for one thing: 80% of the respondents report that “demands from the C-suite and boardroom were steadily increasing” to demonstrate that branding initiatives add profit. That means even existing branding budgets are at risk.
If you accept, as I do, that branding programs do add value, then not being able to justify them is a serious problem. But there’s a difference between knowing something has value and knowing what that value is. As I’ve pointed out previously and the good people at MarketingNPV recently wrote at more length, different brand valuation methodologies give widely varying results, and even the same methodology gives different results from year to year.
This has important practical implications: specifically, brand measurements are not precise enough to guide tactical decisions. Yet that is exactly what the ANA survey says marketers want: 93% felt a quantified understanding would allow “more focused investment in marketing” and 82% felt it would let them cut underperforming initiatives. Frankly, I’d say those are unrealistic expectations.
The MarketingNPV paper argues that marketers should not attempt to measure brand value by itself, and instead focus on “quantifying the impact of marketing on cash flows”. That may seem like begging the question: after all, the value of a brand is precisely its impact on future cash flows. But I think of brand value as a residual factor which accounts for future cash flows that cannot be attributed to more direct influences such as promotions, distribution and pricing. So it does make sense to say, first let’s do a better job of predicting the impact on cash flows of those directly measurable items. Once we've taken that as far as we can--and I'd say most firms are nowhere near--then we can spend energy on brand value to explain the rest.
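The residual framing lends itself to a simple two-stage sketch: first model cash flows from the directly measurable drivers, then look for a persistent pattern in what remains. The code below is my own schematic of that idea with invented data; it is not a method from MarketingNPV or the ANA.

```python
# Schematic of "brand value as residual" (my illustration, invented data).
# Stage 1: explain cash flows with direct drivers; stage 2: inspect what's left.
import numpy as np

rng = np.random.default_rng(2)
quarters = 20
promo = rng.uniform(0, 1, quarters)        # promotion intensity
distribution = rng.uniform(0.6, 1, quarters)
price_index = rng.uniform(0.9, 1.1, quarters)
brand_drift = np.linspace(0, 8, quarters)  # slow gain we hope to detect

cash_flow = (100 + 30 * promo + 50 * distribution - 40 * price_index
             + brand_drift + rng.normal(0, 2, quarters))

# Stage 1: regression on the directly measurable drivers only.
X = np.column_stack([np.ones(quarters), promo, distribution, price_index])
coefs, *_ = np.linalg.lstsq(X, cash_flow, rcond=None)
residual = cash_flow - X @ coefs

# Stage 2: a persistent trend in the residual is the candidate brand effect.
trend = np.polyfit(np.arange(quarters), residual, 1)[0]
print(f"residual trend = {trend:+.2f}/quarter (simulated brand drift = {8/19:.2f})")
```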
Labels: brand value, marketing measurement
Friday, July 11, 2008
Content Emerges as a Common Thread in Marketing Measurement
Last week’s post marked the completion of my initial survey of marketing reporting systems. So it’s a good time to step back and assess the state of the art in general.
As you’ll have noticed if you followed the series closely, there were several vendors on my initial list who were not really focused on marketing measurement. Once they were removed, I think it’s fair to say that most of the others have built their business primarily on marketing mix modeling. These are not products that build mix models, although many of their providers are in that business. Rather, these products make mix models more useful by combining them and applying them to planning, reporting, forecasting and optimization. So that’s one trend I've observed: conversion of mix models from stand-alone analyses to part of an integrated measurement process. But most of those products were several years old, so the trend is not a new one.
A fresher trend was efforts to provide more sensitive measures of brand value drivers. Specifically, I am referring to systems that tie changes in brand value to changes in consumer attitudes and behaviors. This contrasts with traditional brand value studies, which look at consumer attitudes at a specific point in time. As I’ve noted previously, the absolute values generated by these studies are somewhat questionable. But relative changes in these values are still probably good indicators of whether things are getting better or worse. (There’s an irony here someplace—an unreliable indicator becomes useful if applied more frequently.) Of course, there’s nothing new about tracking trends in consumer attitudes or about linking those trends to marketing programs. What’s being added is the conversion of attitude changes to brand value changes. This provides a much-sought link between marketing efforts and brand value.
Changes in consumer attitudes affect more than brand value: they also impact near-term sales. This relationship is reflected in relatively new (to me) efforts to use consumer attitudes as inputs to marketing mix models. Traditionally, the main inputs to these models have been spending levels for each mix element, with only minor adjustments for program content or effectiveness. Marketers would like to analyze more detailed inputs, but few marketing programs are large or long-running enough to have a distinguishable impact on total results. Consumer attitudes, on the other hand, do have a continuous presence that can be correlated with changes in short-term sales. The trick is linking the attitudes with marketing programs.
One solution being attempted is to aggregate marketing programs according to the messages they deliver, and then assess the impact of those messages on attitudes. That is, the mix model inputs are spending against different marketing messages. This could be used to predict sales changes directly, or to predict changes in consumer attitudes which in turn predict sales results. It isn’t quite as good as measuring the impact of individual marketing campaigns directly, but it does give some idea of their likely relative effectiveness, and therefore how future funds are best invested.
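A minimal sketch of that aggregation step: roll program-level spend up to the messages each program delivers, so that message-level spend becomes the mix model input. The programs, messages and numbers are invented for illustration.

```python
# Aggregating program spend by marketing message (invented data, illustrative).
import pandas as pd

programs = pd.DataFrame({
    "program": ["TV spring", "search Q2", "display Q2", "TV fall", "email Q3"],
    "message": ["value", "value", "premium", "premium", "value"],
    "spend":   [120, 40, 60, 150, 10],   # $000
})

# Message-level spend becomes the mix-model input instead of raw program spend.
by_message = programs.groupby("message")["spend"].sum()
print(by_message)
# premium    210
# value      170
```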
All of these changes show movement towards understanding the connection between individual marketing decisions and ultimate business results. The common thread is content: using content to aggregate individual marketing programs; assessing the impact of content on short-term sales results; and assessing the content-driven changes in consumer attitudes on long-term brand value. Today, these attributes of content are often measured separately. But I think we can expect them to be part of a single, unified analytical process over time.
Labels: brand value, marketing measurement
Wednesday, July 2, 2008
MMA Avista DSS ...and more
I originally contacted Marketing Management Analytics (MMA) to discuss Avista DSS, a hosted service that helps its clients use MMA-built mix models for planning, forecasting and optimization. But while MMA Vice President Douglas Brooks seemed happy to discuss Avista, he said the most excitement today is being generated by BrandView, a newer offering that shows the long-term impact of marketing messages on financial results.
This is important because mix modeling shows the short-term, incremental impact of marketing efforts on top of a base sales level. BrandView addresses the size of the base itself.
BrandView works by comparing the messages the company has delivered in its advertising with changes in brand measures such as consumer attitudes. It also considers media spending and market conditions. These in turn are related to actual sales results. Using at least three years of data, BrandView can estimate the impact of different messages and media expenditures on the company’s base sales level. This allows calculation of long-term return on investment, supplementing the short-term ROI generated by mix models.
In other words, BrandView lets MMA relate brand health measures to financial results—something that Brooks sees as the biggest opportunity in the marketing measurement industry. He said the company has completed two BrandView projects so far with “rave reviews.”
That’s about all I know about BrandView. Now, back to Avista.
As I mentioned, Avista is a hosted service. Beyond browser-based access to the software itself, it includes having MMA build the underlying models, update them with new data monthly or quarterly, train company personnel and help them use the system, and consult on taking advantage of the system results.
The software has the functions you would want in this sort of system. It can combine results from multiple models, which lets users capture different behaviors for different market segments such as regions or product lines. It lets users build and save a base scenario, and then test the results of changing specific components such as spending, pricing and distribution. It also lets users change assumptions for external factors such as competitive behavior and market demand, as well as new factors not built into the historically-based mix models. It provides more than 30 standard reports showing forecasted demand, estimated impact of mix components, actual vs. forecast results (with forecasts based on updated actual inputs), and return on different marketing investments. Reports can convert the media budget to Gross Rating Points, to help guide media buyers.
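For readers unfamiliar with that conversion, the arithmetic behind a budget-to-GRP translation looks roughly like the sketch below. The CPM and target universe are invented assumptions; Avista presumably uses client-specific figures.

```python
# Rough budget-to-GRP conversion (CPM and universe are invented assumptions;
# Avista's actual conversion is presumably client-specific).
budget = 500_000      # media budget, $
cpm = 25.0            # cost per thousand impressions, $
universe = 2_000_000  # target population size

impressions = budget / cpm * 1000
grps = impressions / universe * 100  # 100 GRPs = impressions equal to the universe
print(f"{grps:.0f} GRPs")            # -> 1000 GRPs
```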
The system also includes automated optimization. Users select one objective from a variety of options, such as maximum revenue for a fixed marketing budget or minimum marketing spend to reach a specified volume goal. They can also specify constraints such as maximum budget, existing media commitments, or allocations of spending over time. The system then identifies the optimal resource allocations to meet the specified conditions. Reports will compare the recommended allocations against past actuals, to highlight the changes.
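To show the shape of that optimization problem, here is a small sketch: maximize modeled revenue from diminishing-returns response curves, subject to a fixed total budget. The response curves and budget are invented; this illustrates the general mechanics, not Avista’s actual engine.

```python
# Illustrative budget-allocation optimization (invented response curves;
# shows the general mechanics, not Avista's actual engine).
import numpy as np
from scipy.optimize import minimize

# Diminishing-returns revenue for three channels: a_i * sqrt(spend_i).
a = np.array([8.0, 5.0, 3.0])
budget = 100.0

def neg_revenue(spend):
    return -np.sum(a * np.sqrt(spend))

result = minimize(
    neg_revenue,
    x0=np.full(3, budget / 3),            # start with an even split
    bounds=[(0, budget)] * 3,             # no negative spend
    constraints=[{"type": "eq",           # spend exactly the budget
                  "fun": lambda s: s.sum() - budget}],
)
print("optimal allocation:", np.round(result.x, 1))
print("expected revenue:  ", round(-result.fun, 1))
# Analytically, optimal spend is proportional to a_i^2: 64/98, 25/98, 9/98.
```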
Avista was released in 2005. Brooks reports it is now used by about two-thirds of MMA’s mix model clients. The system typically has ten to twenty users per company, spread among marketing, finance and research departments. Each user can be given customized reports—for example, to focus on a particular product line or region—as well as different system capabilities. Building the underlying models usually takes three to four months, depending largely on how long it takes to assemble the company-provided inputs. (Standard external inputs, such as syndicated research, are easy.) After this, it takes another month to deploy Avista itself, mostly doing quality control. Cost depends on the scope of each project, but might start at around $400,000 per year for a typical company with multiple models.
Of course, just getting Avista deployed is only the start of the process. The real challenge is getting company managers to trust and use the results. Brooks said that most firms need three to six months to build the necessary confidence. The roll-out usually proceeds in phases, starting with dashboard reports, adding what-if analyses, and only then using the outputs in official company budgets and forecasts.
Brooks said that MMA will eventually integrate BrandView with Avista. The synergy is obvious: the base demand projections created by BrandView are part of the input to the Avista mix models. This is definitely something to keep an eye on.
Labels: brand value, marketing measurement, reviews, software
Friday, May 30, 2008
Can Brand Value Really Measure Effectiveness?
One more comment on the ANA’s Integrated Marketing survey that I wrote about yesterday. I was struck that brand tracking studies ranked second among effectiveness measures, and brand equity measures ranked fourth. (Numbers are in yesterday’s post.) This is more respect than brand measurement usually gets.
I suppose this reflects the nature of the survey respondents, who are mostly consumer marketers and (this being the Association of National Advertisers) are largely focused on conventional advertising. I suspect a survey of, say, Direct Marketing Association members would get very different results.
But it seems that brand value is also accepted as an effectiveness measure by people outside of marketing at the survey respondents’ companies. This suggests these people live in a very brand-oriented culture. Indeed, although a couple of speakers yesterday said they had trouble getting their company to believe ROI calculations based on marketing mix models, no one mentioned any problems gaining acceptance for brand metrics.
Lest you think the respondents are all packaged goods marketers, 20% of the survey respondents worked in financial services and insurance. (One nice thing about people accustomed to good research is that they publish all the details.) Computers and technology accounted for another 10%. The traditional brand-centric categories were smaller: consumer packaged goods at 11% and food, beverage and tobacco at 9% of the total.
One reason the high ranking of brand value measures caught my eye was that I had just compared brand valuations from two different sources: Millward Brown Optimor and Interbrand. Taking Google in 2007 as an example, Millward Brown gave it a value of $66.4 billion and Interbrand gave it a value of $17.8 billion. (Millward Brown’s 2008 figure for Google is $86.1 billion; Interbrand’s 2008 figure is not yet available.)
Any way you slice it, this is a very big difference. Rankings also diverged: Millward Brown placed Google first among all brands while Interbrand had it at number 20.
My point here is that the financial values produced by brand valuation methodologies are very imprecise. It’s actually a bit frightening to think that advertisers would use them to measure effectiveness. The consumer attitudes captured in brand tracking studies are probably much more reliable, even though they cannot be directly converted into a financial measure.
Side note: I had no sooner finished this post than I received an email survey from ANA asking my opinion of the conference. These are definitely people who take their research seriously. Good for them.
Labels: brand value, data, marketing