Thursday, July 24, 2008
A Forrester Research study, “How To Derive Value From B2B Blogging,” gained quite a bit of attention last month by reporting that the number of new business-to-business blogs had “plummeted” in 2007 vs. 2006. I haven’t read it ($379 for a 19-page document seems a bit rich), but the gist seems to be that blogs haven’t shown much value for many businesses.
From a marketing measurement point of view, the interesting question is how you measure the value of a blog in the first place. Without giving away any trade secrets, I can safely say that the answer is, “It depends.” Specifically, it depends on the objective you’ve set for your blog.
You did set an objective, right? I mean, no one clever enough to be reading this blog would be so foolish as to set up their own blog without thinking things through.
Typical objectives might be to attract an audience, educate readers, build a community, or establish expertise. The standard advice would be to first select an objective, then to select metrics and goals that reflect progress towards that objective, and then to measure actual results against the goals.
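To make that advice concrete, here is a minimal sketch in Python of the final step, comparing actuals against goals. The objective, metric names, and numbers are all invented for illustration.

    # Hypothetical metrics and goals for a blog whose objective is
    # "establish expertise". Every name and number here is invented.
    goals = {"unique_visitors": 5000, "inbound_links": 40, "speaking_invitations": 3}
    actuals = {"unique_visitors": 4200, "inbound_links": 55, "speaking_invitations": 1}

    for metric, goal in goals.items():
        pct = 100.0 * actuals[metric] / goal
        print(f"{metric}: {actuals[metric]} vs. goal of {goal} ({pct:.0f}%)")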
That’s perfectly sound advice so far as it goes. But it doesn’t address the truly critical question of whether a blog is the best way to meet the objectives you’ve selected. Maybe the resources invested in a blog would be better spent on a newsletter, or paid search, or trade shows.
This leads to the question of just what resources are being invested. It’s easy to underestimate the cost of a blog because so much of the expense is in the time to write it, which seems like it’s free. But in fact, time is often the most constrained resource of all, particularly for the senior executives who are often expected to be company bloggers. Certainly an organization must ask itself how the time spent writing a blog would otherwise be used. If a more productive alternative is available, the blog should go, or at least be reassigned to someone else with (literally) nothing better to do.
I asked myself that question some time ago, with the result that I changed my Customer Experience Matrix blog from a daily to a weekly posting cycle. (Incidentally, traffic did not especially decline.) Starting the MPM Toolkit blog pushed me back to a twice-weekly schedule, but it serves strategic purposes that make the extra effort worthwhile.
Interestingly, other consultants in the marketing measurement industry—whom you have to assume are measuring their blog value carefully—have decided not to bother. Pat LaPointe of MarketingNPV dropped his blog in April, explicitly stating that he could find more effective ways to deliver his message. In a side conversation, Jim Lenskold of the Lenskold Group told me he blogs very rarely because he can reach a larger audience in other ways for the same amount of work. Jim also made the particularly astute point that the kind of information he wants to convey doesn’t really lend itself to the “quick opinion” format of most blogs. Another consultant whose judgment I respect, Laura Patterson of VisionEdge Marketing, seems to have no blog at all.
The point is not that “blogs are good” or “blogs are bad.” It’s that blogs are tools that must be evaluated like any other marketing investment. The actual cost of a blog is a little harder to measure than many other marketing projects because so much of the cost is implicit (time) rather than explicit (cash outlay). But that’s not a reason to avoid the measurement effort—it’s simply a caution that you must be careful to do it right.
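For readers who like to see the arithmetic, here is a back-of-the-envelope sketch of that cost calculation in Python; every figure is invented for illustration.

    # Back-of-the-envelope blog cost, valuing time at its opportunity cost.
    # All figures are invented for illustration.
    posts_per_month = 4
    hours_per_post = 3
    writer_hourly_value = 250    # what an hour of the blogger's time is worth elsewhere
    explicit_monthly_cost = 100  # hosting, design, tools (the visible cash outlay)

    implicit_monthly_cost = posts_per_month * hours_per_post * writer_hourly_value
    total_monthly_cost = implicit_monthly_cost + explicit_monthly_cost
    print(f"Implicit (time) cost per month: ${implicit_monthly_cost:,}")
    print(f"Total cost per month:           ${total_monthly_cost:,}")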
Thursday, July 17, 2008
Social Target LLC Tracks Social Media Measurement Vendors (So I Don't Have To)
So far this blog has mostly been about the general issues of marketing measurement, rather than measurements for specific media. This is largely because there is already so much detailed information available from specialists in each field that I’m unlikely to add anything useful. On the other hand, medium-specific measurements are inputs to the broader performance measurements that I’ve been writing about. So I can’t ignore them altogether.
Measurement methods for traditional media, including broadcast, print advertising and direct response, are fairly well established. This is not to say they are simple or easy, but most of the interesting activity is taking place elsewhere. Not surprisingly, much of that “elsewhere” is on the Internet.
I see three main areas of continuing innovation in marketing measurement: public relations, Web site analytics, and social media analysis. Public relations obviously predates the Internet, but it makes the list because so much of the data gathering now occurs online, and so much content analysis is now automated because online content makes this possible. Web site analytics is certainly the most mature of these sectors, but despite the huge base of experience, I think it’s fair to say that practices are still evolving rapidly. Social media is the newest sector, especially as a discipline that’s distinct from other Web activities. It too has a large base of techniques and services in place—more than a casual observer might realize—but is certainly not yet stable. (You might add mobile marketing as a fourth area, but it is just getting established—although there is probably more going on than I’ve heard of.)
I may yet dig into these areas in depth. For today, though, let me point you to a company I found during my preliminary research (a.k.a. Googling). This is Social Target LLC, apparently a small independent consultancy run by one Nathan Gilliatt. I’ve only poked around briefly on the site and its associated blog, The Net-Savvy Executive, but it impresses me as a good source of information on social media analysis. The firm also publishes a Guide to Social Media Analysis, which profiles 31 or so vendors. I wasn’t overly impressed with the sample entry that I downloaded—it was just two pages and (obviously) didn’t get into many details. But it looks like a reasonable starting point for anyone getting oriented in the field.
Labels: marketing measurement
Friday, July 11, 2008
Content Emerges as a Common Thread in Marketing Measurement
Last week’s post marked the completion of my initial survey of marketing reporting systems. So it’s a good time to step back and assess the state of the art in general.
As you’ll have noticed if you followed the series closely, there were several vendors on my initial list who were not really focused on marketing measurement. Once they were removed, I think it’s fair to say that most of the others have built their business primarily on marketing mix modeling. These are not products that build mix models, although many of their providers are in that business. Rather, these products make mix models more useful by combining them and applying them to planning, reporting, forecasting and optimization. So that’s one trend I've observed: conversion of mix models from stand-alone analyses to part of an integrated measurement process. But most of those products were several years old, so the trend is not a new one.
A fresher trend was efforts to provide more sensitive measures of brand value drivers. Specifically, I am referring to systems that tie changes in brand value to changes in consumer attitudes and behaviors. This contrasts with traditional brand value studies, which look at consumer attitudes at a specific point in time. As I’ve noted previously, the absolute values generated by these studies are somewhat questionable. But relative changes in these values are still probably good indicators of whether things are getting better or worse. (There’s an irony here someplace—an unreliable indicator becomes useful if applied more frequently.) Of course, there’s nothing new about tracking trends in consumer attitudes or about linking those trends to marketing programs. What’s being added is the conversion of attitude changes to brand value changes. This provides a much-sought link between marketing efforts and brand value.
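A minimal sketch of the idea, assuming you already have period-over-period changes in an attitude score and in estimated brand value (all numbers invented for illustration):

    import numpy as np

    # Hypothetical quarterly changes in an attitude score and in estimated
    # brand value ($M). Every number is invented for illustration.
    attitude_change = np.array([1.2, -0.5, 0.8, 2.0, -1.1, 0.3])
    brand_value_change = np.array([3.1, -1.0, 2.2, 5.4, -2.8, 0.9])

    # Fit a simple linear relationship: change in brand value ~ change in attitude.
    slope, intercept = np.polyfit(attitude_change, brand_value_change, 1)
    print(f"Estimated $M of brand value per point of attitude change: {slope:.2f}")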
Changes in consumer attitudes affect more than brand value: they also impact near-term sales. This relationship is reflected in relatively new (to me) efforts to use consumer attitudes as inputs to marketing mix models. Traditionally, the main inputs to these models have been spending levels for each mix element, with only minor adjustments for program content or effectiveness. Marketers would like to analyze more detailed inputs, but few marketing programs are large or long-running enough to have a distinguishable impact on total results. Consumer attitudes, on the other hand, do have a continuous presence that can be correlated with changes in short-term sales. The trick is linking the attitudes with marketing programs.
One solution being attempted is to aggregate marketing programs according to the messages they deliver, and then assess the impact of those messages on attitudes. That is, the mix model inputs are spending against different marketing messages. This could be used to predict sales changes directly, or to predict changes in consumer attitudes which in turn predict sales results. It isn’t quite as good as measuring the impact of individual marketing campaigns directly, but it does give some idea of their likely relative effectiveness, and therefore how future funds are best invested.
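Here is a toy sketch of that two-stage idea in Python. It is not any vendor’s actual method; the messages, spending figures, and scores are all invented, and real models are of course far richer.

    import numpy as np

    # Hypothetical monthly data: campaign spend aggregated by message ($K),
    # an attitude score, and sales ($K). All numbers are invented.
    spend_by_message = np.array([
        [120,  80], [100, 100], [140,  60], [ 90, 130],
        [110,  90], [130,  70], [ 95, 125], [105, 115],
    ], dtype=float)                      # columns: "value" message, "quality" message
    attitude = np.array([61, 63, 60, 66, 62, 61, 65, 64], dtype=float)
    sales = np.array([510, 540, 495, 580, 525, 505, 570, 555], dtype=float)

    # Stage 1: how spend against each message moves the attitude score.
    X = np.column_stack([spend_by_message, np.ones(len(attitude))])
    msg_coefs, *_ = np.linalg.lstsq(X, attitude, rcond=None)

    # Stage 2: how the attitude score moves sales.
    A = np.column_stack([attitude, np.ones(len(sales))])
    (att_to_sales, _), *_ = np.linalg.lstsq(A, sales, rcond=None)

    # Implied sales lift per $K of spend on each message, via attitudes.
    for name, c in zip(["value", "quality"], msg_coefs[:2]):
        print(f"Message '{name}': {c * att_to_sales:.2f} $K of sales per $K of spend")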
All of these changes show movement towards understanding the connection between individual marketing decisions and ultimate business results. The common thread is content: using content to aggregate individual marketing programs; assessing the impact of content on short-term sales results; and assessing the impact of content-driven changes in consumer attitudes on long-term brand value. Today, these attributes of content are often measured separately. But I think we can expect them to become part of a single, unified analytical process over time.
Labels: brand value, marketing measurement
Wednesday, July 2, 2008
MMA Avista DSS ...and more
I originally contacted Marketing Management Analytics (MMA) to discuss Avista DSS, a hosted service that helps its clients use MMA-built mix models for planning, forecasting and optimization. But while MMA Vice President Douglas Brooks seemed happy to discuss Avista, he said the most excitement today is being generated by BrandView, a newer offering that shows the long-term impact of marketing messages on financial results.
This is important because mix modeling shows the short-term, incremental impact of marketing efforts on top of a base sales level. BrandView addresses the size of the base itself.
BrandView works by comparing the messages the company has delivered in its advertising with changes in brand measures such as consumer attitudes. It also considers media spending and market conditions. These in turn are related to actual sales results. Using at least three years of data, BrandView can estimate the impact of different messages and media expenditures on the company’s base sales level. This allows calculation of long-term return on investment, supplementing the short-term ROI generated by mix models.
In other words, BrandView lets MMA relate brand health measures to financial results—something that Brooks sees as the biggest opportunity in the marketing measurement industry. He said the company has completed two BrandView projects so far with “rave reviews.”
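To show what supplementing short-term ROI with a long-term figure might look like arithmetically, here is a toy calculation. It is not MMA’s actual method; every figure is invented for illustration.

    # Toy illustration of short-term vs. long-term ROI accounting.
    # Not MMA's method; every figure is invented.
    spend = 1_000_000              # campaign spend ($)
    incremental_sales = 1_800_000  # short-term lift from the mix model ($)
    base_lift_per_year = 400_000   # estimated increase in the base sales level ($/year)
    years = 3                      # horizon over which the base lift persists
    margin = 0.5                   # contribution margin on sales

    short_term_roi = (incremental_sales * margin - spend) / spend
    long_term_roi = ((incremental_sales + base_lift_per_year * years) * margin - spend) / spend
    print(f"Short-term ROI:                {short_term_roi:.0%}")
    print(f"ROI including base-sales lift: {long_term_roi:.0%}")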
That’s about all I know about BrandView. Now, back to Avista.
As I mentioned, Avista is a hosted service. Beyond browser-based access to the software itself, it includes having MMA build the underlying models, update them with new data monthly or quarterly, train and support company personnel in using the system, and consult on taking advantage of the system results.
The software has the functions you would want in this sort of system. It can combine results from multiple models, which lets users capture different behaviors for different market segments such as regions or product lines. It lets users build and save a base scenario, and then test the results of changing specific components such as spending, pricing and distribution. It also lets users change assumptions for external factors such as competitive behavior and market demand, as well as new factors not built into the historically-based mix models. It provides more than 30 standard reports showing forecasted demand, estimated impact of mix components, actual vs. forecast results (with forecasts based on updated actual inputs), and return on different marketing investments. Reports can convert the media budget to Gross Rating Points, to help guide media buyers.
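The GRP conversion, at least, is simple arithmetic: GRPs equal spend divided by the cost per rating point. A sketch, with invented costs per point:

    # Convert media budgets to Gross Rating Points: GRPs = spend / cost per
    # rating point. Costs per point vary widely by market and medium; these
    # figures are invented for illustration.
    cost_per_point = {"tv": 12_000, "radio": 2_500}      # $ per rating point
    planned_spend = {"tv": 1_800_000, "radio": 250_000}  # $ budget

    for medium, spend in planned_spend.items():
        print(f"{medium}: {spend / cost_per_point[medium]:.0f} GRPs")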
The system also includes automated optimization. Users select one objective from a variety of options, such as maximum revenue for a fixed marketing budget or minimum marketing spend to reach a specified volume goal. They can also specify constraints such as maximum budget, existing media commitments, or allocations of spending over time. The system then identifies the optimal resource allocations to meet the specified conditions. Reports will compare the recommended allocations against past actuals, to highlight the changes.
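As a flavor of what such an optimizer does, here is a toy version in Python that maximizes revenue for a fixed budget, assuming each channel follows a simple diminishing-returns curve. It is only a sketch, not Avista’s algorithm, and all coefficients are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Toy budget-constrained allocation. Assume each channel's revenue
    # follows a diminishing-returns curve a_i * sqrt(spend_i); the
    # responsiveness coefficients are invented for illustration.
    a = np.array([120.0, 90.0, 60.0])   # tv, search, print
    budget = 1_000.0                    # total budget ($K)

    def neg_revenue(spend):
        # Clip guards against tiny negative values during finite differencing.
        return -np.sum(a * np.sqrt(np.clip(spend, 0.0, None)))

    result = minimize(
        neg_revenue,
        x0=np.full(3, budget / 3),      # start from an even split
        method="SLSQP",
        bounds=[(0.0, budget)] * 3,
        constraints=[{"type": "ineq", "fun": lambda s: budget - s.sum()}],
    )
    print("Optimal allocation ($K):", np.round(result.x, 1))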
Avista was released in 2005. Brooks reports it is now used by about two-thirds of MMA’s mix model clients. The system typically has ten to twenty users per company, spread among marketing, finance and research departments. Each user can be given customized reports—for example, to focus on a particular product line or region—as well as different system capabilities. Building the underlying models usually takes three to four months, depending largely on how long it takes to assemble the company-provided inputs. (Standard external inputs, such as syndicated research, are easy.) After this, it takes another month to deploy Avista itself, mostly doing quality control. Cost depends on the scope of each project, but might start at around $400,000 per year for a typical company with multiple models.
Of course, just getting Avista deployed is only the start of the process. The real challenge is getting company managers to trust and use the results. Brooks said that most firms need three to six months to build the necessary confidence. The roll-out usually proceeds in phases, starting with dashboard reports, adding what-if analyses, and only then using the outputs in official company budgets and forecasts.
Brooks said that MMA will eventually integrate BrandView with Avista. The synergy is obvious: the base demand projections created by BrandView are part of the input to the Avista mix models. This is definitely something to keep an eye on.
Labels: brand value, marketing measurement, reviews, software