Friday, November 28, 2008

Judging the Value of Marketing Data

Last week’s post on ranking demand generation vendors highlighted a fundamental challenge in marketing measurement: the data you want often isn’t available. So a great deal of marketing measurement comes down to deciding which of the available data best suits your needs, and ultimately whether that data is better than nothing.
It’s probably obvious why using bad data can be worse than doing nothing, but in case this is read by, say, a creature from Mars: we humans tend to assume others are telling the truth unless we have a specific reason to question them. This innate optimism is probably a good thing for society as a whole. But it also means we’ll use bad data to make decisions which we would approach more cautiously if we had no data at all.
But how do you judge a piece of data? Here is a list of criteria presented in my book The MPM Toolkit, due in late January.
· Existence. Ok, this is pretty basic, but the information does have to exist. Let’s avoid the deeper philosophical issues and just say that data exists if it is recorded somewhere, or can be derived from something that’s recorded. So the color of your customers’ eyes only exists as data if you’ve stored it on their records or can look it up somewhere else. If the data doesn’t exist, you may be able to capture it. Then you have to compare the cost of capturing it with its value. But that’s a topic for another day.
· Accessibility. Can you actually access the data? To get back to last week’s post, we’d love to know the revenue of each demand generation vendor. This data certainly exists in their accounting systems, but they haven’t shared it with us so we can’t use it. Again, it’s often possible to gain access to information if you’re willing to pay the price, and you must once more compare the price with the value. In fact, the price / value tradeoff will apply to every factor in this list, so I won’t bother to mention it from here on out.
· Coverage. What portion of the universe is covered by the data? In the case of demand generation vendors, the number of blog posts was a poor measure of market attention because the available sources clearly didn’t capture all the posts. In itself, this isn’t necessarily a fatal flaw, since a fair sample could still give a useful relative ranking. But we can’t judge whether the coverage was a fair sample because we don’t know why it was incomplete. This is a critical issue when assessing whether, or more precisely how, to use incomplete data. (In the demand generation case, the very small numbers of blog posts added another issue, which is that the statistical noise of a few random posts could distort the results. This is also something to consider, although hopefully most of your marketing data deals with larger quantities.)
· Accuracy. Data may not have been accurate to begin with or it may be outdated. Data can be inaccurate because someone purposely provided false information or because the mechanism is inherently flawed. Survey replies can have both problems: people lie for various reasons and they may not actually know the correct answers. Even seemingly objective data can be incorrect: a simple temperature reading may be inaccurate because the thermometer was miscalibrated, someone read it wrong, or the scale was Celsius rather than Fahrenheit. Errors can also be introduced after the data is captured, such as incorrect conversions (e.g., inflation adjustments used to create “constant dollar” values) or incorrect aggregation (e.g., customer value statistics that do not associate transactions with the correct customers). In our demand generation example, statistics on search volume were highly inaccurate because the counts for some terms included results that were clearly irrelevant. As with other factors listed here, you need to determine the level of accuracy that’s required for your specific purpose and assess whether the particular source is adequate.
· Consistency. Individually accurate items can be collectively incorrect. To continue with the thermometer example, readings from some stations may be in Celsius and others in Fahrenheit, or readings from a single station may have changed from Fahrenheit to Celsius over time. This particular difference would be obvious to anyone examining the data, although it could easily be overlooked in a large data set that combined information from many sources. Other inconsistencies are much more subtle, such as changes in wording of survey questions or the collection mechanism (e.g., media consumption diaries vs. automated “people meters”). As with coverage, it’s important to understand any bias introduced by these factors. In our demand generation analysis, Compete.com used several different techniques to measure Web traffic, and it appeared that these yielded inconsistent results for sites with different traffic levels.
· Timeliness. The primary issue with timeliness is how quickly data becomes available. In the past, it often took weeks or months to gather marketing information. Today, data in general moves much more quickly, although some information still takes months to assemble. There is a danger that quickly available data will overwhelm higher-quality data that appears later. For example, initial response rate to a promotion is immediately available, but the value of those responses can only be measured over time. Decisions based only on gross response often turn out to be incorrect once the later performance is included in the analysis. Still, timely data can be extremely important when it can lead to adjustments that improve results, such as moving funds from one promotion to another. Online marketing in particular often allows for such reactions because changes can be made in hours or minutes, rather than the weeks and months needed for traditional marketing programs.
I haven’t listed cost as a separate consideration only because there are often incremental investments that can be made to change a data element’s existence, accessibility, coverage, etc. Those investments would change its value as well. But you will ultimately still need to assess the total cost and value of a particular element, and then compare it with the cost and value of other elements that could serve a similar purpose. This assessment will often be fairly informal, as it was in last week’s blog post. But you still need to do it: while an unexamined life may or may not be worth living, unexamined marketing data will get you in trouble for sure.
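To make the checklist concrete, here is a minimal sketch in Python of how a data source might be scored against these criteria. The criterion names come from the list above; the weighting scheme and the example scores are hypothetical illustrations, not values from the book.

```python
# A minimal sketch of operationalizing the checklist above.
# Weights and example scores are hypothetical illustrations.

CRITERIA = ["existence", "accessibility", "coverage",
            "accuracy", "consistency", "timeliness"]

def score_source(scores, weights=None):
    """Combine per-criterion scores (0.0 to 1.0) into one weighted figure.

    A zero on existence or accessibility zeroes out the result, since
    data you cannot obtain has no value regardless of its quality.
    """
    weights = weights or {c: 1.0 for c in CRITERIA}
    if scores.get("existence", 0) == 0 or scores.get("accessibility", 0) == 0:
        return 0.0
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores.get(c, 0.0) * weights[c] for c in CRITERIA) / total_weight

# Example: judging blog-post counts as a measure of market attention.
blog_post_counts = {
    "existence": 1.0,      # the posts are recorded somewhere
    "accessibility": 1.0,  # free to query
    "coverage": 0.4,       # sources clearly miss many posts
    "accuracy": 0.7,       # counts include some irrelevant hits
    "consistency": 0.6,    # services differ in what they index
    "timeliness": 0.9,     # near real-time
}
print(f"blog post counts score: {score_source(blog_post_counts):.2f}")
```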
Labels: checklists, data, marketing measurement
Tuesday, November 18, 2008
Comparing Web Activity Measures for Demand Generation Vendors
I recently wanted to measure the relative popularity of several demand generation vendors, as part of deciding how to expand the Raab Guide to Demand Generation Systems. This led to an interesting little journey which I think is worth sharing.
I started with a list of 23 marketing system vendors. A couple are fairly large but most are quite small. These were grouped into three categories: five demand generation vendors already in the Guide; four marketing automation vendors with significant demand generation market presence; and fourteen other demand generation vendors. (See http://www.raabguide.com/ for definitions of demand generation and marketing automation.)
My first thought was to look at their Web site traffic directly. The easiest way to do this is at Alexa.com, which tracks site visits of people who download its search toolbar. The number of users in this base is apparently a well-guarded secret, or at least well enough guarded that I would have had to look beyond the first Google search page for the answer. Alexa was originally classified by many experts as spyware, and is still somewhat controversial. But it was purchased by Amazon.com in 1999 and has since become more or less grudgingly accepted.
Be that as it may. I captured two statistics for each of my sites from Alexa: a ranking which basically reflects the number of pages viewed by unique visitors each month (the busiest site gets rank 1, next busiest gets rank 2, etc.); and a share figure that shows the percentage of total toolbar users who visit each site each month. (I think I have that correct; you can check the definitions at Alexa.com.) Ranking on either figure gives the same sequence (except for Pardot; I have no idea why). If you’re creating ratios or an index, the difference in the share figures is probably a better indicator of relative popularity, since a company with twice the share of another has twice as many visitors, but will not necessarily have a rank number that is half as large. (Lower rank means more traffic.)
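A small calculation makes the point, using the Eloqua and Silverpop figures from the table below: the share ratio measures relative traffic directly, while the rank ratio understates the gap.

```python
# Illustration of share vs. rank ratios, using Alexa figures
# for Eloqua and Silverpop from the table that follows.

eloqua = {"rank": 20234, "share": 0.00707}
silverpop = {"rank": 29080, "share": 0.00305}

share_ratio = eloqua["share"] / silverpop["share"]
rank_ratio = silverpop["rank"] / eloqua["rank"]  # inverted: lower rank = more traffic

print(f"share ratio: {share_ratio:.2f}x")  # ~2.3x the visitors
print(f"rank ratio:  {rank_ratio:.2f}x")   # only ~1.4x, understating the gap
```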
Here are the ranks I came up with, broken into the three segments I mentioned earlier:
| Alexa - 3 mo average | |||
| Already in Guide: | rank | share | |
| Eloqua | 20,234 | 0.0070700 | |
| Silverpop | 29,080 | 0.0030500 | |
| Marketo | 68,088 | 0.0017000 | |
| Manticore Technology | 213,546 | 0.0006100 | |
| Market2Lead | 235,244 | 0.0004800 | |
| Vtrenz | 295,636 | 0.0003600 | |
| Marketing Automation Vendors: | |||
| Unica / Affinium* | 126,215 | 0.0008500 | |
| Alterian | 345,543 | 0.0002500 | |
| Aprimo | 416,446 | 0.0002200 | |
| Neolane | 566,977 | 0.0001690 | |
| Other Demand Generation: | |||
| MarketBright | 167,306 | 0.0005400 | |
| Pardot | 211,309 | 0.0003600 | |
| Marqui * | 211,767 | 0.0004400 | |
| ActiveConversion | 257,058 | 0.0003400 | |
| Bulldog Solutions | 338,337 | 0.0003200 | |
| OfficeAutoPilot | 509,868 | 0.0002000 | |
| Lead Genesys | 557,199 | 0.0001450 | |
| LoopFuse | 734,098 | 0.0001090 | |
| PredictiveResponse | 2,313,880 | 0.0000330 | |
| FirstWave Technologies | 2,872,765 | 0.0000170 | |
| NurtureMyLeads | 4,157,304 | 0.0000140 | |
| Customer Portfolios | 5,097,525 | 0.0000090 | |
| Conversen* | 6,062,462 | 0.0000070 | |
| FirstReef | 11,688,817 | 0.0000010 | |
These rankings were more or less as I expected. Within the first group, Eloqua is definitely the largest vendor, while Marketo is probably the most aggressive marketer at the moment. Vtrenz is the second-largest demand generation company, based on number of clients and almost certainly on revenue. But it is a subsidiary of Silverpop, so its traffic is split between Vtrenz.com and visits to Silverpop.com. This means that the Vtrenz.com ranking understates the company’s position, while the Silverpop ranking includes traffic unrelated to demand generation. I’ve therefore tracked both here. Manticore and Market2Lead get much less attention than the other three, so it makes sense that they have much less traffic.
Figures for the next group also seem to be ranked about correctly. Unica is certainly the most prominent of this group, with Alterian, Aprimo and Neolane trailing quite far behind. I would have expected a bit more traffic for Neolane, but it is definitely the new kid on this block and only entered the U.S. market about one year ago. The real surprise here is that this group as a whole ranks so far below the big demand generation vendors, even though the marketing automation firms are in fact larger and probably do more promotion. Perhaps the marketing automation vendors appeal to a smaller number of potential users (primarily, marketers in large companies with direct customer contact, such as financial services, retail, travel and telecommunications) and generate less traffic as a result.
I didn’t have much sense of the relative positions of the other demand generation vendors, although I would have guessed that MarketBright and Pardot were near the top. Marqui has had little attention recently, perhaps because they’ve been through financial difficulties culminating in the purchase of their assets by a private investor group this past August. ActiveConversion I do know, only because I’ve spoken with them, and they rank about where I expected given their number of clients. The other names were somewhat familiar but the only one I’d ever spoken with was OfficeAutoPilot, which I knew to be small. Since I had no fully formed expectations, the rankings couldn’t surprise me.
In other words, the rankings provided by Alexa seemed generally reasonable given my knowledge of the companies concerned.
But Web traffic is just one measure. Where else could I look to confirm or challenge these impressions?
Well, there is another Web traffic site that is somewhat similar to Alexa, called Compete.com. I actually hadn’t heard of them before but they came up in my research. They apparently use their own toolbar but also some other Web traffic measures such as volumes reported by Internet Service Providers (ISPs). You’d expect them to pretty much match the Alexa figures. But do they? Here is a comparison of the two, with the Alexa share figures multiplied by 10^7 to make them more legible.
| Compete.com | Alexa.com | |
| unique visitors / month | share x 10^7 | |
| Already in Guide: | ||
| Eloqua | 560,288 | 70,700 |
| Silverpop | 293,580 | 30,500 |
| Marketo | 34,244 | 17,000 |
| Manticore Technology | 15,789 | 6,100 |
| Market2Lead | 10,689 | 4,800 |
| Vtrenz | 5,313 | 3,600 |
| Marketing Automation Vendors: | ||
| Unica / Affinium* | 23,138 | 8,500 |
| Alterian | 4,497 | 2,500 |
| Aprimo | 5,131 | 2,200 |
| Neolane | 3,927 | 1,690 |
| Other Demand Generation: | ||
| MarketBright | 13,993 | 5,400 |
| Pardot | 7,339 | 3,600 |
| Marqui * | 3,282 | 4,400 |
| ActiveConversion | 1,503 | 3,400 |
| Bulldog Solutions | 6,408 | 3,200 |
| OfficeAutoPilot | 1,567 | 2,000 |
| Lead Genesys | 2,630 | 1,450 |
| LoopFuse | 1,930 | 1,090 |
| PredictiveResponse | 1,099 | 330 |
| FirstWave Technologies | - | 170 |
| NurtureMyLeads | - | 140 |
| Customer Portfolios | - | 90 |
| Conversen* | - | 70 |
| FirstReef | - | 10 |
You don’t need Sherlock Holmes to spot the problem: the Compete.com figures for Eloqua and Silverpop seem much too high compared with the others. I could concoct a theory that this reflects the difference between counting unique visitors in Compete.com and counting page views in Alexa, and throw in the fact that Eloqua and Silverpop/Vtrenz host landing pages for their clients. But the other demand generation vendors also host their clients’ pages, so this shouldn’t really matter. I suspect what really happens is that Compete measures low volumes differently from higher volumes (remember, they use a combination of techniques), and thus the figures for high-volume Eloqua and Silverpop are inconsistent with figures for the other, much lower-volume domains.
Anyway, if we throw away those two, the rest of the Compete figures seem more or less in line with the Alexa figures, apart from some small exceptions (Bulldog in particular ranks higher). All told, it doesn’t seem that Compete adds much value to what I already got from Alexa.
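For readers who want to go beyond eyeballing, here is a rough sketch of one way to quantify how well the two sources agree on ordering, using SciPy’s Spearman rank correlation and the figures from the table above (vendors with no Compete figure are omitted). Comparing the coefficient with and without the two suspect domains shows how much they distort the picture.

```python
# Quantifying agreement between Compete and Alexa rankings.
# Figures are (Compete unique visitors/month, Alexa share x 10^7)
# from the table above; vendors Compete did not report are omitted.
from scipy.stats import spearmanr

data = {
    "Eloqua": (560288, 70700), "Silverpop": (293580, 30500),
    "Marketo": (34244, 17000), "Manticore": (15789, 6100),
    "Market2Lead": (10689, 4800), "Vtrenz": (5313, 3600),
    "Unica": (23138, 8500), "Alterian": (4497, 2500),
    "Aprimo": (5131, 2200), "Neolane": (3927, 1690),
    "MarketBright": (13993, 5400), "Pardot": (7339, 3600),
    "Marqui": (3282, 4400), "ActiveConversion": (1503, 3400),
    "Bulldog": (6408, 3200), "OfficeAutoPilot": (1567, 2000),
    "LeadGenesys": (2630, 1450), "LoopFuse": (1930, 1090),
    "PredictiveResponse": (1099, 330),
}

def rank_agreement(exclude=()):
    """Spearman rank correlation between the two sources."""
    rows = [v for k, v in data.items() if k not in exclude]
    compete, alexa = zip(*rows)
    rho, _ = spearmanr(compete, alexa)
    return rho

print(f"all vendors:              {rank_agreement():.2f}")
print(f"without Eloqua/Silverpop: {rank_agreement(('Eloqua', 'Silverpop')):.2f}")
```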
So much for Web traffic. How about search volume? Google Keywords will give that to me. Again, we’ll compare to Alexa as a reference:
| Google Keywords | Alexa | |
| avg search volume | share x 10^7 | |
| Already in Guide: | ||
| Eloqua | 1,900 | 70,700 |
| Silverpop | 1,790 | 30,500 |
| Marketo | 839 | 17,000 |
| Manticore Technology | 113 | 6,100 |
| Market2Lead | 318 | 4,800 |
| Vtrenz | 752 | 3,600 |
| Marketing Automation: | - | |
| Unica / Affinium* | 6,600 | 8,500 |
| Alterian | 861 | 2,500 |
| Aprimo | 1,600 | 2,200 |
| Neolane | 1,340 | 1,690 |
| Other Demand Generation: | - | |
| MarketBright | 186 | 5,400 |
| Pardot | 210 | 3,600 |
| Marqui * | 1,300 | 4,400 |
| ActiveConversion | 46 | 3,400 |
| Bulldog Solutions | 442 | 3,200 |
| OfficeAutoPilot | 0 | 2,000 |
| Lead Genesys | 74 | 1,450 |
| LoopFuse | 260 | 1,090 |
| PredictiveResponse | 36 | 330 |
| FirstWave Technologies | 386 | 170 |
| NurtureMyLeads | 0 | 140 |
| Customer Portfolios | 0 | 90 |
| Conversen* | 170 | 70 |
| FirstReef | 12 | 10 |
If we limit ourselves to the first two groups, the search numbers look mostly plausible. The low figure for Manticore could have to do with checking specifically for “Manticore Technology”, since a looser “Manticore” would incorporate an unrelated company and references to the mythical beast. The high value for Unica probably reflects some unrelated uses of the word in other languages or as an acronym. I have no particular explanation for the relatively low value for Alterian or the substantial flattening of the range between Eloqua and its competitors. Perhaps Eloqua’s traffic is less search-driven than other vendors’. Or not. In any event, I think the implicit rankings here are about as plausible as the Alexa rankings.
But things get crazier in the Other Demand Generation vendor segment. I understand the Marqui number, which is high because Marqui can be a misspelling of other words (marquis, marque, marquee) and has some unrelated non-English meanings. Similarly, Conversen is a verb form in Spanish. I think that Bulldog Solutions, FirstWave and LoopFuse also gain some hits because of their component words, even though I tried to keep them out of the search results. The bottom line here is that you have to throw away so many terms that the remaining rankings don’t signify much. So, in general, search keyword rankings need close consideration before you can accept them as a meaningful measure of importance.
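If you wanted to automate even a crude version of this screening, it might look something like the hypothetical sketch below. The flagged tokens and explanations come from the analysis above, but the heuristic itself is invented for illustration.

```python
# A hypothetical screen for vendor names likely to collide with
# ordinary words, so their search volumes can be discounted or
# re-queried with a qualifier (e.g., "Manticore Technology").

AMBIGUOUS_TOKENS = {
    "marqui": "misspelling of marquis/marque/marquee; non-English meanings",
    "conversen": "Spanish verb form",
    "unica": "common word in other languages; also an acronym",
    "manticore": "mythical beast; unrelated company",
    "bulldog": "ordinary English word",
}

def screen(vendor):
    """Return a warning string if any token in the vendor name is ambiguous."""
    for token in vendor.lower().split():
        if token in AMBIGUOUS_TOKENS:
            return f"{vendor}: {AMBIGUOUS_TOKENS[token]}"
    return None

for v in ["Marqui", "Conversen", "Bulldog Solutions", "Pardot"]:
    print(screen(v) or f"{v}: no known collision")
```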
How about Google hits? I’ll show them alongside the Google Keywords as well as Alexa rank.
| Google hits | Google Keywords | Alexa | |
| avg search volume | share x 10^7 | ||
| Already in Guide: | |||
| Eloqua | 118,000 | 1,900 | 70,700 |
| Silverpop | 111,000 | 1,790 | 30,500 |
| Marketo | 103,000 | 839 | 17,000 |
| Manticore Technology | 9,620 | 113 | 6,100 |
| Market2Lead | 25,900 | 318 | 4,800 |
| Vtrenz | 35,200 | 752 | 3,600 |
| Marketing Automation: | - | ||
| Unica / Affinium* | 7,750 | 6,600 | 8,500 |
| Alterian | 262,000 | 861 | 2,500 |
| Aprimo | 161,000 | 1,600 | 2,200 |
| Neolane | 40,200 | 1,340 | 1,690 |
| Other Demand Generation: | - | ||
| MarketBright | 34,500 | 186 | 5,400 |
| Pardot | 27,600 | 210 | 3,600 |
| Marqui * | 1,370,000 | 1,300 | 4,400 |
| ActiveConversion | 16,800 | 46 | 3,400 |
| Bulldog Solutions | 9,340 | 442 | 3,200 |
| OfficeAutoPilot | 777 | 0 | 2,000 |
| Lead Genesys | 5,880 | 74 | 1,450 |
| LoopFuse | 95,400 | 260 | 1,090 |
| PredictiveResponse | 21,800 | 36 | 330 |
| FirstWave Technologies | 13,400 | 386 | 170 |
| NurtureMyLeads | 1,050 | 0 | 140 |
| Customer Portfolios | 12,200 | 0 | 90 |
| Conversen* | 2,790 | 170 | 70 |
| FirstReef | 18,100 | 12 | 10 |
Here the impact of limiting Manticore to “Manticore Technology” shows up even more clearly (although Manticore truly doesn’t get much Web attention). I limited the Unica test to “Unica Affinium” since the number of hits is otherwise over 100 million; but this seems to excessively depress the results. Note that the low ranking for Alterian has now been reversed; in fact, Alterian has the most hits of all, and the marketing automation group in general shows more activity than the demand generation vendors. That could be true – those vendors have been around longer. Or it could be a fluke.
Once again, the Other Demand Generation group has a big problem with Marqui and perhaps smaller problems with LoopFuse and FirstReef. Even excluding those, the numbers jump around a great deal. As with keywords, these figures don’t seem to be a reliable measure of anything.
Let’s try one more measure: the blogosphere. Here I tried three different services: Technorati, BlogPulse and Ice Rocket.
| Technorati | Blogpulse | Ice Rocket | Alexa | |
| blog posts | blog posts | all posts | share x 10^7 | |
| Already in Guide: | ||||
| Eloqua | 130 | 267 | 286 | 70,700 |
| Silverpop | 70 | 119 | 188 | 30,500 |
| Marketo | 3 | 179 | 229 | 17,000 |
| Manticore Technology | 0 | 12 | 56 | 6,100 |
| Market2Lead | 0 | 7 | 25 | 4,800 |
| Vtrenz | 0 | 30 | 53 | 3,600 |
| Marketing Automation: | - | |||
| Unica / Affinium* | 0 | 6 | 43 | 8,500 |
| Alterian | 8 | 119 | 145 | 2,500 |
| Aprimo | 0 | 118 | 139 | 2,200 |
| Neolane | 0 | 33 | 64 | 1,690 |
| Other Demand Generation: | - | |||
| MarketBright | 1 | 23 | 33 | 5,400 |
| Pardot | 0 | 32 | 33 | 3,600 |
| Marqui software* | 5 | 15 | 19 | 4,400 |
| ActiveConversion | 0 | 6 | 12 | 3,400 |
| Bulldog Solutions | 0 | 30 | 43 | 3,200 |
| OfficeAutoPilot | 0 | 5 | 5 | 2,000 |
| Lead Genesys | 0 | 1 | 5 | 1,450 |
| LoopFuse | 4 | 48 | 43 | 1,090 |
| PredictiveResponse | 0 | 0 | 0 | 330 |
| FirstWave Technologies | 0 | 5 | 11 | 170 |
| NurtureMyLeads | 0 | 1 | 5 | 140 |
| Customer Portfolios | 0 | 0 | 3 | 90 |
| Conversen* | 0 | 2 | 0 | 70 |
| FirstReef | 0 | 0 | 0 | 10 |
Results for all three services are roughly consistent, although Technorati gets many fewer hits and Ice Rocket finds a few more than BlogPulse. The major anomaly is the low value for Unica, but that happens because I actually searched on Unica Affinium, to avoid all the irrelevant hits on Unica alone. Similarly, I searched on Marqui Software to avoid unrelated hits on Marqui. The high values for Bulldog Solutions and LoopFuse are valid (I scanned the actual hits); these two vendors just managed to snag a relatively high number of blog mentions. Remember we are looking at very small numbers here: it doesn’t take much to get 40 blog mentions. Nor, if we trust the Alexa figures, do they translate into much Web traffic. However, the blog hits might explain the relatively high keyword search counts for those two vendors.
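To see just how fragile these small counts are, here is a quick sketch using the BlogPulse and Ice Rocket columns from the table above; it shows how much a handful of extra or missing posts would swing each vendor’s apparent level of attention.

```python
# The "small numbers" caveat, illustrated: with counts this low, a few
# posts either way swings a vendor's apparent attention dramatically.
# Figures are the BlogPulse and Ice Rocket columns from the table above.

counts = {
    "Eloqua": (267, 286), "Marketo": (179, 229), "Silverpop": (119, 188),
    "Bulldog Solutions": (30, 43), "LoopFuse": (48, 43), "Pardot": (32, 33),
    "OfficeAutoPilot": (5, 5), "Lead Genesys": (1, 5),
}

for vendor, (blogpulse, icerocket) in counts.items():
    mean = (blogpulse + icerocket) / 2
    # relative swing if just 5 posts were added or missed
    swing = 5 / mean if mean else float("inf")
    print(f"{vendor:20s} mean={mean:6.1f}  +/-5 posts = {swing:5.0%} swing")
```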
Well, I hope you enjoyed the trip. This is far from an exhaustive analysis of the issue, but based on the information available, I’d say that Alexa Web traffic is the most useful measure for assessing the market presence of different demand generation vendors, and blog mentions have at least some value. Google hits and keyword searches capture too many unrelated items to be reliable.
Labels: data, marketing measurement
Wednesday, November 12, 2008
No Silver Bullets for Social Media Measurement
The editor of my forthcoming book on marketing measurement asked me to add something on social media, which led to several days of research. Although there are many smart and articulate people writing on the topic, the bottom line is, well, you can’t really measure the bottom line.
There are plenty of activity measures such as numbers of page views, comments and subscribers. Sometimes there are specific benefits such as reduced costs if technical questions are answered through a user forum instead of company staff. Sometimes you can compare behavior of social media participants vs. non-participants, although that raises a self-selection problem – obviously those people are more engaged to begin with.
But measuring the impact of social media on attitudes in the population as a whole—that is, on brand value—is even harder than measuring the impact of traditional marketing and advertising methods because the audience size is so small. Measuring the impact of brand value on actual sales is already a problem; what you have with social media could be considered the brand value problem, squared.
In fact, the closest analogy is measuring the value of traditional public relations, which is notoriously difficult. Social media is more like a subset of public relations than anything else, although it feels odd to describe it that way because social media is so much larger and more complicated than traditional PR. Maybe we'll need to think of PR as a subset of social media.
The best advice I saw boiled down to setting targets for something measurable, and then watching whether you reach them. This is pretty much the best practice for measuring public relations and other marketing programs without a direct impact on sales. I guess there’s nothing surprising about this, although I was still a bit disappointed.
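For what it’s worth, the mechanics of that advice are simple enough to sketch. The metric names, targets and actuals below are hypothetical illustrations, not recommended benchmarks.

```python
# A minimal sketch of "set targets for something measurable, then
# watch whether you reach them." All numbers are hypothetical.

targets = {"page_views": 50000, "comments": 400, "subscribers": 1200}
actuals = {"page_views": 43750, "comments": 510, "subscribers": 980}

for metric, target in targets.items():
    pct = actuals[metric] / target
    status = "on track" if pct >= 1.0 else "behind"
    print(f"{metric:12s} {actuals[metric]:>7,} / {target:>7,}  ({pct:6.1%}, {status})")
```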
Still, as I say, there is plenty of interesting material available if you want to learn about concrete measurements and how people use them. Just about every hit on the first two pages of a Google search on “social media marketing measurement” was valuable. In particular, I kept tripping across Jeremiah Owyang, currently an analyst with Forrester Research, who has created many useful lists on his Web Strategy by Jeremiah blog. For example, the post Social Media FAQ #3: How Do I Measure ROI? provides a good overview of the subject. You can also search his category of Social Media Measurement. Another post I found helpful was What Is The ROI For Social Media? from Jason Falls’ Social Media Explorer blog.
Labels: brand value, data, marketing measurement
Thursday, August 7, 2008
External Data Will Help To Explain Customer Behavior
Let’s continue a bit with last week’s post about capturing all the marketing messages received by customers and prospects. Although this is becoming easier and less expensive, it will never be completely simple or free. So there will always be a need to measure the cost of gathering the information against its value.
The value will ultimately be two-fold. Retrospectively, the information will help to measure the impact of past messages, enabling optimal allocation of marketing investments. This works at the aggregate level, and is traditional marketing measurement.
But we also want to use information proactively, to help target individuals. Consumption of marketing messages, particularly in online media, is closely related to customer choices about which Web pages to visit or which offers to accept. This behavior provides vital insights into the customer’s current state of mind and, thus, which future treatments are most likely to be productive. Here we’re moving into a different type of marketing measurement, which involves predictions of individual behavior.
I suppose we could debate whether this type of activity, which itself is not new, really should be labeled as marketing measurement. But the particular point I had in mind today was that even complete information about a customer’s interactions with the company will not be enough to accurately predict behavior. There are many external factors as well. So individual-level predictions will be much more useful if they can be based on a combination of internal and external data.
This isn’t a particularly brilliant insight, or a new one. But what is new is the greater availability of external information, such as data about individuals’ backgrounds, news about their companies, and public attitudes revealed through online discussions such as forums, blogs, and reviews. I haven’t compiled a comprehensive list of the information sources, but companies like Jigsaw (online directory of individuals and companies), InsideView (aggregation of news about companies) and Twing (tracking of online discussions) keep popping up. Any predictive system would benefit from incorporating their contents. Benefits include gaining background on customers and prospects (which should yield insights into the approaches most likely to be effective), and identifying specific events (which might prompt new needs for particular products).
The most important value from these sources will come from better sales results. But retrospective marketing measurement will also benefit because it will more precisely identify the factors that impact the results of a given marketing effort. Thus, the focus will increasingly shift from “whether” a particular marketing program worked, to “when” (i.e., under which conditions) it works. This will guide both treatments of specific individuals and the over-all allocation of marketing resources. Once that happens, the distinction between retrospective and proactive marketing measurement will be less important: they will effectively be the same thing.
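As a thought experiment, here is a hypothetical sketch of what combining internal and external signals might look like when scoring an individual prospect. Every field name and weight is invented for illustration; in practice the external records would be populated from feeds like those mentioned above.

```python
# A hypothetical sketch of blending internal interaction history with
# external signals before scoring a prospect. Field names and the
# scoring rule are invented for illustration only.

internal = {"acme": {"site_visits_30d": 14, "emails_opened_30d": 3}}
external = {"acme": {"funding_event": True, "negative_forum_mentions": 0}}

def score_prospect(company):
    i = internal.get(company, {})
    e = external.get(company, {})
    score = 0.1 * i.get("site_visits_30d", 0) + 0.3 * i.get("emails_opened_30d", 0)
    if e.get("funding_event"):  # external event suggesting new budget/needs
        score += 1.0
    score -= 0.5 * e.get("negative_forum_mentions", 0)
    return score

print(f"acme: {score_prospect('acme'):.2f}")
```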
Labels: data, marketing measurement
Friday, August 1, 2008
'Total Marketing Measurement' Is Closer Than You Think
As I’ve mentioned previously, most of my research these days is targeted at demand generation systems. So I was a little surprised that one of the demand generation products, ActiveConversion, positioned itself as a “total marketing measurement” system, complete with the three-letter abbreviation TMM. A quick glance at the company Web site, followed by a conversation with president Fred Yee, confirmed that ActiveConversion has functions similar to other demand generation products: it sends email campaigns, tracks the resulting Web visits, nurtures leads until they are mature, and then turns them over to sales. ActiveConversion is a little unusual in not providing tools to build Web pages. Instead, it tracks behavior by embedding tags in pages built externally. But this makes little practical difference, and the vendor will add page generation functions later this year.
Why, then, does ActiveConversion call itself a TMM? I think it’s mostly a bit of innocent marketing patter, but it also reflects the system’s ability to track Web visitors’ behavior in great detail and report it to salespeople. This is common among demand generation products, but not a feature of traditional Web analytics, which is more about group behaviors such as how many times each page is viewed. In some sense, this detailed individual-level tracking could reasonably be described as “total” measurement.
In practice, the “total” measurement by demand generation systems is limited to behavior on company Web pages. This is far from the complete set of interactions between a company and its customers. But as the scope of recorded transactions expands relentlessly—something I personally consider an Orwellian nightmare, but see as inevitable—it’s worth contemplating a world where “total” measurement truly does capture all behaviors of each individual. At that point, marketers will no longer be able to hide behind the traditional barrier of not knowing which messages reached each customer. This will leave them face to face with the need to take all this data and make sense of it.
Much as I love technology, I suspect there are significant limits to how accurately it will be able to measure marketing results at the individual level. But meaningful predictions should be possible for groups of people, and will yield substantial improvements in marketing effectiveness compared with what we have today. But this will only happen if marketers make a substantial investment in new measurement techniques, which in turn requires that marketers believe those techniques are important. The only way they will believe this is to see early successes, which is why marketers must start working today to find effective approaches, even if they are based on partial data. After all, it’s certain that the quality of the information will improve. What’s in question is whether marketers make good use of it.
Labels: data, marketing measurement, software
Friday, May 30, 2008
Can Brand Value Really Measure Effectiveness?
One more comment on the ANA’s Integrated Marketing survey that I wrote about yesterday. I was struck that brand tracking studies ranked second among effectiveness measures, and brand equity measures ranked fourth. (Numbers are in yesterday’s post.) This is more respect than brand measurement usually gets.
I suppose this reflects the nature of the survey respondents, who are mostly consumer marketers and (this being the Association of National Advertisers) are largely focused on conventional advertising. I suspect a survey of, say, Direct Marketing Association members would get very different results.
But it seems that brand value is also accepted as an effectiveness measure by people outside of marketing at the survey respondents’ companies. This suggests these people live in a very brand-oriented culture. Indeed, although a couple of speakers yesterday said they had trouble getting their company to believe ROI calculations based on marketing mix models, no one mentioned any problems gaining acceptance for brand metrics.
Lest you think the respondents are all packaged goods marketers, 20% of them worked in financial services and insurance. (One nice thing about people who are used to doing good research is that they publish all the details.) Computers and technology accounted for another 10%. The traditional brand-centric categories of consumer packaged goods were 11%, and food, beverage and tobacco were 9% of the total.
One reason the high ranking of brand value measures caught my eye was that I had just compared brand valuations from two different sources: Millward Brown Optimor and Interbrand. Taking Google in 2007 as an example, Millward Brown gave it a value of $66.4 billion and Interbrand gave it a value of $17.8 billion. (Millward Brown’s 2008 figure for Google is $86.1 billion; Interbrand’s 2008 figure is not yet available.)
Any way you slice it, this is a very big difference: Millward Brown’s figure is nearly four times Interbrand’s. Rankings also diverged: Millward Brown placed Google first among all brands, while Interbrand had it at number 20.
My point here is that the financial values produced by brand valuation methodologies are very imprecise. It’s actually a bit frightening to think that advertisers would use them to measure effectiveness. The consumer attitudes captured in brand tracking studies are probably much more reliable, even though they cannot be directly converted into a financial measure.
Side note: I had no sooner finished this post than I received an email survey from ANA asking my opinion of the conference. These are definitely people who take their research seriously. Good for them.
Labels: brand value, data, marketing
Thursday, May 1, 2008
What's the Biggest Obstacle to Marketing Performance Measurement?
In work with my own clients and discussions with other people, I have found that the biggest obstacle to marketing performance measurement is the lack of necessary data. But when I mentioned this in passing to a marketing measurement expert whose opinion I greatly respect, he challenged me. He said that his experience had shown cultural, skills and experience gaps to be more important.
This really didn’t sound right, but there’s no point debating opinions. So I looked around for some more objective evidence in the form of survey results. I found two sets of results:
- In its 2007 Marketing Performance Measurement Benchmarks for Midsized Companies (free with registration), The Marketing Leadership Roundtable found that “lack of quality data” was by far the biggest factor contributing to dissatisfaction with marketing measurements. It was cited by 84% of respondents, while the next most common factor, “inability to generate predictive results,” was cited by only 56%. Score one for me.
I do have to note, though, that when the same survey asked about challenges to improving performance measurement, improved data came up in third place (70%), behind improved reporting systems (81%) and improved linkages to financial results (74%). Quite frankly, I don’t know what to make of the discrepancy. I suppose that marketers answering the second question were addressing the practical issue of what could be done within existing constraints.
Incidentally, this study also provides a very detailed list of the metrics people use in different areas, with percentages to show how often they are employed. Such lists are apparently quite popular, so if you’re into that sort of thing, it’s well worth a look.
- Aberdeen Group reported in its February 2008 study CMO Strategic Agenda: Demystifying ROI in Marketing (requires payment) that the number one challenge to identifying marketing return on investment was lack of data (48%). Again this was substantially ahead of the next-ranked issue, timely access to relevant information, cited by 36%. Other issues ranked fairly close behind: lack of human resources, difficulty in identifying the right metrics, and communication between sales and marketing.
For what it’s worth, there is also another Aberdeen study from the same series, CMO Strategic Agenda: Automating Closed Loop Marketing, which is available for free for a limited time. Data also comes up as the top issue in this one, although the question is implementation challenges for closed-loop marketing. Specifically, the study shows 47% of respondents citing data consolidation as a top challenge, compared with 36% citing lack of technical skills and 32% citing continuous access to actionable information. Although this is a significantly different issue, it does confirm the general proposition that lack of data is a big problem for marketers.
Of course, none of these studies is definitive. But I think that despite my respected colleague’s comments, I’ll continue to maintain that lack of data is performance measurement Issue #1.