Thursday, December 18, 2008

Aberdeen Reports Show Varied Roles for Performance Measurement

Our friends at Aberdeen Group apply a highly standardized research process to technology issues. They take a survey that asks companies about their business performance and the business processes, organization, knowledge management, technologies and performance measures related to a technology. They then divide the companies into leaders (“best-in-class”), laggards and industry average based on their business performance, and compare replies for the different groups. The not-quite-stated implication is that the differences in performance are caused by differences in the other factors. This is not necessarily correct (the ever-popular post hoc ergo propter hoc fallacy) and you could also wonder about the sample size (usually around 200) and how accurately people can answer such detailed questions. But so long as you don’t take the studies too seriously, they always give an interesting look at how firms at different maturity levels manage the technologies at hand.

It so happens that three of the Aberdeen studies have been sitting on my desk for some time, so I had a chance to look at them together. The topics were Lead Nurturing, Trigger Marketing and Cross-Channel Campaign Management. All are currently available for free although the sponsors may contact you in exchange.

Since the Aberdeen reports all follow a similar format, it’s easy to compare their contents. From the perspective of marketing performance measurement, they contain two elements of interest. These are the performance measures highlighted as distinguishing best-in-class companies, and the role of measurement among recommended strategic actions. Here’s a brief look at each of these in the three reports:

Lead Nurturing. The report highlighted number of qualified leads and lead-to-close ratio as critical performance measures, and found that 77% of best-in-class companies were tracking them. It also recommended tracking revenue associated with leads, although it found only 35% of best-in-class companies could do this. But otherwise, it didn’t see performance measurement as a central issue: the primary focus was on matching marketing messages to the prospect’s current stage in the buying cycle. Other important strategies were leveraging multiple channels, identifying prospect buying cycle and needs, and using automated lead scoring to move customers through the cycle.

Trigger Marketing. This report did not identify particular marketing measures as critical, although it did say that having defined performance goals for trigger marketing programs is important. It reported the most common measure is change in response rates, used by 69% of all respondents. (The next most common measure, change in retention rates, was used by just 54%.) I take this as a sign of immaturity (among the respondents, not Aberdeen), since response rate is a primitive measure compared with profitability and return on marketing investment, which were used by 43% and 42% respectively. This is consistent with another finding: the most common strategic action is to “link trigger marketing activities to increased revenues and other business results” (32%). I interpret that as meaning people are just learning to make that linkage and are simply using response rate until they figure it out. It might be worth noting that the Aberdeen analyst highlighted digital dashboards as a next step for best-in-class companies wishing to do still better, although I didn’t see a particularly compelling case for selecting that over other possible activities. But I’m all in favor of dashboards, so I’m glad to see it.

Cross-Channel Campaign Management. Again, the report doesn’t specify particular performance measures. It does say that it’s important to optimize future campaigns based on past performance (pretty obvious) and highlights real-time tracking of results across channels (less obvious, and I’m not sure I agree: immediate results may not in fact correlate with long-term profitability). This report did include segmentation and analytics as strategic actions. (I consider these part of performance measurement.) In particular, it stressed that best-in-class companies were focused on identifying their high value customers and treating them uniquely. Most of the recommendations, however, were about building the infrastructure needed to coordinate marketing messages across channels, and then executing those coordinated campaigns.

So where does this leave us? I don’t draw any grand lessons from these three reports, except to note that financial measures (i.e., customer profitability and return on investment) don’t play much of a role in any of them. Even that probably just confirms that such measures are not widely available, which we already knew. But it’s good to know that people are working on performance measurement and that Aberdeen is baking it into its research.

Thursday, December 11, 2008

Survey: Marketing Accountability Measures Remain Weak

Every year since 2005, the Association of National Advertisers and vendor MMA (Marketing Management Analytics) have joined forces to produce a survey on marketing accountability. Although the details change each year, the general results have been sadly consistent: marketers, finance executives and senior management are very unhappy with their marketing measurement capabilities.

In the 2008 study, released in July and just recapitulated in a new MMA white paper, only 23% of the marketers were satisfied with their metrics for marketing’s impact on sales, and just 19% were satisfied with metrics showing marketing impact on ROI and brand equity.

Furthermore, only 14% of the marketers felt their senior management had confidence in marketing’s forecasts of sales impact. And even this is probably optimistic: a separate MMA-funded study, also cited in the new white paper, found that only 10% of financial executives use marketing forecasts to help set the marketing budget.

The obvious question is why so little progress has been made. Marketers consistently rank performance measurement as their top priority (for example, see the CMO Council’s Marketing Outlook 2008 survey). Nor are marketers doing this out of the goodness of their hearts: they know that being able to show the impact of their expenditures is the best way to protect and grow their budgets. So marketers have every reason to work hard at developing performance measures that finance and senior management will accept.

And yet...when the ANA survey asked marketers to rank their accountability challenges, the top score (45%) went to “understanding the impact of changes in consumer attitudes and perceptions on sales”. This strikes me as odd, if the marketers’ ultimate goal is to understand the impact of marketing programs on sales. Measuring the impact of marketing programs and measuring the impact of customer attitudes are not the same thing.

Nor is this a simple fluke of the wording. A separate question showed the most common accountability investment was in “brand and customer equity models” (53%). These also measure the link between attitudes and sales.

One explanation for the disconnect would be that marketers can already measure the relationship between marketing programs and consumer attitudes, so they can complete the analysis by adding the link between attitudes and sales. This seems a bit optimistic, especially since it assumes that marketers also understand the impact on sales of marketing programs that are not aimed at consumer attitudes, such as price and trade promotions.

A more plausible explanation would be that the link between attitudes and sales is the hardest thing to measure, so that’s where marketers put their effort. Or, maybe that relationship is the question that marketers find most intriguing because, well, that’s the sort of thing they care about. A cynic might suggest that marketers don’t want to measure the link between marketing programs and sales because they don’t want to know the answer. But even the cynic would acknowledge that marketers need a way to justify their budgets, so that can’t be it.

None of these answers really satisfies me, but let’s put this question aside. I think we can safely assume that marketers really do want to measure their performance. This leaves the question of why they haven’t made much progress in doing it.

One reason could be that they simply don’t know how. Marketing measurement is truly difficult, so that’s surely part of it.

Another possibility is that they know how, but lack the resources. Since good marketing measurement can be quite expensive, this is probably part of the problem as well. Remember that the resources involved will ultimately come from the corporate budget, so finance departments and senior management must also agree that marketing measurement is the best thing to spend them on. And, indeed, this doesn’t seem to be their priority. The white paper states that “the number of CEOs and CFOs championing marketing accountability programs within their firms remained negligible and unchanged from 2007.”

This is a pretty depressing conclusion, although to me it has the ring of truth. Fuss though they may, CEOs and CFOs are not willing to invest money to solve the problem. Indeed our friend the cynic might argue that they are the ones with a motivation to avoid measurement, since it gives them more flexibility to allocate funds as they prefer.

The white paper doesn’t dwell on this. It just lists lack of senior management involvement as one of many obstacles. The paper’s authors then go on to propose a four-step process for developing an accountability program:

- assess and benchmark existing capabilities and resources
- define an achievable future state, in terms of the business questions to answer and the resources required to answer them
- work with stakeholders to align metrics with corporate goals and key business questions
- establish a roadmap with a multi-year phased approach

There’s not much to argue with here. The paper also provides a reasonable list of success factors, including:

- realistic stakeholder expectations
- agreement on scope at the start of the project
- cross-functional team with clearly defined roles, responsibilities and communication points
- simple math and analytics
- integration of analytics for pricing, ROI, and brand analysis

Again, it’s all sound advice. Let’s hope you can get the resources to follow it.

Friday, December 5, 2008

TraceWorks' Headlight Integrates Online Measurement and Execution

I’ve been looking at an interesting product called Headlight from the Danish firm TraceWorks. Headlight is an online advertising management system, which means that it helps marketers plan, execute and measure paid and unpaid Web advertising.

According to TraceWorks CEO Christian Dam, Headlight traces its origins to an earlier product, Statlynx, which measured the return on investment of search marketing campaigns. (This is why Headlight belongs on this blog.) The core technology of Headlight is still the ability to capture data sent by tags inserted in Web pages. These are used to track initial responses to a promotion and eventual conversion events. The conversion tracking is especially critical because it can capture revenue, which provides the basis for detailed return on investment calculations. (Setting this up does require help from your company's technology group; it is not something marketers can do for themselves.)
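
To make the ROI mechanics concrete, here is a minimal sketch of the calculation that tag-captured conversion revenue enables. This is purely illustrative: the function and figures are my own, not TraceWorks’ actual code or data model.

```python
# Hypothetical sketch: ROI from tag-reported conversion revenue.
# Not TraceWorks' actual implementation.
def campaign_roi(conversion_revenues, campaign_cost):
    """conversion_revenues: revenue amounts captured by conversion tags."""
    revenue = sum(conversion_revenues)
    return (revenue - campaign_cost) / campaign_cost

# Example: $12,400 of tracked revenue against a $4,000 campaign.
print(campaign_roi([1200.0, 5200.0, 6000.0], 4000.0))  # 2.1, i.e. 210% ROI
```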

These tracking functions are now supplemented by features that let the system actually deliver banner ads, including both an ad serving capability and digital asset management of the ad contents. The system can also integrate with Google AdWords paid search campaigns, automatically sending tracking URLs to AdWords and using those URLs in its reports. It can capture tracking URLs from email campaigns as well.

All this Web activity tracking may make Headlight sound like a Web analytics tool, but it’s quite different. The main distinction is that Headlight lets users set up and deliver ad campaigns, which is well outside the scope of Web analytics. Nor, on the other hand, does Headlight offer the detailed visitor behavior analysis of a Web analytics system.

The campaign management functions extend both to the planning that precedes execution and to the evaluation that follows it. The planning functions are not especially fancy but should be adequate: users can define activities (a term that Headlight uses more or less interchangeably with campaigns), give them start and end dates, and assign costs. The system can also distinguish between firm plans and drafts. TraceWorks expects to significantly expand workflow capabilities, including sub-tasks with assigned users, due dates and alerts of overdue items, in early 2009.

Evaluation functions are more extensive. Users can define both corporate goals (e.g., total number of conversions) and individual goals (related to specific metrics and activities) for specific users, and have the system generate reports that will compare these to actual results. Separate Key Performance Indicator (KPI) reports show selected actual results over time. In addition, something the vendor calls a “WhyChart” adds marketing activity dates to the KPI charts, so users can see the correlation between different marketing efforts and results. Summary reports can also show the volume of traffic generated by different sources.

The value of Headlight comes not only from the power of the individual features but also from the fact that they are tightly integrated. For example, the asset management portion of the system can show users the actual results for each asset in previous campaigns. This makes it much easier for marketers to pick the elements that work best and to make changes during campaigns when some items work better than others. The system can also be integrated with other products through a Web Service API that lets external systems call its functions for AdWords campaign management, conversion definition, activity setup, and reporting.

Technology aside, I was quite impressed with the openness of TraceWorks as a company. The Web site provides substantial detail about the product, and includes a Wiki with what looks like fairly complete documentation. The vendor also offers a 14-day free trial of the system.

Pricing also seems quite reasonable. Headlight is offered as a hosted service, with fees ranging from $1,000 to $5,000 per month depending on Web traffic. According to Dam, the average fee is about $1,300 per month. Larger clients include ad agencies who use Headlight for their own clients.

Incidentally, the company Web site also includes an interesting benchmarking offer, which lets you enter information about your own company's online marketing and get back a report comparing you to industry peers. (Yes, I know a marketing information gathering tool when I see one.) At the moment, unfortunately, the company doesn't seem to have enough data gathered to report back results. Or maybe it just didn't like my answers.

TraceWorks released its original Statlynx product in 2003 and launched Headlight in early 2007. The system currently serves about 500 companies directly and through agencies.

Friday, November 28, 2008

Judging the Value of Marketing Data

Last week’s post on ranking demand generation vendors highlighted a fundamental challenge in marketing measurement: the data you want often isn’t available. So a great deal of marketing measurement comes down to deciding which of the available data best suits your needs, and ultimately whether that data is better than nothing.

It’s probably obvious why using bad data can be worse than doing nothing, but in case this is read by, say, a creature from Mars: we humans tend to assume others are telling the truth unless we have a specific reason to question them. This innate optimism is probably a good thing for society as a whole. But it also means we’ll use bad data to make decisions which we would approach more cautiously if we had no data at all.

But how do you judge a piece of data? Here is a list of criteria presented in my book The MPM Toolkit, due in late January.

· Existence. Ok, this is pretty basic, but the information does have to exist. Let’s avoid the deeper philosophical issues and just say that data exists if it is recorded somewhere, or can be derived from something that’s recorded. So the color of your customers’ eyes only exists as data if you’ve stored it on their records or can look it up somewhere else. If the data doesn’t exist, you may be able to capture it. Then you have to compare the cost of capturing it with its value. But that’s a topic for another day.

· Accessibility. Can you actually access the data? To get back to last week’s post, we’d love to know the revenue of each demand generation vendor. This data certainly exists in their accounting systems, but they haven’t shared it with us so we can’t use it. Again, it’s often possible to gain access to information if you’re willing to pay the price, and you must once more compare the price with the value. In fact, the price / value tradeoff will apply to every factor in this list, so I won’t bother to mention it from here on out.

· Coverage. What portion of the universe is covered by the data? In the case of demand generation vendors, the number of blog posts was a poor measure of market attention because the available sources clearly didn’t capture all the posts. In itself, this isn’t necessarily a fatal flaw, since a fair sample could still give a useful relative ranking. But we can’t judge whether the coverage was a fair sample because we don’t know why it was incomplete. This is a critical issue when assessing whether, or more precisely how, to use incomplete data. (In the demand generation case, the very small numbers of blog posts added another issue, which is that the statistical noise of a few random posts could distort the results; see the sketch after this list. This is also something to consider, although hopefully most of your marketing data deals with larger quantities.)

· Accuracy. Data may not have been accurate to begin with or it may be outdated. Data can be inaccurate because someone purposely provided false information or because the collection mechanism is inherently flawed. Survey replies can have both problems: people lie for various reasons and they may not actually know the correct answers. Even seemingly objective data can be incorrect: a simple temperature reading may be inaccurate because the thermometer was miscalibrated, someone read it wrong, or the scale was Celsius rather than Fahrenheit. Errors can also be introduced after the data is captured, such as incorrect conversions (e.g., inflation adjustments used to create “constant dollar” values) or incorrect aggregation (e.g., customer value statistics that do not associate transactions with the correct customers). In our demand generation example, statistics on search volume were highly inaccurate because the counts for some terms included results that were clearly irrelevant. As with other factors listed here, you need to determine the level of accuracy that’s required for your specific purpose and assess whether the particular source is adequate.

· Consistency. Individually accurate items can be collectively incorrect. To continue with the thermometer example, readings from some stations may be in Celsius and others in Fahrenheit, or readings from a single station may have changed from Fahrenheit to Celsius over time. This particular difference would be obvious to anyone examining the data, although it could easily be overlooked in a large data set that combined information from many sources. Other inconsistencies are much more subtle, such as changes in wording of survey questions or the collection mechanism (e.g., media consumption diaries vs. automated “people meters”). As with coverage, it’s important to understand any bias introduced by these factors. In our demand generation analysis, Compete.com used several different techniques to measure Web traffic, and it appeared that these yielded inconsistent results for sites with different traffic levels.

· Timeliness. The primary issue with timeliness is how quickly data becomes available. In the past, it often took weeks or months to gather marketing information. Today, data in general moves much more quickly, although some information still takes months to assemble. There is a danger that quickly available data will overwhelm higher-quality data that appears later. For example, initial response rate to a promotion is immediately available, but the value of those responses can only be measured over time. Decisions based only on gross response often turn out to be incorrect once the later performance is included in the analysis. Still, timely data can be extremely important when it can lead to adjustments that improve results, such as moving funds from one promotion to another. Online marketing in particular often allows for such reactions because changes can be made in hours or minutes, rather than the weeks and months needed for traditional marketing programs.
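
As a rough illustration of the small-numbers problem mentioned under Coverage, suppose we treat each vendor’s post count as a random (Poisson-like) draw; the relative noise then shrinks roughly with the square root of the count. The counts here are hypothetical, chosen only to show the scale of the effect.

```python
import math

# Illustrative only: for a Poisson-like count n, the standard deviation is
# about sqrt(n), so the relative noise is roughly 1/sqrt(n).
for n in (5, 50, 5000):
    print(f"count={n:>5}: ~{1 / math.sqrt(n):.0%} relative noise")
# count=    5: ~45% relative noise
# count=   50: ~14% relative noise
# count= 5000: ~1% relative noise
```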

I haven’t listed cost as a separate consideration only because there are often incremental investments that can be made to change a data element’s existence, accessibility, coverage, etc. Those investments would change its value as well. But you will ultimately still need to assess the total cost and value of a particular element, and then compare it with the cost and value of other elements that could serve a similar purpose. This assessment will often be fairly informal, as it was in last week’s blog post. But you still need to do it: while an unexamined life may or may not be worth living, unexamined marketing data will get you in trouble for sure.

Friday, November 21, 2008

Twitter Volume for Demand Generation Vendors

A comment on my previous post suggested Twitter mentions as a possible measure of vendor market presence. That had in fact occurred to me, but I hadn't bothered to check because I assumed the volume would be too low. But since the topic had been raised, I figured I'd take a peek.

The first two Twitter monitoring sites I looked at, Twitscoop and Twitterment, seemed to confirm my suspicion: of the three most popular vendors, Eloqua had 6 Twitscoop hits and 3 Twitterment hits; Silverpop had 2 on each; and Marketo had 3 on Twitscoop and none on the other. No point in looking further here.

But then I checked Twitstat. In addition to having a slightly less childish name, it seems to either do a more thorough search or look back further in time: for whatever reason, it found 152 hits for Eloqua, 65 for Silverpop, and 133 for Marketo. Much more interesting.

Alas, the numbers dropped down considerably after that, as you can see in the table below. Everything else is in single digits except for two anomalies: LoopFuse with 22 mentions and Bulldog Solutions with a whopping 217. Interestingly, both those sites also had exceptionally high blog hit numbers on IceRocket. The root cause is probably the same: one or two active bloggers or Twitter users (which seems to be the accepted term; I guess we can't call them Twits) are enough to skew the figures when volumes are so low. More particularly, LoopFuse gets a lot of attention because some of its founders are closely tied to the open source community. Bulldog Solutions just seems to have a group of employees who are seriously into Twitter. In fact, I now know more about their lives than I really care to (although there was nothing indiscreet in the posts, I'm pleased to report).

A couple of side notes:

- the very short length of the messages does make them easy to read, which paradoxically means you can actually gather more information from Twitter than by scanning blog posts, because reading the blog posts takes too much time. Of course, when we're dealing with such tiny volumes, there is no way to generalize from what you read: Twitter is strictly anecdotal evidence, and perhaps even dangerous for that reason.

- there seemed to be several Tweets that were sent purely for marketing purposes. Nothing wrong with that, and they were quite open about it. It's just interesting how quickly some firms have picked up on this. (OK, not so quickly: Twitter has been around since 2006 and very popular for about a year now.)

Still, the bottom line for the purposes of measuring demand generation vendors is the same as for blogs: too little volume to be a reliable measure of relative market interest.


                          Twitstat           Ice Rocket    Alexa         Alexa
                          twitter mentions   blog posts    rank          share x 10^7

Already in Guide:
Eloqua                    152                286           20,234        70,700
Silverpop                 65                 188           29,080        30,500
Marketo                   122                229           68,088        17,000
Manticore Technology      0                  56            213,546       6,100
Market2Lead               5                  5             235,244       4,800
Vtrenz                    8                  53            295,636       3,600

Marketing Automation:
Unica Affinium            6                  43            126,215       8,500
Alterian                  5                  145           345,543       2,500
Aprimo                    6                  139           416,446       2,200
Neolane                   5                  64            566,977       1,690

Other Demand Generation:
MarketBright              9                  -             167,306       5,400
Pardot                    4                  33            211,309       3,600
Marqui Software           2                  19            211,767       4,400
ActiveConversion          2                  12            257,058       3,400
Bulldog Solutions         219                43            338,337       3,200
OfficeAutoPilot           2                  5             509,868       2,000
Lead Genesys              1                  5             557,199       1,450
LoopFuse                  22                 43            734,098       1,090
eTrigue                   1                  -             1,510,207     430
PredictiveResponse        1                  0             2,313,880     330
FirstWave Technologies    0                  11            2,872,765     170
NurtureMyLeads            0                  5             4,157,304     140
Customer Portfolios       0                  3             5,097,525     90
Conversen                 1                  0             6,062,462     70
FirstReef                 0                  0             11,688,817    10

Tuesday, November 18, 2008

Comparing Web Activity Measures for Demand Generation Vendors

I recently wanted to measure the relative popularity of several demand generation vendors, as part of deciding how to expand the Raab Guide to Demand Generation Systems. This led to an interesting little journey which I think is worth sharing.

I started with a list of 23 marketing system vendors. A couple are fairly large but most are quite small. These were grouped into three categories: five demand generation vendors already in the Guide; four marketing automation vendors with significant demand generation market presence; and fourteen other demand generation vendors. (See http://www.raabguide.com/ for definitions of demand generation and marketing automation.)

My first thought was to look at their Web site traffic directly. The easiest way to do this is at Alexa.com, which tracks site visits of people who download its search toolbar. The number of users in this base is apparently a well-guarded secret, or at least well enough guarded that I would have had to look beyond the first Google search page for the answer. Alexa was originally classified by many experts as spyware, and is still somewhat controversial. But it was purchased by Amazon.com in 1999 and has since become more or less grudgingly accepted.

Be that as it may, I captured two statistics for each of my sites from Alexa: a ranking which basically reflects the number of pages viewed by unique visitors each month (the busiest site gets rank 1, next busiest gets rank 2, etc.); and a share figure that shows the percentage of total toolbar users who visit each site each month. (I think I have that correct; you can check the definitions at Alexa.com.) Ranking on either figure gives the same sequence (except for Pardot; I have no idea why). If you’re creating ratios or an index, the difference in the share figures is probably a better indicator of relative popularity, since a company with twice the share of another has twice as many visitors, but will not necessarily have a rank number that is half as large. (Lower rank means more traffic.)
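
Since a ratio of shares is the comparison I’m recommending, here is a tiny sketch of the index math, using the Alexa share figures that appear in the table below (the vendor set is truncated for brevity):

```python
# Alexa share figures from the table below; the index is each vendor's
# share relative to Marketo's.
shares = {"Eloqua": 0.0070700, "Silverpop": 0.0030500, "Marketo": 0.0017000}

baseline = shares["Marketo"]
for vendor, share in shares.items():
    print(f"{vendor}: {share / baseline:.1f}x Marketo's share")
# Eloqua: 4.2x, Silverpop: 1.8x, Marketo: 1.0x
```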

Here are the ranks I came up with, broken into the three segments I mentioned earlier:

Alexa - 3 mo average

                          rank          share

Already in Guide:
Eloqua                    20,234        0.0070700
Silverpop                 29,080        0.0030500
Marketo                   68,088        0.0017000
Manticore Technology      213,546       0.0006100
Market2Lead               235,244       0.0004800
Vtrenz                    295,636       0.0003600

Marketing Automation Vendors:
Unica / Affinium*         126,215       0.0008500
Alterian                  345,543       0.0002500
Aprimo                    416,446       0.0002200
Neolane                   566,977       0.0001690

Other Demand Generation:
MarketBright              167,306       0.0005400
Pardot                    211,309       0.0003600
Marqui*                   211,767       0.0004400
ActiveConversion          257,058       0.0003400
Bulldog Solutions         338,337       0.0003200
OfficeAutoPilot           509,868       0.0002000
Lead Genesys              557,199       0.0001450
LoopFuse                  734,098       0.0001090
PredictiveResponse        2,313,880     0.0000330
FirstWave Technologies    2,872,765     0.0000170
NurtureMyLeads            4,157,304     0.0000140
Customer Portfolios       5,097,525     0.0000090
Conversen*                6,062,462     0.0000070
FirstReef                 11,688,817    0.0000010

These rankings were more or less as I expected. Within the first group, Eloqua is definitely the largest vendor, while Marketo is probably the most aggressive marketer at the moment. Vtrenz is the second-largest demand generation company, based on number of clients and almost certainly on revenue. But it is a subsidiary of Silverpop, so its traffic is split between Vtrenz.com and visits to Silverpop.com. This means that the Vtrenz.com ranking understates the company’s position, while the Silverpop ranking includes traffic unrelated to demand generation. I’ve therefore tracked both here. Manticore and Market2Lead get much less attention than the other three, so it makes sense that they have much less traffic.

Figures for the next group also seem to be ranked about correctly. Unica is certainly the most prominent of this group, with Alterian, Aprimo and Neolane trailing quite far behind. I would have expected a bit more traffic for Neolane, but it is definitely the new kid on this block and only entered the U.S. market about one year ago. The real surprise here is that this group as a whole ranks so far below the big demand generation vendors, even though the marketing automation firms are in fact larger and probably do more promotion. Perhaps the marketing automation vendors appeal to a smaller number of potential users (primarily, marketers in large companies with direct customer contact, such as financial services, retail, travel and telecommunications) and generate less traffic as a result.

I didn’t have much sense of the relative positions of the other demand generation vendors, although I would have guessed that MarketBright and Pardot were near the top. Marqui has had little attention recently, perhaps because they’ve been through financial difficulties culminating in the purchase of their assets by a private investor group this past August. ActiveConversion I do know, only because I’ve spoken with them, and they rank about where I expected given their number of clients. The other names were somewhat familiar but the only one I’d ever spoken with was OfficeAutoPilot, which I knew to be small. Since I had no fully formed expectations, the rankings couldn’t surprise me.

In other words, the rankings provided by Alexa seemed generally reasonable given my knowledge of the companies concerned.

But Web traffic is just one measure. Where else could I look to confirm or challenge these impressions?

Well, there is another Web traffic site that is somewhat similar to Alexa, called Compete.com. I actually hadn’t heard of them before but they came up in my research. They apparently use their own toolbar but also some other Web traffic measures such as volumes reported by Internet Service Providers (ISPs). You’d expect them to pretty much match the Alexa figures. But do they? Here is a chart comparing the two, with the Alexa shares multiplied by 10^7 to make them more legible.

                          Compete.com               Alexa.com
                          unique visitors / month   share x 10^7

Already in Guide:
Eloqua                    560,288                   70,700
Silverpop                 293,580                   30,500
Marketo                   34,244                    17,000
Manticore Technology      15,789                    6,100
Market2Lead               10,689                    4,800
Vtrenz                    5,313                     3,600

Marketing Automation Vendors:
Unica / Affinium*         23,138                    8,500
Alterian                  4,497                     2,500
Aprimo                    5,131                     2,200
Neolane                   3,927                     1,690

Other Demand Generation:
MarketBright              13,993                    5,400
Pardot                    7,339                     3,600
Marqui*                   3,282                     4,400
ActiveConversion          1,503                     3,400
Bulldog Solutions         6,408                     3,200
OfficeAutoPilot           1,567                     2,000
Lead Genesys              2,630                     1,450
LoopFuse                  1,930                     1,090
PredictiveResponse        1,099                     330
FirstWave Technologies    -                         170
NurtureMyLeads            -                         140
Customer Portfolios       -                         90
Conversen*                -                         70
FirstReef                 -                         10

You don’t need Sherlock Holmes to spot the problem: the Compete.com figures for Eloqua and Silverpop seem much too high compared with the others. I could concoct a theory that this reflects the difference between counting unique visitors in Compete.com and counting page views in Alexa, and throw in the fact that Eloqua and Silverpop/Vtrenz host landing pages for their clients. But the other demand generation vendors also host their clients’ pages, so this shouldn’t really matter. I suspect what really happens is that Compete measures low volumes differently from higher volumes (remember, they use a combination of techniques), and thus the figures for high-volume Eloqua and Silverpop are inconsistent with figures for the other, much lower-volume domains.

Anyway, if we throw away those two, the rest of the Compete figures seem more or less in line with the Alexa figures, apart from some small exceptions (Bulldog in particular ranks higher). All told, it doesn’t seem that Compete adds much value to what I already got from Alexa.

So much for Web traffic. How about search volume? Google Keywords will give that to me. Again, we’ll compare to Alexa as a reference:

                          Google Keywords       Alexa
                          avg search volume     share x 10^7

Already in Guide:
Eloqua                    1,900                 70,700
Silverpop                 1,790                 30,500
Marketo                   839                   17,000
Manticore Technology      113                   6,100
Market2Lead               318                   4,800
Vtrenz                    752                   3,600

Marketing Automation:
Unica / Affinium*         6,600                 8,500
Alterian                  861                   2,500
Aprimo                    1,600                 2,200
Neolane                   1,340                 1,690

Other Demand Generation:
MarketBright              186                   5,400
Pardot                    210                   3,600
Marqui*                   1,300                 4,400
ActiveConversion          46                    3,400
Bulldog Solutions         442                   3,200
OfficeAutoPilot           0                     2,000
Lead Genesys              74                    1,450
LoopFuse                  260                   1,090
PredictiveResponse        36                    330
FirstWave Technologies    386                   170
NurtureMyLeads            0                     140
Customer Portfolios       0                     90
Conversen*                170                   70
FirstReef                 12                    10


If we limit ourselves to the first two groups, the search numbers look mostly plausible. The low figure for Manticore could have to do with checking specifically for “Manticore Technology”, since a looser “Manticore” would incorporate an unrelated company and references to the mythical beast. The high value for Unica probably reflects some unrelated uses of the word in other languages or as an acronym. I have no particular explanation for the relatively low value for Alterian or the substantial flattening of the range between Eloqua and its competitors. Perhaps Eloqua’s traffic is less search-driven than other vendors’. Or not. In any event, I think the implicit rankings here are about as plausible as the Alexa rankings.

But things get crazier in the Other Demand Generation vendor segment. I understand the Marqui number, which is high because Marqui can be a misspelling of other words (marquis, marque, marquee) and has some unrelated non-English meanings. Similarly, Conversen is a verb form in Spanish. I think that Bulldog Solutions, FirstWave and LoopFuse also gain some hits because of their component words, even though I tried to keep them out of the search results. The bottom line here is you have to throw away so many terms that the remaining rankings don’t signify much. So, in general, search keyword rankings need close consideration before you can accept them as a meaningful measure of importance.

How about Google hits? I’ll show them alongside the Google Keywords as well as Alexa rank.

                          Google hits    Google Keywords     Alexa
                                         avg search volume   share x 10^7

Already in Guide:
Eloqua                    118,000        1,900               70,700
Silverpop                 111,000        1,790               30,500
Marketo                   103,000        839                 17,000
Manticore Technology      9,620          113                 6,100
Market2Lead               25,900         318                 4,800
Vtrenz                    35,200         752                 3,600

Marketing Automation:
Unica / Affinium*         7,750          6,600               8,500
Alterian                  262,000        861                 2,500
Aprimo                    161,000        1,600               2,200
Neolane                   40,200         1,340               1,690

Other Demand Generation:
MarketBright              34,500         186                 5,400
Pardot                    27,600         210                 3,600
Marqui*                   1,370,000      1,300               4,400
ActiveConversion          16,800         46                  3,400
Bulldog Solutions         9,340          442                 3,200
OfficeAutoPilot           777            0                   2,000
Lead Genesys              5,880          74                  1,450
LoopFuse                  95,400         260                 1,090
PredictiveResponse        21,800         36                  330
FirstWave Technologies    13,400         386                 170
NurtureMyLeads            1,050          0                   140
Customer Portfolios       12,200         0                   90
Conversen*                2,790          170                 70
FirstReef                 18,100         12                  10


Here the impact of limiting Manticore to “Manticore Technology” shows up even more clearly (although Manticore truly doesn’t get much Web attention). I limited the Unica test to “Unica Affinium” since the number of hits is otherwise over 100 million; but this seems to excessively depress the results. Note that the low ranking for Alterian has now been reversed; in fact, Alterian has the most hits of all, and the marketing automation group in general shows more activity than the demand generation vendors. That could be true – those vendors have been around longer. Or it could be a fluke.

Once again, the Other Demand Generation group has a big problem with Marqui and perhaps smaller problems with LoopFuse and FirstReef. Even excluding those, the numbers jump around a great deal. As with keywords, these figures don’t seem to be a reliable measure of anything.

Let’s try one more measure: the blogosphere. Here I tried three different services: Technorati, BlogPulse and Ice Rocket.

                          Technorati    Blogpulse     Ice Rocket    Alexa
                          blog posts    blog posts    all posts     share x 10^7

Already in Guide:
Eloqua                    130           267           286           70,700
Silverpop                 70            119           188           30,500
Marketo                   3             179           229           17,000
Manticore Technology      0             12            56            6,100
Market2Lead               0             7             25            4,800
Vtrenz                    0             30            53            3,600

Marketing Automation:
Unica / Affinium*         0             6             43            8,500
Alterian                  8             119           145           2,500
Aprimo                    0             118           139           2,200
Neolane                   0             33            64            1,690

Other Demand Generation:
MarketBright              1             23            33            5,400
Pardot                    0             32            33            3,600
Marqui software*          5             15            19            4,400
ActiveConversion          0             6             12            3,400
Bulldog Solutions         0             30            43            3,200
OfficeAutoPilot           0             5             5             2,000
Lead Genesys              0             1             5             1,450
LoopFuse                  4             48            43            1,090
PredictiveResponse        0             0             0             330
FirstWave Technologies    0             5             11            170
NurtureMyLeads            0             1             5             140
Customer Portfolios       0             0             3             90
Conversen*                0             2             0             70
FirstReef                 0             0             0             10



Results for all three services are roughly consistent, although Technorati gets many fewer hits and Ice Rocket finds a few more than Blogpulse. The major anomaly is the low value for Unica, but that happens because I actually searched on Unica Affinium, to avoid all the irrelevant hits on Unica alone. Similarly, I searched on Marqui Software to avoid unrelated hits on Marqui. The high values for Bulldog Solutions and LoopFuse are valid (I scanned the actual hits); these two vendors just managed to snag a relatively high number of blog mentions. Remember we are looking at very small numbers here: it doesn’t take much to get 40 blog mentions. Nor, if we trust the Alexa figures, do they translate into much Web traffic. However, the blog hits might explain the relatively high keyword search counts for those two vendors.

Well, I hope you enjoyed the trip. This is far from an exhaustive analysis of the issue, but based on the information available, I’d say that Alexa Web traffic is the most useful measure for assessing the market presence of different demand generation vendors, and blog mentions have at least some value. Google hits and keyword searches capture too many unrelated items to be reliable.

Wednesday, November 12, 2008

No Silver Bullets for Social Media Measurement

The editor of my forthcoming book on marketing measurement asked me to add something on social media, which led to several days of research. Although there are many smart and articulate people writing on the topic, the bottom line is, well, you can’t really measure the bottom line.

There are plenty of activity measures such as numbers of page views, comments and subscribers. Sometimes there are specific benefits such as reduced costs if technical questions are answered through a user forum instead of company staff. Sometimes you can compare behavior of social media participants vs. non-participants, although that raises a self-selection problem – obviously those people are more engaged to begin with.

But measuring the impact of social media on attitudes in the population as a whole—that is, on brand value—is even harder than measuring the impact of traditional marketing and advertising methods because the audience size is so small. Measuring the impact of brand value on actual sales is already a problem; what you have with social media could be considered the brand value problem, squared.

In fact, the closest analogy is measuring the value of traditional public relations, which is notoriously difficult. Social media is more like a subset of public relations than anything else, although it feels odd to describe it that way because social media is so much larger and more complicated than traditional PR. Maybe we'll need to think of PR as a subset of social media.

The best advice I saw boiled down to setting targets for something measurable, and then watching whether you reach them. This is pretty much the best practice for measuring public relations and other marketing programs without a direct impact on sales. I guess there’s nothing surprising about this, although I was still a bit disappointed.

Still, as I say, there is plenty of interesting material available if you want to learn about concrete measurements and how people use them. Just about every hit on the first two pages of a Google search on “social media marketing measurement” was valuable. In particular, I kept tripping across Jeremiah Owyang, currently an analyst with Forrester Research, who has created many useful lists on his Web Strategy by Jeremiah blog. For example, the post Social Media FAQ #3: How Do I Measure ROI? provides a good overview of the subject. You can also search his category of Social Media Measurement. Another post I found helpful was What Is The ROI For Social Media? from Jason Falls’ Social Media Explorer blog.

Friday, November 7, 2008

Cognos Papers Propose Sales and Marketing Metrics

I’ve always felt that defining a standard set of marketing measures is like prescribing medicine without first examining the patient. But people love those sorts of lists, and they offer a starting point for a more tailored analysis. So I guess they have some value.

Based on that somewhat crotchety premise, I’ll call your attention to a pair of papers from Cognos on “Delivering the reports, plans, & metrics Sales needs” and “Delivering reports, plans, and metrics for better Marketing” (idiosyncratic capitalization in the original). These are widely available on the Internet; you can find both easily if you run this search at IT Toolbox.

Since the whole point of standard measures is to be broadly applicable, I suppose it’s a compliment to say that the measures in this paper are reasonable if not particularly exciting. One point they do illustrate is the difference between marketing and sales, which are often conflated into a single entity but are in fact quite distinct. Let’s look at the metric categories for each:

- Sales: sales results; customer/product profitability; sales tactics; sales pipeline; and sales plan variance.

- Marketing: market opportunities; competitive positioning; product life cycle management; pricing; and demand generation.

It’s surely a cliché, but these measures suggest that marketing is strategic while sales is almost exclusively tactical. That’s a bit blunt but it sounds about right to me.

Given my admittedly parochial focus on demand generation these days (see www.raabguide.com), I couldn’t avoid noticing that Cognos gave demand generation just one of its five marketing slots. That seems a bit underweighted, given that it probably accounts for the bulk of most marketing budgets. But I do have to agree that strategically, marketing should be spending its time on those other topics too.

The papers list specific measures within each category. It’s going to be as boring to type these as you’ll find it to read them, but I guess it’s worth the trouble to have them readily available for future reference. So here goes:

Sales metrics:

Sales results
- new customer sales
- sales growth
- sales orders

Customer/product profitability
- average customer profit, lifetime profit and net profit
- net sales
- gross profit
- customer acquisition and retention cost
- sales revenue
- units sold

Sales tactics
- average selling price
- direct cost (of sales efforts)
- discount
- sales calls and sales rep days
- sales orders
- units quoted

Sales pipeline
- pipeline ratio (they don’t define this; I’m not sure what they mean. Maybe distribution by sales stage)
- pipeline revenue
- sales orders and conversions
- cancelled order count
- active and inactive customers
- inquiries
- new customers and lost business

Sales plan variance
- sales order variance
- sales plan variance
- sales growth rate variance
- units ordered and sold variance

You’ll notice a bit of overlap across groups, and I’m not sure why “Sales plan variance” is a separate area: I would expect to measure variances against plan for everything. The list is also missing a few common measures such as profit margin (which shows the net impact of decisions regarding product mix, pricing and discounts), actual vs. potential sales (hard to measure but critical), lead-to-customer conversion rates, and win ratios in competitive deals.

Marketing metrics:

Market opportunities
- company share
- market growth
- market revenue
- profit
- sales

Competitive positioning
- competitor growth
- competitor price change
- competitor share
- competitor sales
- market growth
- market revenue and profit
- sales

Product life cycle management
- new products developed
- new product growth, share, & profit
- new competitor product sales & growth
- market growth
- brand equity score
- new product share of revenue

Pricing
- price change
- sales
- price segment share and growth
- discount ($)
- discount spread (%)
- list price, net price, & average price
- price elasticity factor
- price segment sales and value
Demand generation
- marketing campaigns (#)
- marketing spend
- marketing spend per lead
- qualified leads (#)
- promotions ROI
- baseline and incremental sales

If these weren’t two separate papers, I’d say the author had gotten tired by the time she wrote this one. We see even more redundancy (sales appears in three of the five lists) and “brand equity score” sticks out like a moose at a Sarah Palin rally. (Now there’s a joke that will age quickly.) It’s interesting that the competitive measures provide some of the relative performance information that was lacking in the sales metrics, and that reporting on profit addresses to some degree my earlier question about margins. Is the author implicitly suggesting that sales shouldn’t be held accountable for such things? I disagree. On the other hand, measures of customer value or quality are all assigned to sales. I think marketing is primarily responsible for that one.

Well, that’s interesting: I hadn’t really planned to criticize these measures when I sat down to write this, but now that I look more closely, I do have some objections. It honestly doesn’t seem fair to be harsh, since any list can be criticized. Maybe I’m just crotchety after all. In any event, you can add this list to your personal inventory of metrics to consider for your own business. Maybe something in it will prove useful.

Wednesday, October 29, 2008

Is Measuring Brand Value Worth the Effort?

Hello, blogosphere! Did you miss me?

Probably not, but, whatever. Launch of the new Raab Guide to Demand Generation Systems http://www.raabguide.com/ is largely complete, so I can now find some time for this blog. Also, the publisher of my MPM Toolkit book seems to have settled on a January publication date, so I need to pay more attention to this side of the industry.

I’ll ease back into this blog with a survey on brand value from the Association of National Advertisers (ANA) and the Interbrand consultancy. The press release is available here.

Key findings of the survey were that 55% of senior marketers “lack a quantitative understanding of brand value” and that 64% said “brands do not influence decisions made at their organizations.” Bear in mind that these are ANA members, who tend to be large media consumers. If they can’t measure or use brand value, nobody can.

Taken together, these two figures mean that brands don’t influence decisions even at some companies which are able to measure their value. The survey explored this a bit, and found that at companies where brands lack influence, the most common reason (cited by 51%) was that “incentives do not support importance of brand”. In other words, if I interpret that correctly, people are not rewarded for increasing brand value—so they don’t work to do that, even if they do have the ability to measure it. The next most common reason, at 49%, was the more expected “inability to prove brand’s financial benefit”. Other answers ranked at 40% or below.

This wouldn’t matter to anyone who's not a brand valuation consultant, except for one thing: 80% of the respondents report that “demands from the C-suite and boardroom were steadily increasing” to demonstrate that branding initiatives add profit. That means even existing branding budgets are at risk.

If you accept, as I do, that branding programs do add value, then not being able to justify them is a serious problem. But there’s a difference between knowing something has value and knowing what that value is. As I’ve pointed out previously and the good people at MarketingNPV recently wrote at more length, different brand valuation methodologies give widely varying results, and even the same methodology gives different results from year to year.

This has important practical implications: specifically, brand measurements are not precise enough to guide tactical decisions. Yet that is exactly what the ANA survey says marketers want: 93% felt a quantified understanding would allow “more focused investment in marketing” and 82% felt it would provide “an opportunity to cut out underperforming initiatives”. Frankly, I’d say those are unrealistic expectations.

The MarketingNPV paper argues that marketers should not attempt to measure brand value by itself, and instead focus on “quantifying the impact of marketing on cash flows”. That may seem like begging the question: after all, the value of a brand is precisely its impact on future cash flows. But I think of brand value as a residual factor which accounts for future cash flows that cannot be attributed to more direct influences such as promotions, distribution and pricing. So it does make sense to say, first let’s do a better job of predicting the impact on cash flows of those directly measurable items. Once we've taken that as far as we can--and I'd say most firms are nowhere near--then we can spend energy on brand value to explain the rest.

Friday, September 19, 2008

I'm Taking a Break

The sad truth is that even my mother doesn't read this blog on a regular basis. (Not that I blame her--she has important water aerobics and Free Cell games to attend to.) But in case you've noticed the lack of activity, I'm taking a break right now to concentrate on finishing the Guide to Demand Generation Systems by end of the month. I hope to get back to posting regularly once that's done.

And also in case you're wondering about my planned book on marketing performance measurement: publication date is now set for January, 2009. The book was actually finished months ago, but the production process is backed up at the publisher. Apologies to those of you who thought you had found the perfect holiday gift for the Certain Someone.

Thursday, August 28, 2008

Measuring the Value of a Marketing Measurement Project - Part 3 of 3

So far this series of posts has described generic inputs to calculate the value of a marketing measurement project. Ordinarily an internal expert or a consultant would help define the values for those inputs. But let’s assume we’re in the self-service world where non-experts provide those values for themselves. What advice can we give them?

The first is to define the project scope. That is, you want to limit the analysis to the customers, revenues and costs that will actually be affected by the project. For example, inputs related to a Web analytics project should only include the number of Web customers, Web revenues and Web costs. But even this case isn’t so simple: if the project is measuring something specific, such as paid search results, then it may affect just a subset of all Web customers. It’s easy to overestimate the scope of a project’s impact, which in turn overstates the expected benefit. The more precise you can be in your understanding of the actual customers affected by a project, the more you’ll be able to build a realistic estimate of its impact.

A second useful consideration is the actual mechanism that will provide the expected change. Because we are talking about marketing measurement projects, all the project provides is better information about business results. Unless this information can be used to change something, it won’t have any impact. So, for example, a system to uncover immediate opportunities is worthless if you can’t react in less than a week. Similarly, a model that shows the value of changes in the marketing mix has no value if the mix won’t be changed for political reasons. A close look at how the new information will be used will allow a more precise estimate of the expected changes in value.

A third consideration is your ability to project the impact of your changes. Most marketing measurement projects can show the relative performance of current marketing programs, but only some can estimate the impact of incremental investments in those programs. This can be a particular issue where there are constraints on program expansion, such as limited advertising inventory. Where projections of incremental returns are not immediately available, you may be able to conduct additional research or use standard rules of thumb such as the square root rule to make reasonable estimates.
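
For readers unfamiliar with it, one common statement of the square root rule is that response grows with the square root of spending. Here is a minimal sketch under that assumption; the figures are hypothetical, and a fitted response model would obviously be better if you have one:

```python
import math

# Square root rule (one common form): response scales with sqrt(spend).
# A rough projection sketch, not a fitted response model.
def projected_response(base_spend, base_response, new_spend):
    return base_response * math.sqrt(new_spend / base_spend)

# Doubling a $100K program that yields 1,000 responses projects ~1,414
# responses, not 2,000: diminishing returns are built into the rule.
print(projected_response(100_000, 1_000, 200_000))  # ~1414.2
```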

Finally, you need to consider the time horizon of your analysis. Most MPM projects will require a significant initial investment followed by lower, recurring operating costs. The benefits will probably follow a very different pattern, starting slowly and then growing over time. The formulas I’ve presented can be extended to create this sort of projection by doing separate calculations for a sequence of time periods. But in many cases a much simpler approach is acceptable, basically of estimating the benefit for a “mature” program for a fixed time period such as one year. This can be compared with program investment to calculate annual ROI or pay-back period.
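
In spreadsheet terms, the simple “mature year” approach reduces to a few lines of arithmetic. All of the inputs below are hypothetical placeholders:

```python
# Mature-year ROI and payback sketch; every figure is a placeholder input.
initial_investment = 250_000   # one-time cost of the measurement project
annual_operating = 50_000      # recurring operating cost
annual_benefit = 175_000       # estimated benefit of a "mature" program

net_annual = annual_benefit - annual_operating
print(f"annual ROI: {net_annual / initial_investment:.0%}")      # 50%
print(f"payback: {initial_investment / net_annual:.1f} years")   # 2.0 years
```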

Note that even a “simple” calculation may need to consider long-term impacts. For example, a program that acquires new customers more effectively will benefit from the full lifetime value of those customers, not simply their first-year revenue. This is where the idea of “mature” value comes in. It implies looking at a slice of results across all customer groups after the program has been in place for some time. Doing this math correctly can be fairly complex. But, again, you may be able to save effort by making some simplifying assumptions. In the case of a program that lets you spend more efficiently on acquisitions, this might mean assuming the number of new customers will remain the same because you reduce your acquisition budget. Since the number of new customers doesn’t change, the later values also remain the same. The value of the program thus equals the savings in acquisition cost—a much simpler item to estimate. (Of course, this will underestimate the actual program value if you will in fact be maintaining your acquisition budget and acquiring more customers. Whether this matters depends on the situation.)

There are certainly other considerations that could apply in a given situation. Identifying them all may not be possible. But even if you could list them, the resulting document would be too long to be practical in a self-service situation. So we have to hope that our self-service consumers are able to complete the process on their own, or at least recognize when it’s time to call for expert help.

Tuesday, August 26, 2008

Measuring the Value of a Marketing Measurement Project - Part 2

The first post in this series explained why I might create a standard spreadsheet to measure the value of a marketing performance measurement (MPM) project. In a traditional consulting engagement, this measurement would be a custom analysis tailored specifically to the project at hand. In the self-service world, this is an unavailable luxury. I’m starting with value measurement because I’m incurably linear and the first question to answer about a project is what value the client hopes to receive. The answer drives everything else.

In creating a generic project value form, the trick is to define a set of categories that are specific enough to be useful yet broad enough to cover all the possible cases. My inner Platonist wants to start with a general value formula such as value=revenue – costs, and then subdivide each element. But how would you know if the subcomponents corresponded to the value drivers of actual MPM projects? It’s better to start with a sample of MPM projects, identify their value drivers, and then see if these can be joined as components of a single formula. (Philosophy majors everywhere will recognize the difference between deductive and inductive reasoning. But I digress.)

A reasonable list of typical MPM projects would include marketing mix models, brand value studies, response measurements, Web analytics, social media measurement, and operational process measurements. Except for the final category, these all help allocate marketing resources to the most effective use. In contrast, operational processes help the department perform its internal functions more efficiently. This distinction immediately suggests breaking the value formula into two primary components: value received and marketing operations.

Of course, value received is the more important of the two, particularly if the calculation includes non-overhead marketing costs such as advertising, discounts and channel promotions. One way to subdivide value is to consider that a typical marketing plan will be divided among customer acquisition, development and retention programs. Of these, acquisition and retention focus on number of customers, while development focuses on value per customer. It therefore makes sense to calculate value as the product of these factors (i.e., value = number of customers x value per customer). Since many companies are more product-oriented than customer-oriented, value per customer could further be divided into value per unit and units per customer. Value per unit, in turn, could be split into revenue per unit, product cost per unit (cost of goods sold, shipping, etc.), and marketing cost per unit.

A single value for “number of customers” doesn’t really capture the dynamic between acquisition and retention rates, so it too must be broken into pieces. The basic formula is number of customers = (existing customers + customers added – customers lost).

The result of all this is a value formula with the following elements:

net value = value received – marketing operations cost

value received = number of customers x units per customer x value per unit

number of customers = existing customers + customers added – customers lost
units per customer (possibly broken down by product mix)
value per unit = revenue per unit – product cost per unit – marketing cost per unit

Now let’s do a reality check against our list of MPM projects:

- marketing mix models include product mix, pricing, advertising, and channel promotions as their major components.

- product mix is covered by revenue per unit and/or units per customer.

- pricing is covered by revenue per unit and/or marketing cost per unit, depending on how you treat discounts, coupons, etc.

- advertising is covered by marketing cost per unit.

- channel promotions are covered by product cost per unit and/or marketing cost per unit.

- brand value studies measure the relation of consumer attitudes to behaviors such as trial, retention and consumption rates. These are covered by existing customers, customers added, customers lost, and possibly by units per customer. A more formal sales funnel could easily fit into this section of the formula if appropriate.

- response measurements are covered by customers added and marketing cost per unit.

- Web analytics projects encompass a range of objectives such as lower cost per order, improved conversion rates and higher revenue per visitor. These are covered respectively by marketing cost per unit, customers added, and a combination of units per customer and revenue per unit. Other objectives could probably be covered by the formula components as well.

- social media measurements are like brand value measurements: they relate messages to attitudes to behaviors. They would also be covered by changes in customer numbers and units per customer.

- operational process measurements are covered directly by marketing operations cost.

So it looks like the proposed set of variables lines up reasonably well with the value drivers of typical MPM projects. This means that project planners should be able to define the expected benefits in terms of these variables without too many intermediate calculations. Once they’ve done this, the actual value calculation is purely mechanical.

The final set of inputs would look like this:

inputs                       current value   expected value   change

Number of Customers:
  existing customers         xxxx            xxxx             xxxx
  + customers added          xxxx            xxxx             xxxx
  - customers lost           xxxx            xxxx             xxxx

Units per Customer           xxxx            xxxx             xxxx

Value per Unit:
  revenue per unit           xxxx            xxxx             xxxx
  - product cost per unit    xxxx            xxxx             xxxx
  - marketing cost per unit  xxxx            xxxx             xxxx

Marketing Ops Cost           xxxx            xxxx             xxxx

Based on those inputs, the final calculation is:

Value Received (= Number of Customers x Units per Customer x Value per Unit)
- Marketing Ops Cost
= Net Value
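For what it’s worth, that mechanical step fits in a few lines of Python. This is just a sketch of the spreadsheet logic; the sample figures are invented:

```python
def net_value(existing, added, lost, units_per_customer,
              revenue_per_unit, product_cost_per_unit, marketing_cost_per_unit,
              marketing_ops_cost):
    """Compute net value from the form's inputs."""
    number_of_customers = existing + added - lost
    value_per_unit = revenue_per_unit - product_cost_per_unit - marketing_cost_per_unit
    value_received = number_of_customers * units_per_customer * value_per_unit
    return value_received - marketing_ops_cost

# Run once with current values and once with expected values;
# the 'change' column is simply the difference.
current  = net_value(100_000, 20_000, 15_000, 4.0, 25.0, 12.0, 5.0, 1_500_000)
expected = net_value(100_000, 22_000, 14_000, 4.0, 25.0, 12.0, 4.5, 1_550_000)
print(expected - current)  # 262,000
```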

OK, so now we have a formula that calculates business value using inputs relevant to marketing measurement projects. But the real work is deciding what values to apply to those inputs. Part 3 of this series will talk about that.

Sunday, August 24, 2008

Measuring the Value of a Marketing Measurement Project - Part 1

I’ve been toying recently with the notion that traditional consulting is being replaced by a self-service model. In this vision, “clients” would fill out standard scorecards to guide them through an analysis, and then use the results to tell them what to do next. For example, a “gap analysis” scorecard might list various Web analytics capabilities; seeing which ones the client’s own company lacked would show where it is weak. Obviously an expert must still create the scorecards and define the actions implied by different answers. But once this is done, the consultant is out of the picture and largely out of a job.

I’m not yet certain whether this truly makes sense. Consultants have always had forms of their own, so that part isn’t new. Creating “intelligent” forms that present recommendations based on user entries is something that takes a bit of technology, but nothing major. You could do it in Excel. In any case, it’s logically no different from looking up the answers in a book. So there’s nothing significantly new there, either.
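To make that concrete, here’s a toy version of such an “intelligent” form: a gap-analysis scorecard that maps each missing capability to a canned recommendation. The capabilities and advice are invented for illustration; a real scorecard would obviously be far richer:

```python
# Hypothetical gap-analysis scorecard: each capability a client lacks
# triggers a canned recommendation, much as an 'intelligent' Excel form might.
SCORECARD = {
    "tracks conversion rates": "Instrument key pages and define conversion goals.",
    "segments visitors": "Add basic visitor segmentation before deeper analysis.",
    "ties revenue to campaigns": "Connect analytics to order data to measure revenue.",
}

def recommendations(client_capabilities):
    """Return advice for every capability the client is missing."""
    return [advice for capability, advice in SCORECARD.items()
            if capability not in client_capabilities]

for line in recommendations({"tracks conversion rates"}):
    print(line)
```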

If anything material has actually changed, it’s the attitudes of the erstwhile clients. People today are simply used to looking up information for themselves instead of relying on experts for the answers. So maybe they’ve just now become ready for a self-service solution that could have been provided long ago.

As a professional consultant, I don’t find this a pleasant prospect. I certainly believe that my experience, intuition and judgment can’t be captured in a set of simple decision rules. But what I believe doesn’t matter: it’s what the paying customers believe. If they convince themselves that selecting a vendor can be as automated as selecting an airline ticket, then I will be as obsolete as your local travel agent.

Of course, the way to avoid that fate is to demonstrate added value, and I do try. Still, it never hurts to hedge your bets. So I’ve been thinking about what kind of forms I’d need if my business evolved away from traditional consulting towards a self-service model.

(Incidentally, I’m very eager to coin the phrase “Consulting as a Service” to describe this approach. It’s a bit redundant, since consulting has always been a service. But it does capture the remote-access, plug-and-play nature of the thing, not to mention sounding delightfully trendy. Or is the whole “[Whatever]-as-a-Service” patter already passé? How about “cloud-based consulting” instead?)

The next post in this series will look at a specific spreadsheet: one to measure the value of a proposed marketing performance measurement project.

Thursday, August 7, 2008

External Data Will Help To Explain Customer Behavior

Let’s continue a bit with last week’s post about capturing all the marketing messages received by customers and prospects. Although this is becoming easier and less expensive, it will never be completely simple or free. So there will always be a need to measure the cost of gathering the information against its value.

The value will ultimately be two-fold. Retrospectively, the information will help to measure the impact of past messages, enabling optimal allocation of marketing investments. This works at the aggregate level, and is traditional marketing measurement.

But we also want to use information proactively, to help target individuals. Consumption of marketing messages, particularly in online media, is closely related to customer choices about which Web pages to visit or which offers to accept. This behavior provides vital insights into the customer’s current state of mind and, thus, which future treatments are most likely to be productive. Here we’re moving into a different type of marketing measurement, which involves predictions of individual behavior.

I suppose we could debate whether this type of activity, which itself is not new, really should be labeled as marketing measurement. But the particular point I had in mind today was that even complete information about a customer’s interactions with the company will not be enough to accurately predict behavior. There are many external factors as well. So individual-level predictions will be much more useful if they can be based on a combination of internal and external data.

This isn’t a particularly brilliant insight, or a new one. But what is new is the greater availability of external information, such as data about individuals’ backgrounds, news about their companies, and public attitudes revealed through online discussions such as forums, blogs, and reviews. I haven’t compiled a comprehensive list of the information sources, but companies like Jigsaw (online directory of individuals and companies), InsideView (aggregation of news about companies) and Twing (tracking of online discussions) keep popping up. Any predictive system would benefit from incorporating their contents. Benefits include gaining background on customers and prospects (which should yield insights into the approaches most likely to be effective), and identifying specific events (which might prompt new needs for particular products).

The most important value from these sources will come from better sales results. But retrospective marketing measurement will also benefit because it will more precisely identify the factors that impact the results of a given marketing effort. Thus, the focus will increasingly shift from “whether” a particular marketing program worked, to “when” (i.e., under which conditions) it works. This will guide both treatments of specific individuals and the over-all allocation of marketing resources. Once that happens, the distinction between retrospective and proactive marketing measurement will be less important: they will effectively be the same thing.

Friday, August 1, 2008

'Total Marketing Measurement' Is Closer Than You Think

As I’ve mentioned previously, most of my research these days is targeted at demand generation systems. So I was a little surprised that one of the demand generation products, ActiveConversion, positioned itself as a “total marketing measurement” system, complete with the three-letter abbreviation TMM. A quick glance at the company Web site, followed by a conversation with president Fred Yee, confirmed that ActiveConversion has functions similar to other demand generation products: it sends email campaigns, tracks the resulting Web visits, nurtures leads until they are mature, and then turns them over to sales. ActiveConversion is a little unusual in not providing tools to build Web pages. Instead, it tracks behavior by embedding tags in pages built externally. But this makes little practical difference, and the vendor will add page generation functions later this year.

Why, then, does ActiveConversion call itself a TMM? I think it’s mostly a bit of innocent marketing patter, but it also reflects the system’s ability to track Web visitors’ behavior in great detail and report it to salespeople. This is common among demand generation products, but not a feature of traditional Web analytics, which is more concerned with group behaviors such as how many times each page is viewed. In some sense, this detailed individual-level tracking could reasonably be described as “total” measurement.

In practice, the “total” measurement by demand generation systems is limited to behavior on company Web pages. This is far from the complete set of interactions between a company and its customers. But as the scope of recorded transactions expands relentlessly—something I personally consider an Orwellian nightmare, but see as inevitable—it’s worth contemplating a world where “total” measurement truly does capture all behaviors of each individual. At that point, marketers will no longer be able to hide behind the traditional barrier of not knowing which messages reached each customer. This will leave them face to face with the need to take all this data and make sense of it.

Much as I love technology, I suspect there are significant limits to how accurately it will be able to measure marketing results at the individual level. But meaningful predictions should be possible for groups of people, and these will yield substantial improvements in marketing effectiveness compared with what we have today. That will only happen, however, if marketers make a substantial investment in new measurement techniques, which in turn requires that they believe those techniques are important. The only way they will come to believe this is to see early successes, which is why marketers must start working today to find effective approaches, even if those are based on partial data. After all, it’s certain that the quality of the information will improve. What’s in question is whether marketers make good use of it.