
Sunday, October 11, 2009

New Study, Same Results: Minority of Companies Do Effective Marketing Performance Measurement

The latest addition to my collection of surveys about marketing measurement is The Marketing Performance Advantage, a joint effort from strategic marketing consultants CMG Partners and market researcher Chadwick Martin Bailey. Based on 400 online interviews among CFOs, CEOs and marketing employees of companies with 100+ employees, this is one of the larger and more sophisticated studies on the topic.

One-Quarter of Companies Measure Marketing Performance Effectively

The main finding is that about one-quarter of marketers feel they do an adequate job of measurement. This matches other studies on the topic. The survey asked several questions along those lines:

- 20% say their company "excels" at measurement
- 22% "excel" at using measurement-based insights to drive improvement
- 24% see a positive impact from measurement, and
- 27% have fully integrated measurement into marketing planning

The study also resembled other research in showing that many more marketers list measurement as a top priority (44%) than actually do it.

Marketing VPs Are More Satisfied with Measurement Than Anyone Else

One intriguing detail was that senior marketers seem eerily "overconfident" (the authors' word) compared with those above and below them in the organization.

- 13% of marketing vice presidents consider marketing performance measurement a "huge challenge", compared with 34% to 38% of CEOs, marketing directors and marketing managers, and 61% of CFOs.

- 38% of marketing vice presidents felt that measurement has a "huge impact" on their business, compared with 15% to 29% of CEOs, marketing directors and marketing managers, and 7% of CFOs.

How well is your organization performing with respect to measuring the performance of marketing initiatives? How well are you using insights to improve the performance of marketing initiatives?
This is a huge challenge...    CEO    CFO    VP Marketing    Director Marketing    Manager Marketing
Measuring MP                   36%    61%    13%             38%                   34%
Improving MP                   28%    61%     9%             38%                   38%

To what extent, if at all, has measuring the performance of your marketing initiatives improved your business?
Impact of MP on your business    CEO    CFO    VP Marketing    Director Marketing    Manager Marketing
No impact                        29%    52%     0%             21%                   21%
Neutral                          42%    41%    62%             64%                   50%
Huge Impact                      29%     7%    38%             15%                   29%

Although the authors don't make the connection, these results help to explain why more money isn't invested in marketing measurement: the marketing vice presidents who control the purse strings are the least convinced they have a major problem.

Barriers to Measurement: Data, Technology and Process

The survey also asked about major barriers to marketing measurement. These include all the usual suspects. If anything, what was intriguing was that executive support is such a small issue compared with the others:

Barriers to improvement (% answering 1-4 on a scale of 1-10):

- 40% collecting the right data
- 40% technology/systems
- 39% clear & effective processes
- 36% use of customer analytics
- 36% organizational alignment
- 26% skills sets
- 20% senior level buy-in

Effective Companies Have Clear Process to Apply Measurements, Invest in Measurement Capabilities and Hold Marketing Accountable for Results

Another set of questions covered adoption of best practices, and compared answers from companies reporting positive impact from marketing measurement with answers from the others. The biggest differences were in having clear processes to ensure that measurement-based insights are applied to decisions; the next tier included making targeted investments in measurement and holding marketing accountable for measured results. Senior level buy-in, strategic alignment and usage outside of marketing were less prominent.

Best practice adoption (index of use by companies reporting positive impact, where 100=average of all companies)

- 251 clear process to ensure measurements are applied to decisions
- 215 targeted investments in measurement technology/systems, skills and data
- 206 marketing held accountable on performance metrics
- 161 alignment of marketing activities to strategic business objectives
- 159 senior level buy-in
- 145 usage beyond marketing
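The report doesn't spell out the arithmetic behind the index, but the natural reading (my assumption, not a formula from the study) is:

```latex
\text{index} = 100 \times \frac{\text{adoption rate among companies reporting positive impact}}{\text{adoption rate among all companies}}
```

On that reading, the 251 for clear processes means the positive-impact companies are about two and a half times as likely as the average company to have one.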

Tuesday, September 29, 2009

eMarketer Report Details Next Steps for Online Brand Measurement

eMarketer recently released a deeply researched report on Online Brand Measurement. Since it touched on several topics I’ve been pondering recently (see Web Analytics Is Dead… on my Customer Experience Matrix blog), I read it with particular care.

This is a long report (58 pages), so I won’t review it in detail. But here are what struck me as the critical points:

- Web measurement has largely focused on counting views and clicks, not measuring long-term brand impact. Counting is much easier but it doesn’t capture the full value of any Web advertisement. One result has been that marketers overspend on search ads, which are great at generating immediate response, and underspend on Web display ads which influence long term behavior even if they don’t generate as many click-throughs.

- Media buyers want Web publishers to provide the equivalent of Gross Rating Points (GRPs), so they can effectively compare Web ad buys with purchases in other media. That’s okay as far as it goes, but it’s still just about counting, not about measuring the quality or impact of the impressions. As the paper points out, even engagement measures such as time on site or mentions in social media, don’t necessarily equate to positive brand impact.

- Just about everyone agrees that the right way to measure brand impact is to tailor measurements to the goal of a particular marketing program. This may sound like a conflict with the desire for a standard GRP-like measure, but it really reflects the distinction between counting the audience and measuring impact. GRPs work fine for buying media but not for assessing results. Traditional media face precisely the same dichotomy, which is why marketing measurement is still a puzzle for them as well. And just as most offline brand measures are ultimately based on surveys and panels, I'd expect most online brand measures will be too.

- Meaningful impact measurement will integrate several data types, including online behaviors, visitor demographics, offline marketing activities and actual purchase behavior. These will come from a combination of direct online sources (i.e., traditional Web analytics), panel-based research and surveys (for audience and attitudinal information), and offline databases (for demographics and purchases). Ideally these would be meshed within marketing mix models and response attribution models that would estimate the incremental impact of each marketing program and allow optimization. But such sophisticated models won’t appear tomorrow.
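To make that last bullet a little more concrete, here is a minimal sketch of the sort of marketing mix regression that could estimate each program's incremental contribution once the data sources are combined. Everything in it (channels, spend levels, coefficients) is invented for illustration; it is not drawn from the eMarketer report.

```python
import numpy as np

# Invented weekly data combining the kinds of sources discussed above:
# Web display spend, paid search spend, offline TV spend, and observed sales.
weeks = 52
rng = np.random.default_rng(0)
display_spend = rng.uniform(10, 100, weeks)    # $000 per week
search_spend = rng.uniform(10, 100, weeks)     # $000 per week
tv_spend = rng.uniform(50, 200, weeks)         # $000 per week

# Simulated sales: a baseline plus a contribution from each channel plus noise.
sales = (500 + 3.0 * display_spend + 5.0 * search_spend
         + 1.2 * tv_spend + rng.normal(0, 30, weeks))

# Fit the simplest possible mix model: sales = baseline + sum(beta_i * spend_i).
X = np.column_stack([np.ones(weeks), display_spend, search_spend, tv_spend])
betas, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, b_display, b_search, b_tv = betas

# Estimated incremental sales attributable to each channel over the year.
print("display:", round(b_display * display_spend.sum()))
print("search: ", round(b_search * search_spend.sum()))
print("tv:     ", round(b_tv * tv_spend.sum()))
```

Real mix models add lagged "adstock" effects, seasonality, diminishing returns and much messier data, which is exactly why the sophisticated versions won't appear tomorrow.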

To me, this final point is the most important because it points to a “grand unification theory” of marketing measurement that combines the existing distinct disciplines and sources. The paper cites numerous current efforts, including:

- multimedia databases being created (separately) by panel-based measurement firms including comScore, Nielsen, Quantcast and TNS Media Compete;

- Datran Media’s Aperture, which combines email and postal addresses with Acxiom household data, IXI financial data, MindSet Marketing healthcare data and NextAction retail data;

- a joint effort between Omniture and WPP’s Kantar Group that combines data from email, search, display ads and traditional media;

- another Nielsen project combining TV ad effectiveness information from Nielsen IAG with panel purchase data from Nielsen Homescan.

These all reinforce the claim I made in last week’s blog post that individual data will increasingly be combined with panel- and survey-based information to provide community-level insights that are actually more valuable than individual data alone.

Tuesday, September 1, 2009

Mzinga Survey Shows Most Companies Don't Measure Social Media ROI

The entire blogosphere is in love with social media, which of course means that some contrarians have to argue that it’s over-hyped. So it was interesting to see a survey (available here; registration required) show that social technologies are indeed widely adopted: 86% of 555 respondents said they are currently using them for business purposes, and 61% said it was an ongoing component of their business.

Caveat: the survey was sponsored by social technology vendor Mzinga in conjunction with the Babson Executive Education program, so they had a stake in the outcome. But I didn't see any obvious problems with it, and even allowing for some bias, the results still suggest wide social technology usage among a broad spectrum of businesses.

Probably the most interesting result from a marketing measurement perspective was that just 16% of respondents reported measuring ROI on their social media programs. No surprise, alas, but worrisome because programs that can’t prove ROI are subject to cancellation when money is tight.

Somewhat supporting this line of reasoning, the survey showed that just 40% of respondents had budget dedicated to social media and 57% had employees assigned to it. Perhaps many of those employees work for free, but a more likely explanation is that their costs are not part of project budgets because they're part of a vaguely fixed "overhead". This makes it easier to sustain a social media effort without formal economic justification. But it can’t be a permanent situation – managers will eventually realize that time spent on social media has a real cost. So justification of some sort will ultimately be needed.

Of course, that justification won’t necessarily be ROI. We all know that many traditional marketing investments are not justified on the basis of ROI, and marketing is by far the most common social media application (57%, vs 39% for internal collaboration, 31% other, 29% customer service & support, 25% sales, 21% human resources, 16% strategy and 14% product development). Marketing in social media could easily go unmeasured as well.

Indeed, just 8% of respondents said their social technology system could showcase ROI, vs. 41% who said it couldn’t. An impressively large 44% didn’t know, which I interpret to mean that they didn't care enough to find out. So I think it’s safe to say that ROI measurement hasn’t been a major priority.

The other intriguing figure in this survey was that 55% of respondents said there was no feature/function that they'd like added to their social media platform. REALLY? They can't be trying very hard: I mean, I can think of features I’d like added to a light bulb.*

If people are satisfied with their tools in such a rapidly evolving space, they probably aren’t using them for much. Or, to put it more charitably, maybe they recognize that they’re not taking advantage of what’s already available and feel they should master that before looking for anything more. Either way, this suggests that most deployments are quite immature.

One final factoid: 61% are integrating social media within their Web site or other sites, vs. 40% running standalone community sites and 39% deploying as social widgets in third party sites such as Facebook. I’m surprised that community sites and widgets are so popular. Maybe these are signs of experimentation. Anyway, it’s food for thought.

My general take, then: the survey shows wide testing of social technologies, but little deep engagement. Without a firm economic or other justification, there’s a good chance that the efforts won’t be sustained. So it’s up to social technology gurus, and vendors like Mzinga, to start demonstrating not just what social technology can do, but what makes it worth an investment.

__________________________________________
* How about an indicator that shows how long until it burns out? Preferably with a wireless Internet connection that alerts me when failure is imminent.

Monday, July 6, 2009

CMO Council Study: Customer Loyalty Is Fleeting

The CMO Council and Catalina Marketing’s Pointer Media Network recently released a major study on consumer loyalty in packaged goods brands. The study, Losing Loyalty: The Consumer Defection Dilemma™, draws on Catalina’s vast loyalty card transaction database to analyze the individual buying patterns of more than 32 million consumers in 2007 and 2008 across 685 leading CPG brands.

The bottom line is that “loyal” consumers are not as reliable as most of us might have guessed. “For the average brand in this study, 52% of highly loyal consumers in 2007 either reduced loyalty or completely defected from the brand in 2008.” You can download the 12 page report for details.

Not surprisingly, the report proposes to use individualized targeting services like Pointer Media Network to reduce churn by making carefully selected offers to at-risk consumers. Although the recommendation is obviously self-serving, I do think it’s correct.

But it seems to me that the implications are more fundamental. In the eternal debate about brand value, finding that loyalty evaporates more quickly than expected makes it even harder to justify marketing programs that don’t bring about an immediate, measurable return.

I’ve seen arguments (sorry, I can’t recall where) that the traditional buying model of awareness – interest – trial – purchase doesn’t correspond to reality. The survey results seem consistent with that position, in that they present consumer behavior as much less predictable than expected. This further reinforces the idea that investments with short-term results are more reliable than the long-term investments traditionally associated with brand building.

Pardon the cliche, but what we’re talking about here is a paradigm shift. If consumers don’t follow a predictable buying pattern, then brand value models based on such a pattern are not justifiable. Marketers need a fundamentally new framework to predict how their activities will affect consumer behavior. This framework may owe more to chaos theory than to a linear process flow. I don’t know what the new model will look like, but recognizing that one is necessary is the first step towards creating it. If anybody out there has some candidates to offer, I’d love to hear about them.

Saturday, May 30, 2009

Two More Surveys Confirm that Most Marketers Don't Track ROI

The Sales Lead Management Association and Velos Group published their annual lead management practices survey last week. (Read it here; free registration required.) The survey had a relatively small sample (just over 140 responses) and was weighted towards smaller companies (80% had fewer than 25 sales reps). But it still provides some insight into how many companies actually do business.

The key finding from a marketing measurement viewpoint was that 62.5% of respondents do not track ROI on marketing programs. This is not especially surprising; in fact, it’s better than the 76% reporting they do not use ROI in another, larger study released last week by Lenskold Group. (Click here for the Lenskold study.) But it’s still bad news.

Perhaps even more distressing is that just 19.3% of the respondents listed their inability to track ROI as a major sales lead management concern. Subtracting those from the 62.5%, this means that more than 40% were not particularly concerned about their failure to track ROI. It MIGHT also mean that many of those 40% actually could track ROI if they wanted to, although it’s more likely that most don’t have the capability but don’t consider that a problem.

The other findings from the survey also generally confirm the dismal state of the art, at least among smaller firms.

- More than half the respondents (55.5%) said they do not qualify their marketing inquiries before sending them to sales. This implies a huge waste of time by salespeople who then do the qualifications themselves, or, more likely, cherry pick the leads that look superficially promising and ignore the rest. Unless a company has managed to staff its sales team with clairvoyants, this is a guarantee that it will discard some good leads and spend more than it should on some bad ones.

- One-third (33.8%) don’t use sales automation or customer relationship management systems. Again, this is a fundamental efficiency-killer. The survey also found that companies using these systems were not terribly satisfied with the results: 54% rated their satisfaction at 5 or less on a scale of 1 to 10. Maybe the problem is with the software itself, but I suspect the issue is lack of training and other supporting investments.

- Half had no formal sales forecasting process (27.2%) or used Excel only (23.8%). Again, this shows very immature sales management at these companies.

I must say I find these results quite sad, given how long these tools have been available and how well their benefits are established.

But perhaps it’s best to adopt the more positive attitude of the survey authors and see this as an opportunity. As they put it, respondents “have a lot of room for improvement in their sales and marketing best practices. By spending time and resources in this critical business area, companies will be able to increase sales, allocate marketing resources more efficiently and will be able to forecast their sales more accurately. All of which will help them survive these difficult economic times.”

Wednesday, May 27, 2009

Whopper Freak-Out Wins Ad Effectiveness Award

I received a mailing with the agenda for the Association of National Advertisers' Marketing Accountability and Effectiveness Conference in New York on June 2. This looks like a good event, covering all the usual-but-useful bases: proving the value of marketing (Enterprise Rent-a-Car), earning a place at the "C-suite" table (panel led by Ernst & Young), advanced analytics (VG Corporation) and media optimization (Citizens Bank).

But my favorite is an "EFFIE" Award for Burger King, for its "Whopper Freak-out" campaign, which "explored deprivation to see what would happen if America's most beloved burger was removed from the menu forever without any announcement." Since I avoid both television and Burger King, this was news to me, but I gather a bunch of TV commercials were involved. Interesting.

Tuesday, March 24, 2009

Marketing Measurement Book Includes Free Online Forms

I'm not usually quite so self-promotional but I suppose it's reasonable to announce final publication of my long-promised book The Marketing Measurement Toolkit. It's a step-by-step tutorial on the process of building a marketing measurement system, from initial project definition through deployment. The idea was to move beyond the theories (important as they are) to help people with the practical details. You can order from the publisher at www.racombooks.com.

My favorite feature of the book (especially since they didn't put my picture on the cover) is a collection of forms and scorecards that help people to organize their project and assess risk factors. I've put these online here where anyone can download them. Obviously they make more sense in the context of the book, but even without that I think they'll provide useful checklists at different project stages.

Here, for example, is an extract from the Analytics Readiness Scorecard in chapter 8. The extract covers only Response Measurement, while the full scorecard includes similar sections on Segmentation Models, Predictive Models, Marketing Mix Models, Simulation Models and Optimization Models. The idea is to figure out which types of analytics your company can build with its current resources, or, looking at it slightly differently, which resources it must add to do the analytics you want. Users enter a 1-5 score for the existing and needed columns, and the system then calculates a gap. This isn't intended to provide much more than conventional wisdom, but a big, well-organized pile of conventional wisdom can be very useful.

Analytics Readiness Scorecard

Response Measurement            existing    needed    gap    comment
source captured directly                               0
contact history available                              0
response survey available                              0
pre/post analysis possible                             0
test/control possible                                  0
multi-variate test possible                            0
total                               0          0       0
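If you prefer code to forms, here is a minimal sketch of the gap arithmetic the scorecard performs. The item names come from the extract above; the 1-5 scores are invented for illustration.

```python
# Response Measurement section of the Analytics Readiness Scorecard.
# Each item gets a 1-5 "existing" and "needed" score; gap = needed - existing.
# The scores below are invented for illustration.
items = {
    "source captured directly":    (2, 4),
    "contact history available":   (3, 4),
    "response survey available":   (1, 3),
    "pre/post analysis possible":  (2, 2),
    "test/control possible":       (1, 4),
    "multi-variate test possible": (1, 2),
}

total_existing = total_needed = 0
for name, (existing, needed) in items.items():
    gap = needed - existing
    total_existing += existing
    total_needed += needed
    print(f"{name:30s} existing={existing} needed={needed} gap={gap}")

print(f"{'total':30s} existing={total_existing} needed={total_needed} "
      f"gap={total_needed - total_existing}")
```

A large total gap for a section is the signal that this type of analytics is out of reach with current resources.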

So, by all means, check out the forms and, if you're so inclined, purchase the book. Any comments are more than welcome.

Monday, February 23, 2009

Vizu Measures the Brand Impact of Online Ads with Just One Question

I wrote last week about a general framework for measuring the marketing impact of social media. This proposed a general hierarchy of:

1. tracking mentions
2. identifying mentioners
3. measuring influence
4. understanding sentiment
5. measuring impact

As with all marketing measurement, the hardest task is the last one: measuring impact. This requires connecting the messages that people receive with their actual subsequent behavior, and hopefully establishing a causal relationship between the two. The fundamental problem is the separation between those two events: unless the message and purchase are part of the same interaction, you need some way to link the two events to the same person. A second problem is the difficulty of isolating the impact of a single event from all the other events that could influence someone’s behavior.

These problems are especially acute for brand advertising, which pretty much by definition is not connected with an immediate purchase. Brand advertisers have long dealt with this by imagining buyers moving through a sequence of stages before they make the actual purchase. A typical set of stages is awareness, interest, knowledge, trial (the first actual purchase) and regular use (repeat purchases).

Even though these stages exist only inside the customer’s head, they can be measured through surveys. So can more detailed attitudes towards a product such as feelings about value or specific attributes. For both types of measurement, marketers can define at least a loose connection between the survey results and eventual product purchases. Although the resulting predictions are far from precise, they offer a way to measure subtle factors, such as the impact of different advertising messages, that techniques based on actual purchases cannot.

The Internet is uniquely well suited for this type of survey-based analysis, since people can be asked the questions immediately after seeing an advertisement. One vendor that does this is Factor TG, which I wrote about last year (click here for the post). Another, which I mentioned last week, is Vizu.

What makes Vizu different from other online brand advertising surveys is that each Vizu survey asks just one question. The question itself changes with each survey, and is based on the specific goal for the particular campaign. Thus, one survey might ask about awareness, while another might ask about purchase intentions. Vizu asks its question to a small sample of people who saw an advertisement and also to a control group of people who were shown something else. It assumes that the difference in answers between the two groups is the result of seeing the advertisement itself.
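For the measurement-minded, here is a rough sketch of the comparison being made. This is my reconstruction of the general exposed-versus-control logic, not Vizu's actual methodology, and all the counts are invented.

```python
from math import sqrt

# Invented counts: people answering "yes" to the campaign's single question
# (say, "Are you aware of Brand X?") in each group.
exposed_yes, exposed_n = 420, 1500   # saw the advertisement
control_yes, control_n = 330, 1500   # saw something else

p_exposed = exposed_yes / exposed_n
p_control = control_yes / control_n
lift = p_exposed - p_control         # the difference attributed to the ad

# Two-proportion z-test: is the lift bigger than chance would explain?
p_pooled = (exposed_yes + control_yes) / (exposed_n + control_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / exposed_n + 1 / control_n))
z = lift / se

print(f"exposed: {p_exposed:.1%}  control: {p_control:.1%}  lift: {lift:+.1%}")
print(f"z = {z:.2f}  (|z| > 1.96 is roughly significant at the 5% level)")
```

The exposed/control design is what justifies attributing the difference to the ad rather than to everything else going on, and the significance test is why higher participation rates translate into faster, more trustworthy reads.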

Although asking a single question may seem like a fairly trivial approach, it actually has some profound implications. The most important one is that it greatly increases response rate: Vizu founder Dan Beltramo told me participation can be upwards of 3 percent, compared with tenths or hundredths of a percent for longer traditional surveys.

This in turn means statistically significant survey results become available much sooner, giving marketers quick answers and letting them watch trends over relatively short time periods. It also provides significant results for much smaller ad campaigns or for panels within larger campaigns. This lets marketers compare results from different Web sites and for different versions of an ad, allowing them to fine tune their media selections and messages in ways that traditional surveys cannot.

Another benefit of simplicity is lower costs. Vizu can charge just $5,000 to $10,000 per campaign, allowing marketers to use it on a regular basis rather than only for special projects. Vizu also has little impact on the performance of the Web sites running the surveys, reducing cost from the site owner's perspective.

The disadvantage of asking just one question is that you get just one answer. This prevents detailed analysis of results by audience segments, or exploration of how an ad affects multiple brand attributes. Vizu actually does provide a little information about the impact of frequency, drawn from cookies that track how often a given person has been exposed to a particular advertisement. Vizu also tracks where the person saw the ad, allowing some inferences about respondents based on the demographics of the host sites. Mostly, however, Vizu argues that a single answer is a good thing in itself because it keeps everyone involved focused on the ad campaign’s primary objective.

According to Beltramo, Vizu’s main customers are online ad networks and site publishers, who use the Vizu results as a way to show their accountability to ad agencies and brand advertisers. Some agencies and advertisers also contract with the firm directly.

What, you may be asking, has all this to do with social media measurement? Vizu’s approach applies not just to display advertising but also to social media projects such as downloadable widgets and micro sites.

Even though Vizu can’t fully bridge the measurement gap between exposure and actual purchases, it does offer more insights than simply counting downloads, clickthroughs or traffic. In a world where so little measurement is available, every improvement is welcome.

Thursday, February 19, 2009

Tools for Social Media Measurement

I was whining last week on my other blog about the lack of integrated solutions for social media analytics. No sooner had I written that, of course, than up popped several interesting solutions to prove me wrong. I plan to write soon about a couple of specific products, but will use this post to set a framework for evaluation.

I suppose I should start with a definition of “social media”. By this I simply mean any communication method that allows users to interact directly with each other, as opposed to a broadcast medium where only a few people can send messages. I’m not intending to be especially restrictive here – I’d include blogs, public forums, Facebook, Myspace, YouTube, Flickr, Twitter, LinkedIn, Plaxo and many others. These all provide a huge stream of public chatter that marketers can tap into both to monitor what is being said about their products and to proactively spread their preferred messages.

From a measurement perspective, I see several distinct functions. Today, these are largely served by separate point solutions. Integrated systems are beginning to emerge that combine at least a few. The ultimate integrated system would service them all. The functions are:

- tracking mentions. This is the simplest goal; it simply means uncovering and reporting on social media events that relate to your product, brand or company. The fundamental tool here is the keyword search (a toy sketch appears after this list). Many systems do this, and some even combine different social network sources to provide a consolidated report. Google Alerts is probably the best known, although it doesn’t do much with social media aside from blogs. BoardTracker and Linqia are more focused on social communities.

- identifying mentioners. Most social media comments are signed with a user ID of some sort, but the identity of the person behind that ID is often not clear. I haven’t actually seen tools that address this, but they probably exist. What’s needed is to look at whatever public profile is available, use that to find out other information about the person, and then in turn see if you can find that person in other social media. As a not-too-scary example, I recently saw a Twitter post that mentioned a vendor I follow. Checking out the poster's profile to see if she was worth “following”, I saw that she was from a small town where I used to live. Curious, I then found her in LinkedIn and discovered the company she worked for. Yes, this sounds uncomfortably like stalking, but it’s old news that the Internet is really good for that. What’s interesting here is the potential to connect an individual's background with her social media profile. The steps that I took could easily be automated; indeed, products like ZoomInfo do something similar, although so far as I can tell they don't include social media other than blogs.

- measuring influence. Influence has two overlapping dimensions: the influence of an individual mentioner, and the influence of a particular event. The mentioner’s influence is related to the profile I just mentioned, but also to blog readership, “friends” and network members in various social platforms, authority as measured by links and recommendations, etc. Again, these statistics are available in a scattered fashion for individual social media, and it wouldn’t be hard to build a system to pull them together once you had linked the user IDs. Surely someone is out there doing this but I haven’t tripped over them. Maybe if I spent more time at the gym?

Measuring the influence of a particular event is actually easier. It is a matter of links, views, downloads, recommendations, ratings, etc. The statistics are often published along with the item itself. One possible tool is TrackUr, a low-cost product (from $18 to $197 per month) that scores Web sites based on “the number of backlinks pointing to a web site, the number of blog discussions, an estimate of traffic, and even the number of times the web site has discussed the phrase in the past.” Another that I suspect costs much more is Radian6, which “tracks comments, viewership, user engagement and other metrics, 24/7, so that you can clearly see the reach and affect[sic] each post has on the community.” It also can “uncover the influencers online by topic, based on user-determined formula weightings.”

- understanding sentiment. This is the domain of semantic analysis (that’s a pun, kind of), which is a long-established field with many players. One specialist applying its technology to real-time Web content is Hapax. Solutions integrated more closely with social media search include Crimson Hexagon and the newly-launched Scout Labs. Scout Labs is also a low-cost option, with plans starting at $99 per month and currently offering a 30-day free trial.

- measuring impact. Ah, the bottom line: what did people exposed to the social media event actually do? Even the Web hasn’t yet reached the stage of universal behavior tracking that would really let you answer this, and I personally hope it never does. But one product that gets close is Tealium Social Media, which builds a list of Web URLs (both social media and regular online media) related to your product, checks which of those your Web site visitors had seen previously, and pops the results into Google Analytics so you can treat the Web events like any other visitor source. (See my earlier blog post on Tealium for details.) At the other end of the process, Vizu lets marketers embed a question in Web ads that asks about the brand attitudes, and compares this against answers of people who didn’t see the ad, thereby measuring the net impact of the ad itself. The vendor has embedded its questions in social media applications from vendors including Lotame (ads in social networks), AdNectar (social ‘gifting’) and Buddy Media (custom social applications). See their press release for details.
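As promised above, here is a toy sketch of the first and simplest function, mention tracking. It only illustrates keyword matching across sources; none of the products mentioned here work exactly this way, and the brand terms and posts are invented.

```python
# Toy mention tracker: scan a stream of posts from different sources for
# brand-related keywords. Brand terms, posts and authors are all invented.
keywords = {"acme", "acme widgets", "@acmecorp"}

posts = [
    {"source": "twitter", "author": "jdoe",   "text": "Loving my new Acme widgets!"},
    {"source": "blog",    "author": "asmith", "text": "Comparing CRM vendors this week."},
    {"source": "forum",   "author": "pquinn", "text": "@acmecorp support was slow to reply."},
]

def mentions(post, keywords):
    """Return the keywords that appear in the post's text (case-insensitive)."""
    text = post["text"].lower()
    return [kw for kw in keywords if kw in text]

for post in posts:
    hits = mentions(post, keywords)
    if hits:
        print(f'{post["source"]:8s} {post["author"]:8s} matched: {hits}')
```

The real engineering in commercial tools is in collecting the posts from each source in the first place, and in the later steps of the hierarchy: identity, influence, sentiment and impact.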

Friday, February 6, 2009

When All Marketing is Internet Marketing, All Agencies are Internet Agencies

A little press notice this week reported a January 20 announcement from Ogilvy North America of “strategic alliances” with marketing automation software vendor Unica and marketing database integrator Pluris. On its face, this seemed to suggest a change in strategy for all three firms, moving towards a database marketing agency approach that combines technology, marketing strategy, data and analytics. But close reading of the press release shows this is just an agreement to make referrals. When I asked one of the players involved, they confirmed that’s all there is.

Nevertheless, the announcement prompted a little flurry of speculation in the Twittersphere / blogosphere (we need a new term -- blabosphere?) about changes in the role of traditional advertising agencies. Even though the database marketing agency model has remained a relatively small niche for decades (pioneers like Epsilon were founded in the late 1960’s), the thought seems to be that it will soon become the dominant model.

I’m skeptical. In some ways, the basic technologies for customer management have actually become more accessible to non-specialist companies. In particular, the hardest part, building a customer database, has largely been taken over by customer relationship management systems. Once that’s in place, it’s not much more work to add a serious marketing automation system. In fact, all you do is buy software like Unica’s—which is why a firm like Ogilvy doesn’t need to build its own, or to have a particularly intimate relationship with Unica itself. Yes, Ogilvy and other agencies need database marketing competencies. But all they really need to do is manage a firm like Acxiom doing the actual work. This takes expertise but much less capital and human investment than doing it yourself.

So, if database marketing has become easier, there is even less need than in the past for an integrated database marketing agency. Database marketing has remained a small part of the industry because its scope is too limited, particularly in dealing with non-customers (who mostly are not in your database). (Yes, the credit card industry is an exception.)

But the Internet is changing the equation substantially. Advertising agencies marginalized database marketing because customer management is not their core business. But advertising agencies exist to buy ads, and Internet advertising is now too important for them to ignore. Plus, Internet advertising is much closer to agencies’ traditional core business of regular advertising, so it’s much easier for them to conceive of it as a logical extension of their offerings. Even though many specialist agencies sprang up to handle early Internet advertising, the traditional agencies are now reasserting their control.

Now here’s the key point: managing Internet ads is not the same as managing traditional advertising. Ad agencies will develop new skills and methods for the Internet, and those skills and methods will eventually spread throughout the agency as a whole. Doing a good job at creating, buying and evaluating Internet advertising requires vastly more data and analysis than doing a good job at traditional mass media. It will take a while for the agencies to develop these skills and procedures, but these are smart people with ample resources who know their survival is at stake. They will keep working at it until they get it right.

Once that happens, those skills and methods won’t stop at the door of the Internet department. Agencies will recognize that the same skills and methods can be applied to other parts of their business, and frankly I expect that they’ll find themselves frustrated to be reminded how poorly traditional marketing has been measured. Equipped with new tools and enlightened by a vision of what truly modern marketing management can be, agency leaders will bring the rest of their business up to Internet marketing standards of measurement and accountability. It’s like any technology: once you’ve seen color TV, you won’t go back to black and white.

We’re already seeing hints of this in public relations, where the traditional near-total lack of performance measurement is rapidly being replaced by detailed analyses of the impact of individual placements. In fact, the public relations people are even pioneering quantification of social network impact, perhaps the trickiest of all Internet marketing measurement challenges.

So, yes, I do see a great change in the role of advertising agencies. I even expect they will resemble the integrated strategy, technology, analytics and data of today’s database marketing agencies. But it won’t happen because the ad agencies adopt a database marketing mindset. It will happen because they want to keep on making ads.

Sunday, February 1, 2009

Razorfish Study Measures Direct Response to Social Media

I’ve been spending more time than I should recently on Twitter (follow me at @draab). It provides a fascinating peek into the communal stream-of-consciousness, which would be pretty horrifying (“Britney…Brad…Jen…Obama…groceries…Britney…Britney…Britney”) if you couldn’t choose the people and search terms you follow. This filtering (which I do via a great product called Tweetdeck) turns Twitter into a very efficient source of information I wouldn’t see otherwise.

Naturally, my interest in Twitter also extends to how you measure its business value, and by extension the value of social media in general. Since the people I follow on Twitter are both marketers and Twitter users, they discuss this fairly often. One recent post (technically a “tweet” but the term seems so childish) pointed to a study Social Media Measurement: Widgets and Applications by interactive marketing agency Razorfish.

The study turns out to be a very brief and straightforward presentation of two projects, both involving creation of downloadable widgets. One was promoted largely through conventional media and the other through widget distribution service Gigya. For each project, we’re told the costs, number of visitors and/or downloads, how much time and money they spent, and the return on investment. The not-very-surprising findings were that people who spent more time also spent more money and, more broadly, that “social media may be used effectively as a way of engaging users and potential customers.” A less predictable and potentially more significant finding from the first project was that people who were referred by a friend downloaded more often and spent much more money than people who were attracted by the media. The numbers were: downloads, 23% vs. 8%; spend any money, 9% vs. 1%; and amount spent, $23.00 vs $3.14. But the study points out that the numbers were very small—only 216 individuals arrived at the landing page as a result of a friend’s email, vs. 41,599 from media sources. These figures are drawn only from the first project because the second project couldn’t be measured this way.

From a marketing measurement standpoint, none of this seems to break any new ground. Visitors are tracked by their source URLs and subsequent behavior is tracked through cookies. The ROI is calculated on straight revenue (it really should be profit) and seems to include only immediate purchases. This is particularly problematic for the second project, which promoted a $399 product with very limited supply that sold out in one minute. (The study doesn’t say, but based on this award citation it seems to be a special edition Nike Air Jordan shoe.) Clearly the point of such Air Jordan promotions isn’t immediate revenue, but brand building at its hard-to-measure best. The real challenge of evaluating social media is measuring this type of indirect impact. This study makes no claim to do that, but I’ll keep my eyes out for others that do.

Thursday, January 15, 2009

Interesting Conference on Real Time Communications; Great List of Tools for Reputation Monitoring

I spent yesterday morning at a conference on “Real-Time Communications” presented by the Business Development Institute and sponsored by PR Newswire. Not surprisingly, given the sponsor, this turned out to be mostly by and for public relations professionals. This group’s main concern seemed to be reacting to public criticism, and “real time media” meant primarily blogging and Twitter. There was heavy representation from the pharmaceutical industry in particular, which, as several speakers mentioned with obvious frustration, is highly constrained by regulatory rules from making proactive comments. Beyond reacting to immediate crises, it seems the main media relations strategy of this group is to reach out to better educate the press about industry issues, so any reporting will be based on a reasonably accurate understanding of the situation. Apparently even this basic approach is somewhat revolutionary in the industry: keynote Ray Kerins of Pfizer said that until he took over as VP Worldwide Communications two years ago, the company policy was to simply ignore the first phone call from any reporter. Interesting attitude, that.

Kerins also provided perhaps the most intriguing factoid of the day, which was that 15,000 journalists lost their jobs in 2008. (I traced this figure to the Web site Paper Cuts, which tracks reports of newspaper layoffs and buyouts. Apparently the total includes all newspaper employees, not just newsroom staff. But either way, it’s a big number.) Kerins’ comment was that many of the people being let go are well-trained and experienced reporters, who provide “context and analysis”. They are being replaced in many cases by bloggers and other non-professional observers who offer “speed” but are often not as knowledgeable, thorough or objective. This is a big issue, particularly for someone in a complicated industry such as pharmaceuticals.

Another, related point came from Morgan Johnston, Corporate Communications Manager of JetBlue, who described a situation where a customer complained while at the airport to 10,000 online readers about not being compensated properly when her baggage didn’t show up—only to have it appear 15 minutes later. (I’m not clear whether this was on Twitter or a conventional blog.) His point was that the damage was done, even if she posted a follow-up message saying that all was well. The original complaint will live on more or less forever, and people may not notice the final resolution. The particular moral here was the need to respond very quickly to such complaints so the company’s reaction becomes part of the permanent record.

From my own perspective, I was struck by the focus on reacting to other people’s comments in real-time media, as opposed to using those media for a company’s own marketing programs. I suppose the outbound programs are run by marketing rather than public relations.

On the specific issue of marketing measurement, no one at the conference seemed to feel they could meaningfully measure the return on investment of blogging and other projects. From the reactive PR perspective, it’s largely about being defensive and preventing damage to reputation, so it’s probably something you can’t afford not to do. The very little discussion I heard about proactive programs mentioned that it’s occasionally possible to count the direct leads or revenue, but there isn’t much of a way to measure the long-term financial value. This matches my own observations, mostly because the impact of these programs is usually too small to isolate from other factors that also affect performance. There might however be non-financial measures that are more sensitive, like Web site traffic by source.

One very specific and highly valuable product of the conference was a casual remark by one panelist to look at a Web post by Dan Schawbel at Mashable.com for tools to measure brand reputation online. I tracked this down and found two extremely valuable posts, one describing free brand monitoring tools and another describing paid reputation monitoring tools (many of which are very inexpensive). There’s no point to my listing the products here, since you can just read the posts themselves. But this is very useful information – indeed, it made the whole morning worthwhile.

Thursday, December 18, 2008

Aberdeen Reports Show Varied Roles for Performance Measurement

Our friends at Aberdeen Group apply a highly standardized research process to technology issues. They take a survey that asks companies about their business performance and the business processes, organization, knowledge management, technologies and performance measures related to a technology. They then divide the companies into leaders (“best-in-class”), laggards and industry average based on their business performance, and compare replies for the different groups. The not-quite-stated implication is that the differences in performance are caused by differences in the other factors. This is not necessarily correct (the ever-popular post hoc ergo propter hoc fallacy) and you could also wonder about the sample size (usually around 200) and how accurately people can answer such detailed questions. But so long as you don’t take the studies too seriously, they always give an interesting look at how firms at different maturity levels manage the technologies at hand.

It so happens that three of the Aberdeen studies have been sitting on my desk for some time, so I had a chance to look at them together. The topics were Lead Nurturing, Trigger Marketing and Cross-Channel Campaign Management. All are currently available for free although the sponsors may contact you in exchange.

Since the Aberdeen reports all follow a similar format, it’s easy to compare their contents. From the perspective of marketing performance measurement, they contain two elements of interest. These are the performance measures highlighted as distinguishing best-in-class companies, and the role of measurement among recommended strategic actions. Here’s a brief look at each of these in the three reports:

Lead Nurturing. The report highlighted number of qualified leads and lead-to-close ratio as critical performance measures, and found that 77% of best-in-class companies were tracking them. It also recommended tracking revenue associated with leads, although it found only 35% of best-in-class companies could do this. But otherwise, it didn’t see performance measurement as a central issue: the primary focus was on matching marketing messages to the prospect’s current stage in the buying cycle. Other important strategies were leveraging multiple channels, identifying prospect buying cycle and needs, and using automated lead scoring to move customers through the cycle.

Trigger Marketing. This report did not identify particular marketing measures as critical, although it did say that having defined performance goals for trigger marketing programs is important. It reported the most common measure is change in response rates, used by 69% of all respondents. (The next most common measure, change in retention rates, was used by just 54%.) I take this as a sign of immaturity (among the respondents, not Aberdeen), since response rate is a primitive measure compared with profitability and return on marketing investment, which were used by 43% and 42% respectively. This is consistent with another finding: the most common strategic action is to “link trigger marketing activities to increased revenues and other business results” (32%). I interpret that as meaning people are just learning to make that linkage and are simply using response rate until they figure it out. It might be worth noting that the Aberdeen analyst highlighted digital dashboards as a next step for best-in-class companies wishing to do still better, although I didn’t see a particularly compelling case for selecting that over other possible activities. But I’m all in favor of dashboards, so I’m glad to see it.

Cross-Channel Campaign Management. Again, the report doesn’t specify particular performance measures. It does say that it’s important to optimize future campaigns based on past performance (pretty obvious), and it highlights real-time tracking of results across channels (less obvious, and I’m not so sure I agree: immediate results may not in fact correlate with long-term profitability). This report did include segmentation and analytics as strategic actions. (I consider these as part of performance measurement.) In particular, it stressed that best-in-class companies were focused on identifying their high value customers and treating them uniquely. Most of the recommendations, however, were about building the infrastructure needed to coordinate marketing messages across channels, and then executing those coordinated campaigns.

So where does this leave us? I don’t draw any grand lessons from these three reports, except to note that financial measures (i.e., customer profitability and return on investment) don’t play much of a role in any of them. Even that probably just confirms that such measures are not widely available, which we already knew. But it’s good to know that people are working on performance measurement and that Aberdeen is baking it into its research.

Thursday, December 11, 2008

Survey: Marketing Accountability Measures Remain Weak

Every year since 2005, the Association of National Advertisers and vendor MMA (Marketing Management Analytics) have joined forces to produce a survey on marketing accountability. Although the details change each year, the general results have been sadly consistent: marketers, finance executives and senior management are very unhappy with their marketing measurement capabilities.

In the 2008 study, released in July and just recapitulated in a new MMA white paper, only 23% of the marketers were satisfied with their metrics for marketing’s impact on sales, and just 19% were satisfied with metrics showing marketing impact on ROI and brand equity.

Furthermore, only 14% of the marketers felt their senior management had confidence in marketing’s forecasts of sales impact. And even this is probably optimistic: a separate MMA-funded study, also cited in the new white paper, found that only 10% of financial executives use marketing forecasts to help set the marketing budget.

The obvious question is why so little progress has been made. Marketers consistently rank performance measurement as their top priority (for example, see the CMO Council’s Marketing Outlook 2008 survey). Nor are marketers doing this out of the goodness of their hearts: they know that being able to show the impact of their expenditures is the best way to protect and grow their budgets. So marketers have every reason to work hard at developing performance measures that finance and senior management will accept.

And yet...when the ANA survey asked marketers to rank their accountability challenges, the top score (45%) went to “understanding the impact of changes in consumer attitudes and perceptions on sales”. This strikes me as odd, if the marketers’ ultimate goal is to understand the impact of marketing programs on sales. Measuring the impact of marketing programs and measuring the impact of customer attitudes are not the same thing.

Nor is this a simple fluke of the wording. A separate question showed the most common accountability investment was in “brand and customer equity models” (53%). These also measure the link between attitudes and sales.

One explanation for the disconnect would be that marketers can already measure the relationship between marketing programs and consumer attitudes, so they can complete the analysis by adding the link between attitudes and sales. This seems a bit optimistic, especially since it also assumes that marketers also understand the impact on sales of marketing programs that are not aimed at consumer attitudes, such as price and trade promotions.

A more plausible explanation would be that the link between attitudes and sales is the hardest thing to measure, so that’s where marketers put their effort. Or, maybe that relationship is the question that marketers find most intriguing because, well, that’s the sort of thing they care about. A cynic might suggest that marketers don’t want to measure the link between marketing programs and sales because they don’t want to know the answer. But even the cynic would acknowledge that marketers need a way to justify their budgets, so that can’t be it.

None of these answers really satisfies me, but let’s put this question aside. I think we can safely assume that marketers really do want to measure their performance. This leaves the question of why they haven’t made much progress in doing it.

One reason could be that they simply don’t know how. Marketing measurement is truly difficult, so that’s surely part of it.

Another possibility is that they know how, but lack the resources. Since good marketing measurement can be quite expensive, this is probably part of the problem as well. Remember that the resources involved will ultimately come from the corporate budget, so finance departments and senior management must also agree that marketing measurement is the best thing to spend them on. And, indeed, this doesn’t seem to be their priority. The white paper states that “the number of CEOs and CFOs championing marketing accountability programs within their firms remained negligible and unchanged from 2007.”

This is a pretty depressing conclusion, although to me it has the ring of truth. Fuss though they may, CEOs and CFOs are not willing to invest money to solve the problem. Indeed our friend the cynic might argue that they are the ones with a motivation to avoid measurement, since it gives them more flexibility to allocate funds as they prefer.

The white paper doesn’t dwell on this. It just lists lack of senior management involvement as one of many obstacles. The paper authors then go on to propose a four step process for developing an accountability program:

- assess and benchmark existing capabilities and resources
- define an achievable future state, in terms of the business questions to answer and the resources required to answer them
- work with stakeholders to align metrics with corporate goals and key business questions
- establish a roadmap with a multi-year phased approach

There’s not much to argue with here. The paper also provides a reasonable list of success factors, including:

- realistic stakeholder expectations
- agreement on scope at the start of the project
- cross-functional team with clearly defined roles, responsibilities and communication points
- simple math and analytics
- integration of analytics for pricing, ROI, and brand analysis

Again, it’s all sound advice. Let’s hope you can get the resources to follow it.

Friday, December 5, 2008

TraceWorks' Headlight Integrates Online Measurement and Execution

I’ve been looking at an interesting product called Headlight from a Danish firm TraceWorks. Headlight is an online advertising management system, which means that it helps marketers to plan, execute and measure paid and unpaid Web advertising.

According to TraceWorks CEO Christian Dam, Headlight traces its origins to an earlier product, Statlynx, which measured the return on investment of search marketing campaigns. (This is why Headlight belongs on this blog.) The core technology of Headlight is still the ability to capture data sent by tags inserted in Web pages. These are used to track initial responses to a promotion and eventual conversion events. The conversion tracking is especially critical because it can capture revenue, which provides the basis for detailed return on investment calculations. (Setting this up does require help from your company's technology group; it is not something marketers can do for themselves.)
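To illustrate the kind of calculation this enables (my own sketch, not anything from TraceWorks' documentation), once the tags report conversions and revenue back to the system, the per-activity ROI arithmetic is simple. The activities and numbers below are invented.

```python
# Invented per-activity rollups: cost of the activity plus the conversions
# and revenue reported back by the page tags.
activities = [
    {"name": "AdWords - brand terms", "cost": 4000, "conversions": 120, "revenue": 18000},
    {"name": "Banner - partner site", "cost": 2500, "conversions":  35, "revenue":  6300},
    {"name": "Email - newsletter",    "cost":  800, "conversions":  60, "revenue":  5400},
]

for a in activities:
    roi = (a["revenue"] - a["cost"]) / a["cost"]   # simple revenue-based ROI
    cost_per_conversion = a["cost"] / a["conversions"]
    print(f'{a["name"]:22s} ROI: {roi:7.1%}  cost per conversion: ${cost_per_conversion:,.2f}')
```

The hard part, as noted above, is deploying the tags so that revenue is actually captured at conversion; the arithmetic afterward is trivial.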

These functions are now supplemented by functions that let the system actually deliver banner ads, including both an ad serving capability and digital asset management of the ad contents. The system can also integrate with Google AdWords paid search campaigns, automatically sending tracking URLs to AdWords and using those URLs in its reports. It can also capture tracking URLs from email campaigns.

All this Web activity tracking may make Headlight sound like a Web analytics tool, but it’s quite different. The main distinction is that Headlight lets users set up and deliver ad campaigns, which is well outside the scope of Web analytics. Nor, on the other hand, does Headlight offer the detailed visitor behavior analysis of a Web analytics system.

The campaign management functions extend both to the planning that precedes execution and to the evaluation that follows it. The planning functions are not especially fancy but should be adequate: users can define activities (a term that Headlight uses more or less interchangeably with campaigns), give them start and end dates, and assign costs. The system can also distinguish between firm plans and drafts. TraceWorks expects to significantly expand workflow capabilities, including sub-tasks with assigned users, due dates and alerts of overdue items, in early 2009.

Evaluation functions are more extensive. Users can define both corporate goals (e.g., total number of conversions) and individual goals (related to specific metrics and activities) for specific users, and have the system generate reports that will compare these to actual results. Separate Key Performance Indicator (KPI) reports show selected actual results over time. In addition, something the vendor calls a “WhyChart” adds marketing activity dates to the KPI charts, so users can see the correlation between different marketing efforts and results. Summary reports can also show the volume of traffic generated by different sources.
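The “WhyChart” idea is easy to picture: plot the KPI over time and mark the dates when activities started. Here is a rough sketch of that kind of overlay using generic charting tools; the data and activity names are made up, and this is not Headlight's actual implementation.

```python
# Hypothetical sketch of a "WhyChart"-style view: a KPI series with marketing
# activity start dates overlaid, so spikes can be eyeballed against launches.
# Data and activity names are invented; this is not Headlight's implementation.
from datetime import date
import matplotlib.pyplot as plt

kpi_dates = [date(2008, 11, d) for d in range(1, 15)]
conversions = [12, 14, 13, 15, 22, 25, 24, 20, 19, 30, 33, 31, 28, 27]

activities = {date(2008, 11, 5): "Banner launch", date(2008, 11, 10): "Email blast"}

plt.plot(kpi_dates, conversions, marker="o", label="Daily conversions")
for start, name in activities.items():
    plt.axvline(start, linestyle="--", color="gray")  # mark activity start date
    plt.annotate(name, xy=(start, max(conversions)), rotation=90, va="top")

plt.ylabel("Conversions")
plt.legend()
plt.show()
```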

The value of Headlight comes not only from the power of the individual features but the fact that they are tightly integrated. For example, the asset management portion of the system can show users the actual results for each asset in previous campaigns. This makes it much easier for marketers to pick the elements that work best and to make changes during campaigns when some items work better than others. The system can also be integrated with other products through a Web Service API that lets external systems call its functions for AdWords campaign management, conversion definition, activity setup, and reporting.

Technology aside, I was quite impressed with the openness of TraceWorks as a company. The Web site provides substantial detail about the product, and includes a Wiki with what looks like fairly complete documentation. The vendor also offers a 14 day free trial of the system.

Pricing also seems quite reasonable. Headlight is offered as a hosted service, with fees ranging from $1,000 to $5,000 per month depending on Web traffic. According to Dam, the average fee is about $1,300 per month. Larger clients include ad agencies who use Headlight for their own clients.

Incidentally, the company Web site also includes an interesting benchmarking offer, which lets you enter information about your own company's online marketing and get back a report comparing you to industry peers. (Yes, I know a marketing information gathering tool when I see one.) At the moment, unfortunately, the company doesn't seem to have enough data gathered to report back results. Or maybe it just didn't like my answers.

TraceWorks released its original Statlynx product in 2003 and launched Headlight in early 2007. The system currently serves about 500 companies directly and through agencies.

Friday, November 28, 2008

Judging the Value of Marketing Data

Last week’s post on ranking demand generation vendors highlighted a fundamental challenge in marketing measurement: the data you want often isn’t available. So a great deal of marketing measurement comes down to deciding which of the available data best suits your needs, and ultimately whether that data is better than nothing.

It’s probably obvious why using bad data can be worse than doing nothing, but in case this is read by, say, a creature from Mars: we humans tend to assume others are telling the truth unless we have a specific reason to question them. This innate optimism is probably a good thing for society as a whole. But it also means we’ll use bad data to make decisions which we would approach more cautiously if we had no data at all.

But how do you judge a piece of data? Here is a list of criteria presented in my book The MPM Toolkit, due in late January.

· Existence. Ok, this is pretty basic, but the information does have to exist. Let’s avoid the deeper philosophical issues and just say that data exists if it is recorded somewhere, or can be derived from something that’s recorded. So the color of your customers’ eyes only exists as data if you’ve stored it on their records or can look it up somewhere else. If the data doesn’t exist, you may be able to capture it. Then you have to compare the cost of capturing it with its value. But that’s a topic for another day.

· Accessibility. Can you actually access the data? To get back to last week’s post, we’d love to know the revenue of each demand generation vendor. This data certainly exists in their accounting systems, but they haven’t shared it with us so we can’t use it. Again, it’s often possible to gain access to information if you’re willing to pay the price, and you must once more compare the price with the value. In fact, the price / value tradeoff will apply to every factor in this list, so I won’t bother to mention it from here on out.

· Coverage. What portion of the universe is covered by the data? In the case of demand generation vendors, the number of blog posts was a poor measure of market attention because the available sources clearly didn’t capture all the posts. In itself, this isn’t necessarily a fatal flaw, since a fair sample could still give a useful relative ranking. But we can’t judge whether the coverage was a fair sample because we don’t know why it was incomplete. This is a critical issue when assessing whether, or more precisely how, to use incomplete data. (In the demand generation case, the very small numbers of blog posts added another issue, which is that the statistical noise of a few random posts could distort the results. This is also something to consider, although hopefully most of your marketing data deals with larger quantities.)

· Accuracy. Data may not have been accurate to begin with or it may be outdated. Data can be inaccurate because someone purposely provided false information or because the collection mechanism is inherently flawed. Survey replies can have both problems: people lie for various reasons and they may not actually know the correct answers. Even seemingly objective data can be incorrect: a simple temperature reading may be inaccurate because the thermometer was miscalibrated, someone read it wrong, or the scale was Celsius rather than Fahrenheit. Errors can also be introduced after the data is captured, such as incorrect conversions (e.g., inflation adjustments used to create “constant dollar” values) or incorrect aggregation (e.g., customer value statistics that do not associate transactions with the correct customers). In our demand generation example, statistics on search volume were highly inaccurate because the counts for some terms included results that were clearly irrelevant. As with other factors listed here, you need to determine the level of accuracy that’s required for your specific purpose and assess whether the particular source is adequate.

· Consistency. Individually accurate items can be collectively incorrect. To continue with the thermometer example, readings from some stations may be in Celsius and others in Fahrenheit, or readings from a single station may have changed from Fahrenheit to Celsius over time. This particular difference would be obvious to anyone examining the data, although it could easily be overlooked in a large data set that combined information from many sources. Other inconsistencies are much more subtle, such as changes in wording of survey questions or the collection mechanism (e.g., media consumption diaries vs. automated “people meters”). As with coverage, it’s important to understand any bias introduced by these factors. In our demand generation analysis, Compete.com used several different techniques to measure Web traffic, and it appeared that these yielded inconsistent results for sites with different traffic levels.

· Timeliness. The primary issue with timeliness is how quickly data becomes available. In the past, it often took weeks or months to gather marketing information. Today, data in general moves much more quickly, although some information still takes months to assemble. There is a danger that quickly available data will overwhelm higher-quality data that appears later. For example, the initial response rate to a promotion is immediately available, but the value of those responses can only be measured over time. Decisions based only on gross response often turn out to be incorrect once the later performance is included in the analysis. Still, timely data can be extremely important when it leads to adjustments that improve results, such as moving funds from one promotion to another. Online marketing in particular often allows for such reactions because changes can be made in hours or minutes, rather than the weeks and months needed for traditional marketing programs.

I haven’t listed cost as a separate consideration only because there are often incremental investments that can be made to change a data element’s existence, accessibility, coverage, and so on. Those investments would change its value as well. But you will ultimately still need to assess the total cost and value of a particular element, and then compare it with the cost and value of other elements that could serve a similar purpose. This assessment will often be fairly informal, as it was in last week’s blog post. But you still need to do it: while an unexamined life may or may not be worth living, unexamined marketing data will get you in trouble for sure.
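If you want to make that assessment slightly more systematic, a crude weighted scoring sheet is often enough. The sketch below is purely illustrative; the criteria follow the list above, but the weights and ratings are invented rather than taken from the book.

```python
# Illustrative (and deliberately crude) scoring sheet for comparing candidate
# data sources against the criteria above. Weights and ratings are made up.
CRITERIA_WEIGHTS = {
    "existence": 1.0, "accessibility": 1.0, "coverage": 2.0,
    "accuracy": 2.0, "consistency": 1.5, "timeliness": 1.0,
}

def score(ratings):
    """Weighted average of 0-10 ratings on each criterion."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted = sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)
    return weighted / total_weight

# Two hypothetical candidate sources for the same business question.
source_a = {"existence": 10, "accessibility": 9, "coverage": 5,
            "accuracy": 6, "consistency": 7, "timeliness": 8}
source_b = {"existence": 10, "accessibility": 8, "coverage": 3,
            "accuracy": 5, "consistency": 4, "timeliness": 9}

print(f"Source A: {score(source_a):.1f}")
print(f"Source B: {score(source_b):.1f}")
```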

Friday, November 21, 2008

Twitter Volume for Demand Generation Vendors

A comment on my previous post suggested Twitter mentions as a possible measure of vendor market presence. That had in fact occurred to me, but I hadn't bothered to check because I assumed the volume would be too low. But since the topic had been raised, I figured I'd take a peek.

The first two Twitter monitoring sites I looked at, Twitscoop and Twitterment, seemed to confirm my suspicion: of the three most popular vendors, Eloqua had 6 Twitscoop hits and 3 Twitterment hits; Silverpop had 2 on each; and Marketo had 3 on Twitscoop and none on the other. No point in looking further here.

But then I checked Twitstat. In addition to having a slightly less childish name, it seems to either do a more thorough search or look back further in time: for whatever reason, it found 152 hits for Eloqua, 65 for Silverpop, and 133 for Marketo. Much more interesting.

Alas, the numbers dropped down considerably after that, as you can see in the table below. Everything else is in single digits except for two anomalies: LoopFuse with 22 mentions and Bulldog Solutions with a whopping 217. Interestingly, both those sites also had exceptionally high blog hit numbers on IceRocket. The root cause is probably the same: one or two active bloggers or Twitter users (which seems to be the accepted term; I guess we can't call them Twits) are enough to skew the figures when volumes are so low. More specifically, LoopFuse gets a lot of attention because some of its founders are closely tied to the open source community. Bulldog Solutions just seems to have a group of employees who are seriously into Twitter. In fact, I now know more about their lives than I really care to (although there was nothing indiscreet in the posts, I'm pleased to report).

A couple of side notes:

- the very short length of the messages does make them easy to read, which paradoxically means you can actually gather more information from Twitter than by scanning blog posts, because reading the blog posts takes too much time. Of course, when we're dealing with such tiny volumes, there is no way to generalize from what you read: Twitter is strictly anecdotal evidence, and perhaps even dangerous for that reason.

- there seemed to be several Tweets that were purposely sent for marketing purposes. Nothing wrong with that, and they were quite open about it. Just interesting how quickly some firms have picked up on this. (OK, not so quickly: Twitter has been around since 2006 and very popular for about a year now.)

Still, the bottom line for the purposes of measuring demand generation vendors is the same as for blogs: too little volume to be a reliable measure of relative market interest.


Vendor | Twitter mentions | Ice Rocket blog posts | Alexa rank | Alexa share x 10^7

Already in Guide:
Eloqua | 152 | 286 | 20,234 | 70,700
Silverpop | 65 | 188 | 29,080 | 30,500
Marketo | 122 | 229 | 68,088 | 17,000
Manticore Technology | 0 | 56 | 213,546 | 6,100
Market2Lead | 5 | 5 | 235,244 | 4,800
Vtrenz | 8 | 53 | 295,636 | 3,600

Marketing Automation:
Unica Affinium | 6 | 43 | 126,215 | 8,500
Alterian | 5 | 145 | 345,543 | 2,500
Aprimo | 6 | 139 | 416,446 | 2,200
Neolane | 5 | 64 | 566,977 | 1,690

Other Demand Generation:
MarketBright | 9 | - | 167,306 | 5,400
Pardot | 4 | 33 | 211,309 | 3,600
Marqui Software | 2 | 19 | 211,767 | 4,400
ActiveConversion | 2 | 12 | 257,058 | 3,400
Bulldog Solutions | 219 | 43 | 338,337 | 3,200
OfficeAutoPilot | 2 | 5 | 509,868 | 2,000
Lead Genesys | 1 | 5 | 557,199 | 1,450
LoopFuse | 22 | 43 | 734,098 | 1,090
eTrigue | 1 | - | 1,510,207 | 430
PredictiveResponse | 1 | 0 | 2,313,880 | 330
FirstWave Technologies | 0 | 11 | 2,872,765 | 170
NurtureMyLeads | 0 | 5 | 4,157,304 | 140
Customer Portfolios | 0 | 3 | 5,097,525 | 90
Conversen | 1 | 0 | 6,062,462 | 70
FirstReef | 0 | 0 | 11,688,817 | 10

Tuesday, November 18, 2008

Comparing Web Activity Measures for Demand Generation Vendors

I recently wanted to measure the relative popularity of several demand generation vendors, as part of deciding how to expand the Raab Guide to Demand Generation Systems. This led to an interesting little journey which I think is worth sharing.

I started with a list of 23 marketing system vendors. A couple are fairly large but most are quite small. These were grouped into three categories: five demand generation vendors already in the Guide; four marketing automation vendors with significant demand generation market presence; and fourteen other demand generation vendors. (See http://www.raabguide.com/ for definitions of demand generation and marketing automation.)

My first thought was to look at their Web site traffic directly. The easiest way to do this is at Alexa.com, which tracks site visits of people who download its search toolbar. The number of users in this base is apparently a well-guarded secret, or at least well enough guarded that I would have had to look beyond the first Google search page for the answer. Alexa was originally classified by many experts as spyware, and is still somewhat controversial. But it was purchased by Amazon.com in 1999 and has since become more or less grudgingly accepted.

Be that as it may. I captured two statistics for each of my sites from Alexa: a ranking, which basically reflects the number of pages viewed by unique visitors each month (the busiest site gets rank 1, the next busiest gets rank 2, etc.); and a share figure that shows the percentage of total toolbar users who visit each site each month. (I think I have that correct; you can check the definitions at Alexa.com.) Ranking on either figure gives the same sequence (except for Pardot; I have no idea why). If you’re creating ratios or an index, the difference in the share figures is probably a better indicator of relative popularity, since a company with twice the share of another has twice as many visitors, but will not necessarily have a rank number that is twice as low. (Lower rank means more traffic.)
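For a quick sense of the difference, compare Eloqua and Silverpop using the Alexa numbers reported in the table below; this little sketch just does the arithmetic.

```python
# Quick illustration of why share, not rank, is the better basis for ratios.
# Figures are the Alexa numbers reported below for Eloqua and Silverpop.
eloqua = {"rank": 20_234, "share": 0.0070700}
silverpop = {"rank": 29_080, "share": 0.0030500}

share_ratio = eloqua["share"] / silverpop["share"]  # ~2.3x the visitors
rank_ratio = silverpop["rank"] / eloqua["rank"]     # rank is only ~1.4x "worse"

print(f"Share ratio: {share_ratio:.1f}x, rank ratio: {rank_ratio:.1f}x")
```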

Here are the ranks I came up with, broken into the three segments I mentioned earlier:

Alexa - 3 mo average

Vendor | rank | share

Already in Guide:
Eloqua | 20,234 | 0.0070700
Silverpop | 29,080 | 0.0030500
Marketo | 68,088 | 0.0017000
Manticore Technology | 213,546 | 0.0006100
Market2Lead | 235,244 | 0.0004800
Vtrenz | 295,636 | 0.0003600

Marketing Automation Vendors:
Unica / Affinium* | 126,215 | 0.0008500
Alterian | 345,543 | 0.0002500
Aprimo | 416,446 | 0.0002200
Neolane | 566,977 | 0.0001690

Other Demand Generation:
MarketBright | 167,306 | 0.0005400
Pardot | 211,309 | 0.0003600
Marqui * | 211,767 | 0.0004400
ActiveConversion | 257,058 | 0.0003400
Bulldog Solutions | 338,337 | 0.0003200
OfficeAutoPilot | 509,868 | 0.0002000
Lead Genesys | 557,199 | 0.0001450
LoopFuse | 734,098 | 0.0001090
PredictiveResponse | 2,313,880 | 0.0000330
FirstWave Technologies | 2,872,765 | 0.0000170
NurtureMyLeads | 4,157,304 | 0.0000140
Customer Portfolios | 5,097,525 | 0.0000090
Conversen* | 6,062,462 | 0.0000070
FirstReef | 11,688,817 | 0.0000010

These rankings were more or less as I expected. Within the first group, Eloqua is definitely the largest vendor, while Marketo is probably the most aggressive marketer at the moment. Vtrenz is the second-largest demand generation company, based on number of clients and almost certainly on revenue. But it is a subsidiary of Silverpop, so its traffic is split between Vtrenz.com and Silverpop.com. This means that the Vtrenz.com ranking understates the company’s position, while the Silverpop ranking includes traffic unrelated to demand generation. I’ve therefore tracked both here. Manticore and Market2Lead get much less attention than the other three, so it makes sense that they have much less traffic.

Figures for the next group also seem to be ranked about correctly. Unica is certainly the most prominent of this group, with Alterian, Aprimo and Neolane trailing quite far behind. I would have expected a bit more traffic for Neolane, but it is definitely the new kid on this block and only entered the U.S. market about one year ago. The real surprise here is that this group as a whole ranks so far below the big demand generation vendors, even though the marketing automation firms are in fact larger and probably do more promotion. Perhaps the marketing automation vendors appeal to a smaller number of potential users (primarily, marketers in large companies with direct customer contact, such as financial services, retail, travel and telecommunications) and generate less traffic as a result.

I didn’t have much sense of the relative positions of the other demand generation vendors, although I would have guessed that MarketBright and Pardot were near the top. Marqui has had little attention recently, perhaps because they’ve been through financial difficulties culminating in the purchase of their assets by a private investor group this past August. ActiveConversion I do know, only because I’ve spoken with them, and they rank about where I expected given their number of clients. The other names were somewhat familiar but the only one I’d ever spoken with was OfficeAutoPilot, which I knew to be small. Since I had no fully formed expectations, the rankings couldn’t surprise me.

In other words, the rankings provided by Alexa seemed generally reasonable given my knowledge of the companies concerned.

But Web traffic is just one measure. Where else could I look to confirm or challenge these impressions?

Well, there is another Web traffic site that is somewhat similar to Alexa, called Compete.com. I actually hadn’t heard of them before but they came up in my research. They apparently use their own toolbar but also some other Web traffic measures, such as volumes reported by Internet Service Providers (ISPs). You’d expect them to pretty much match the Alexa figures. But do they? Here is a chart comparing the two, with the Alexa shares multiplied by 10^7 to make them more legible.

Vendor | Compete.com unique visitors / month | Alexa.com share x 10^7

Already in Guide:
Eloqua | 560,288 | 70,700
Silverpop | 293,580 | 30,500
Marketo | 34,244 | 17,000
Manticore Technology | 15,789 | 6,100
Market2Lead | 10,689 | 4,800
Vtrenz | 5,313 | 3,600

Marketing Automation Vendors:
Unica / Affinium* | 23,138 | 8,500
Alterian | 4,497 | 2,500
Aprimo | 5,131 | 2,200
Neolane | 3,927 | 1,690

Other Demand Generation:
MarketBright | 13,993 | 5,400
Pardot | 7,339 | 3,600
Marqui * | 3,282 | 4,400
ActiveConversion | 1,503 | 3,400
Bulldog Solutions | 6,408 | 3,200
OfficeAutoPilot | 1,567 | 2,000
Lead Genesys | 2,630 | 1,450
LoopFuse | 1,930 | 1,090
PredictiveResponse | 1,099 | 330
FirstWave Technologies | - | 170
NurtureMyLeads | - | 140
Customer Portfolios | - | 90
Conversen* | - | 70
FirstReef | - | 10

You don’t need Sherlock Holmes to spot the problem: the Compete.com figures for Eloqua and Silverpop seem much too high compared with the others. I could concoct a theory that this reflects the difference between counting unique visitors in Compete.com and counting page views in Alexa, and throw in the fact that Eloqua and Silverpop/Vtrenz host landing pages for their clients. But the other demand generation vendors also host their clients’ pages, so this shouldn’t really matter. I suspect what really happens is that Compete measures low volumes differently from higher volumes (remember, they use a combination of techniques), and thus the figures for high-volume Eloqua and Silverpop are inconsistent with figures for the other, much lower-volume domains.
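One quick way to see the inconsistency is to compute visitors per unit of Alexa share for each vendor. A rough sketch using a few of the figures from the table above:

```python
# Rough consistency check: Compete unique visitors divided by Alexa share
# should be roughly constant if the two sources measure the same thing.
# Figures come from the table above (Alexa share already scaled by 10^7).
vendors = {
    "Eloqua": (560_288, 70_700),
    "Silverpop": (293_580, 30_500),
    "Marketo": (34_244, 17_000),
    "Pardot": (7_339, 3_600),
    "Bulldog Solutions": (6_408, 3_200),
}

for name, (compete_visitors, alexa_share) in vendors.items():
    print(f"{name}: {compete_visitors / alexa_share:.1f} visitors per share unit")

# Most vendors cluster around 2; Eloqua (~7.9) and Silverpop (~9.6) stand out,
# which is the inconsistency discussed above.
```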

Anyway, if we throw away those two, the rest of the Compete figures seem more or less in line with the Alexa figures, apart from some small exceptions (Bulldog in particular ranks higher). All told, it doesn’t seem that Compete adds much value to what I already got from Alexa.

So much for Web traffic. How about search volume? Google Keywords will give that to me. Again, we’ll compare to Alexa as a reference:

Vendor | Google Keywords avg search volume | Alexa share x 10^7

Already in Guide:
Eloqua | 1,900 | 70,700
Silverpop | 1,790 | 30,500
Marketo | 839 | 17,000
Manticore Technology | 113 | 6,100
Market2Lead | 318 | 4,800
Vtrenz | 752 | 3,600

Marketing Automation:
Unica / Affinium* | 6,600 | 8,500
Alterian | 861 | 2,500
Aprimo | 1,600 | 2,200
Neolane | 1,340 | 1,690

Other Demand Generation:
MarketBright | 186 | 5,400
Pardot | 210 | 3,600
Marqui * | 1,300 | 4,400
ActiveConversion | 46 | 3,400
Bulldog Solutions | 442 | 3,200
OfficeAutoPilot | 0 | 2,000
Lead Genesys | 74 | 1,450
LoopFuse | 260 | 1,090
PredictiveResponse | 36 | 330
FirstWave Technologies | 386 | 170
NurtureMyLeads | 0 | 140
Customer Portfolios | 0 | 90
Conversen* | 170 | 70
FirstReef | 12 | 10

If we limit ourselves to the first two groups, the search numbers look mostly plausible. The low figure for Manticore could have to do with checking specifically for “Manticore Technology”, since a looser “Manticore” would incorporate an unrelated company and references to the mythical beast. The high value for Unica probably reflects some unrelated uses of the word in other languages or as an acronym. I have no particular explanation for the relatively low value for Alterian or the substantial flattening of the range between Eloqua and its competitors. Perhaps Eloqua’s traffic is less search-driven than other vendors’. Or not. In any event, I think the implicit rankings here are about as plausible as the Alexa rankings.

But things get crazier in the Other Demand Generation vendor segment. I understand the Marqui number, which is high because Marqui can be a misspelling of other words (marquis, marque, marquee) and has some unrelated non-English meanings. Similarly, Conversen is a verb form in Spanish. I think that Bulldog Solutions, FirstWave and LoopFuse also gain some hits because of their component words, even though I tried to keep them out of the search results. The bottom line here is you have to throw away so many terms that the remaining rankings don’t signify much. So, in general, search keyword rankings need close consideration before you can accept them as a meaningful measure of importance.

How about Google hits? I’ll show them alongside the Google Keywords as well as Alexa rank.

Vendor | Google hits | Google Keywords avg search volume | Alexa share x 10^7

Already in Guide:
Eloqua | 118,000 | 1,900 | 70,700
Silverpop | 111,000 | 1,790 | 30,500
Marketo | 103,000 | 839 | 17,000
Manticore Technology | 9,620 | 113 | 6,100
Market2Lead | 25,900 | 318 | 4,800
Vtrenz | 35,200 | 752 | 3,600

Marketing Automation:
Unica / Affinium* | 7,750 | 6,600 | 8,500
Alterian | 262,000 | 861 | 2,500
Aprimo | 161,000 | 1,600 | 2,200
Neolane | 40,200 | 1,340 | 1,690

Other Demand Generation:
MarketBright | 34,500 | 186 | 5,400
Pardot | 27,600 | 210 | 3,600
Marqui * | 1,370,000 | 1,300 | 4,400
ActiveConversion | 16,800 | 46 | 3,400
Bulldog Solutions | 9,340 | 442 | 3,200
OfficeAutoPilot | 777 | 0 | 2,000
Lead Genesys | 5,880 | 74 | 1,450
LoopFuse | 95,400 | 260 | 1,090
PredictiveResponse | 21,800 | 36 | 330
FirstWave Technologies | 13,400 | 386 | 170
NurtureMyLeads | 1,050 | 0 | 140
Customer Portfolios | 12,200 | 0 | 90
Conversen* | 2,790 | 170 | 70
FirstReef | 18,100 | 12 | 10


Here the impact of limiting Manticore to “Manticore Technology” shows up even more clearly (although Manticore truly doesn’t get much Web attention). I limited the Unica test to “Unica Affinium” since the number of hits is otherwise over 100 million; but this seems to excessively depress the results. Note that the low ranking for Alterian has now been reversed; in fact, Alterian has the most hits of all, and the marketing automation group in general shows more activity than the demand generation vendors. That could be true – those vendors have been around longer. Or it could be a fluke.

Once again, the Other Demand Generation group has a big problem with Marqui and perhaps smaller problems with LoopFuse and FirstReef. Even excluding those, the numbers jump around a great deal. As with keywords, these figures don’t seem to be a reliable measure of anything.

Let’s try one more measure: the blogosphere. Here I tried three different services: Technorati, BlogPulse and Ice Rocket.

Vendor | Technorati blog posts | Blogpulse blog posts | Ice Rocket all posts | Alexa share x 10^7

Already in Guide:
Eloqua | 130 | 267 | 286 | 70,700
Silverpop | 70 | 119 | 188 | 30,500
Marketo | 3 | 179 | 229 | 17,000
Manticore Technology | 0 | 12 | 56 | 6,100
Market2Lead | 0 | 7 | 25 | 4,800
Vtrenz | 0 | 30 | 53 | 3,600

Marketing Automation:
Unica / Affinium* | 0 | 6 | 43 | 8,500
Alterian | 8 | 119 | 145 | 2,500
Aprimo | 0 | 118 | 139 | 2,200
Neolane | 0 | 33 | 64 | 1,690

Other Demand Generation:
MarketBright | 1 | 23 | 33 | 5,400
Pardot | 0 | 32 | 33 | 3,600
Marqui Software* | 5 | 15 | 19 | 4,400
ActiveConversion | 0 | 6 | 12 | 3,400
Bulldog Solutions | 0 | 30 | 43 | 3,200
OfficeAutoPilot | 0 | 5 | 5 | 2,000
Lead Genesys | 0 | 1 | 5 | 1,450
LoopFuse | 4 | 48 | 43 | 1,090
PredictiveResponse | 0 | 0 | 0 | 330
FirstWave Technologies | 0 | 5 | 11 | 170
NurtureMyLeads | 0 | 1 | 5 | 140
Customer Portfolios | 0 | 0 | 3 | 90
Conversen* | 0 | 2 | 0 | 70
FirstReef | 0 | 0 | 0 | 10



Results for all three services are roughly consistent, although Technorati gets many fewer hits and Ice Rocket finds a few more than Blogpulse. The major anomaly is the low value for Unica, but that happens because I actually searched on Unica Affinium, to avoid all the irrelevant hits on Unica alone. Similarly, I searched on Marqui Software to avoid unrelated hits on Marqui. The high values for Bulldog Solutions and LoopFuse are valid (I scanned the actual hits); these two vendors just managed to snag a relatively high number of blog mentions. Remember we are looking at very small numbers here: it doesn’t take much to get 40 blog mentions. Nor, if we trust the Alexa figures, do they translate into much Web traffic. However, the blog hits might explain the relatively high keyword search counts for those two vendors.

Well, I hope you enjoyed the trip. This is far from an exhaustive analysis of the issue, but based on the information available, I’d say that Alexa Web traffic is the most useful measure for assessing the market presence of different demand generation vendors, and blog mentions have at least some value. Google hits and keyword searches capture too many unrelated items to be reliable.