Saturday, February 28, 2009

Rate This Neutral: Scout Labs Social Media Monitoring is Definitely Cool, Possibly Accurate

Sure I like flashing lights and buzzers: what technologist doesn’t? And if a product has all that plus a low price, it’s darn near irresistible. So I was quite excited when I saw Scout Labs, a very nicely packaged social media monitoring tool that combines automated search, sentiment identification, importance ranking, trend reporting, alerts, bookmarking, and collaboration for under $300 per month. What’s not to like?

A couple of things, it turns out. But let’s look at the good stuff first. Scout Labs combines three of the five social media measures I proposed last week (tracking mentions, identifying mentioners, measuring influence, understanding sentiment and measuring impact). Specifically, it searches blogs, news feeds, video and photo sites, Twitter and some social network sites (although not yet the big ones); provides influence measures; and classifies blog posts by sentiment. It doesn’t attempt to identify mentioners (i.e., track multiple posts by the same individual), or to measure the impact of an item on its audience. But three out of five is pretty good.

More important, the things that Scout Labs does, it does well. The search feature lets users specify multiple terms and whether each term is required, relevant or excluded. Once a search is defined, the system will automatically scan the top 12 million blogs for qualified entries, rate their sentiments as positive, negative or neutral, and show them in a list. Each item on the list shows the blog headline and phrases with the search terms highlighted. A side box shows common words in all the entries, ranked by frequency. This by itself gives a quick view of what’s being said about the search target.
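The required / relevant / excluded logic is a standard boolean filter. A minimal sketch of how such a search definition might work (my own illustration, not Scout Labs’ actual implementation; I’m assuming “relevant” means “at least one must appear”):

```python
def matches(text, required=(), relevant=(), excluded=()):
    """Return True if an entry qualifies for a saved search.

    required: every term must appear.
    excluded: no term may appear.
    relevant: if any are given, at least one must appear.
    """
    text = text.lower()
    if any(term.lower() in text for term in excluded):
        return False
    if not all(term.lower() in text for term in required):
        return False
    if relevant and not any(term.lower() in text for term in relevant):
        return False
    return True

# Example: track a product but skip job postings
print(matches("Acme CRM review: great workflow",
              required=("acme",), excluded=("hiring",)))  # True
print(matches("Acme is hiring CRM engineers",
              required=("acme",), excluded=("hiring",)))  # False
```

A production system would of course stem words and handle phrases, but the three-way term classification is the heart of it.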

Users can drill into the listed items to see the full entry, details about where it came from, how many external links attach to the item and its source, and the sentiment rating. They can manually revise the rating, bookmark the item with keywords, attach a note for discussion, and email a link with a system-generated summary and the user’s own comments to anyone the user chooses. The system currently uses the link counts as an influence measure, and can rank the items by influence or date. Scout Labs is working to upgrade its influence metric by integrating Web traffic data and a measure of the source’s relevance to the search topic.

But there’s more. The system can prepare graphs showing trends in volume, sentiment, and share of total blog mentions. Graphs can compare statistics for up to four different searches. Users can specify the date ranges to report on, currently going back up to three months and soon extending to six months.

Things are a little less exciting once you move beyond the blogosphere. The system will list search results for photo sites, video sites and Twitter, but doesn’t offer sentiment tracking or graphs. Scout Labs is working on adding sentiment tracking to Twitter comments. I guess it's not fair to ask them to measure sentiment for photos or videos.

As to pricing, the smallest Scout Labs plan allows five saved searches for $99 per month, although the company expects most businesses will take plans for 25 or more searches, which start at $249 per month. There are no limits on the number of users or search hits in any plan, and the system continuously updates the results of the saved searches.

So far so good. There's a free 30-day trial, so I set up two test searches in Scout Labs, each for a demand generation software vendor I track closely. The system found many of the posts I expected, and it was definitely fun and convenient to dig into them. If I worked at one of those firms, I would gladly pay $249 per month for this.

But then I ran the same searches in IceRocket, a free tool that also does searches of blogs and other sources. IceRocket found nearly twice as many hits during the same time period, and they looked legitimate. Ouch. But Scout Labs acknowledges that its 12 million blogs don’t cover the entire blogosphere (over 100 million blogs, last I heard), and it does let you add feeds if one you want is missing. Plus IceRocket doesn’t support saved searches or do any of the other cool stuff. So I’m a little worried about coverage but still willing to pay Scout Labs’ fee.

Next I took a closer look at the sentiment ratings in the Scout Labs results. I didn’t expect them to be perfect, but was seriously disappointed. On one search, 32 of 39 items were labeled as neutral. Some of those were actually pretty positive, but, as Scout Labs explains in a recent blog post, they try to be conservative by labeling items as neutral unless the tone is clear. Fair enough. But the seven positive items were all pretty much neutral too. For example, several were help wanted postings that simply specified experience with the products in question. There were no items classified as negative, although one or two of the posts arguably could have been.

In the blog post I just mentioned, Scout Labs offers a detailed discussion of its sentiment rating technique. The gist is that they don’t just count “happy” and “sad” words, but semantically analyze each entry to understand which words relate to the search topic. Sounds good in theory. They also say their automated ratings agree with college-educated humans about 75% of the time. In comparison, they say, college-educated humans agree with each other about 85% of the time. (Clearly they are not talking about married couples.)

But if the vast majority of items are neutral, that’s less useful than it sounds. Remember the basic statistics: if 80% of the items are neutral, then a system that blindly ranks everything as neutral will be correct 80% of the time. The ratings that really count are the positives and negatives, and I wonder how often a human would agree with those ratings in Scout Labs. I’d want to look at that much more closely before deciding whether to rely on Scout Labs' results.
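The baseline arithmetic is easy to sketch. Using the 32-of-39-neutral mix from my own test (the label counts below are from my results; everything else is illustration):

```python
# If most items are truly neutral, a "classifier" that labels
# everything neutral already scores high overall accuracy -- so
# overall agreement says little about the positive and negative
# calls that actually matter.
from collections import Counter

labels = ["neutral"] * 32 + ["positive"] * 7  # my 39-item test search
majority = Counter(labels).most_common(1)[0][0]
baseline_accuracy = labels.count(majority) / len(labels)
print(f"always-'{majority}' baseline accuracy: {baseline_accuracy:.0%}")  # 82%
```

Any claimed 75% human-machine agreement has to be judged against that do-nothing baseline, not against zero.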

I'd still pay for Scout Labs for the convenience of the searches, statistics and collaboration tools. As I say, it's a very nice interface. I might even find on closer examination that the sentiment ratings are useful even if they’re only somewhat accurate: after all, they might still get a trend right and call up useful samples. But much as I like the bells and whistles, I’m not as enthusiastic about Scout Labs as when I started.

Monday, February 23, 2009

Vizu Measures the Brand Impact of Online Ads with Just One Question

I wrote last week about a general framework for measuring the marketing impact of social media. This proposed a general hierarchy of:

1. tracking mentions
2. identifying mentioners
3. measuring influence
4. understanding sentiment
5. measuring impact

As with all marketing measurement, the hardest task is the last one: measuring impact. This requires connecting the messages that people receive with their actual subsequent behavior, and hopefully establishing a causal relationship between the two. The fundamental problem is the separation between those two events: unless the message and purchase are part of the same interaction, you need some way to link the two events to the same person. A second problem is the difficulty of isolating the impact of a single event from all the other events that could influence someone’s behavior.

These problems are especially acute for brand advertising, which pretty much by definition is not connected with an immediate purchase. Brand advertisers have long dealt with this by imagining buyers moving through a sequence of stages before they make the actual purchase. A typical set of stages is awareness, interest, knowledge, trial (the first actual purchase) and regular use (repeat purchases).

Even though these stages exist only inside the customer’s head, they can be measured through surveys. So can more detailed attitudes towards a product such as feelings about value or specific attributes. For both types of measurement, marketers can define at least a loose connection between the survey results and eventual product purchases. Although the resulting predictions are far from precise, they offer a way to measure subtle factors, such as the impact of different advertising messages, that techniques based on actual purchases cannot.

The Internet is uniquely well suited for this type of survey-based analysis, since people can be asked the questions immediately after seeing an advertisement. One vendor that does this is Factor TG, which I wrote about last year (click here for the post). Another, which I mentioned last week, is Vizu.

What makes Vizu different from other online brand advertising surveys is that each Vizu survey asks just one question. The question itself changes with each survey, and is based on the specific goal for the particular campaign. Thus, one survey might ask about awareness, while another might ask about purchase intentions. Vizu asks its question to a small sample of people who saw an advertisement and also to a control group of people who were shown something else. It assumes that the difference in answers between the two groups is the result of seeing the advertisement itself.

Although asking a single question may seem like a fairly trivial approach, it actually has some profound implications. The most important one is that it greatly increases response rate: Vizu founder Dan Beltramo told me participation can be upwards of 3 percent, compared with tenths or hundredths of a percent for longer traditional surveys.

This in turn means statistically significant survey results become available much sooner, giving marketers quick answers and letting them watch trends over relatively short time periods. It also provides significant results for much smaller ad campaigns or for panels within larger campaigns. This lets marketers compare results from different Web sites and for different versions of an ad, allowing them to fine-tune their media selections and messages in ways that traditional surveys cannot.
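The exposed-versus-control comparison behind this kind of brand-lift survey is a standard two-proportion test. A sketch with invented counts (the sample sizes and answer counts below are illustrative, not Vizu data):

```python
# Two-proportion z-test: did the exposed group answer the brand
# question "yes" more often than the control group?
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Return (lift, two-sided p-value) for proportions x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1 - p2, p_value

# Hypothetical: 1,500 respondents per cell, 220 vs 165 "yes" answers
lift, p = two_prop_z(x1=220, n1=1500, x2=165, n2=1500)
print(f"brand lift: {lift:+.1%}, p = {p:.3f}")
```

The point of the higher response rate is exactly this: a few thousand answers per cell is enough to call a lift of a few points significant, which a 0.05% response rate would never deliver for a small campaign.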

Another benefit of simplicity is lower costs. Vizu can charge just $5,000 to $10,000 per campaign, allowing marketers to use it on a regular basis rather than only for special projects. Vizu also has little impact on the performance of the Web sites running the surveys, reducing cost from the site owner's perspective.

The disadvantage of asking just one question is that you get just one answer. This prevents detailed analysis of results by audience segments, or exploration of how an ad affects multiple brand attributes. Vizu actually does provide a little information about the impact of frequency, drawn from cookies that track how often a given person has been exposed to a particular advertisement. Vizu also tracks where the person saw the ad, allowing some inferences about respondents based on the demographics of the host sites. Mostly, however, Vizu argues that a single answer is a good thing in itself because it keeps everyone involved focused on the ad campaign’s primary objective.

According to Beltramo, Vizu’s main customers are online ad networks and site publishers, who use the Vizu results as a way to show their accountability to ad agencies and brand advertisers. Some agencies and advertisers also contract with the firm directly.

What, you may be asking, has all this to do with social media measurement? Vizu’s approach applies not just to display advertising but also to social media projects such as downloadable widgets and micro sites.

Even though Vizu can’t fully bridge the measurement gap between exposure and actual purchases, it does offer more insights than simply counting downloads, clickthroughs or traffic. In a world where so little measurement is available, every improvement is welcome.

Thursday, February 19, 2009

Tools for Social Media Measurement

I was whining last week on my other blog about the lack of integrated solutions for social media analytics. No sooner had I written that, of course, than up popped several interesting solutions to prove me wrong. I plan to write soon about a couple of specific products, but will use this post to set a framework for evaluation.

I suppose I should start with a definition of “social media”. By this I simply mean any communication method that allows users to interact directly with each other, as opposed to a broadcast medium where only a few people can send messages. I’m not intending to be especially restrictive here – I’d include blogs, public forums, Facebook, Myspace, YouTube, Flickr, Twitter, LinkedIn, Plaxo and many others. These all provide a huge stream of public chatter that marketers can tap into both to monitor what is being said about their products and to proactively spread their preferred messages.

From a measurement perspective, I see several distinct functions. Today, these are largely served by separate point solutions. Integrated systems are beginning to emerge that combine at least a few. The ultimate integrated system would service them all. The functions are:

- tracking mentions. This is the simplest goal: uncovering and reporting on social media events that relate to your product, brand or company. The fundamental tool here is the keyword search. Many systems do this, and some even combine different social network sources to provide a consolidated report. Google Alerts is probably the best known, although it doesn’t do much with social media aside from blogs. BoardTracker and Linqia are more focused on social communities.

- identifying mentioners. Most social media comments are signed with a user ID of some sort, but the identity of the person behind that ID is often not clear. I haven’t actually seen tools that address this, but they probably exist. What’s needed is to look at whatever public profile is available, use that to find out other information about the person, and then in turn see if you can find that person in other social media. As a not-too-scary example, I recently saw a Twitter post that mentioned a vendor I follow. Checking out the poster's profile to see if she was worth “following”, I saw that she was from a small town where I used to live. Curious, I then found her in LinkedIn and discovered the company she worked for. Yes, this sounds uncomfortably like stalking, but it’s old news that the Internet is really good for that. What’s interesting here is the potential to connect the background of an individual with her social media profile. The steps that I took could easily be automated; indeed, products like ZoomInfo do something similar, although so far as I can tell they don't include social media other than blogs.

- measuring influence. Influence has two overlapping dimensions: the influence of an individual mentioner, and the influence of a particular event. The mentioner’s influence is related to the profile I just mentioned, but also to blog readership, “friends” and network members in various social platforms, authority as measured by links and recommendations, etc. Again, these statistics are available in a scattered fashion for individual social media, and it wouldn’t be hard to build a system to pull them together once you had linked the user IDs. Surely someone is out there doing this but I haven’t tripped over them. Maybe if I spent more time at the gym?

Measuring the influence of a particular event is actually easier. It is a matter of links, views, downloads, recommendations, ratings, etc. The statistics are often published along with the item itself. One possible tool is TrackUr, a low-cost product (from $18 to $197 per month) that scores Web sites based on “the number of backlinks pointing to a web site, the number of blog discussions, an estimate of traffic, and even the number of times the web site has discussed the phrase in the past.” Another that I suspect costs much more is Radian6, which “tracks comments, viewership, user engagement and other metrics, 24/7, so that you can clearly see the reach and affect[sic] each post has on the community.” It also can “uncover the influencers online by topic, based on user-determined formula weightings.”
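Pulling those scattered statistics into a single score is conceptually simple: a weighted combination of the inputs the TrackUr quote lists. A toy version (the weights and the log-scaling are my own guesses, not either vendor’s actual formula):

```python
# Toy influence score over the kinds of inputs TrackUr describes:
# backlinks, blog discussions, estimated traffic, topic history.
from math import log1p

WEIGHTS = {"backlinks": 0.4, "blog_discussions": 0.3,
           "traffic_estimate": 0.2, "topic_mentions": 0.1}

def influence(metrics):
    # log1p dampens the huge ranges: a site with 10x the backlinks
    # is more influential, but not 10x more influential
    return sum(w * log1p(metrics.get(k, 0)) for k, w in WEIGHTS.items())

big_blog = {"backlinks": 5000, "blog_discussions": 120,
            "traffic_estimate": 80000, "topic_mentions": 15}
small_blog = {"backlinks": 40, "blog_discussions": 3,
              "traffic_estimate": 500, "topic_mentions": 1}
print(influence(big_blog) > influence(small_blog))  # True
```

Radian6’s “user-determined formula weightings” suggest exactly this shape, with the marketer rather than the vendor choosing the weights.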

- understanding sentiment. This is the domain of semantic analysis (that’s a pun, kind of), which is a long-established field with many players. One specialist applying its technology to real-time Web content is Hapax. Solutions integrated more closely with social media search include Crimson Hexagon and newly launched Scout Labs. Scout Labs is also a low-cost option, with plans starting at $99 per month and currently offering a 30-day free trial.

- measuring impact. Ah, the bottom line: what did people exposed to the social media event actually do? Even the Web hasn’t yet reached the stage of universal behavior tracking that would really let you answer this, and I personally hope it never does. But one product that gets close is Tealium Social Media, which builds a list of Web URLs (both social media and regular online media) related to your product, checks which of those your Web site visitors had seen previously, and pops the results into Google Analytics so you can treat the Web events like any other visitor source. (See my earlier blog post on Tealium for details.) At the other end of the process, Vizu lets marketers embed a question in Web ads that asks about the brand attitudes, and compares this against answers of people who didn’t see the ad, thereby measuring the net impact of the ad itself. The vendor has embedded its questions in social media applications from vendors including Lotame (ads in social networks), AdNectar (social ‘gifting’) and Buddy Media (custom social applications). See their press release for details.

Friday, February 6, 2009

When All Marketing is Internet Marketing, All Agencies are Internet Agencies

A little press notice this week reported a January 20 announcement from Ogilvy North America of “strategic alliances” with marketing automation software vendor Unica and marketing database integrator Pluris. On its face, this seemed to suggest a change in strategy for all three firms, moving towards a database marketing agency approach that combines technology, marketing strategy, data and analytics. But close reading of the press release shows this is just an agreement to make referrals. When I asked one of the players involved, they confirmed that’s all there is.

Nevertheless, the announcement prompted a little flurry of speculation in the Twittersphere / blogosphere (we need a new term -- blabosphere?) about changes in the role of traditional advertising agencies. Even though the database marketing agency model has remained a relatively small niche for decades (pioneers like Epsilon were founded in the late 1960s), the thought seems to be that it will soon become the dominant model.

I’m skeptical. In some ways, the basic technologies for customer management have actually become more accessible to non-specialist companies. In particular, the hardest part, building a customer database, has largely been taken over by customer relationship management systems. Once that’s in place, it’s not much more work to add a serious marketing automation system. In fact, all you do is buy software like Unica’s—which is why a firm like Ogilvy doesn’t need to build its own, or to have a particularly intimate relationship with Unica itself. Yes, Ogilvy and other agencies need database marketing competencies. But all they really need to do is manage a firm like Acxiom doing the actual work. This takes expertise but much less capital and human investment than doing it yourself.

So, if database marketing has become easier, there is even less need than in the past for an integrated database marketing agency. Database marketing has remained a small part of the industry because its scope is too limited, particularly in dealing with non-customers (who mostly are not in your database). (Yes, the credit card industry is an exception.)

But the Internet is changing the equation substantially. Advertising agencies marginalized database marketing because customer management is not their core business. But advertising agencies exist to buy ads, and Internet advertising is now too important for them to ignore. Plus, Internet advertising is much closer to agencies’ traditional core business of regular advertising, so it’s much easier for them to conceive of it as a logical extension of their offerings. Even though many specialist agencies sprang up to handle early Internet advertising, the traditional agencies are now reasserting their control.

Now here’s the key point: managing Internet ads is not the same as managing traditional advertising. Ad agencies will develop new skills and methods for the Internet, and those skills and methods will eventually spread throughout the agency as a whole. Doing a good job at creating, buying and evaluating Internet advertising requires vastly more data and analysis than doing a good job at traditional mass media. It will take a while for the agencies to develop these skills and procedures, but these are smart people with ample resources who know their survival is at stake. They will keep working at it until they get it right.

Once that happens, those skills and methods won’t stop at the door of the Internet department. Agencies will recognize that the same skills and methods can be applied to other parts of their business, and frankly I expect that they’ll find themselves frustrated to be reminded how poorly traditional marketing has been measured. Equipped with new tools and enlightened by a vision of truly modern marketing management, agency leaders will bring the rest of their business up to Internet marketing standards of measurement and accountability. It’s like any technology: once you’ve seen color TV, you won’t go back to black and white.

We’re already seeing hints of this in public relations, where the traditional near-total lack of performance measurement is rapidly being replaced by detailed analyses of the impact of individual placements. In fact, the public relations people are even pioneering quantification of social network impact, perhaps the trickiest of all Internet marketing measurement challenges.

So, yes, I do see a great change in the role of advertising agencies. I even expect they will resemble the integrated strategy, technology, analytics and data of today’s database marketing agencies. But it won’t happen because the ad agencies adopt a database marketing mindset. It will happen because they want to keep on making ads.

Sunday, February 1, 2009

Razorfish Study Measures Direct Response to Social Media

I’ve been spending more time than I should recently on Twitter (follow me at @draab). It provides a fascinating peek into the communal stream-of-consciousness, which would be pretty horrifying (“Britney…Brad…Jen…Obama…groceries…Britney…Britney…Britney”) if you couldn’t choose the people and search terms you follow. This filtering (which I do via a great product called Tweetdeck) turns Twitter into a very efficient source of information I wouldn’t see otherwise.

Naturally, my interest in Twitter also extends to how you measure its business value, and by extension the value of social media in general. Since the people I follow on Twitter are both marketers and Twitter users, they discuss this fairly often. One recent post (technically a “tweet” but the term seems so childish) pointed to a study Social Media Measurement: Widgets and Applications by interactive marketing agency Razorfish.

The study turns out to be a very brief and straightforward presentation of two projects, both involving creation of downloadable widgets. One was promoted largely through conventional media and the other through widget distribution service Gigya. For each project, we’re told the costs, number of visitors and/or downloads, how much time and money they spent, and the return on investment. The not-very-surprising findings were that people who spent more time also spent more money and, more broadly, that “social media may be used effectively as a way of engaging users and potential customers.” A less predictable and potentially more significant finding from the first project was that people who were referred by a friend downloaded more often and spent much more money than people who were attracted by the media. The numbers were: downloads, 23% vs. 8%; spend any money, 9% vs. 1%; and amount spent, $23.00 vs $3.14. But the study points out that the numbers were very small—only 216 individuals arrived at the landing page as a result of a friend’s email, vs. 41,599 from media sources. These figures are drawn only from the first project because the second project couldn’t be measured this way.
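That small-sample caveat is worth quantifying. A Wilson 95% interval shows how wide 216 visitors leaves the friend-referred purchase-rate estimate (buyer counts below are reconstructed from the cited rates, so treat them as approximate):

```python
# Wilson score interval for a proportion: better behaved than the
# naive normal interval when samples are small or rates are near 0.
from math import sqrt

def wilson(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# ~9% of 216 friend-referred visitors spent money vs ~1% of 41,599
lo, hi = wilson(round(0.09 * 216), 216)
print(f"friend-referred purchase rate: {lo:.1%} to {hi:.1%}")
lo2, hi2 = wilson(round(0.01 * 41599), 41599)
print(f"media-sourced purchase rate: {lo2:.2%} to {hi2:.2%}")
```

The friend-referred interval is wide, but even its low end sits well above the tight media-sourced interval, so the direction of the finding survives the small sample even if the 9x multiple shouldn’t be taken literally.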

From a marketing measurement standpoint, none of this seems to break any new ground. Visitors are tracked by their source URLs and subsequent behavior is tracked through cookies. The ROI is calculated on straight revenue (it really should be profit) and seems to include only immediate purchases. This is particularly problematic for the second project, which promoted a $399 product with very limited supply that sold out in one minute. (The study doesn’t say, but based on this award citation it seems to be a special edition Nike Air Jordan shoe.) Clearly the point of such Air Jordan promotions isn’t immediate revenue, but brand building at its hard-to-measure best. The real challenge of evaluating social media is measuring this type of indirect impact. This study makes no claim to do that, but I’ll keep my eyes out for others that do.