Saturday, February 28, 2009
Rate This Neutral: Scout Labs Social Media Monitoring is Definitely Cool, Possibly Accurate
A couple of things, it turns out. But let’s look at the good stuff first. Scout Labs combines three of the five social media measures I proposed last week (tracking mentions, identifying mentioners, measuring influence, understanding sentiment and measuring impact). Specifically, it searches blogs, news feeds, video and photo sites, Twitter and some social network sites (although not yet the big ones); provides influence measures; and classifies blog posts by sentiment. It doesn’t attempt to identify mentioners (i.e., track multiple posts by the same individual) or to measure the impact of an item on its audience. But three out of five is pretty good.
More important, the things that Scout Labs does, it does well. The search feature lets users specify multiple terms and whether each term is required, relevant or excluded. Once a search is defined, the system will automatically scan the top 12 million blogs for qualified entries, rate their sentiments as positive, negative or neutral, and show them in a list. Each item on the list shows the blog headline and phrases with the search terms highlighted. A side box shows common words in all the entries, ranked by frequency. This by itself gives a quick view of what’s being said about the search target.
Users can drill into the listed items to see the full entry, details about where it came from, how many external links attach to the item and its source, and the sentiment rating. They can manually revise the rating, bookmark the item with keywords, attach a note for discussion, and email a link with a system-generated summary and the user’s own comments to anyone the user chooses. The system currently uses the link counts as an influence measure, and can rank the items by influence or date. Scout Labs is working to upgrade its influence metric by integrating Web traffic data and a measure of the source’s relevance to the search topic.
But there’s more. The system can prepare graphs showing trends in volume, sentiment, and share of total blog mentions. Graphs can compare statistics for up to four different searches. Users can specify the date ranges to report on, currently going back up to three months and soon extending to six months.
Things are a little less exciting once you move beyond the blogosphere. The system will list search results for photo sites, video sites and Twitter, but doesn’t offer sentiment tracking or graphs. Scout Labs is working on adding sentiment tracking to Twitter comments. I guess it's not fair to ask them to measure sentiment for photos or videos.
As to pricing, the smallest Scout Labs plan allows five saved searches for $99 per month, although the company expects most businesses will take plans for 25 or more searches, which start at $249 per month. There are no limits on the number of users or search hits in any plan, and the system continuously updates the results of the saved searches.
So far so good. There's a free 30 day trial, so I set up two test searches in Scout Labs, each for a demand generation software vendor I track closely. The system found many of the posts I expected, and it was definitely fun and convenient to dig into them. If I worked at one of those firms, I would gladly pay $249 per month for this.
But then I ran the same searches in IceRocket, a free tool that also searches blogs and other sources. IceRocket found nearly twice as many hits during the same time period, and they looked legitimate. Ouch. But Scout Labs acknowledges that its 12 million blogs don’t cover the entire blogosphere (over 100 million blogs, last I heard), and it does let you add feeds if one you want is missing. Plus IceRocket doesn’t support saved searches or do any of the other cool stuff. So I’m a little worried about coverage but still willing to pay Scout Labs’ fee.
Next I took a closer look at the sentiment ratings in the Scout Labs results. I didn’t expect them to be perfect, but was seriously disappointed. On one search, 32 of 39 items were labeled as neutral. Some of those were actually pretty positive, but, as Scout Labs explains in a recent blog post, they try to be conservative by labeling items as neutral unless the tone is clear. Fair enough. But the seven positive items were all pretty much neutral too. For example, several were help wanted postings that simply specified experience with the products in question. There were no items classified as negative, although one or two of the posts arguably could have been.
In the blog post I just mentioned, Scout Labs offers a detailed discussion of its sentiment rating technique. The gist is that they don’t just count “happy” and “sad” words, but semantically analyze each entry to understand which words relate to the search topic. Sounds good in theory. They also say their automated ratings agree with college-educated humans about 75% of the time. In comparison, they say, college-educated humans agree with each other about 85% of the time. (Clearly they are not talking about married couples.)
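To make the distinction concrete, here is a toy contrast between naive word counting and topic-aware scoring. The wordlists and the proximity-window heuristic are my own illustration, not Scout Labs’ actual technique:

```python
# Crude contrast between "count happy/sad words" and topic-aware scoring.
# Everything here (wordlists, window size) is invented for illustration.
POSITIVE = {"great", "love", "impressive"}
NEGATIVE = {"bad", "broken", "hate"}

def naive_sentiment(text):
    """Score the whole text, regardless of what the words refer to."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def topic_sentiment(text, topic, window=3):
    """Only count sentiment words within `window` words of a topic mention."""
    words = text.lower().split()
    hits = [i for i, w in enumerate(words) if w == topic]
    score = 0
    for i, w in enumerate(words):
        if any(abs(i - j) <= window for j in hits):
            score += (w in POSITIVE) - (w in NEGATIVE)
    return score

text = "The weather was bad but the Acme widget is great"
print(naive_sentiment(text), topic_sentiment(text, "widget"))  # → 0 1
```

The naive counter nets out to zero because "bad" (about the weather) cancels "great" (about the product); the topic-aware version correctly scores the widget as positive.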
But if the vast majority of items are neutral, that’s less useful than it sounds. Remember the basic statistics: if 80% of the items are neutral, then a system that blindly ranks everything as neutral will be correct 80% of the time. The ratings that really count are the positives and negatives, and I wonder how often a human would agree with those ratings in Scout Labs. I’d want to look at that much more closely before deciding whether to rely on Scout Labs' results.
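The arithmetic is worth spelling out. A quick sketch, using the counts from my own test search:

```python
# Back-of-the-envelope check: if most items are neutral, a classifier that
# labels everything "neutral" already scores high accuracy without doing
# any real work. Counts below are from my 39-item test search.

def baseline_accuracy(counts):
    """Accuracy of always predicting the most common class."""
    total = sum(counts.values())
    return max(counts.values()) / total

counts = {"neutral": 32, "positive": 7, "negative": 0}
print(round(baseline_accuracy(counts), 2))  # → 0.82
```

In other words, a do-nothing rater would have scored 82% on my sample, which makes a 75% agreement rate look considerably less impressive.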
I'd still pay for Scout Labs for the convenience of the searches, statistics and collaboration tools. As I say, it's a very nice interface. I might even find on closer examination that the sentiment ratings are useful even if they’re only somewhat accurate: after all, they might still get a trend right and call up useful samples. But much as I like the bells and whistles, I’m not as enthusiastic about Scout Labs as when I started.
Friday, February 6, 2009
When All Marketing is Internet Marketing, All Agencies are Internet Agencies
Nevertheless, the announcement prompted a little flurry of speculation in the Twittersphere / blogosphere (we need a new term -- blabosphere?) about changes in the role of traditional advertising agencies. Even though the database marketing agency model has held a relatively small niche for decades (pioneers like Epsilon were founded in the late 1960s), the thought seems to be that it will soon become the dominant model.
I’m skeptical. In some ways, the basic technologies for customer management have actually become more accessible to non-specialist companies. In particular, the hardest part, building a customer database, has largely been taken over by customer relationship management systems. Once that’s in place, it’s not much more work to add a serious marketing automation system. In fact, all you do is buy software like Unica’s—which is why a firm like Ogilvy doesn’t need to build its own, or to have a particularly intimate relationship with Unica itself. Yes, Ogilvy and other agencies need database marketing competencies. But all they really need to do is manage a firm like Acxiom doing the actual work. This takes expertise but much less capital and human investment than doing it yourself.
So, if database marketing has become easier, there is even less need than in the past for an integrated database marketing agency. Database marketing has remained a small part of the industry because its scope is too limited, particularly in dealing with non-customers (who mostly are not in your database). (Yes, the credit card industry is an exception.)
But the Internet is changing the equation substantially. Advertising agencies marginalized database marketing because customer management is not their core business. But advertising agencies exist to buy ads, and Internet advertising is now too important for them to ignore. Plus, Internet advertising is much closer to agencies’ traditional core business of regular advertising, so it’s much easier for them to conceive of it as a logical extension of their offerings. Even though many specialist agencies sprang up to handle early Internet advertising, the traditional agencies are now reasserting their control.
Now here’s the key point: managing Internet ads is not the same as managing traditional advertising. Ad agencies will develop new skills and methods for the Internet, and those skills and methods will eventually spread throughout the agency as a whole. Doing a good job at creating, buying and evaluating Internet advertising requires vastly more data and analysis than doing a good job at traditional mass media. It will take a while for the agencies to develop these skills and procedures, but these are smart people with ample resources who know their survival is at stake. They will keep working at it until they get it right.
Once that happens, those skills and methods won’t stop at the door of the Internet department. Agencies will recognize that the same skills and methods can be applied to other parts of their business, and frankly I expect they’ll find themselves frustrated to be reminded how poorly traditional marketing has been measured. Equipped with new tools and enlightened by a vision of what truly modern marketing management could be, agency leaders will bring the rest of their business up to Internet marketing standards of measurement and accountability. It’s like any technology: once you’ve seen color TV, you won’t go back to black and white.
We’re already seeing hints of this in public relations, where the traditional near-total lack of performance measurement is rapidly being replaced by detailed analyses of the impact of individual placements. In fact, the public relations people are even pioneering quantification of social network impact, perhaps the trickiest of all Internet marketing measurement challenges.
So, yes, I do see a great change in the role of advertising agencies. I even expect they will resemble the integrated strategy, technology, analytics and data of today’s database marketing agencies. But it won’t happen because the ad agencies adopt a database marketing mindset. It will happen because they want to keep on making ads.
Wednesday, January 21, 2009
Tealium Measures Response to Social Media
Tealium, a developer of specialized Web analytics tools founded last year by veterans of WebSideStory/Visual Sciences, offers Tealium Social Media as a solution. It first builds a list of Internet references to a product, based on automated searches of sources such as Google News and Blogsearch, YouTube, Bloglines, Twitter, etc., plus any other RSS source you might have available. The system then checks whether visitors to a company Web site have previously visited one of these references by checking the cache of the visitor’s browser. If a match is found, the visit is attributed to that source.
I’m going to stop right here and say that this struck me as raising a significant privacy issue. I hadn’t really given the matter any thought but had assumed my browser history was private. But a Google search on "read browser history" shows that a method to check whether someone has visited a specified URL is widely known. This is essentially what Tealium does, and it isn’t as invasive as reading the full history. More important, Tealium doesn't track individuals: rather, it reports how many people come from a given source. This is little different from conventional Web analytics, so I guess there is no particular privacy objection to the product. And, yes, you can always clear your browser cache or shorten the retention period. Quick show of hands: how many of you have actually done that? I thought so. End of sermon.
Tealium’s approach won’t be 100% accurate, since some people really do clean out their browser caches. A few people will also access a site from a different computer or browser than the one where they saw the reference. Nor will Tealium capture referrals, such as an email I sent you with a product’s name after reading an article about it. But most of these problems apply to other Web analytics techniques, and on the whole the data should be accurate enough to be useful. It will certainly give a good measure of the relative power of different sources.
The system must also choose how to assign credit if the visitor’s cache contains more than one of the reference items. Tealium handles this by ranking the items on popularity and recency, and assigning the match to the highest ranked item. This seems reasonable.
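A hypothetical sketch of how such a tie-break might work. Tealium hasn’t published its exact weighting, so the popularity-times-recency score below is purely my assumption:

```python
# Hypothetical sketch of Tealium-style credit assignment: when a visitor's
# cache matches several tracked references, rank them by a blend of
# popularity and recency and credit the top one. The scoring formula is
# my invention, not Tealium's actual method.
from datetime import datetime, timedelta

def assign_credit(matches, now):
    """matches: list of dicts with 'source', 'popularity' (hits), 'seen' (datetime)."""
    def score(m):
        days_old = (now - m["seen"]).days
        recency = 1.0 / (1 + days_old)       # newer references score higher
        return m["popularity"] * recency     # blend popularity with recency
    return max(matches, key=score)["source"]

now = datetime(2009, 1, 21)
matches = [
    {"source": "news-review", "popularity": 500, "seen": now - timedelta(days=10)},
    {"source": "tweet-123",   "popularity": 40,  "seen": now - timedelta(days=1)},
]
print(assign_credit(matches, now))  # → news-review (500/11 ≈ 45 beats 40/2 = 20)
```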
Of course, Tealium can only measure Web-based activities. This almost goes without saying, but it's worth reminding ourselves every so often that there are still plenty of non-Web interactions taking place.
Tealium originally intended to present its social media results in a stand-alone interface. But the vendor decided a couple of months ago to instead feed them into existing Web analytics products, and Google Analytics in particular. This reduced the work Tealium had to perform (no reporting or data storage), hence lowering development and operating costs. From the client viewpoint, it integrates the social media results with other Web analytics, allowing direct comparisons between paid and unpaid media. In addition, downstream measures such as conversions or purchases automatically become available for the Tealium-derived sources. This was a very wise move.
What Tealium won’t provide is measures of sentiment, such as whether a particular social media reference was praise or criticism, of comments on particular subjects, or of changes in customer attitudes. Nor does it claim to. There are of course many other systems in this field; see last week’s post on reputation monitoring systems for a pointer to a detailed list.
Pricing of Social Media starts at $2,000 for implementation plus $250 per month with a one year contract. Price grows slightly as users add keywords and data feeds but is not related to actual traffic volume. The system has been in beta test with six clients until recently, and is being formally launched today.
Social Media is Tealium’s third product. The other two are WebToCRM, which captures Web visitor data and posts it to a CRM system, and Universal Tag, which lets a single page tag feed visitor data to multiple Web analytics systems.
Friday, December 5, 2008
TraceWorks' Headlight Integrates Online Measurement and Execution
According to TraceWorks CEO Christian Dam, Headlight traces its origins to an earlier product, Statlynx, which measured the return on investment of search marketing campaigns. (This is why Headlight belongs on this blog.) The core technology of Headlight is still the ability to capture data sent by tags inserted in Web pages. These are used to track initial responses to a promotion and eventual conversion events. The conversion tracking is especially critical because it can capture revenue, which provides the basis for detailed return on investment calculations. (Setting this up does require help from your company's technology group; it is not something marketers can do for themselves.)
These functions are now supplemented by functions that let the system actually deliver banner ads, including both an ad serving capability and digital asset management of the ad contents. The system can also integrate with Google AdWords paid search campaigns, automatically sending tracking URLs to AdWords and using those URLs in its reports. It can also capture tracking URLs from email campaigns.
All Web activity tracking may make Headlight sound like a Web analytics tool, but it’s quite different. The main distinction is that Headlight lets users set up and deliver ad campaigns, which is well outside the scope of Web analytics. Nor, on the other hand, does Headlight offer the detailed visitor behavior analysis of a Web analytics system.
The campaign management functions extend both to the planning that precedes execution and to the evaluation that follows it. The planning functions are not especially fancy but should be adequate: users can define activities (a term that Headlight uses more or less interchangeably with campaigns), give them start and end dates, and assign costs. The system can also distinguish between firm plans and drafts. TraceWorks expects to significantly expand workflow capabilities, including sub-tasks with assigned users, due dates and alerts of overdue items, in early 2009.
Evaluation functions are more extensive. Users can define both corporate goals (e.g., total number of conversions) and individual goals (related to specific metrics and activities) for specific users, and have the system generate reports that will compare these to actual results. Separate Key Performance Indicator (KPI) reports show selected actual results over time. In addition, something the vendor calls a “WhyChart” adds marketing activity dates to the KPI charts, so users can see the correlation between different marketing efforts and results. Summary reports can also show the volume of traffic generated by different sources.
The value of Headlight comes not only from the power of the individual features but the fact that they are tightly integrated. For example, the asset management portion of the system can show users the actual results for each asset in previous campaigns. This makes it much easier for marketers to pick the elements that work best and to make changes during campaigns when some items work better than others. The system can also be integrated with other products through a Web Service API that lets external systems call its functions for AdWords campaign management, conversion definition, activity setup, and reporting.
Technology aside, I was quite impressed with the openness of TraceWorks as a company. The Web site provides substantial detail about the product, and includes a Wiki with what looks like fairly complete documentation. The vendor also offers a 14 day free trial of the system.
Pricing also seems quite reasonable. Headlight is offered as a hosted service, with fees ranging from $1,000 to $5,000 per month depending on Web traffic. According to Dam, the average fee is about $1,300 per month. Larger clients include ad agencies who use Headlight for their own clients.
Incidentally, the company Web site also includes an interesting benchmarking offer, which lets you enter information about your own company's online marketing and get back a report comparing you to industry peers. (Yes, I know a marketing information gathering tool when I see one.) At the moment, unfortunately, the company doesn't seem to have enough data gathered to report back results. Or maybe it just didn't like my answers.
TraceWorks released its original Statlynx product in 2003 and launched Headlight in early 2007. The system currently serves about 500 companies directly and through agencies.
Friday, November 21, 2008
Twitter Volume for Demand Generation Vendors
A comment on my previous post suggested Twitter mentions as a possible measure of vendor market presence. That had in fact occurred to me, but I hadn't bothered to check because I assumed the volume would be too low. But since the topic had been raised, I figured I'd take a peek.
The first two Twitter monitoring sites I looked at, Twitscoop and Twitterment, seemed to confirm my suspicion: of the three most popular vendors, Eloqua had 6 Twitscoop hits and 3 Twitterment hits; Silverpop had 2 on each; and Marketo had 3 on Twitscoop and none on the other. No point in looking further here.
But then I checked Twitstat. In addition to having a slightly less childish name, it seems to either do a more thorough search or look back further in time: for whatever reason, it found 152 hits for Eloqua, 65 for Silverpop, and 133 for Marketo. Much more interesting.
Alas, the numbers dropped considerably after that, as you can see in the table below. Everything else is in single digits except for two anomalies: LoopFuse with 22 mentions and Bulldog Solutions with a whopping 217. Interestingly, both those sites also had exceptionally high blog hit numbers on IceRocket. The root cause is probably the same: one or two active bloggers or Twitter users (which seems to be the accepted term; I guess we can't call them Twits) are enough to skew the figures when volumes are so low. More specifically, LoopFuse gets a lot of attention because some of its founders are closely tied to the open source community. Bulldog Solutions just seems to have a group of employees who are seriously into Twitter. In fact, I now know more about their lives than I really care to (although there was nothing indiscreet in the posts, I'm pleased to report).
A couple of side notes:
- the very short length of the messages does make them easy to read, which paradoxically means you can actually gather more information from Twitter than by scanning blog posts, because reading the blog posts takes too much time. Of course, when we're dealing with such tiny volumes, there is no way to generalize from what you read: Twitter is strictly anecdotal evidence, and perhaps even dangerous for that reason.
- there seemed to be several Tweets that were purposely sent for marketing purposes. Nothing wrong with that, and they were quite open about it. Just interesting how quickly some firms have picked up on this. (OK, not so quickly: Twitter has been around since 2006 and very popular for about a year now.)
Still, the bottom line for the purposes of measuring demand generation vendors is still the same as for blogs: too little volume to be a reliable measure of relative market interest.
| | Twitstat twitter mentions | IceRocket blog posts | Alexa rank | Alexa share x 10^7 |
| --- | --- | --- | --- | --- |
| Already in Guide: | | | | |
| Eloqua | 152 | 286 | 20,234 | 70,700 |
| Silverpop | 65 | 188 | 29,080 | 30,500 |
| Marketo | 122 | 229 | 68,088 | 17,000 |
| Manticore Technology | 0 | 56 | 213,546 | 6,100 |
| Market2Lead | 5 | 5 | 235,244 | 4,800 |
| Vtrenz | 8 | 53 | 295,636 | 3,600 |
| Marketing Automation: | | | | |
| Unica Affinium | 6 | 43 | 126,215 | 8,500 |
| Alterian | 5 | 145 | 345,543 | 2,500 |
| Aprimo | 6 | 139 | 416,446 | 2,200 |
| Neolane | 5 | 64 | 566,977 | 1,690 |
| Other Demand Generation: | | | | |
| Marketbright | 9 | | 167,306 | 5,400 |
| Pardot | 4 | 33 | 211,309 | 3,600 |
| Marqui Software | 2 | 19 | 211,767 | 4,400 |
| ActiveConversion | 2 | 12 | 257,058 | 3,400 |
| Bulldog Solutions | 219 | 43 | 338,337 | 3,200 |
| OfficeAutoPilot | 2 | 5 | 509,868 | 2,000 |
| Lead Genesys | 1 | 5 | 557,199 | 1,450 |
| LoopFuse | 22 | 43 | 734,098 | 1,090 |
| eTrigue | 1 | | 1,510,207 | 430 |
| PredictiveResponse | 1 | 0 | 2,313,880 | 330 |
| FirstWave Technologies | 0 | 11 | 2,872,765 | 170 |
| NurtureMyLeads | 0 | 5 | 4,157,304 | 140 |
| Customer Portfolios | 0 | 3 | 5,097,525 | 90 |
| Conversen | 1 | 0 | 6,062,462 | 70 |
| FirstReef | 0 | 0 | 11,688,817 | 10 |
Friday, August 1, 2008
'Total Marketing Measurement' Is Closer Than You Think
Why, then, does ActiveConversion call itself a TMM? I think it’s mostly a bit of innocent marketing patter, but it also reflects the system’s ability to track Web visitors’ behavior in great detail and report it to salespeople. This is common among demand generation products, but not a feature of traditional Web analytics, which is more about group behaviors such as how many times each page is viewed. In some sense, this detailed individual tracking could reasonably be described as “total” measurement.
In practice, the “total” measurement by demand generation systems is limited to behavior on company Web pages. This is far from the complete set of interactions between a company and its customers. But as the scope of recorded transactions expands relentlessly—something I personally consider an Orwellian nightmare, but see as inevitable—it’s worth contemplating a world where “total” measurement truly does capture all behaviors of each individual. At that point, marketers will no longer be able to hide behind the traditional barrier of not knowing which messages reached each customer. This will leave them face to face with the need to take all this data and make sense of it.
Much as I love technology, I suspect there are significant limits to how accurately it will be able to measure marketing results at the individual level. But meaningful predictions should be possible for groups of people, and will yield substantial improvements in marketing effectiveness compared with what we have today. But this will only happen if marketers make a substantial investment in new measurement techniques, which in turn requires that marketers believe those techniques are important. The only way they will believe this is to see early successes, which is why marketers must start working today to find effective approaches, even if they are based on partial data. After all, it’s certain that the quality of the information will improve. What’s in question is whether marketers make good use of it.
Wednesday, July 2, 2008
MMA Avista DSS ...and more
This is important because mix modeling shows the short-term, incremental impact of marketing efforts on top of a base sales level. BrandView addresses the size of the base itself.
BrandView works by comparing the messages the company has delivered in its advertising with changes in brand measures such as consumer attitudes. It also considers media spending and market conditions. These in turn are related to actual sales results. Using at least three years of data, BrandView can estimate the impact of different messages and media expenditures on the company’s base sales level. This allows calculation of long-term return on investment, supplementing the short-term ROI generated by mix models.
In other words, BrandView lets MMA relate brand health measures to financial results—something that Brooks sees as the biggest opportunity in the marketing measurement industry. He said the company has completed two BrandView projects so far with “rave reviews.”
That’s about all I know about BrandView. Now, back to Avista.
As I mentioned, Avista is a hosted service. Beyond browser-based access to the software itself, it includes having MMA build the underlying models, update them with new data monthly or quarterly, train and help company personnel in using the system, and consult on taking advantage of the system results.
The software has the functions you would want in this sort of system. It can combine results from multiple models, which lets users capture different behaviors for different market segments such as regions or product lines. It lets users build and save a base scenario, and then test the results of changing specific components such as spending, pricing and distribution. It also lets users change assumptions for external factors such as competitive behavior and market demand, as well as new factors not built into the historically-based mix models. It provides more than 30 standard reports showing forecasted demand, estimated impact of mix components, actual vs. forecast results (with forecasts based on updated actual inputs), and return on different marketing investments. Reports can convert the media budget to Gross Ratings Points, to help guide media buyers.
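The base-scenario-plus-overrides workflow can be sketched in a few lines. The response function here is an arbitrary stand-in for Avista’s fitted mix models, which I obviously don’t have access to:

```python
# Illustrative scenario comparison in the spirit described above: start from
# a saved base scenario, override selected inputs, and run both through a
# response function. The function below is a made-up stand-in, not MMA's
# actual models.
def forecast(inputs):
    # toy response: volume rises with spend (diminishing returns), falls with price
    return round(1000 + 40 * inputs["spend"] ** 0.5 - 300 * inputs["price"], 1)

base = {"spend": 100.0, "price": 1.00}
scenario = {**base, "spend": 144.0}   # test a higher media budget, same price

print(forecast(base), forecast(scenario))  # → 1100.0 1180.0
```

Because the models are already fitted, recomputing a scenario is just arithmetic, which is why systems like this can return what-if results almost instantly.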
The system also includes automated optimization. Users select one objective from a variety of options, such as maximum revenue for a fixed marketing budget or minimum marketing spend to reach a specified volume goal. They can also specify constraints such as maximum budget, existing media commitments, or allocations of spending over time. The system then identifies the optimal resource allocations to meet the specified conditions. Reports will compare the recommended allocations against past actuals, to highlight the changes.
Avista was released in 2005. Brooks reports it is now used by about two-thirds of MMA’s mix model clients. The system typically has ten to twenty users per company, spread among marketing, finance and research departments. Each user can be given customized reports—for example, to focus on a particular product line or region—as well as different system capabilities. Building the underlying models usually takes three to four months, depending largely on how long it takes to assemble the company-provided inputs. (Standard external inputs, such as syndicated research, are easy.) After this, it takes another month to deploy Avista itself, mostly doing quality control. Cost depends on the scope of each project, but might start at around $400,000 per year for a typical company with multiple models.
Of course, just getting Avista deployed is only the start of the process. The real challenge is getting company managers to trust and use the results. Brooks said that most firms need three to six months to build the necessary confidence. The roll-out usually proceeds in phases, starting with dashboard reports, adding what-if analyses, and only then using the outputs in official company budgets and forecasts.
Brooks said that MMA will eventually integrate BrandView with Avista. The synergy is obvious: the base demand projections created by BrandView are part of the input to the Avista mix models. This is definitely something to keep an eye on.
Wednesday, June 18, 2008
M-Factor M3 Aggregates Segment-Level Mix Models (Which Is Cooler Than It Sounds)
The main tool used to measure consumer marketing results is, of course, the marketing mix model. This is built by identifying historical correlations between sales results and inputs such as media spend, trade promotions, pricing, primary demand and competitive activities. Mix models can provide powerful insights into the causes of past performance and helpful forecasts of the impact of future plans. Even though few marketing managers really understand the underlying math, the models are well enough proven to be widely accepted.
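For readers who haven’t seen the underlying math, a stripped-down mix model is essentially a regression of sales on marketing inputs. The numbers below are invented, and real models add adstock, seasonality, and diminishing-returns transformations, but the core idea fits in a few lines:

```python
# Minimal illustration of the idea behind a marketing mix model: regress
# historical sales on marketing inputs to estimate each input's contribution.
# All data here is made up for illustration.
import numpy as np

# weekly observations: [media_spend, trade_promo, price_index]
X = np.array([
    [100.0, 20.0, 1.00],
    [120.0, 25.0, 0.98],
    [ 90.0, 30.0, 1.02],
    [150.0, 10.0, 0.95],
    [110.0, 15.0, 1.01],
])
sales = np.array([520.0, 560.0, 515.0, 600.0, 535.0])

# ordinary least squares with an intercept (the "base" sales level)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
base, media, promo, price = coef
print(f"base={base:.0f}, per-dollar media effect={media:.2f}")
```

The intercept is the "base" sales level the post mentions; the other coefficients estimate each input’s incremental contribution, which is what the ROI calculations are built on.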
But there are limits to what a single marketing mix model can accomplish. Most markets are in fact composed of many different segments, based on geography, customer type, product attributes, and other distinctions. Each segment will behave slightly differently, so generating the most accurate results requires a separate model for every one. This wouldn’t matter, except that marketers work at the segment level. They have separate marketing plans for each segment and track segment results. In fact, a large company will often have entirely different people responsible for different segments. You can be sure that each of them focuses on her own concerns.
Building lots of segment-level models doesn’t have to be much more expensive than building one big model. The trick is keeping the inputs and model structure the same. But managing all those models and aggregating their results does require a substantial infrastructure. This is what SAS for Marketing Mix (formerly Veridiem) was designed to do. (See my related post.) It’s also the function of M-Factor M3.
In fact, M-Factor was originally founded in 2003 specifically to help combine marketing mix models that were created by third parties. The company’s product can do this, but the firm found that externally-built models are often poorly understood, difficult to maintain, and inconsistent with each other. In self-defense, it decided to build its own.
Today, M-Factor has its own model-building staff and toolkit. This allows it to develop separate models for each segment in a market—sometimes hundreds or thousands of them. These can be arrayed in a multi-dimensional cube, which allows the system to easily aggregate results or drill down within different dimensions. Sharing the same structure also makes it easy to update the models with new data and to build detailed reports such as profit statements derived from model outputs.
To go at it a bit more systematically, M3 provides three main functions. The first is results analysis: calculating return on marketing investments by estimating the contribution of each input to over-all results. The second is forecasting: accepting scenarios with planned inputs, and using these to estimate future results. The third is optimization: automatically identifying the best combination of inputs to produce the desired outputs.
The results analysis accepts historical inputs from the usual sources such as Nielsen and IRI. It then produces typical marketing mix reports on the sales levels, volume drivers and return on investment. It also provides model performance reports such as model fit and error analyses. M-Factor makes a point of breaking out the model error, to help users understand the limits of model accuracy and see how well models hold up over time. The company says that its particular techniques make its models unusually robust.
Forecasting starts with a marketing plan for business inputs such as budgets and prices. These are at roughly the same level as the mix model inputs: that is, spending by category but not for specific marketing campaigns. A typical model has 15-25 such inputs. They can be entered for individual segments and then aggregated by the system, or the user can provide summary figures and let the system distribute them among segments according to user-specified rules. The system then applies these inputs to its models to generate a forecast.
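The distribution of summary figures among segments might work something like this simple sketch; proportional weights are my assumption for illustration, since M-Factor actually supports user-specified rules.

```python
def distribute(total, weights):
    """Split a summary figure across segments in proportion to their weights."""
    denom = sum(weights.values())
    return {segment: total * w / denom for segment, w in weights.items()}

# Invented example: spread a $1M budget 2:1:1 across three segments.
plan = distribute(1_000_000, {"north": 2, "south": 1, "west": 1})
```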
Once an initial plan is entered, it serves as a base for other scenarios. M3 displays the original inputs as one column in a grid, and lets users make changes in an adjacent column. Since the models are already built, the forecast is calculated almost instantly. Results can include a full profit statement as well as the inputs and estimated sales volume.
Users can freeze one forecast to treat it as the business plan. The system can later report planned vs. actual results, or compare the original plan against a revised forecast. The system can also project results for the current calendar year by combining actuals to date with forecasts for the balance of the period. Because the forecasts are built by the individual segment models, all results can be analyzed via drill-downs or aggregated into user-defined groups. M3 provides each user with a personalized dashboard to make this easier.
Optimization is an automated version of the scenario testing process. The user specifies output constraints such as minimum revenue levels, and driver ranges such as no more than 3% price change. The actual optimization process uses a genetic algorithm that randomly tests different combinations of inputs, selects the sets with the best outcomes, makes small changes, and tests them again. It continues testing and tweaking until it stops finding improvements.
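Here is a toy version of that loop in Python. The response function, constraints, and parameters are all invented; this illustrates the general genetic-algorithm technique, not M-Factor's actual implementation.

```python
import random

def profit(price_change, ad_spend):
    # Hypothetical response surface: ads help with diminishing returns;
    # a price increase raises margin but costs volume.
    volume = 1000 * (1 - 2 * price_change) + 50 * ad_spend ** 0.5
    return volume * 10 * (1 + price_change) - ad_spend

def feasible(candidate):
    # Driver ranges, e.g. no more than a 3% price change.
    price_change, ad_spend = candidate
    return abs(price_change) <= 0.03 and 0 <= ad_spend <= 5000

def optimize(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    # Start from random feasible combinations of inputs.
    pop = [(rng.uniform(-0.03, 0.03), rng.uniform(0, 5000)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the sets with the best outcomes...
        pop.sort(key=lambda c: profit(*c), reverse=True)
        survivors = pop[: pop_size // 2]
        # ...make small changes, and test them again.
        children = []
        for price, ads in survivors:
            child = (price + rng.gauss(0, 0.005), ads + rng.gauss(0, 100))
            children.append(child if feasible(child) else (price, ads))
        pop = survivors + children
    return max(pop, key=lambda c: profit(*c))

best = optimize()
```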
Users can also ask the system to optimize two target variables simultaneously. What the system actually does is combine them into a weighted composite, using different weights in different model runs. It plots the result of each run on a chart where the X axis represents one target variable and the Y axis represents the other. Users can then choose the balance they prefer.
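A simple sketch of the weighted-composite trick, with invented revenue and profit curves, might look like this:

```python
def revenue(ad_spend):
    # Invented target variable #1.
    return 500 * ad_spend ** 0.5

def profit(ad_spend):
    # Invented target variable #2.
    return revenue(ad_spend) - ad_spend

def best_spend(weight, candidates):
    """Maximize weight*revenue + (1 - weight)*profit over candidate spends."""
    return max(candidates, key=lambda a: weight * revenue(a) + (1 - weight) * profit(a))

# Run the optimization once per weight setting; each run produces one
# point for the trade-off chart (revenue on one axis, profit on the other).
candidates = range(0, 100001, 100)
frontier = [(w / 10, best_spend(w / 10, candidates)) for w in range(11)]
points = [(revenue(spend), profit(spend)) for _, spend in frontier]
```

Plotting `points` lets the user eyeball the trade-off and pick a balance, which is essentially what M3's chart does.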
Initial deployment of M3 usually takes three to four months, including the time to assemble the historical data, build the models, and provide an initial set of strategic recommendations. Pricing is comparable to conventional mix models, although it is sold as a hosted service on an annual subscription. This typically includes monthly data updates and reports, and quarterly updates of the underlying models. End-users access the system via a browser and can run reports, scenarios and optimizations at will.
Thursday, June 5, 2008
Tying Up Some Loose Ends: Hardmetrics, RevCube and Viewmark
Hardmetrics does offer a marketing measurement solution, but it’s just an extension of its primary offering: business activity monitoring, especially for call centers.
The heart of Hardmetrics is middleware that can identify related inputs from disparate sources. This is essential for all types of business activity monitoring, which often reports on correlations between events recorded in different systems. Hardmetrics uses a specialized star schema design, running on any standard relational database engine. But instead of relying on exact matches against hard keys, the middleware can link records through indirect matches such as time/date stamps or comparisons across different fields. Of course, if a hard key is available, the system will use it.
This correlation mapping is Hardmetrics’ secret sauce: it lets the system load data with minimal preparation, substantially simplifying both the initial implementation and subsequent data loads. It also means the system will automatically reassign matches between records when new or changed data is added.
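To illustrate the general idea of hard-key matching with an indirect fallback (the field names and the 60-second window are my assumptions, not Hardmetrics' actual design):

```python
from datetime import datetime, timedelta

def match(call, web_records, window=timedelta(seconds=60)):
    """Link a call-center record to a web record: hard key first,
    then closest timestamp within a window."""
    # Prefer an exact match on a hard key if one is available.
    if call.get("customer_id"):
        for rec in web_records:
            if rec.get("customer_id") == call["customer_id"]:
                return rec
    # Otherwise take the closest record within the time window.
    nearby = [r for r in web_records
              if abs(r["timestamp"] - call["timestamp"]) <= window]
    if nearby:
        return min(nearby, key=lambda r: abs(r["timestamp"] - call["timestamp"]))
    return None
```

Rerunning the matcher after a data load is what lets matches be reassigned as new or changed records arrive.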
Hardmetrics also has a knowledgebase of data found in common application systems, such as standard call center software. This speeds the mapping process for clients with those systems in place.
Clients can access the data using Hardmetrics’ own browser-based tools for reports, dashboards, scorecards, alerts, etc., or by writing their own queries against the middleware API. Either way, they still get the benefit of the indirect matching.
Hardmetrics offers its technology as a hosted, externally-managed, or on-premise solution.
RevCube originally attracted my attention with their Web site’s bold claim of a “complete customer acquisition solution” that would optimize placement, creative and budgets within and across multiple online channels. Apparently their core technology, a self-training content targeting engine, really could do that. But it’s a large pill for most marketers to swallow, so the company is asking them to nibble on something smaller: optimal Web landing pages for different visitor segments.
The system finds the best pages by developing a set of test pages, each with a different combination of values for key attributes. It then presents each page to different visitors and infers which values appeal to which segments. This is harder than it sounds because the segments themselves are based on visitor attributes. This means the system is considering different segmentation schemes at the same time that it’s trying to find out which attributes appeal to which segments. It’s like shooting a moving target while riding in a boat.
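A drastically simplified sketch of the tallying half of that problem, leaving out the hard part (the simultaneous search over segmentation schemes); all data shapes here are invented:

```python
from collections import defaultdict

def best_variant_per_segment(observations):
    """observations: iterable of (segment_value, page_variant, converted)
    tuples; returns the winning variant for each segment value."""
    shown = defaultdict(int)
    wins = defaultdict(int)
    for segment, variant, converted in observations:
        shown[(segment, variant)] += 1
        wins[(segment, variant)] += int(converted)
    # Conversion rate per (segment, variant) cell.
    rates = {key: wins[key] / shown[key] for key in shown}
    segments = {segment for segment, _ in rates}
    return {
        segment: max(
            (variant for seg, variant in rates if seg == segment),
            key=lambda v: rates[(segment, v)],
        )
        for segment in segments
    }
```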
This is all quite interesting and I hope to eventually write about it in detail, probably in my Customer Experience Matrix blog. But that won’t happen until RevCube formally releases its new system, tentatively late this summer. Until then they’re in stealth mode—so forget everything I just told you. (Or, if you’re seriously paranoid, first ask yourself how much of it is likely to be true…)
Viewmark also caught my attention (okay, it doesn’t take much) with the promise of a system to “capture and correlate information from many sources – both online and offline.” The system even has an oddly-spelled and therefore trademarkable name of its own: Viewmetrix. So it must be serious.
Well, yes and no. Viewmetrix does exist and has been quite successful. But Viewmark chose not to pursue it as an independent product, deciding instead to focus on its core business of web development for medium-sized organizations. It does still integrate Viewmetrix with its content management system, which has another catchy name, Cyberneering™.
Viewmark almost certainly made the right business decision about Viewmetrix. Still, it’s a bit of a shame, because Viewmetrix looks like a very good product. It incorporates dashboards, custom sales funnels, and a sophisticated approach to marketing ROI. This approach gathers information on the marketing contacts made with each individual, such as emails and sales calls, and the ultimate value of sales made to that individual. The contacts are assigned weights that reflect their contribution to moving customers from one stage in the sales funnel to the next. Weights are further adjusted for the time between the contact and the subsequent customer behavior. Based on this information, the system can allocate a fraction of each customer’s value to each marketing contact with that customer. The ROI of a marketing program is then calculated by comparing the program cost with the cumulative value of its contacts.
At least, I think that’s how the ROI calculation works. I might have some details wrong. But you get the idea: this is a very complex calculation calling for lots of data gathering and lots of analysis to set those weights and validate them. The problem, according to Viewmark, is that only large companies can afford such sophisticated marketing measurement. Smaller firms don’t spend enough on marketing to justify the cost of such precision. Since Viewmark’s business is centered on those smaller companies, it has even less incentive to further refine those features of Viewmetrix.
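For what it's worth, here is a toy version of the calculation as I understand it; the half-life decay and the weights are my guesses, not Viewmark's actual formula.

```python
def attribute_value(customer_value, contacts, half_life_days=30.0):
    """contacts: list of (program, stage_weight, days_before_next_action);
    programs are assumed unique in this sketch."""
    # Decay each contact's funnel-stage weight by elapsed time.
    decayed = [
        (program, stage_weight * 0.5 ** (days / half_life_days))
        for program, stage_weight, days in contacts
    ]
    total = sum(weight for _, weight in decayed)
    # Split the customer's value in proportion to the decayed weights.
    return {program: customer_value * weight / total
            for program, weight in decayed}

# Invented example: an email a month before the next action, then an
# immediate sales call, for a customer worth $1,000.
credits = attribute_value(1000.0, [("email", 1.0, 30), ("sales_call", 2.0, 0)])
```

Summing these per-customer credits over all contacts of a program, and comparing the total against the program cost, gives the ROI figure described above.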
Thursday, May 15, 2008
SAS Marketing Performance Management Provides Rich Business Model
The heart of the system is the Marketing Scorecard, which provides a detailed business model specifying the relationships among various marketing measures. These are expressed as key performance indicators (KPIs) which can be displayed on scorecards, dashboards, and strategy maps in views tailored to different types of users. Because the SAS model defines the relationships among its components, it can calculate the impact of changes in one element on other elements and on the business as a whole. Of course, clients can tailor the model to fit their own business. But SAS says that starting with a predefined framework shaves several months from the deployment process and forces a degree of rigor that can be missing from systems that simply report on disconnected facts.
The data mart supporting all this can combine inputs from campaign reporting, financial systems, budgets, forecasts, syndicated research, and other external sources. The system can import results from individual campaigns for detailed operational reporting, and can aggregate these to compare them against higher-level financial or plan data. It would not typically incorporate customer-level data.
SAS for Marketing Mix started out as Veridiem, a suite of applications for marketing planning, reporting, analysis and optimization based primarily on marketing mix models. SAS acquired Veridiem’s assets in March 2006. It still offers a hosted version within its OnDemand solution as Veridiem MRM.
SAS has reworked the independent Veridiem software to share data and other components with the rest of Marketing Performance Management. In fact, you cannot purchase SAS Marketing Mix without also purchasing SAS Marketing Scorecards.
The Marketing Mix system itself is an administrative tool designed to help users use mix models more effectively. It will draw information from the data mart as model inputs, allow users to modify inputs to build different scenarios, combine the results of multiple models, present the results in a variety of formats, and feed forecasts back into the data mart for reporting. The models are built outside of the system, using SAS or other vendors’ products.
The third component of Marketing Performance Management is SAS Profitability Management. This is a version of SAS’s generic profitability management software, which applies user-defined cost assignment rules to calculate profitability down as far as individual transactions.
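A hypothetical sketch of rule-based cost assignment at the transaction level (all rule details are invented; SAS's actual engine is far more elaborate):

```python
def assign_costs(transactions, rules):
    """rules: list of (pool_cost, driver_fn); driver_fn maps a
    transaction to its allocation basis (units, minutes, etc.)."""
    costs = {t["id"]: 0.0 for t in transactions}
    for pool_cost, driver in rules:
        # Spread each cost pool across transactions in proportion
        # to its driver.
        basis = {t["id"]: driver(t) for t in transactions}
        total = sum(basis.values())
        for tid, share in basis.items():
            costs[tid] += pool_cost * share / total
    return costs

# Invented example: a $100 cost pool allocated by units sold.
costs = assign_costs(
    [{"id": 1, "units": 2}, {"id": 2, "units": 3}],
    [(100.0, lambda t: t["units"])],
)
```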
Pricing of the Marketing Performance Management system depends on the components purchased, number of users, and other elements. Entry price is $125,000 plus one to two times that amount in professional services.
Tuesday, April 22, 2008
Visual IQ Measures Impact of Online Campaigns
Visual IQ promises to “solve your dilemma of capturing, integrating, analyzing and understanding your marketing performance data.” Word choice aside*, this promises exactly what I want from a marketing measurement system. Within limits, Visual IQ appears to deliver.
First a bit of background. Visual IQ is the name adopted in February 2008 by Connexion.a, a firm founded in 2005 with roots in online marketing measurement. The goal was (and is) to help marketers do a better job of allocating their budgets by comparing results across channels. The particular focus has been digital channels—display ads, paid search, organic search, affiliate programs, mobile, etc.—although the company can analyze data from conventional channels as well. The core technology is an “Online Intelligence Platform” that integrates data from all sources and presents it in a way that shows the value of each campaign. This happens in three system modules, each representing a higher level of sophistication.
- IQ Reporter accepts campaign-level inputs and presents campaign-level results. These can be from one or multiple channels and incorporate plan, actual and syndicated competitive information. The value is being able to view data from different sources and multiple channels in a single reporting system. Visual IQ has developed some attractive tools for reporting and analysis. Technically these are quite impressive, using Adobe Flex technology to deliver rich interactive analytics through a browser.
- IQ Envoy steps up to individual-level data from cookies, transactions and other sources. It consolidates this by customer to view contacts across campaigns. Since individual-level data involves much more volume than campaign statistics, Visual IQ extracts selected attributes from each channel’s inputs and loads them into a proprietary database. This compresses the data for storage and decompresses it for analysis. The system can automatically poll for new data on what the company calls a “near real time” basis, which is usually daily but can be more often if appropriate. The attributes extracted for each channel are defined in a standard data model, which makes it easier to set up new clients and add new channels to existing clients.
Managing data at the customer level lets the system look across campaigns to find all contacts leading up to an interaction such as a purchase. This lets Visual IQ apply statistical methods to estimate the contribution that each contact made to the final result. These models can look across contact attributes, such as campaigns, web sites, keywords, creative treatments, and dates, to better understand the exact elements that are driving response. Such information lets marketers allocate funds to the most effective treatments--a major step towards true marketing optimization.
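One crude way to estimate a contact type's contribution, shown purely for illustration (Visual IQ's statistical models are surely more sophisticated, and the data shapes here are invented):

```python
def contact_lift(paths, contact_type):
    """paths: list of (set_of_contact_types, converted) pairs.
    Returns the conversion-rate lift of paths containing the contact
    versus paths that do not."""
    with_contact = [conv for contacts, conv in paths if contact_type in contacts]
    without = [conv for contacts, conv in paths if contact_type not in contacts]
    rate = lambda outcomes: sum(outcomes) / len(outcomes) if outcomes else 0.0
    return rate(with_contact) - rate(without)
```

Run against attributes like campaign, keyword or creative treatment, even a measure this simple begins to show which elements drive response.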
The challenge here is the scope of information available to analyze. Visual IQ relies primarily on ad server cookies to track the messages received by each individual. This automatically creates a history of ads served by the ad network itself. But it requires additional coordination for the ad server to track organic search, paid search and affiliate campaigns. Information from other channels, both online and offline, requires additional identifiers. Typically it would depend on establishing the actual user identity at the time of a transaction, and then linking this to identifiers in other channels. For example, an online purchase might capture an email address that could be linked to an email account, and capture an actual name and postal address that could be linked to a mailing list or regional media market. Exactly how much information will be available will depend on the situation and has little to do with Visual IQ itself. (Of course, you could argue that having Visual IQ available to analyze the data gives marketers a stronger reason to make the effort to gather it.)
Just to be clear: cookies are inherently anonymous. Tying them to individuals requires information from an external source. Visual IQ can effectively analyze contact history of a given cookie without knowing the identity of its owner.
- IQ Sage uses the individual information to build customer profiles and to model the paths followed by customers as they head towards a purchase. These models can simulate the impact of changes in marketing programs, such as increasing spending on programs to move customers from one stage of the purchase cycle to the next, or switching funds to programs that attract different types of customers. Optimization can recommend changes that would produce the best over-all results.
Visual IQ offers its products as a hosted service with monthly fees ranging from $7,500 to $25,000. Cost depends largely on which components are used, with some additional fees for data storage. The company may adopt volume-based pricing in the future. Visual IQ has 14 clients, including very large advertisers, agencies and Web publishers. Although a majority of its clients use the system only for campaign-level reporting, about one-third work with cookie-level data.
=====================================================
*The primary definition of “dilemma” is “a situation requiring a choice between equally undesirable alternatives.” (Dictionary.com) I don’t think that really applies here. Presumably Visual IQ has in mind the secondary meaning of “any difficult or perplexing situation or problem.”
Tuesday, April 15, 2008
MPM Reporting Software
One of my intentions for this blog is to help you find vendors for MPM projects. Let’s jump right in with software providers specializing in MPM systems. There are just a handful of these once you narrow the focus to firms that are:
- primarily software developers rather than consultancies (although all vendors provide some consulting to help you set up and use their products, and a couple of consultancies have slipped into the list below if I think their software can stand on its own)
- dedicated primarily to analyzing marketing results (as opposed to generic reporting systems or broader marketing management systems)
- work across multiple channels (as opposed to a single channel such as Web analytics or paid search management).
My current list is below, in alphabetical order. I plan to update this over time and welcome suggestions for additions. Each vendor is listed with a brief description taken from their Web site. I will post links to detailed reviews of individual vendors as I write them.
DecisionPower
- My take: builds simulations using ‘agent-based modeling’, a specialized approach that is very powerful and efficient. These can be used to create the equivalent of a traditional marketing mix model.
- Company quote: “The cornerstone of DecisionPower™ products and services is agent-based modeling (ABM) technology. ABM gives you the power to simulate real-life consumer behavior in real-world markets — in real time.”
- My review (from 2006)
Decisions Made Easy
- My take: provides tools to build and analyze databases of point-of-sale and syndicated data.
- Company quote: “Decisions Made Easy, a global business service of The Nielsen Company, provides software and services for consumer goods manufacturers and retailers focusing on direct data. Clients use our solutions to efficiently extract insight from point-of-sale (POS) and related data, and create actionable information for use across the business.”
Leading Brands
- My take: analyzes program results based on online consumer surveys. A limited but interesting approach.
- Company quote: “Leading Brands provides a system of continuous marketing measurement that captures the effects of specific marketing tactics as they happen via online consumer surveys. Brand effects are measured according to accepted research methods, analyzed with sales data, and reported when and how you need them (for planning, execution, optimization, and modeling.)”
- My review (May 2008)
Hardmetrics
- My take: not MPM specialists, they have a flexible technology for cross-system process measurement which was originally for call center performance management.
- Company quote: “MPM takes a self-service approach that allows you to define, drill and analyze in real time, your own dashboards, scorecards, reports and alerts on any moving part of a campaign. What’s even better is MPM allows you to drill down/up/sideways into any data source, anywhere in the enterprise to get the answers you need. ... HardMetrics is the industry’s FIRST and ONLY performance management company to offer a ‘codeless implementation’. That means your marketing organization can have a production ready system in hours/days and not have to wait weeks/months.”
- My review (June 2008)
Marketing Management Analytics Avista DSS
- My take: they’re really a consultancy with roots in marketing mix models, but they’ve productized their offering in Avista.
- Company quote: “Avista Decision Support Service provides marketers with an easy-to-use toolset that delivers on-demand insights about the effectiveness of marketing efforts. It combines the insights from MMA marketing mix models with the real time analytic power of industry-leading business intelligence to support fact-driven decisions – all drawn from a company’s integrated marketing data.... On-demand analytic tools support “what-if” scenarios, optimization, forecasting, portfolio allocation and media schedule optimization. Avista management dashboards support continuous tracking and diagnosis of marketing performance to help companies generate the maximum return from their marketing investment.”
- My Review (July 2008)
M-Factor M3
- My take: system combines modeling, planning and comparisons against actuals; can use models developed by the vendor or elsewhere.
- Company quote: “M-Factor provides marketing investment management solutions that keep analysis up to date and reveal choices with the best bottom-line results. Components include: Specialized reports that explain results as they occur; Scenario builder that predicts P&L impact from plan changes at any product and market level; Optimization engine that runs thousands of simulations within business constraints to maximize desired outcomes.”
- My review (June 2008)
RevCube
- My take: real-time reporting and optimization, online channels only. [4/24/08 - The company tells me they are repositioning to specialize in identifying optimal landing pages for customer segments. Formal launch is expected in the 3rd quarter of 2008.]
- Company quote: “Our proprietary platform captures and processes all useful data in the customer acquisition cycle across every online media channel. Proprietary algorithms then use this response data to generate optimization rules which are applied to campaigns in real-time. Optimization of placement, creative and budget occurs both within a single channel as well as across multiple channels including paid search, contextual search, display and email. This complete solution provides real-time attribute level reporting giving the advertiser unprecedented insight and control of their marketing efforts.”
- My review (June 2008)
SAS Marketing Performance Management
- My take: a comprehensive performance analysis and simulation system; SAS calls it MRM because it has another product called MPM which is more straight reporting.
- Company quote: “MRM automatically links sales and other business results to the marketing investments that drive them....With SAS, it is now possible to continuously plan, measure and optimize the impact of marketing spend on revenue and profitability.”
- My review (May 2008)
UQube Marketing
- My take: they consolidate media plans and sales results and make the results available with some forecasting and analysis tools. But no marketing mix models here.
- Company quote: “UQube Marketing is comprised of a series of web-based applications that automatically consolidate and interlink all marketing campaign schedules and sales data from any source. Each UQube Marketing application provides Cross Channel Reporting and Dashboards, Forecasting and Analytics, and rules based, version controlled Media Plan Management. Applications can be purchased separately or fully integrated with one another, and with the UQube Call Center Application Suite.”
- My review (from 2005)
Viewmark Viewmetrix
- My take: primarily an e-marketing consultancy but offers a powerful system to consolidate and report on marketing data across channels.
- Company quote: “Our Viewmetrix marketing-intelligence tool is an award-winning solution that lets you capture and correlate information from many sources – both online and offline. You can then analyze and present that information so you will make good business decisions now and secure new program resources for the future.”
- My review (June 2008)
Visual IQ
- My take: gathers customer-level data from online media and transactions, produces detailed analysis of factors influencing customer behavior.
- Company quote: “Visual IQ is one of the few marketing intelligence solutions capable of normalizing and integrating syndicated data, planning data and actual results from your campaigns - across multiple disparate data sources....[IQ] Reporter provides agencies and marketers the full data capture, integration, basic analysis and accurate reporting you need to make informed decisions and investment. IQ Envoy combines all of the features of Reporter with sophisticated media channel intelligence, customer transaction data, and high definition cookie-level analysis. Intuitive, customizable dashboards present intelligence in the formats most useful to your team. IQ Sage takes customer intelligence to its highest levels through the use of predictive modeling, scenario building, and simulation.”
- My review (April 2008)
