Wednesday, June 25, 2008

'Integration' Offers Effectiveness Measurement Methodology

This blog is mostly about technology. But I haven’t finished the research I had intended this week, so I’m going to point you to an interesting methodology instead. This is from a vendor called Integration, about which I know nothing except what I found on their Web site. They are cool if for no other reason than being headquartered in Cyprus. But they also appear to conduct detailed studies of the effectiveness of different marketing contacts—something they refer to as a Market ContactAudit. (Why they make that two words instead of one or three, I have no idea.)

The Web site contains a detailed evaluation of their technology by the Advertising Research Foundation. The steps are:
  • define a set of brands and contacts to assess. This is based on discussions with company managers and focus groups with consumers.
  • survey consumers to identify the “clout” of each contact type (specifically, its ability to convey information, create emotional bonds, and influence attitudes and behavior), and to find which brands they associate with which contacts
  • use the results to calculate ‘Brand Experience Points’ (the clout of each contact x the number of brand associations with that contact), and the ‘Brand Experience Share’ (the target brand’s share of total category Brand Experience Points); the arithmetic is sketched in code after this list
  • apply these measures to other analyses such as identifying the most influential contacts, position of the brand vs. its competitors, and spending efficiency (cost per Brand Experience Point)
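
To make the arithmetic concrete, here is a quick sketch in Python. The contact types, clout scores, and association counts are invented for illustration; only the two formulas come from the audit’s description.

```python
# Hypothetical survey results: clout of each contact type, and the number
# of consumers who associate each brand with that contact.
clout = {"tv_ad": 8.5, "in_store_display": 6.0, "word_of_mouth": 9.0}

associations = {
    "BrandA": {"tv_ad": 120, "in_store_display": 40, "word_of_mouth": 75},
    "BrandB": {"tv_ad": 90, "in_store_display": 110, "word_of_mouth": 30},
}

# Brand Experience Points: clout of each contact x brand associations, summed.
points = {
    brand: sum(clout[c] * n for c, n in contacts.items())
    for brand, contacts in associations.items()
}

# Brand Experience Share: each brand's share of total category points.
total = sum(points.values())
share = {brand: pts / total for brand, pts in points.items()}

for brand in points:
    print(f"{brand}: {points[brand]:,.0f} points, {share[brand]:.1%} share")

# Spending efficiency would then divide a brand's media cost by its points,
# e.g. cost_per_point = media_spend / points["BrandA"].
```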

Compared with some other brand valuation methodologies, the Integration approach is quite straightforward. Much of the simplicity derives from its use of consumer surveys, which avoids the work of gathering actual data on media spending, competitive products, company financials, and so on. Of course, this comes at a cost: it requires relying on the accuracy of consumer perceptions, and doesn’t factor in elements of the marketing mix that are invisible to consumers, such as distribution. Thus, it will not provide anything close to the precision of a marketing mix model. Nor does it allow calculation of a financial measure of brand value.

Still, having a “common currency” to measure the value of contacts across touchpoints is critical to making effective resource allocations. Combining the Brand Experience Points with company media spending is easy enough and gives good tactical guidance. According to the audit, the Brand Experience Share has correlated closely with market share over hundreds of projects, so the basic consumer input seems to be fairly reliable.

The ARF audit also says that Integration provides detailed materials describing the process, which can even be executed without an outside consultant. Speaking as a consultant, I'm not 100% sure I like that, but I suppose it's a good thing from the client perspective.

The Integration Web site lists “global alliances” with a number of major ad agencies and consultancies, who presumably have deployed or adapted the methodology in-house. This gives the approach additional credibility. It also seems that the Market ContactAudit is part of a larger strategy process offered by Integration, which in turn can be part of a larger business planning process. There's even some marketing management software involved, which includes activity-based costing and management dashboards.

In any case, it's probably best to let Integration speak for themselves. Take a look at their approach if you have a moment.

Wednesday, June 18, 2008

M-Factor M3 Aggregates Segment-Level Mix Models (Which Is Cooler Than It Sounds)

I spend most of my time these days thinking about business to business marketing, where performance is measured one sale at a time. Ironically, it’s much harder for business marketers to calculate their impact on sales than for marketers in the anonymous, vastly less precise realm of consumer packaged goods. The reason: predicting individual behavior is difficult in both cases, but there are so many consumers that their aggregate behavior can be modeled accurately with statistics. Packaged goods marketers may not know the name of every tree, but they have a much clearer picture of the forest.

The main tool used to measure consumer marketing results is, of course, the marketing mix model. This is built by identifying historical correlations between sales results and inputs such as media spend, trade promotions, pricing, primary demand and competitive activities. Mix models can provide powerful insights into the causes of past performance and helpful forecasts of the impact of future plans. Even though few marketing managers really understand the underlying math, the models are well enough proven to be widely accepted.
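
For readers who haven’t seen one built, the core of a mix model is just a regression of sales against the inputs. Here is a minimal Python sketch with made-up weekly data; real models add transformations for ad carry-over, diminishing returns, seasonality and the like.

```python
# A minimal sketch of the regression behind a marketing mix model.
# All data is invented; no vendor's actual model is implied.
import numpy as np

# Hypothetical weekly inputs: media spend, trade promotion spend, price.
media = np.array([100, 120, 90, 150, 130, 110], dtype=float)
promo = np.array([20, 0, 35, 10, 0, 25], dtype=float)
price = np.array([4.99, 4.99, 4.49, 4.99, 5.29, 4.49])
sales = np.array([510, 540, 565, 600, 520, 590], dtype=float)

# Design matrix with an intercept for baseline (unexplained) demand.
X = np.column_stack([np.ones_like(media), media, promo, price])

# Ordinary least squares: the historical correlation of inputs with sales.
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, b_media, b_promo, b_price = coefs

# The fitted coefficients decompose past sales and can forecast a plan.
planned = baseline + b_media * 140 + b_promo * 15 + b_price * 4.79
print(f"forecast sales for the planned week: {planned:.0f}")
```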

But there are limits to what a single marketing mix model can accomplish. Most markets are in fact composed of many different segments, based on geography, customer type, product attributes, and other distinctions. Each segment will behave slightly differently, so generating the most accurate results requires a separate model for every one. This wouldn’t matter, except that marketers work at the segment level. They have separate marketing plans for each segment and track segment results. In fact, a large company will often have entirely different people responsible for different segments. You can be sure that each of them focuses on her own concerns.

Building lots of segment-level models doesn’t have to be much more expensive than building one big model. The trick is keeping the inputs and model structure the same. But managing all those models and aggregating their results does require a substantial infrastructure. This is what SAS for Marketing Mix (formerly Veridiem) was designed to do. (See my related post.) It’s also the function of M-Factor M3.

In fact, M-Factor was originally founded in 2003 specifically to help combine marketing mix models that were created by third parties. The company’s product can do this, but the firm found that externally-built models are often poorly understood, difficult to maintain, and inconsistent with each other. In self-defense, it decided to build its own.

M-Factor has since built its own model-building staff and toolkit. This allows it to develop separate models for each segment in a market—sometimes hundreds or thousands of them. These can be arrayed in a multi-dimensional cube, which allows the system to easily aggregate results or drill down within different dimensions. Sharing the same structure also makes it easy to update the models with new data and to build detailed reports such as profit statements derived from model outputs.
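
Here is a toy illustration of the cube idea: because every segment model shares the same structure, forecasts roll up along any dimension with simple aggregation. The segments and coefficients are invented, and real M-Factor models are obviously far richer.

```python
# One tiny model per segment, keyed by its cube dimensions (region, channel).
# Each model is just (intercept, media coefficient); the numbers are made up.
from collections import defaultdict

segment_models = {
    ("Northeast", "Grocery"): (200.0, 1.8),
    ("Northeast", "Drug"):    ( 80.0, 1.1),
    ("South",     "Grocery"): (260.0, 2.2),
    ("South",     "Drug"):    ( 95.0, 0.9),
}

# Planned media spend per segment.
plan = {seg: 50.0 for seg in segment_models}

# Forecast each segment with its own model.
forecast = {
    seg: intercept + slope * plan[seg]
    for seg, (intercept, slope) in segment_models.items()
}

# Because every segment shares the same structure, rolling up along any
# dimension is a simple aggregation -- here, by region.
by_region = defaultdict(float)
for (region, channel), volume in forecast.items():
    by_region[region] += volume

print(dict(by_region))
```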

To go at it a bit more systematically, M3 provides three main functions. The first is results analysis: calculating return on marketing investments by estimating the contribution of each input to over-all results. The second is forecasting: accepting scenarios with planned inputs, and using these to estimate future results. The third is optimization: automatically identifying the best combination of inputs to produce the desired outputs.

The results analysis accepts historical inputs from the usual sources such as Nielsen and IRI. It then produces typical marketing mix reports on sales levels, volume drivers and return on investment. It also provides model performance reports such as model fit and error analyses. M-Factor makes a point of breaking out the model error, to help users understand the limits of model accuracy and see how well models hold up over time. The company says that its particular techniques make its models unusually robust.

Forecasting starts with a marketing plan for business inputs such as budgets and prices. These are at roughly the same level as the mix model inputs: that is, spending by category but not for specific marketing campaigns. A typical model has 15-25 such inputs. They can be entered for individual segments and then aggregated by the system, or the user can provide summary figures and let the system distribute them among segments according to user-specified rules. The system then applies these inputs to its models to generate a forecast.
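
The top-down path might look something like this sketch, which spreads a single summary budget across segments in proportion to last year’s volume. The proportional rule is just my assumption; M3 lets users specify their own rules.

```python
# Sketch of top-down input allocation: one summary budget, spread across
# segments by a simple rule. Numbers and rule are invented for illustration.
total_media_budget = 1_000_000.0

last_year_volume = {"Northeast": 40_000, "South": 35_000, "West": 25_000}
total_volume = sum(last_year_volume.values())

# Allocate the budget in proportion to each segment's historical volume.
segment_budget = {
    seg: total_media_budget * vol / total_volume
    for seg, vol in last_year_volume.items()
}

print(segment_budget)
# {'Northeast': 400000.0, 'South': 350000.0, 'West': 250000.0}
```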

Once an initial plan is entered, it serves as a base for other scenarios. M3 displays the original inputs as one column in a grid, and lets users make changes in an adjacent column. Since the models are already built, the forecast is calculated almost instantly. Results can include a full profit statement as well as the inputs and estimated sales volume.

Users can freeze one forecast to treat it as the business plan. The system can later report planned vs. actual results, or compare the original plan against a revised forecast. The system can also project results for the current calendar year by combining actuals to date with forecasts for the balance of the period. Because the forecasts are built by the individual segment models, all results can be analyzed via drill-downs or aggregated into user-defined groups. M3 provides each user with a personalized dashboard to make this easier.
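
The current-year projection is simple arithmetic: actuals to date plus the model forecasts for the remaining periods. In sketch form, with invented numbers:

```python
# Projecting the calendar year: actual results so far plus forecasts for
# the remaining months. All figures are hypothetical.
actuals = [510, 495, 530, 560, 540]                        # Jan-May, measured
forecast_remaining = [555, 570, 565, 540, 520, 610, 650]   # Jun-Dec, modeled

projected_year = sum(actuals) + sum(forecast_remaining)
print(f"projected full-year volume: {projected_year}")
```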

Optimization is an automated version of the scenario testing process. The user specifies output constraints such as minimum revenue levels, and driver ranges such as no more than 3% price change. The actual optimization process uses a genetic algorithm that randomly tests different combinations of inputs, selects the sets with the best outcomes, makes small changes, and tests them again. It continues testing and tweaking until it stops finding improvements.
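
M-Factor didn’t share implementation details, but a genetic algorithm of the kind described would look roughly like this sketch. The response model, driver ranges and mutation sizes are all my inventions.

```python
# Toy genetic algorithm: score random plans, keep the best, mutate, repeat,
# and stop when improvement stalls. The profit function is a stand-in.
import random

def profit(media, price):
    # Stand-in response model: diminishing media returns, linear price effect.
    volume = 500 + 40 * (media ** 0.5) - 60 * price
    return volume * (price - 2.0) - media

def random_plan():
    # Driver ranges act as constraints, e.g. price held within +/-3% of 5.00.
    return (random.uniform(0, 500), random.uniform(4.85, 5.15))

def mutate(plan):
    media, price = plan
    media = min(500.0, max(0.0, media + random.gauss(0, 20)))
    price = min(5.15, max(4.85, price + random.gauss(0, 0.02)))
    return (media, price)

population = [random_plan() for _ in range(50)]
best_plan, best_score = None, float("-inf")
stale = 0
while stale < 10:   # stop after 10 generations without improvement
    population.sort(key=lambda p: profit(*p), reverse=True)
    top_score = profit(*population[0])
    if top_score > best_score:
        best_plan, best_score = population[0], top_score
        stale = 0
    else:
        stale += 1
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(f"best plan: media={best_plan[0]:.0f}, price={best_plan[1]:.2f}, "
      f"profit={best_score:.0f}")
```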

Users can also ask the system to optimize two target variables simultaneously. What the system actually does is combine them into a weighted composite, using different weights in different model runs. It plots the result of each run on a chart where the X axis represents one target variable and the Y axis represents the other. Users can then choose the balance they prefer.
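
In sketch form, the dual-objective trick looks like this: sweep the weight on a composite objective and record the outcome pair from each run. I’ve abstracted the optimizer to a brute-force search over prices, and the demand curve is made up.

```python
# Sweep the weight on a blended objective to trace a revenue/profit frontier.
def outcomes(price):
    volume = 1000 - 150 * price       # invented linear demand curve
    revenue = price * volume
    profit = (price - 2.0) * volume   # assumes a unit cost of 2.0
    return revenue, profit

candidates = [3.0 + 0.05 * i for i in range(41)]   # test prices 3.00 .. 5.00
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Each run optimizes a differently weighted blend of the two targets.
    best = max(candidates,
               key=lambda p: w * outcomes(p)[0] + (1 - w) * outcomes(p)[1])
    revenue, profit = outcomes(best)
    print(f"weight {w:.2f}: price {best:.2f} -> "
          f"revenue {revenue:,.0f}, profit {profit:,.0f}")
```

Plotting each run’s revenue against its profit gives the chart described above, and users simply pick the point whose balance they like best.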

Initial deployment of M3 usually takes three to four months, including the time to assemble the historical data, build the models, and provide an initial set of strategic recommendations. Pricing is comparable to conventional mix models, although it is sold as a hosted service on an annual subscription. This typically includes monthly data updates and reports, and quarterly updates of the underlying models. End-users access the system via a browser and can run reports, scenarios and optimizations at will.

Wednesday, June 11, 2008

CLOSE Survey Finds Marketing / Sales Integration Gaps

CLOSE (Coalition to Leverage and Optimize Sales Effectiveness) is a “peer-led community of over 3,000 sales, marketing and channel professionals” within the CMO Council. The group recently surveyed its members (mostly business-to-business marketers) about sales and marketing integration. The report is not yet officially released, but they did send me a preliminary copy which I discussed with CMO Council Executive Director Donovan Neale-May.

In my eyes, the survey results boiled down to two main points: marketing’s main job is to provide good leads, and alignment between the two groups depends more on processes than technology. Neither of these is surprising. But there were some anomalies that are worth considering.

Let’s start with the role of marketing. The survey asks about this in several ways, but the most telling question was, “What metrics and measures marketing should use to quantify its impact on sales results and business outcomes?” The top answers were unambiguous: 19% said “pipeline and prospect flow” and 18% said “volume and caliber of leads.” No other answer had more than 12% of responses. So it’s clear that marketing’s job is to get good leads, right?

Not necessarily. When asked what “role” marketing should play in optimizing sales performance, there was a statistical dead heat between lead generation (29.1%) and providing sales materials (29.2%). Effectiveness measurement followed close behind (24.5%). Those are three very different things.

In another question about how marketing is “viewed” by their organization, providing content and sales materials was by far the top answer (41%). Answers relating to leads and demand generation combined for another 32%, while the remaining 27% pretty much said marketing was useless. (I’m not exaggerating: 15% chose marketing provides “no real customer insight or value-added thinking” and 12% said marketing “operates in a vacuum; programs do little to affect sales.” Ouch.)

So: leads are the main measure of marketing impact, except that producing sales materials and analysis are just as important when it comes to marketing’s role or how it is viewed. This seems like a contradiction.

Neale-May’s take was that marketing is viewed as tactical (i.e., a provider of sales materials) because it doesn’t think or act strategically. He felt that marketing would be more effective and get more respect if it took more responsibility for lead nurturing and measuring final results, rather than simply catching leads and passing them immediately to sales.

It sounds so crazy that it just might work.

Back to the survey. When asked to list the key elements to maximize sales, the number one response was “lead quality and ROI” (52%). I suppose this explains why “better integrate and align with marketing” showed up as the highest ranked way to improve sales effectiveness (41%). That is, working more closely with sales would help marketing to generate better leads.

There’s just one problem with alignment: few people seem to do it. Only 16% of the respondents reported an “extremely collaborative” relationship between marketing and sales, although another 40% shrugged that they had “relatively good information sharing”.

Even scarier, less than half (42%) reported “any” formal programs, systems or processes to align sales and marketing, and only half of these (47%) said the programs were successful. That means roughly four out of five companies (0.42 x 0.47, or about 20%, have programs that work) are not addressing alignment effectively.

One bright spot is that respondents do seem to recognize that the key to alignment is process, not technology. At least, that’s how I interpret their citing “limited processes and systems in place” as the largest challenge to integration (41%), followed by “reporting and organizational structures” (30%) and “siloed operations” (29%). The truly technical issues of “no shared data and real-time information” rank just sixth with 20%.

In terms of existing technology, 12% reportedly live in the paradise of a “well-integrated, real-time view of all customer interactions; readily accessible on-demand by all functions.” Another 37% report that “sales has good visibility into prospects, pipeline, deal flow and conversion rates”. But the other half lives poorly indeed: 20% report that “marketing hands off leads to sales and has no insight into conversion and close process”, 13% report that “most leads are never captured, qualified or acted on”, 11% have “no customer relationship management system or on-demand CRM service in place”, 7% “still use spreadsheets for tracking targets and prospects” and 1% just plain “don’t know”.

The numbers are somewhat similar for CRM systems. A lucky 13% report that CRM is “highly valued and widely deployed” and another 42% report “growing acceptance and adoption”. Again, the other half are in bad shape: 15% say the system is “difficult to customize and use”, 10% report a “high level of dissatisfaction”, and 21% have “no CRM system in place.”

Analytics are a slightly different story. A near-majority (46%) report that sales and marketing can both access customer analytics, while another 8% each report that only sales or only marketing have access. This leaves a little more than one-third flying blind.

Or is it really much worse? On the specific issue of “tracking and optimizing customer lifetime value and profitability”, just 6% said they had already made a “significant investment in analytics and programs.” Half the remainder (46%) are working on it, while the other half (48%) apparently are not.

But while just 6% have significantly invested in analytics, 24% list analytics as the best way for marketing to help sales to grow customer value. That was the most popular answer. The difference between 24% and 6% suggests an embarrassingly large gap between what marketers say and what they do.

Over all, it seems that about two-thirds of the companies have reasonably good customer data and analytic tools, but a much smaller elite--fewer than 15%--take full advantage of them.

Neale-May commented that marketing often does not have full access to CRM data. But he added that many marketers could make better use of the tools they do have available. Specifically, they must track prospects through the end of the sales process to understand what makes a quality lead. And producing higher quality leads is what really counts.

Thursday, June 5, 2008

Tying Up Some Loose Ends: Hardmetrics, Revcube and Viewmark

I’ve been following up systematically—some might say compulsively—on my earlier list of MPM software vendors. This has been a lesson in the perils of Internet research. Despite my close reading of their Web sites, several firms turned out to be focused on something else. Rather than simply remove them from the list, I thought I’d give a little update on what I found.

Hardmetrics does offer a marketing measurement solution, but it’s just an extension of its primary offering: business activity monitoring, especially for call centers.

The heart of Hardmetrics is middleware that can identify related inputs from disparate sources. This is essential for all types of business activity monitoring, which often reports on correlations between events recorded in different systems. Hardmetrics uses a specialized star schema design, running on any standard relational database engine. But instead of relying on exact matches against hard keys, the middleware can link records through indirect matches such as time/date stamps or comparisons across different fields. Of course, if a hard key is available, the system will use it.

This correlation mapping is Hardmetrics’ secret sauce: it lets the system load data with minimal preparation, substantially simplifying both the initial implementation and subsequent data loads. It also means the system will automatically reassign matches between records when new or changed data is added.
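
I don’t know the internals of Hardmetrics’ middleware, but the indirect-matching idea can be illustrated with a small sketch: try the hard key first, then fall back to cross-field evidence such as a shared phone number and nearby timestamps.

```python
# Toy record linkage in the spirit of the description above -- not
# Hardmetrics' actual logic. Field names and windows are assumptions.
from datetime import datetime, timedelta

call = {"customer_id": None, "phone": "555-0101",
        "ts": datetime(2008, 6, 25, 10, 2)}

web_visits = [
    {"customer_id": "C42", "phone": "555-0101",
     "ts": datetime(2008, 6, 25, 10, 0)},
    {"customer_id": "C77", "phone": "555-0199",
     "ts": datetime(2008, 6, 25, 10, 1)},
]

def match(record, candidates, window=timedelta(minutes=5)):
    # Prefer an exact hard-key match when one exists.
    for cand in candidates:
        if record["customer_id"] and record["customer_id"] == cand["customer_id"]:
            return cand
    # Fall back to indirect evidence: same phone, timestamps close together.
    for cand in candidates:
        if (record["phone"] == cand["phone"]
                and abs(record["ts"] - cand["ts"]) <= window):
            return cand
    return None

print(match(call, web_visits))  # links the call to visit C42
```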

Hardmetrics also has a knowledgebase of data found in common application systems, such as standard call center software. This speeds the mapping process for clients with those systems in place.

Clients can access the data using Hardmetrics’ own browser-based tools for reports, dashboards, scorecards, alerts, etc., or by writing their own queries against the middleware API. Either way, they still get the benefit of the indirect matching.

Hardmetrics offers its technology as a hosted, externally-managed, or on-premise solution.

RevCube originally attracted my attention with their Web site’s bold claim of a “complete customer acquisition solution” that would optimize placement, creative and budgets within and across multiple online channels. Apparently their core technology, a self-training content targeting engine, really could do that. But it’s a large pill for most marketers to swallow, so the company is asking them to nibble on something smaller: optimal Web landing pages for different visitor segments.

The system finds the best pages by developing a set of test pages, each with a different combination of values for key attributes. It then presents each page to different visitors and infers which values appeal to which segments. This is harder than it sounds because the segments themselves are based on visitor attributes. This means the system is considering different segmentation schemes at the same time that it’s trying to find out which attributes appeal to which segments. It’s like shooting a moving target while riding in a boat.
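
RevCube hasn’t described its engine to me in detail, so take this only as a toy illustration of the general problem: track conversions for each visitor-attribute and page-variant pair, serve whatever looks best for that visitor, and keep exploring.

```python
# A crude explore/exploit loop for segment-aware page testing. Everything
# here (attributes, variants, epsilon) is invented for illustration.
import random
from collections import defaultdict

shows = defaultdict(int)   # (visitor_attr, variant) -> impressions
wins = defaultdict(int)    # (visitor_attr, variant) -> conversions
variants = ["headline_A", "headline_B"]

def choose(visitor_attr, epsilon=0.1):
    if random.random() < epsilon:      # keep exploring occasionally
        return random.choice(variants)
    def rate(v):
        key = (visitor_attr, v)
        return wins[key] / shows[key] if shows[key] else 0.0
    return max(variants, key=rate)     # otherwise serve the best so far

def record(visitor_attr, variant, converted):
    shows[(visitor_attr, variant)] += 1
    if converted:
        wins[(visitor_attr, variant)] += 1

# Simulate one visit from a visitor whose known attribute is "mobile".
variant = choose("mobile")
record("mobile", variant, converted=random.random() < 0.1)
```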

This is all quite interesting and I hope to eventually write about it in detail, probably in my Customer Experience Matrix blog. But that won’t happen until RevCube formally releases its new system, tentatively late this summer. Until then they’re in stealth mode—so forget everything I just told you. (Or, if you’re seriously paranoid, first ask yourself how much of it is likely to be true…)

Viewmark also caught my attention (okay, it doesn’t take much) with the promise of a system to “capture and correlate information from many sources – both online and offline.” The system even has an oddly-spelled and therefore trademarkable name of its own: Viewmetrix. So it must be serious.

Well, yes and no. Viewmetrix does exist and has been quite successful. But Viewmark chose not to pursue it as an independent product, deciding instead to focus on its core business of web development for medium-sized organizations. It does still integrate Viewmetrix with its content management system, which has another catchy name, Cyberneering™.

Viewmark almost certainly made the right business decision about Viewmetrix. Still, it’s a bit of a shame, because Viewmetrix looks like a very good product. It incorporates dashboards, custom sales funnels, and a sophisticated approach to marketing ROI. This approach gathers information on the marketing contacts made with each individual, such as emails and sales calls, and the ultimate value of sales made to that individual. The contacts are assigned weights that reflect their contribution to moving customers from one stage in the sales funnel to the next. Weights are further adjusted for the time between the contact and the subsequent customer behavior. Based on this information, the system can allocate a fraction of each customer’s value to each marketing contact with that customer. The ROI of a marketing program is then calculated by comparing the program cost with the cumulative value of its contacts.

At least, I think that’s how the ROI calculation works. I might have some details wrong. But you get the idea: this is a very complex calculation calling for lots of data gathering and lots of analysis to set those weights and validate them. The problem, according to Viewmark, is that only large companies can afford such sophisticated marketing measurement. Smaller firms don’t spend enough on marketing to justify the cost of such precision. Since Viewmark’s business is centered on those smaller companies, it has even less incentive to further refine those features of Viewmetrix.
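
With that caveat repeated, here is my attempt to restate the calculation in code. The contact weights, the exponential time decay, and all the numbers are my own assumptions, not Viewmark’s.

```python
# Fractional attribution sketch: weight each contact, decay by age, allocate
# a share of the sale's value, and compare against program cost.
from datetime import date

# Assumed base weight of each contact type for moving a customer forward.
base_weight = {"email": 1.0, "sales_call": 3.0}

def decay(contact_date, purchase_date, half_life_days=30):
    # Older contacts count for less; a simple exponential decay by age.
    age = (purchase_date - contact_date).days
    return 0.5 ** (age / half_life_days)

contacts = [
    {"type": "email", "date": date(2008, 4, 1), "cost": 2.0},
    {"type": "sales_call", "date": date(2008, 5, 15), "cost": 150.0},
]
purchase = {"date": date(2008, 6, 1), "value": 5000.0}

# Weight each contact, then allocate a fraction of the sale's value to it.
weights = [base_weight[c["type"]] * decay(c["date"], purchase["date"])
           for c in contacts]
total_w = sum(weights)
credit = [purchase["value"] * w / total_w for w in weights]

# Treat both contacts as one program: ROI compares allocated value to cost.
program_cost = sum(c["cost"] for c in contacts)
roi = (sum(credit) - program_cost) / program_cost
print(f"allocated value: {[f'{c:.0f}' for c in credit]}, ROI: {roi:.1f}x")
```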