
Friday, December 5, 2008

TraceWorks' Headlight Integrates Online Measurement and Execution

I’ve been looking at an interesting product called Headlight from the Danish firm TraceWorks. Headlight is an online advertising management system, which means that it helps marketers plan, execute and measure paid and unpaid Web advertising.

According to TraceWorks CEO Christian Dam, Headlight traces its origins to an earlier product, Statlynx, which measured the return on investment of search marketing campaigns. (This is why Headlight belongs on this blog.) The core technology of Headlight is still the ability to capture data sent by tags inserted in Web pages. These are used to track initial responses to a promotion and eventual conversion events. The conversion tracking is especially critical because it can capture revenue, which provides the basis for detailed return on investment calculations. (Setting this up does require help from your company's technology group; it is not something marketers can do for themselves.)
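To make the ROI arithmetic concrete, here is a rough sketch in Python. The event fields and the roi() function are my own illustration of the idea, not Headlight’s actual tag format or API:

```python
# Hypothetical sketch: ROI from revenue captured at conversion time.
# Field names and the roi() formula are illustrative, not Headlight's API.

def roi(conversions, campaign_cost):
    """Return on investment: (revenue - cost) / cost."""
    revenue = sum(event["revenue"] for event in conversions)
    return (revenue - campaign_cost) / campaign_cost

events = [
    {"campaign": "spring_banner", "revenue": 120.0},
    {"campaign": "spring_banner", "revenue": 80.0},
]
print(roi(events, campaign_cost=50.0))  # (200 - 50) / 50 = 3.0
```

The point is simply that once revenue is attached to the conversion event, the ROI calculation itself is trivial; the hard part is the tagging.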

These functions are now supplemented by functions that let the system actually deliver banner ads, including both an ad serving capability and digital asset management of the ad contents. The system can also integrate with Google AdWords paid search campaigns, automatically sending tracking URLs to AdWords and using those URLs in its reports. It can also capture tracking URLs from email campaigns.

All this Web activity tracking may make Headlight sound like a Web analytics tool, but it’s quite different. The main distinction is that Headlight lets users set up and deliver ad campaigns, which is well outside the scope of Web analytics. Nor, on the other hand, does Headlight offer the detailed visitor behavior analysis of a Web analytics system.

The campaign management functions extend both to the planning that precedes execution and to the evaluation that follows it. The planning functions are not especially fancy but should be adequate: users can define activities (a term that Headlight uses more or less interchangeably with campaigns), give them start and end dates, and assign costs. The system can also distinguish between firm plans and drafts. TraceWorks expects to significantly expand workflow capabilities, including sub-tasks with assigned users, due dates and alerts of overdue items, in early 2009.

Evaluation functions are more extensive. Users can define both corporate goals (e.g., total number of conversions) and individual goals (related to specific metrics and activities) for specific users, and have the system generate reports that will compare these to actual results. Separate Key Performance Indicator (KPI) reports show selected actual results over time. In addition, something the vendor calls a “WhyChart” adds marketing activity dates to the KPI charts, so users can see the correlation between different marketing efforts and results. Summary reports can also show the volume of traffic generated by different sources.

The value of Headlight comes not only from the power of the individual features but also from the fact that they are tightly integrated. For example, the asset management portion of the system can show users the actual results for each asset in previous campaigns. This makes it much easier for marketers to pick the elements that work best and to make changes during campaigns when some items work better than others. The system can also be integrated with other products through a Web Service API that lets external systems call its functions for AdWords campaign management, conversion definition, activity setup, and reporting.

Technology aside, I was quite impressed with the openness of TraceWorks as a company. The Web site provides substantial detail about the product, and includes a Wiki with what looks like fairly complete documentation. The vendor also offers a 14-day free trial of the system.

Pricing also seems quite reasonable. Headlight is offered as a hosted service, with fees ranging from $1,000 to $5,000 per month depending on Web traffic. According to Dam, the average fee is about $1,300 per month. Larger clients include ad agencies who use Headlight for their own clients.

Incidentally, the company Web site also includes an interesting benchmarking offer, which lets you enter information about your own company's online marketing and get back a report comparing you to industry peers. (Yes, I know a marketing information gathering tool when I see one.) At the moment, unfortunately, the company doesn't seem to have enough data gathered to report back results. Or maybe it just didn't like my answers.

TraceWorks released its original Statlynx product in 2003 and launched Headlight in early 2007. The system currently serves about 500 companies directly and through agencies.

Wednesday, July 2, 2008

MMA Avista DSS ...and more

I originally contacted Marketing Management Analytics (MMA) to discuss Avista DSS, a hosted service that helps its clients use MMA-built mix models for planning, forecasting and optimization. But while MMA Vice President Douglas Brooks seemed happy to discuss Avista, he said the most excitement today is being generated by BrandView, a newer offering that shows the long-term impact of marketing messages on financial results.

This is important because mix modeling shows the short-term, incremental impact of marketing efforts on top of a base sales level. BrandView addresses the size of the base itself.

BrandView works by comparing the messages the company has delivered in its advertising with changes in brand measures such as consumer attitudes. It also considers media spending and market conditions. These in turn are related to actual sales results. Using at least three years of data, BrandView can estimate the impact of different messages and media expenditures on the company’s base sales level. This allows calculation of long-term return on investment, supplementing the short-term ROI generated by mix models.

In other words, BrandView lets MMA relate brand health measures to financial results—something that Brooks sees as the biggest opportunity in the marketing measurement industry. He said the company has completed two BrandView projects so far with “rave reviews.”

That’s about all I know about BrandView. Now, back to Avista.

As I mentioned, Avista is a hosted service. Beyond browser-based access to the software itself, it includes having MMA build the underlying models, update them with new data monthly or quarterly, train and support company personnel in using the system, and consult on how to act on the results.

The software has the functions you would want in this sort of system. It can combine results from multiple models, which lets users capture different behaviors for different market segments such as regions or product lines. It lets users build and save a base scenario, and then test the results of changing specific components such as spending, pricing and distribution. It also lets users change assumptions for external factors such as competitive behavior and market demand, as well as new factors not built into the historically-based mix models. It provides more than 30 standard reports showing forecasted demand, estimated impact of mix components, actual vs. forecast results (with forecasts based on updated actual inputs), and return on different marketing investments. Reports can convert the media budget to Gross Ratings Points, to help guide media buyers.

The system also includes automated optimization. Users select one objective from a variety of options, such as maximum revenue for a fixed marketing budget or minimum marketing spend to reach a specified volume goal. They can also specify constraints such as maximum budget, existing media commitments, or allocations of spending over time. The system then identifies the optimal resource allocations to meet the specified conditions. Reports will compare the recommended allocations against past actuals, to highlight the changes.

Avista was released in 2005. Brooks reports it is now used by about two-thirds of MMA’s mix model clients. The system typically has ten to twenty users per company, spread among marketing, finance and research departments. Each user can be given customized reports—for example, to focus on a particular product line or region—as well as different system capabilities. Building the underlying models usually takes three to four months, depending largely on how long it takes to assemble the company-provided inputs. (Standard external inputs, such as syndicated research, are easy.) After this, it takes another month to deploy Avista itself, mostly doing quality control. Cost depends on the scope of each project, but might start at around $400,000 per year for a typical company with multiple models.

Of course, just getting Avista deployed is only the start of the process. The real challenge is getting company managers to trust and use the results. Brooks said that most firms need three to six months to build the necessary confidence. The roll-out usually proceeds in phases, starting with dashboard reports, adding what-if analyses, and only then using the outputs in official company budgets and forecasts.

Brooks said that MMA will eventually integrate BrandView with Avista. The synergy is obvious: the base demand projections created by BrandView are part of the input to the Avista mix models. This is definitely something to keep an eye on.

Wednesday, June 18, 2008

M-Factor M3 Aggregates Segment-Level Mix Models (Which Is Cooler Than It Sounds)

I spend most of my time these days thinking about business to business marketing, where performance is measured one sale at a time. Ironically, it’s much harder for business marketers to calculate their impact on sales than for marketers in the anonymous, vastly less precise realm of consumer packaged goods. The reason: predicting individual behavior is difficult in both cases, but there are so many consumers that their aggregate behavior can be modeled accurately with statistics. Packaged goods marketers may not know the name of every tree, but they have a much clearer picture of the forest.

The main tool used to measure consumer marketing results is, of course, the marketing mix model. This is built by identifying historical correlations between sales results and inputs such as media spend, trade promotions, pricing, primary demand and competitive activities. Mix models can provide powerful insights into the causes of past performance and helpful forecasts of the impact of future plans. Even though few marketing managers really understand the underlying math, the models are well enough proven to be widely accepted.
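For readers who want to peek under the hood, here is a deliberately tiny sketch of mix-model estimation: ordinary least squares relating sales to spend inputs. The numbers are invented, and real models add adstock decay, saturation curves, seasonality and competitive terms, but the core idea is just this regression:

```python
import numpy as np

# Illustrative sketch of a marketing mix model: least squares relating
# weekly sales to spend inputs. Data and coefficients are invented.

# columns: intercept, TV spend, trade promotion, price index
X = np.array([
    [1, 10.0, 2.0, 1.00],
    [1, 12.0, 0.0, 0.98],
    [1,  8.0, 4.0, 1.02],
    [1, 15.0, 1.0, 0.97],
    [1,  9.0, 3.0, 1.01],
])
sales = np.array([105.0, 110.0, 102.0, 120.0, 104.0])

coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
base, tv_lift, promo_lift, price_effect = coef
# tv_lift estimates incremental sales per unit of TV spend, which is
# exactly the "incremental impact on top of a base level" described above
```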

But there are limits to what a single marketing mix model can accomplish. Most markets in fact comprise many different segments, based on geography, customer type, product attributes, and other distinctions. Each segment will behave slightly differently, so generating the most accurate results requires a separate model for every one. This wouldn’t matter, except that marketers work at the segment level. They have separate marketing plans for each segment and track segment results. In fact, a large company will often have entirely different people responsible for different segments. You can be sure that each of them focuses on her own concerns.

Building lots of segment-level models doesn’t have to be much more expensive than building one big model. The trick is keeping the inputs and model structure the same. But managing all those models and aggregating their results does require a substantial infrastructure. This is what SAS for Marketing Mix (formerly Veridiem) was designed to do. (See my related post.) It’s also the function of M-Factor M3.

In fact, M-Factor was founded in 2003 specifically to help combine marketing mix models created by third parties. The company’s product can do this, but the firm found that externally-built models are often poorly understood, difficult to maintain, and inconsistent with each other. In self-defense, it decided to build its own.

Today, M-Factor has its own model-building staff and toolkit. This allows it to develop separate models for each segment in a market—sometimes hundreds or thousands of them. These can be arrayed in a multi-dimensional cube, which allows the system to easily aggregate results or drill down within different dimensions. Sharing the same structure also makes it easy to update the models with new data and to build detailed reports such as profit statements derived from model outputs.

To go at it a bit more systematically, M3 provides three main functions. The first is results analysis: calculating return on marketing investments by estimating the contribution of each input to over-all results. The second is forecasting: accepting scenarios with planned inputs, and using these to estimate future results. The third is optimization: automatically identifying the best combination of inputs to produce the desired outputs.

The results analysis accepts historical inputs from the usual sources such as Nielsen and IRI. It then produces typical marketing mix reports on the sales levels, volume drivers and return on investment. It also provides model performance reports such as model fit and error analyses. M-Factor makes a point of breaking out the model error, to help users understand the limits of model accuracy and see how well models hold up over time. The company says that its particular techniques make its models unusually robust.

Forecasting starts with a marketing plan for business inputs such as budgets and prices. These are at roughly the same level as the mix model inputs: that is, spending by category but not for specific marketing campaigns. A typical model has 15-25 such inputs. They can be entered for individual segments and then aggregated by the system, or the user can provide summary figures and let the system distribute them among segments according to user-specified rules. The system then applies these inputs to its models to generate a forecast.
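The distribute-by-rule idea is simple enough to sketch in a few lines. The segment names and weights here are hypothetical, not M3’s actual allocation rules:

```python
# Sketch of rule-based allocation: a summary figure spread across
# segments in user-specified proportions (segments and weights invented).

def allocate(total, weights):
    """Split `total` among segments in proportion to their weights."""
    norm = sum(weights.values())
    return {seg: total * w / norm for seg, w in weights.items()}

plan = allocate(1_000_000, {"Northeast": 3, "South": 2, "West": 5})
# {'Northeast': 300000.0, 'South': 200000.0, 'West': 500000.0}
```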

Once an initial plan is entered, it serves as a base for other scenarios. M3 displays the original inputs as one column in a grid, and lets users make changes in an adjacent column. Since the models are already built, the forecast is calculated almost instantly. Results can include a full profit statement as well as the inputs and estimated sales volume.

Users can freeze one forecast to treat it as the business plan. The system can later report planned vs. actual results, or compare the original plan against a revised forecast. The system can also project results for the current calendar year by combining actuals to date with forecasts for the balance of the period. Because the forecasts are built by the individual segment models, all results can be analyzed via drill-downs or aggregated into user-defined groups. M3 provides each user with a personalized dashboard to make this easier.
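The current-year projection is simple arithmetic: actuals for the months already elapsed plus forecasts for the remainder. A sketch, with invented numbers:

```python
# Sketch of the current-year projection described above.

def project_year(actuals, forecast):
    """actuals: results for elapsed months; forecast: full 12-month forecast."""
    return sum(actuals) + sum(forecast[len(actuals):])

print(project_year([110, 95, 102], [100] * 12))  # 307 + 900 = 1207
```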

Optimization is an automated version of the scenario testing process. The user specifies output constraints such as minimum revenue levels, and driver ranges such as no more than 3% price change. The actual optimization process uses a genetic algorithm that randomly tests different combinations of inputs, selects the sets with the best outcomes, makes small changes, and tests them again. It continues testing and tweaking until it stops finding improvements.
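Here is a toy version of that genetic-style search. The forecast function is a stand-in for the real segment-model forecast, and the population size, selection rule and mutation scheme are my own guesses at the general technique, not M-Factor’s actual algorithm:

```python
import random

# Toy genetic-style search: randomly vary spend allocations, keep the
# best performers, perturb them slightly, and repeat until done.

def forecast(plan):
    """Stand-in for the mix-model forecast: diminishing returns per channel."""
    tv, digital, promo = plan
    return 10 * tv ** 0.5 + 8 * digital ** 0.6 + 5 * promo ** 0.4

def optimize(budget=100.0, pop_size=40, generations=200):
    def random_plan():
        cuts = sorted(random.uniform(0, budget) for _ in range(2))
        return (cuts[0], cuts[1] - cuts[0], budget - cuts[1])

    population = [random_plan() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=forecast, reverse=True)
        survivors = population[: pop_size // 4]        # keep the best plans
        children = []
        for plan in survivors:                         # small random changes
            jitter = [max(0.0, x + random.gauss(0, 2)) for x in plan]
            total = sum(jitter) or 1.0
            children.append(tuple(x * budget / total for x in jitter))
        fresh = [random_plan() for _ in range(pop_size - 2 * len(survivors))]
        population = survivors + children + fresh
    return max(population, key=forecast)

best = optimize()  # the spend split with the highest forecast found
```

The search stops after a fixed number of generations here; a production version would instead stop when improvements taper off, as the text describes.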

Users can also ask the system to optimize two target variables simultaneously. What the system actually does is combine them into a weighted composite, using different weights in different model runs. It plots the result of each run on a chart where the X axis represents one target variable and the Y axis represents the other. Users can then choose the balance they prefer.
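That weighted-composite sweep can be sketched as follows; the candidate outcomes (revenue, profit pairs) are invented:

```python
# Sketch of the two-objective sweep: pick the best candidate under a
# weighted composite of revenue and profit, at several different weights.

def composite_best(weight, candidates):
    """candidates: list of (revenue, profit) outcomes from model runs."""
    return max(candidates, key=lambda rp: weight * rp[0] + (1 - weight) * rp[1])

candidates = [(100, 10), (90, 15), (80, 18), (60, 20)]
frontier = {composite_best(w / 10, candidates) for w in range(11)}
# Plotting revenue (X) vs. profit (Y) for these points traces the
# trade-off curve from which users choose the balance they prefer.
```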

Initial deployment of M3 usually takes three to four months, including the time to assemble the historical data, build the models, and provide an initial set of strategic recommendations. Pricing is comparable to conventional mix models, although it is sold as a hosted service on an annual subscription. This typically includes monthly data updates and reports, and quarterly updates of the underlying models. End-users access the system via a browser and can run reports, scenarios and optimizations at will.

Thursday, June 5, 2008

Tying Up Some Loose Ends: Hardmetrics, Revcube and Viewmark

I’ve been following up systematically—some might say compulsively—on my earlier list of MPM software vendors. This has been a lesson in the perils of Internet research. Despite my close reading of their Web sites, several firms turned out to be focused on something else. Rather than simply remove them from the list, I thought I’d give a little update on what I found.

Hardmetrics does offer a marketing measurement solution, but it’s just an extension of its primary offering: business activity monitoring, especially for call centers.

The heart of Hardmetrics is middleware that can identify related inputs from disparate sources. This is essential for all types of business activity monitoring, which often reports on correlations between events recorded in different systems. Hardmetrics uses a specialized star schema design, running on any standard relational database engine. But instead of relying on exact matches against hard keys, the middleware can link records through indirect matches such as time/date stamps or comparisons across different fields. Of course, if a hard key is available, the system will use it.

This correlation mapping is Hardmetrics’ secret sauce: it lets the system load data with minimal preparation, substantially simplifying both the initial implementation and subsequent data loads. It also means the system will automatically reassign matches between records when new or changed data is added.
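A minimal sketch of one such indirect match, linking records whose timestamps fall within a tolerance window. The record layouts are hypothetical, not Hardmetrics’ actual schema:

```python
from datetime import datetime, timedelta

# Illustrative time-window matching: when no shared hard key exists,
# link records whose timestamps fall close together.
# Field names are hypothetical, not Hardmetrics' schema.

def link_by_time(calls, web_sessions, window=timedelta(minutes=5)):
    links = []
    for call in calls:
        for session in web_sessions:
            if abs(call["start"] - session["start"]) <= window:
                links.append((call["call_id"], session["session_id"]))
    return links

calls = [{"call_id": "C1", "start": datetime(2008, 6, 5, 10, 2)}]
sessions = [{"session_id": "S9", "start": datetime(2008, 6, 5, 10, 0)}]
print(link_by_time(calls, sessions))  # [('C1', 'S9')]
```

Note that because the match is recomputed rather than stored as a hard key, re-running it after a data load naturally picks up new and changed records, which is the reassignment behavior described above.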

Hardmetrics also has a knowledgebase of data found in common application systems, such as standard call center software. This speeds the mapping process for clients with those systems in place.

Clients can access the data using Hardmetrics’ own browser-based tools for reports, dashboards, scorecards, alerts, etc., or by writing their own queries against the middleware API. Either way, they still get the benefit of the indirect matching.

Hardmetrics offers its technology as a hosted, externally-managed, or on-premise solution.

RevCube originally attracted my attention with their Web site’s bold claim of a “complete customer acquisition solution” that would optimize placement, creative and budgets within and across multiple online channels. Apparently their core technology, a self-training content targeting engine, really could do that. But it’s a large pill for most marketers to swallow, so the company is asking them to nibble on something smaller: optimal Web landing pages for different visitor segments.

The system finds the best pages by developing a set of test pages, each with a different combination of values for key attributes. It then presents each page to different visitors and infers which values appeal to which segments. This is harder than it sounds because the segments themselves are based on visitor attributes. This means the system is considering different segmentation schemes at the same time that it’s trying to find out which attributes appeal to which segments. It’s like shooting a moving target while riding in a boat.

This is all quite interesting and I hope to eventually write about it in detail, probably in my Customer Experience Matrix blog. But that won’t happen until RevCube formally releases its new system, tentatively late this summer. Until then they’re in stealth mode—so forget everything I just told you. (Or, if you’re seriously paranoid, first ask yourself how much of it is likely to be true…)

Viewmark also caught my attention (okay, it doesn’t take much) with the promise of a system to “capture and correlate information from many sources – both online and offline.” The system even has an oddly-spelled and therefore trademarkable name of its own: Viewmetrix. So it must be serious.

Well, yes and no. Viewmetrix does exist and has been quite successful. But Viewmark chose not to pursue it as an independent product, deciding instead to focus on its core business of web development for medium-sized organizations. It does still integrate Viewmetrix with its content management system, which has another catchy name, Cyberneering™.

Viewmark almost certainly made the right business decision about Viewmetrix. Still, it’s a bit of a shame, because Viewmetrix looks like a very good product. It incorporates dashboards, custom sales funnels, and a sophisticated approach to marketing ROI. This approach gathers information on the marketing contacts made with each individual, such as emails and sales calls, and the ultimate value of sales made to that individual. The contacts are assigned weights that reflect their contribution to moving customers from one stage in the sales funnel to the next. Weights are further adjusted for the time between the contact and the subsequent customer behavior. Based on this information, the system can allocate a fraction of each customer’s value to each marketing contact with that customer. The ROI of a marketing program is then calculated by comparing the program cost with the cumulative value of its contacts.
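At the risk of putting words in Viewmark’s mouth, the weighting scheme might look something like this sketch. The stage weights and the exponential time decay are my own choices for illustration, not Viewmetrix’s actual formulas:

```python
import math

# Sketch of fractional credit: each contact gets a stage weight, decayed
# by the time elapsed before the sale; credit is the normalized share of
# the sale's value. Weights and the decay half-life are illustrative.

def allocate_value(contacts, sale_value, half_life_days=30.0):
    decay = math.log(2) / half_life_days
    scores = [c["stage_weight"] * math.exp(-decay * c["days_before_sale"])
              for c in contacts]
    total = sum(scores)
    return {c["contact"]: sale_value * s / total
            for c, s in zip(contacts, scores)}

contacts = [
    {"contact": "email_1",    "stage_weight": 1.0, "days_before_sale": 60},
    {"contact": "sales_call", "stage_weight": 3.0, "days_before_sale": 7},
]
credit = allocate_value(contacts, sale_value=1000.0)
# the recent, high-weight sales call receives most of the $1,000
```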

At least, I think that’s how the ROI calculation works. I might have some details wrong. But you get the idea: this is a very complex calculation calling for lots of data gathering and lots of analysis to set those weights and validate them. The problem, according to Viewmark, is that only large companies can afford such sophisticated marketing measurement. Smaller firms don’t spend enough on marketing to justify the cost of such precision. Since Viewmark’s business is centered on those smaller companies, it has even less incentive to further refine those features of Viewmetrix.

Thursday, May 15, 2008

SAS Marketing Performance Management Provides Rich Business Model

Like Caesar's Gaul, SAS Marketing Performance Management has three parts: SAS Marketing Scorecards, SAS Profitability Management, and SAS for Marketing Mix.

The heart of the system is the Marketing Scorecard, which provides a detailed business model specifying the relationships among various marketing measures. These are expressed as key performance indicators (KPIs) which can be displayed on scorecards, dashboards, and strategy maps in views tailored to different types of users. Because the SAS model defines the relationships among its components, it can calculate the impact of changes in one element on other elements and on the business as a whole. Of course, clients can tailor the model to fit their own business. But SAS says that starting with a predefined framework shaves several months from the deployment process and forces a degree of rigor that can be missing from systems that simply report on disconnected facts.

The data mart supporting all this can combine inputs from campaign reporting, financial systems, budgets, forecasts, syndicated research, and other external sources. The system can import results from individual campaigns for detailed operational reporting, and can aggregate these to compare them against higher-level financial or plan data. It would not typically incorporate customer-level data.

SAS for Marketing Mix started out as Veridiem, a suite of applications for marketing planning, reporting, analysis and optimization based primarily on marketing mix models. SAS acquired Veridiem’s assets in March 2006. SAS still offers a hosted version within its OnDemand solution as Veridiem MRM.

SAS has reworked the independent Veridiem software to share data and other components with the rest of Marketing Performance Management. In fact, you cannot purchase SAS Marketing Mix without also purchasing SAS Marketing Scorecards.

The Marketing Mix system itself is an administrative tool designed to help users use mix models more effectively. It will draw information from the data mart as model inputs, allow users to modify inputs to build different scenarios, combine the results of multiple models, present the results in a variety of formats, and feed forecasts back into the data mart for reporting. The models are built outside of the system, using SAS or other vendors’ products.

The third component of Marketing Performance Management is SAS Profitability Management. This is a version of SAS’s generic profitability management software, which applies user-defined cost assignment rules to calculate profitability down as far as individual transactions.

Pricing of the Marketing Performance Management system depends on the components purchased, number of users, and other elements. Entry price is $125,000 plus one to two times that amount in professional services.

Thursday, May 8, 2008

Factor TG Answers All The Measurement Questions

Most of marketing performance measurement comes down to three key questions:

- what is the sales impact of marketing spending?
- what is the value of my brand?
- how should I allocate my marketing budget across channels?

Think of them as the CFO, CEO and CMO questions, respectively. Each question is conventionally answered by a different tool: sales impact is measured by marketing mix models; brand value is measured by brand valuation models; and channel programs are measured by return on investment. Although there is some connection among these tools, they are mostly separate. They are also unavailable to many firms because of high cost and extensive data requirements.

Factor TG takes a different approach, using a single tool to answer all three questions at once. I’ll spare you the suspense: their secret is online consumer surveys. These provide the critical information which is otherwise unavailable to measurement systems.

Specifically, Factor TG surveys identify customer demographics, the advertisements that customers have seen, how customer attitudes have changed as a result, and the relationships between customer attitudes and purchase behavior. With this data as a foundation, Factor TG can calculate the impact of different campaign components, such as advertising medium and creative, on immediate sales and long-term brand value for the individual consumers. It projects from this to the campaign as a whole by adding sales information from company records or third-party compilers and media planning information about campaign costs and audiences. As a result, it can provide clients with reports on:

- the return on investment of specific marketing programs,
- the near-term impact of marketing programs on sales, and
- the long-term impact of marketing programs on baseline demand (which is the functional definition of brand value).

Obviously it takes a fair amount of rocket science to make this all happen. Factor TG must find ways to reach the right audience for its surveys and craft the surveys themselves to capture the right information. Then it must build complex models to estimate how consumer attitudes translate into behavior. It must also correlate survey results with external sales and media information to make meaningful projections at the campaign level. Finally, it must build still more models to estimate the long-term effects of changes in brand attributes.

Some of this resembles traditional marketing mix and brand value models. But while traditional marketing mix models are based largely on spending levels, Factor TG captures a greater level of detail about each campaign, allowing it to measure the effectiveness of individual campaign components much more precisely. Similarly, traditional brand value models give largely static estimates of the importance of consumer attitudes, while Factor TG looks at the results of small changes in those attitudes. This greater detail lets Factor TG provide weekly reports on current campaigns so that marketers can fine-tune their programs in near-real-time. Because the reports estimate both short-term sales impact and changes in long-term base demand, marketers are able to optimize their spending along both dimensions simultaneously.

Of course, this approach has its limits. The most obvious is finding enough people who have actually seen the campaigns being measured. Surveys are easily attached to online campaigns, but otherwise Factor TG must use customer lists, consumer panels or other techniques to locate the 3,000 or so audience members needed for an adequate sample. According to Factor TG COO Margaret Coles, it usually takes an advertising budget of at least $10 million to generate an audience large enough to survey effectively. The threshold could be still higher for certain population segments, such as those over age 65 or lower income groups. When necessary, Factor TG will gather information through non-Web techniques such as personal interviews at events.

In addition, I am personally skeptical of the accuracy of survey data. Capturing attitudes is fairly straightforward, but Factor TG must also rely on consumers to report their purchases and advertising exposures, which they may not recall correctly. Sales and advertising data are critical links in the chain because Factor TG needs them to estimate relations between consumer attitudes, campaign exposures and business results. The focus on survey data also leaves out competitive and environmental factors such as advertising levels, promotions, distribution and over-all demand.

Factor TG largely rejects these concerns, arguing it has proven its techniques to skeptical clients many times. Regarding competitive and environmental factors, the company says it could include them in its models but has not found it necessary. In fact, Coles said its models have less than 1% error, although I’m not sure exactly what that is measuring.

A more convincing argument may be that Factor TG needs only to be directionally correct, allowing companies to identify the stronger and weaker campaigns so they can reallocate funds appropriately. This allows continuous optimization of marketing spend even if the precise details are incorrect.

Factor TG also benefits from making very frequent measurements, which allows it to recalibrate its models on a regular basis and thereby keep errors to a minimum. This contrasts with conventional econometric (marketing mix) models, which rely on years of historical data. Of course, frequent readjustment may itself lead to errors if the model overreacts to random variations in behavior. But this is the sort of technical adjustment that Factor TG’s statisticians are no doubt very good at.

Factor TG introduced its approach about three years ago. Major clients have been in the automobile, pharmaceutical and consumer packaged goods industries, which sell through third parties rather than direct to consumers. The company has also done several dozen projects in retail, consumer electronics, hospitality, and financial services. Most clients are advertisers, with some ad agencies and publishers. Initial set-up takes about six weeks, most of which is spent understanding the client’s situation and validating Factor TG’s models with the client’s in-house researchers. Pricing typically ranges from 1% to 3% of the media spend depending on the data volume and complexity of the project. All processing occurs on Factor TG's servers, which clients can access for reports via a Web portal.

Tuesday, April 22, 2008

Visual IQ Measures Impact of Online Campaigns

Visual IQ promises to “solve your dilemma of capturing, integrating, analyzing and understanding your marketing performance data.” Word choice aside*, this promises exactly what I want from a marketing measurement system. Within limits, Visual IQ appears to deliver.

First a bit of background. Visual IQ is the name adopted in February 2008 by Connexion.a, a firm founded in 2005 with roots in online marketing measurement. The goal was (and is) to help marketers do a better job of allocating their budgets by comparing results across channels. The particular focus has been digital channels—display ads, paid search, organic search, affiliate programs, mobile, etc.—although the company can analyze data from conventional channels as well. The core technology is an “Online Intelligence Platform” that integrates data from all sources and presents it in a way that shows the value of each campaign. This happens in three system modules, each representing a higher level of sophistication.

  • IQ Reporter accepts campaign-level inputs and presents campaign-level results. These can be from one or multiple channels and incorporate plan, actual and syndicated competitive information. The value is being able to view data from different sources and multiple channels in a single reporting system. Visual IQ has developed some attractive tools for reporting and analysis. Technically these are quite impressive, using Adobe Flex technology to deliver rich interactive analytics through a browser.

  • IQ Envoy steps up to individual-level data from cookies, transactions and other sources. It consolidates this by customer to view contacts across campaigns. Since individual-level data involves much more volume than campaign statistics, Visual IQ extracts selected attributes from each channel’s inputs and loads them into a proprietary database. This compresses the data for storage and decompresses it for analysis. The system can automatically poll for new data on what the company calls a “near real time” basis, which is usually daily but can be more often if appropriate. The attributes extracted for each channel are defined in a standard data model, which makes it easier to set up new clients and add new channels to existing clients.

    Managing data at the customer level lets the system look across campaigns to find all contacts leading up to an interaction such as a purchase. This lets Visual IQ apply statistical methods to estimate the contribution that each contact made to the final result. These models can look across contact attributes, such as campaigns, web sites, keywords, creative treatments, and dates, to better understand the exact elements that are driving response. Such information lets marketers allocate funds to the most effective treatments--a major step towards true marketing optimization.

    The challenge here is the scope of information available to analyze. Visual IQ relies primarily on ad server cookies to track the messages received by each individual. This automatically creates a history of ads served by the ad network itself. But it requires additional coordination for the ad server to track organic search, paid search and affiliate campaigns. Information from other channels, both online and offline, requires additional identifiers. Typically it would depend on establishing the actual user identity at the time of a transaction, and then linking this to identifiers in other channels. For example, an online purchase might capture an email address that could be linked to an email account, and capture an actual name and postal address that could be linked to a mailing list or regional media market. Exactly how much information is available depends on the situation and has little to do with Visual IQ itself. (Of course, you could argue that having Visual IQ available to analyze the data gives marketers a stronger reason to make the effort to gather it.)

    Just to be clear: cookies are inherently anonymous. Tying them to individuals requires information from an external source. Visual IQ can effectively analyze contact history of a given cookie without knowing the identity of its owner.

  • IQ Sage uses the individual information to build customer profiles and to model the paths followed by customers as they head towards a purchase. These models can simulate the impact of changes in marketing programs, such as increasing spending on programs to move customers from one stage of the purchase cycle to the next, or switching funds to programs that attract different types of customers. Optimization can recommend changes that would produce the best overall results.
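The fractional attribution idea described under IQ Envoy can be sketched in a few lines. Visual IQ does not publish its statistical models, so the following is only a toy illustration of the general technique: given the ordered list of contacts that preceded a conversion, split the conversion's revenue across them, either equally or with more weight on contacts closer to the purchase. The channel names, revenue figure, and decay rate are all hypothetical.

```python
# Toy illustration of fractional attribution, not Visual IQ's proprietary
# models: split a conversion's revenue across the contacts that preceded it.

def linear_credit(contacts, revenue):
    """Equal credit to every contact on the path (contacts assumed unique)."""
    share = revenue / len(contacts)
    return {c: share for c in contacts}

def time_decay_credit(contacts, revenue, decay=0.5):
    """Later contacts (closer to the conversion) earn exponentially more credit."""
    weights = [decay ** (len(contacts) - 1 - i) for i in range(len(contacts))]
    total = sum(weights)
    return {c: revenue * w / total for c, w in zip(contacts, weights)}

# Hypothetical path: a display ad, then an organic search visit, then a
# paid search click immediately before a $100 purchase.
path = ["display", "organic_search", "paid_search"]
equal = linear_credit(path, 100.0)
decayed = time_decay_credit(path, 100.0)
```

Under equal credit each contact earns a third of the revenue; under time decay the final paid search click earns the most. A production system would estimate these weights statistically from the data, across attributes like campaign, site, keyword, creative and date, rather than fixing them by rule.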

Visual IQ offers its products as a hosted service with monthly fees ranging from $7,500 to $25,000. Cost depends largely on which components are used, with some additional fees for data storage. The company may adopt volume-based pricing in the future. Visual IQ has 14 clients, including very large advertisers, agencies and Web publishers. Although a majority of its clients use the system only for campaign-level reporting, about one-third work with cookie-level data.

=====================================================
*The primary definition of “dilemma” is “a situation requiring a choice between equally undesirable alternatives.” (Dictionary.com) I don’t think that really applies here. Presumably Visual IQ has in mind the secondary meaning of “any difficult or perplexing situation or problem.”