Tuesday, December 17, 2013

The Real Reason Many Digital Analytics Implementations Fail


There is a dirty little secret that those who sell digital analytics technology won’t tell you. More often than not, the data that initial applications yield just sits on the servers and is never used effectively, if it is used at all.

This probably will not come as a surprise to many of you reading this. If you are an experienced practitioner, you have likely seen it firsthand. So why are expectations, so artfully set in the technology sales cycle with beautifully polished demos and proofs of concept, not fulfilled initially? Note that I say “initially” because expensive re-implementation efforts usually follow, and those tend to be more successful.

So what happens when you roll out your brand new analytics technology (hopefully on time and on budget ;-) to great fanfare and expectation, and you find out a month later that it is getting very little user traction? Of course, you run a survey to discover what is wrong, and you get responses like these:
  • I can’t find what I need; too much noise in the data
  • The tool is too hard to use
  • It does not tell me the right things
  • I don’t trust the data; it does not match what I had before
  • The data is inconsistent
  • I need more history for trending
  • I can’t slice it the way I want
  • I have my analyst get me what I need, and it takes them so long that it is useless by the time I get it
  • I cannot compare it to anything
  • Data is too old
  • I can’t easily export to Excel
  • The graphics suck (see: export to Excel)
  • I can’t make notes or track external events or share with others
  • It’s great for clickstream, but I need survey results, ads served, and inventory too

These are just a few examples; I could go on for quite a while here, so feel free to comment with your own.

For many, the initial instinct is to blame the technology. This works really well if the folks who own the implementation are not the ones who selected it. In most cases, though, the technology is not at fault. This truth tends to be reinforced by that customer at the user conference (the one we got to attend for free last year) who proudly showcased all the wonderful things we cannot seem to do.

The real barrier to acceptance tends to lie in the failure to properly define, document, and act upon the true functional requirements of the application. 

This fact has not gone unnoticed by the community. (See How to Choose a Web Analytics Tool: A Radical Alternative and its comments for one take on technology selection failures, and Why Analytical Applications Fail for a view on the difficulties inherent in defining requirements.)

In my experience as a consultant and a practitioner, I have found that requirements definition failure generally results from one or more of the following mistakes:
  1. Assuming that purpose-built applications like digital analytics are useful out of the box and do not require a formal set of requirements
  2. Thinking the requirements used to select the technology are sufficient to drive the implementation
  3. Defining requirements without any defined process, or with a methodology that is completely ill-suited to analytic applications

Anyone with experience in this area will tell you that 1) and 2) are patently false for all but the simplest and most basic applications. As for 3), it happens quite often. Let’s break this down a bit by input, process, and output:

Input: All analytic applications are hungry for data that needs to be captured within the underlying business process and technology.  For customer analytics, think website clicks, survey responses, call center interactions, etc. Event capture nearly always requires some custom coding beyond the plug-ins and templates that the tools provide. Software developers are not great mind readers; they need solid requirements to do that well.
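
To make this concrete, here is a minimal, hypothetical sketch (in TypeScript) of the kind of custom event capture an out-of-the-box page tag will not do for you. The endpoint, event name, and properties are invented for illustration; a real implementation would follow the chosen tool’s collection API and, more importantly, the documented requirements.

```typescript
// Hypothetical sketch: capturing a business-specific event that no default
// page tag or template knows about. Endpoint, event name, and property names
// are invented; requirements determine what actually gets captured.

interface AnalyticsEvent {
  name: string;                                 // e.g. "quote_requested"
  timestamp: string;                            // ISO-8601, set at capture time
  properties: Record<string, string | number>;  // custom dimensions and metrics
}

async function trackEvent(event: AnalyticsEvent): Promise<void> {
  // Send the event to a (hypothetical) first-party collection endpoint.
  await fetch("/analytics/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// The business cares about quote requests, not just page views; only a
// requirements conversation surfaces that.
trackEvent({
  name: "quote_requested",
  timestamp: new Date().toISOString(),
  properties: { productLine: "term-life", channel: "paid-search", leadScore: 42 },
});
```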

Process: Sticking with my customer analytics example, buying a tool in lieu of a custom build is an easy decision when the high-level requirements fit what is on the market. The tools themselves provide enough flexibility to satisfy most of the situations they encounter. That flexibility, however, comes at the price of complexity, which takes the form of a bewildering array of configuration options and combinations, any number of which can be applied to solve the same problem. Finding the optimal combination, even for those with the deepest expertise in a given tool, requires a well-communicated set of current and anticipated requirements.
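
As a toy illustration of that trade-off, consider a single requirement, classifying hits into marketing channels, satisfied in two different places in the pipeline. Nothing below is tied to any specific vendor; the field names and rules are invented, and the only point is that equivalent results can come from very different configuration choices, so requirements must decide which one is right.

```typescript
// Two hypothetical ways to derive the same "marketing channel" dimension.
// Path A does it in the tag at capture time; Path B leaves the hit raw and
// applies an ordered rule set downstream, the way a processing or
// classification rule inside the tool might.

type Hit = { referrer: string; queryString: string; channel?: string };

// Path A: classify at capture time, inside the tag's custom code.
function classifyAtCapture(hit: Hit): Hit {
  const channel = hit.queryString.includes("cid=email") ? "Email"
    : /google|bing/.test(hit.referrer) ? "Organic Search"
    : "Direct";
  return { ...hit, channel };
}

// Path B: classify downstream with an ordered rule table.
const rules: Array<[RegExp, string]> = [
  [/cid=email/, "Email"],
  [/google|bing/, "Organic Search"],
];
function classifyInProcessing(hit: Hit): Hit {
  const haystack = `${hit.referrer} ${hit.queryString}`;
  const match = rules.find(([pattern]) => pattern.test(haystack));
  return { ...hit, channel: match ? match[1] : "Direct" };
}

const raw: Hit = { referrer: "https://www.google.com/", queryString: "" };
// Same answer, different configuration path; history, latency, and reprocessing
// needs (i.e., requirements) determine which is the better choice.
console.log(classifyAtCapture(raw).channel, classifyInProcessing(raw).channel);
```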

Output: Again, purchased analytic applications usually come with a slew of predefined reports, but their utility is directly tied to how well those report designs mesh with the application configuration and the preferences of the intended audiences, not to mention the devices on which those reports will be viewed. As such, they too must be implemented and modified based on documented requirements.


With this in mind, I will detail in my next post why requirements definition for analytic applications tends to go off the rails and my thoughts on how to get it right. 

Thursday, October 31, 2013

The Sum of its Parts





Welcome to what will be the first in a series of blog posts covering the art and science of decision support as it applies to internet properties, including content websites, e-commerce sites, and attendant applications for fixed and mobile devices.

Although this covers the same domain space as Web Analytics (WA), Digital Analytics (DA), Business Intelligence (BI), Data Warehousing (DW), and even Big Data (BfD), I find that these terms are more useful for selling technology and consulting than for describing real business problems and proposed solutions. For example, I’ve heard it said that, these days, a Web Analytics application is simply a BI application that Marketing will pay for.

What is clear is that the WA, DA, and BI/DW spaces are rapidly converging out of necessity. Ask marketing decision makers what they like least about the way their data is delivered to them, and most will tell you that having to use bespoke, siloed applications for clickstreams, search, ad performance, attribution models, CRM, etc. is not only inefficient but also ineffective, as they often produce conflicting information.

After purchasing these tools on their own, business leaders realize they require integration of this data and a coherent access path. This is mandatory, since customer acquisition, conversion, and retention decisions are not (or at least should not be) made entirely independently of each other. When attention turns to profitability, additional information is needed around revenues; the cost of goods owned, transported, and sold; inventory levels; labor rates; and so on. This level of integration is only practical using a data warehouse or one of the new cloud-based integration and presentation platforms. More on those in a future post.
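
As a back-of-the-envelope sketch (with invented table shapes, field names, and figures), here is the kind of join across clickstream, order, and ad-spend silos that only becomes practical once the data shares keys in a warehouse or integration platform. Each silo can look healthy on its own while the combined picture tells a different story.

```typescript
// Toy, in-memory join across three silos on a shared campaign key.
// In practice this lives in a data warehouse or cloud integration platform;
// the shapes and figures below are invented for illustration only.

type Clicks  = { campaign: string; visits: number };
type Orders  = { campaign: string; revenue: number; cogs: number };
type AdSpend = { campaign: string; spend: number };

const clicks: Clicks[]  = [{ campaign: "spring_sale", visits: 120_000 }];
const orders: Orders[]  = [{ campaign: "spring_sale", revenue: 580_000, cogs: 410_000 }];
const spend:  AdSpend[] = [{ campaign: "spring_sale", spend: 150_000 }];

for (const c of clicks) {
  const o = orders.find(x => x.campaign === c.campaign);
  const s = spend.find(x => x.campaign === c.campaign);
  if (!o || !s) continue; // silos that share no key are exactly the problem
  const margin = o.revenue - o.cogs - s.spend; // only visible after the join
  console.log(`${c.campaign}: ${c.visits} visits, margin ${margin}`);
}
```

In this made-up example, a campaign with strong traffic and revenue turns out to be barely profitable once cost of goods and ad spend are joined in.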

Of course, looking at history is also insufficient, since a full decision loop includes analysis, projection, action, and reporting the results of those actions against expectations. Actions in this case are not confined to changes in strategy, tactics, or resource allocations. Digital properties afford us the ability to change what we present to the customer frequently, even continuously, in the form of personalized experiences and marketing experiments. These tools need to be integrated as well, but they are often separated both technologically and organizationally.

When this reality becomes apparent, business leadership often looks to IT to provide an architected solution. This approach was feasible, often after some fits and starts, with Financial, Supply Chain, and CRM data, which was mostly structured and static. Even the protracted ERP/CRM/DW efforts that lasted years eventually bore some fruit in many cases. In the digital world, however, we operate on Internet time. The data we must integrate and use is often neither structured (customer textual feedback, video content) nor static, as key data like prices can change many times a day. We are expected to acquire, analyze, and present useful information with very low latency, sometimes in real or near-real time.

Historically, the software industry was content to lag behind the decision support needs of its customers and react with half-baked and often re-purposed solutions, allowing more nimble startups to grab market share by innovating and selling directly to business-side buyers, who were much more willing than their risk-averse IT counterparts to go with less established vendors. That was then. These days, the established players in software are gobbling up analytics startups quickly, as they recognize the need for integrated solutions with robust service and support capabilities and face a market where CMOs are spending more on technology than CIOs.

This shift in buying power has specific implications. In my experience, and that of my industry contacts, it is rare in larger enterprises to find one person who has both the expertise and the authority to make the architectural and purchasing decisions for the decision support technology stack. In other words, it is common to see Finance, Marketing (traditional), Marketing (Online), HR, IT, etc. all making purchase decisions on BI/Analytics technology without any real collaboration or architectural vision. The availability of these capabilities as cloud-based services has drastically lowered the barriers to purchase, further exacerbating the problem. This is great for the vendors, but IT often loses any real control over, or even awareness of, all the enterprise data assets living in the cloud. Eventually, I predict, this situation will prove unsustainable for most firms as they realize that the whole of their decision support capability is drastically less than the sum of its parts.

What do you think? How do we regain the ability to architect our decision support toolsets?