Friday, January 10, 2014

Why do analytic application requirements have to be so hard?


In my last post, I attributed the failure of many analytics and BI software implementations to poor requirements definition.  In this segment, I will discuss why I think this happens as often as it does, and some ways to avoid that costly result.

There are many common misconceptions out there in the software development community around analytic applications.  One of the worst is that proper requirements definition is not all that important, and that you can approach it as a “lite” version of the traditional requirements definition process used for more process-driven projects.

Another one has to do with the basic purpose of requirements definition.  It is often treated as a perfunctory step mandated by a pre-defined methodology, one that only a time-consuming set of interviews and a long deliverable document can satisfy.  Worse yet, this might consist of an endless list of line items beginning with “the system shall…”  I find these to be nearly useless in the context of analytic applications.  Defenders of this approach will counter that these are only “Functional Requirements” and are simply a prelude to a set of “Technical Requirements” that actually become the inputs to the development phase.  They go on to point out that this step helps users get a handle on, and prioritize, their own needs and desires.  That’s acceptable if the technical requirements are actually developed, but how often does that happen with any real rigor?

This cookie-cutter approach has never worked for me, whether we are talking about generic BI applications or specific analytic applications like digital analytics.  It also makes little difference whether we are implementing a pre-packaged application or building one from scratch.  What I try to stress on the projects I am responsible for is that requirements, no matter how they are documented and communicated, exist for the sole purpose of guiding those who will write the code and/or configure the rules for your application.  Failure here equals project failure. Period. Bad requirements lead to failed tests, bad data, delays, cost overruns, and unhappy users.  This means not only getting the actual requirements right, but getting them right at the right time and in a form that is most useful to your developer(s).  That requires asking for their input and listening to it.

So where is the disconnect? I think it is the tendency of project managers and business analysts, whose experience comes mostly from operational software projects, to apply the same techniques to an analytic application.  This is like trying to drive nails with screwdrivers.  Operational applications like general ledgers and reservation systems are generally procedural or even transactional in nature, with well-defined and static processes and user roles.  One step follows the next in a predictable way and can be depicted with flow charts and swim lanes.  In BI, the ‘use cases’ and roles are dynamic, even unpredictable.  The analytic process tends to be iterative (the answer to the last question defines the next question), collaborative, and personal, in the sense that different users like to present the same information in different ways.  Even so-called ‘operational’ BI, where the decisions to be supported are pre-defined, has to tolerate edge cases where the decision trees sprout new branches on the fly.

One alternative approach is to fall back on our instincts and begin the requirements definition process by trying to envision what the user should actually see, then working backwards from there.  That works pretty well if we are talking about a reporting application (not to be confused with analytics – a distinction for a future post), where the desired displays can be mocked up and become the de facto requirements for the data modelers and report writers to work from.  Of course, this only works if the reports never change materially.  The point is that we are not specifying static processes or reports.  We are specifying environments that must be flexible, dynamic, and modifiable by people other than those who developed them.  These environments are defined by the following (a rough sketch of what such a specification might look like follows the list):
• The events to be captured,
• The attributes of those events we need, and
• Any necessary context – this will vary by application type, but using customer analytics examples:

- The event – when did it occur and in what sequence?
- The customer/user contact – where did it come from?
- The customer/user origin – what role, segment, or campaign?
- The property or platform – what device or what brand?

• Derived metrics, like variances, rates, or indices
• Any required projection or prediction algorithms
• (If applicable) Connection points to operational systems for quick action on delivered insights (e.g., media purchases)
• Requirements of any other downstream applications your system is feeding; these could be:

- Personalization tools
- Pricing tools
- Testing tools
- Third-party website tags

• So-called “non-functional” (I love that term) requirements like security and retention periods
• For application configuration, we can add:

- Presentation preferences (tables, graphs)
- Menu structures
- Historical comparisons
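
To make the list above concrete, here is a minimal sketch of what such a specification might look like by the time it reaches a developer, written in Python.  The event names, attributes, context dimensions, and metric definitions are illustrative assumptions drawn from the customer analytics examples above, not a prescription; the point is simply that every item on the list can be expressed in a form a developer can build against and a tester can verify.

    # Illustrative only: a hypothetical requirements specification for a
    # customer analytics application, expressed as data a developer can
    # build and test against. All names here are assumptions.

    SHARED_CONTEXT = {
        "event":    ["timestamp", "sequence_number"],  # when it occurred, in what sequence
        "contact":  ["channel", "referrer"],           # where it came from
        "origin":   ["role", "segment", "campaign"],   # what role, segment, or campaign
        "platform": ["device_type", "brand"],          # what device or brand
    }

    EVENT_SPEC = {
        "purchase":  {"attributes": ["order_id", "revenue", "currency"],
                      "context": SHARED_CONTEXT},
        "page_view": {"attributes": ["page_url", "page_title"],
                      "context": SHARED_CONTEXT},
    }

    # Derived metrics defined up front, so "conversion rate" means the
    # same thing to the analyst, the developer, and the tester.
    def conversion_rate(purchases, visits):
        """Purchases per visit -- one of the derived metrics named above."""
        return purchases / visits if visits else 0.0

    # Downstream connection points (personalization, pricing, testing
    # tools, third-party tags) get the same treatment: named feeds with
    # agreed fields.
    DOWNSTREAM_FEEDS = {
        "personalization": ["segment", "last_purchase_timestamp"],
        "pricing":         ["order_id", "revenue", "currency"],
    }

    print(conversion_rate(purchases=120, visits=4800))  # 0.025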

That’s pretty much all there is to it.  If you get these things right, you will have all the necessary building blocks in place in your application to construct a satisfying experience for your stakeholders at the outset.  It may not be perfect, but the fixes and enhancements identified in testing or post launch should be relatively easy to implement.  On the other hand, if you miss on the basic data acquisition and manipulation requirements, you will likely end up ripping up quite a bit of track.

A note for those of you who design and implement these applications as part of a business organization, that is, outside of IT: don’t fall into the trap of thinking that just because you are not subject to the tyranny of a formal and rigorous methodology for application development, you should operate without any process at all.  At minimum, you need:
• A project plan
• Defined project roles and responsibilities
• A documentation template that all project members are comfortable with
• A complete test plan, based on the functional and technical requirements, that includes an integrated testing cycle exercising the event data, the processing rules, and the presentation layer after the code for all of those elements is frozen (a sketch of one such check follows the list)
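
To illustrate what that integrated cycle can mean in practice, here is a minimal sketch, again in Python, of a check that captured events actually satisfy the agreed requirements.  The event types and field names are hypothetical, carried over from the earlier sketch; a real test plan would cover the processing rules and the presentation layer in the same way.

    # Illustrative only: a tiny integrated check that captured event
    # data meets the agreed requirements. Event types and field names
    # are hypothetical, reused from the earlier sketch.

    REQUIRED_FIELDS = {
        "purchase":  ["order_id", "revenue", "currency", "timestamp"],
        "page_view": ["page_url", "timestamp"],
    }

    def validate_event(event):
        """Return a list of problems; an empty list means the event passes."""
        required = REQUIRED_FIELDS.get(event.get("type"))
        if required is None:
            return ["unknown event type: %r" % event.get("type")]
        return ["missing required field: " + f for f in required if f not in event]

    # In an integrated cycle, checks like this run against real captured
    # events after the capture, processing, and presentation code is frozen.
    sample = {"type": "purchase", "order_id": "A-1001", "revenue": 49.95}
    print(validate_event(sample))
    # ['missing required field: currency', 'missing required field: timestamp']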


Winging it with folks who have never done this before almost never ends well.  Analytic applications that deliver really valuable insights are deceptively complex and represent a significant investment. They require a defined implementation process using experienced folks.  If you do not have them, hire some.

1 comment:

  1. Thank you, Stephen, for eloquently describing the issues and the importance of requirements gathering and documentation that clients often don't appreciate. It's tough to convey this to the parts of the business that don't have the time or experience, and that just expect reporting of outcomes rather than true insights. Looking forward to your next article.

