It’s now official. Agile methodology for data and analytics has indeed reached the peak of the hype cycle. How do I know this? Because the venerable strategy firm McKinsey has blessed Agile as a data ‘transformation’ enabler. See this white paper. I encourage you to read it in its entirety, but I will highlight some of the major points here.
The main thesis of the piece is that data has become a key strategic asset for their large enterprise clients (!) and that the diversity, volume, and velocity of that data require a high level of agility to leverage it quickly enough to effectively support decision making and opportunity discovery.
The key challenges these clients face are integrating the data silos that their IT architecture creates and drawing a direct connection between data management success and quantifiable business benefits. This, in turn, makes it difficult to justify the significant investments required to manage diverse data at scale.
They point out that Agile methodology has been adopted by IT management to make application software development more responsive to business needs. They then posit that Agile can have a similar impact on the establishment of enterprise data management capabilities in the age of Big Data.
Wow! They are just now coming to the realization that enterprise data management is a good thing and that traditional IT practices in this sphere can be ponderous, excruciatingly slow, exceedingly expensive, and out of touch with their missions?
The analytics community has known this for decades. Typically, the fix has been for business areas to go ‘rogue’ and build out their own data capabilities to drive their analytics. This is an old issue, but it is now getting new currency with McKinsey’s IT clients because technology, in the form of cloud-based BI platforms, powerful self-service BI tools, and APIs, makes ‘roll your own’ analytics practical and relatively cheap at scale.
Of course, this is always the case when technology creates new-found BI ‘democracy’: governance suffers, and the data silos quickly reach their limits when business operations require true cross-functional analysis with integrated data.
McKinsey's solution approach has four major aspects:
- Create and empower Agile cross-functional teams (scrums)
- Update the technology infrastructure to support and integrate “Big” and legacy data assets
- Emphasize new forms of communication to demonstrate value and discover new opportunities
- Develop KPIs to measure success
Here is where I take issue with what they are saying: it makes perfect sense from a strategic perspective but can be very difficult to implement tactically. Agile works best when applied to a discrete software product with its own life cycle. Forming scrum teams to work in parallel sprints, churning out stories and epics, and then ultimately disbanding can be practical in this scenario.
Data management, however, is not a project, product, or even a platform. It must be an ongoing capability if it is to work. To McKinsey, this requires drafting business experts to join their highly talented and experienced IT counterparts and walling them all off in a data lab. This, however, cannot be a short-term assignment. In fact, if they succeed in discovering new opportunities, these labs will create an ongoing need to remain in place. Even the largest organizations I have worked in cannot afford to take that talent from their native organizations and send them to a lab for long.
For blue-chip consulting firms, promoting this kind of transformation initiative makes for some very lucrative consulting opportunities (I know; I have worked on some of them). I believe what works better in practice is to take an entire line of business and build (or rebuild) it from the ground up to support not only a comprehensive data management capability but a data-driven culture, where everyone has some direct responsibility in their job description for acquiring, processing, deploying, and using data in their daily work. The success of that effort can be used to propagate the culture across the other businesses in the enterprise.
Perhaps instead of thinking in terms of minimum viable products, we should set a minimum viable business unit as the initial data transformation goal. From that, we can deliver, iterate, improve, and expand by example.