Sunday, January 3, 2016

My 2016 Business Intelligence & Analytics (BI&A) Wish List

It's that time of year when we bid goodbye to what we did not like about last year and look forward to our best hopes and wishes for the coming year. As I look back on the past year in BI&A, here is what I'd like to see in 2016:

1. Some real standards the software vendors will respect
I'll admit I've wanted this for a long time, but I can still hope. We are now in the big-data-driven third generation of BI&A technology. The first was relational databases. Then, the industry settled on SQL as the standard query language, and that facilitated a whole industry with interoperable query tools, ETL tools, database platforms, and a generation of expert professionals. The second was multi-dimensional technology. In this case, we never even saw a standard for query languages aside from some weak attempts to extend SQL. Now 'NoSQL' is the emerging non-standard. Metadata standards? Dream on. The one credible attempt at it, the Common Warehouse Metamodel in the 90's, never really caught on despite support from several major vendors. Those same vendors eventually decided that proprietary solutions could become de-facto standards if they acquired enough market share through buying out smaller competitors. So metadata standardization is achievable, but mostly with single-vendor solutions. This brings me to my second wish.

2. The next consolidation wave
As is always the case during a technology transition, you get a wave of startups with new technology and established players trying to reposition older technology, all fighting to win over the early adopters in the market and some love from the industry analysts. In time, the weaker players drop out of sight and the stronger ones get swallowed up by the big enterprise players looking to buy technology and market share. It will be a little different this time, as SaaS deployments will allow more of the upstarts to thrive on their own. Some may end up pushing out older players in the process. For me, though, there are just too many incomplete solutions out there right now, and the consolidation wave cannot come fast enough.

3. Fewer Big Data wannabes
Another consequence of this technology shift is the emergence of a huge number of resumes on the market promising a depth of knowledge and experience in big data technology stacks that is more hype than substance, which becomes clear ten minutes into an interview. I’d rather work with seasoned BI&A professionals who have mastered the basics of software engineering, project management, requirements development, data governance, and testing, and who have demonstrated the ability to learn and adapt to new technology quickly.

4. More agile BI&A teams and less Agile methodology zealotry
BI&A pros have known for decades that agility is mandatory in our work and that waterfall methods do not work. Requirements are generally not completely known in advance and are only revealed through prototyping and iterative development. Value should be delivered on an ongoing basis. On the other hand, some of the main tenets of the now-revered Agile movement do not work all that well in the BI&A space. We do not develop structured applications so much as we strive to create environments where our users can create their own applications. This limits the development of workable user stories in advance. The evolution of our platforms over time and the need for stable development and support teams do not lend themselves well to the scrum concept as it is typically advocated. Yes, we need to be agile (small a), but not necessarily Agile (capital A).

Business Intelligence and Analytics has never been more recognized as vital to success in business, government, science, and education. Our tools and technology are better than ever. Those wishes have come true. Now, I wish for all who read this a happy and successful 2016.

Sunday, November 1, 2015

DIY Analytics – Is it right for you?

As I consider the all-too-long list of things that need fixing in my house and watch yet another spot from the big-box home improvement retailers, I realize why the Do It Yourself (DIY) trend has legs. You know your own house, and DIY gives you control over the schedule, quality, cost, and outcome of the work. Of course, you had better have the necessary expertise and be able to acquire the right materials and tools. If you don’t have that level of confidence, you outsource the work and hope you can keep an eye on things and get your money’s worth.

The large-scale home improvement retailers succeed by making the materials, tools, and expertise available and by supplying that confidence. Many times, you start with something easy, it works out, and you feel empowered. Other times, you take on a little too much, your work does not hold up, and you end up calling in the pros and putting your tools away.

We often see the same story in business analytics. Finance, marketing, HR, and supply chain specialists are lured by tool vendors into thinking they are better off with DIY analytics and freeing themselves of the IT pros with their long schedules, hefty price tags and results that can be, shall we say, less than satisfying.

In this case, the folks that make money are the BI and Analytics (BIA) technology vendors who convince business organizations that DIY Analytics is the way to go and happily sell tools and some training.  Then they move on to the next sales cycle. Sometimes it works out, but often these applications fall apart over time for lack of reliable raw materials (data) or solutions that are not built to last and cannot be adequately supported by those who developed them. At this point, the IT pros are brought back in to fix things and the tools end up on the shelf.

There is a better way. IT organizations have found success by taking a lesson from the home improvement retailers and supporting DIY analytics successfully. They do not insist on building all the reports, dashboards, and analytic applications themselves. Instead, for those customers who prefer the DIY model, they provide a set of shared tools and trusted data. Their customers then build and enhance applications to their own preferences and on their own schedules, while controlling costs by paying for much of their own labor.


There is a critical ingredient necessary to make this model work, though. The home improvement retailers enable their customers with expert advice, training videos, and communities of other DIYers sharing knowledge. IT shops can provide that same kind of support structure. It often comes in the form of a formal dedicated organization within IT. I have seen it called a BI Community of Practice, a Competency Center, or a Center of Excellence. Whatever the name, the mission is the same: create a resource that gives its customers the technology they need to succeed, the confidence to do it themselves, and a flexible set of support models for the services best left to the pros. These include security, backups, capacity planning, performance tuning, professional training, documentation, and proactive knowledge sharing that helps the entire community use their resources efficiently and effectively. This creates an environment where the entire business wins with better service, better decisions, and better performance. Oh, and when the pipes leak, just call a plumber.

Monday, July 6, 2015

Cyber Security and Business Analytics: Imperfect Together

This week, I was reading an excellent piece here about the cyclical nature of the Business Intelligence/Analytics (BI) industry. The assertion is that priority tends to swing between periods of high business-driven enablement and IT-driven governance. The former tends to be brought on by advances in technology, and the latter by external events, regulation, and necessity. We are currently at the apex of an enablement cycle at the expense of governance. One casualty of lax governance is often cyber security.

Recently, we have seen a rash of high-profile data breaches. One of these was the large scale theft of data from the health insurer Anthem. This one was notable as it was the result of a vulnerable data warehouse where sensitive data was left unencrypted.

Those of us who practice BI and Data Warehousing professionally have a paradox to deal with. We have always been evaluated on our ability to make more data available to more users on more devices with the least effort to support business decisions. In the process, we tend to create ‘one-stop shopping’ and a slew of potential vulnerabilities for those who would access proprietary data with criminal intent.

The software vendors in our space have been all too complicit in this. After all, what sounds better to the business decision-makers they market to: “multi-factor authentication” or “dashboards across all your mobile devices”? “advanced animated visualizations” or “intrusion detection”? “data blending” or “end-to-end data encryption”?

How about “self-service business analytics” or “help yourself to our data”? Consider how easy we make it for the users in an enterprise to export just the useful parts of a customer database, along with summaries of transaction history, to a USB stick and walk out the door with it.

This idea that BI and data warehousing requires more attention to security is starting to gain traction, however. A quick web search reveals that the academics are starting to study it and the leading established vendors in the space are starting to feature it in their marketing in ways I have not seen before. See the current headline on the MicroStrategy website for one example.

The main takeaway here is that BI and data warehousing practitioners need to consider cyber security in architectures and applications the same way it is done in transaction processing:
  •	Get a complete BI vulnerability assessment from a cyber-security professional
  •	Calculate the expected value of an incident (probability of an event times the cost to recover) and allocate budgets accordingly; a back-of-the-envelope sketch follows this list
  •	Demand proven security technology from your vendors and integrators around features such as authentication, end-to-end encryption, and selective access controls by organizational role
  •	Don’t be afraid of the cloud. The leading vendors of cloud services employ state-of-the-art security technology out of market necessity and are often the most cost-effective solution available.
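To make the expected-value arithmetic in the second bullet concrete, here is a minimal back-of-the-envelope sketch in Python. The incident scenarios, probabilities, and recovery costs are hypothetical placeholders, not benchmarks; the point is only the likelihood-times-cost calculation used to size a security budget.

import operator

# Rough annualized loss estimate for a BI environment.
# All scenario names and figures below are hypothetical placeholders.

def expected_annual_loss(probability_per_year, cost_to_recover):
    """Expected value of an incident: likelihood of the event times the cost to recover."""
    return probability_per_year * cost_to_recover

scenarios = {
    "stolen credentials on the BI portal": (0.10, 500_000),
    "unencrypted warehouse extract leaked": (0.05, 2_000_000),
    "bulk export walked out on a USB stick": (0.02, 1_000_000),
}

total = 0.0
for name, (probability, cost) in scenarios.items():
    loss = expected_annual_loss(probability, cost)
    total += loss
    print(f"{name}: expected loss ${loss:,.0f} per year")

print(f"total expected loss: ${total:,.0f} per year")  # a starting point for allocating budget

The numbers are less important than the habit: putting even a rough dollar figure on the downside makes the security line item easier to defend than a vague appeal to risk.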



What’s old is new again - BI edition


Those of us with a long history as business intelligence (BI) practitioners have pretty clear memories of the days when an overhyped technology promised to change the game by freeing business organizations of IT tyranny with a new class of products that made self-service reporting and analytics better, faster, and cheaper. We saw this with the arrival of relational databases. Believe it or not, they were originally all about data access, not transaction processing. We saw it again when Online Analytical Processing (OLAP) became available on top of Online Transaction Processing (OLTP). OLAP brought data access directly to our spreadsheets and PowerPoints, where we really wanted it. In both cases, business organizations bought this technology and built organizations to use it, thinking they could declare their independence from IT. It worked splendidly for a while. IT created extract files from their applications and celebrated getting out from under a backlog of reporting requests. Businesses felt empowered and responsive as they created reports, dashboards, and even derivative databases integrating internal and external data within their siloed subject areas.

Then reality set in.

All these new products required maintenance, documentation, training, version control, and general governance. “Shadow IT” organizations sprang up. They often became, in aggregate, far more expensive and just as cumbersome as what they replaced. Worse, the software vendors happily exploited this balkanization of larger organizations by selling redundant technology that had to be rationalized over time, leaving licenses unused and non-transferable. Wouldn’t it be nice to buy a slightly used BI software license at a deep discount?

The fatal flaw in this arrangement is the proliferation of overlapping and inconsistent data presentations that we call multiple versions of the truth. These create mistrust and cause executives to go with their guts in lieu of their data.

Each of these technology advances, along with even faster hardware evolution, did have the impact of making decision support and analytics far more powerful even as the open source movement made it more accessible. This, in turn, created competitive advantage for those who learned to exploit it and made a strong analytics capability mandatory in today’s commercial climate.

One problem still remains. As we like to say, you can buy technology, but you can’t buy your data. Today’s analytics require integrated and governed data across finance, operations and marketing, online and offline, internal and external.

That brings us to the current generation of revolutionary BI tools like the latest data visualization technology that is all the rage right now. (I won’t name names, but think “T” and “Q”.) Just like the previous BI waves, they exploit technology advances very effectively with features like in-memory architectures, wonderful animated graphics for storytelling and dashboards, and even data integration through “blending” and Hadoop access. These products have been hugely successful in the marketplace and are forcing the bigger established players to emulate and/or acquire them. The buyers and advocates are usually not IT organizations, but business units who want to be empowered now.

What does this mean for business decision makers? Just like the technology waves that preceded them, these new visualization tools do not address the organizational and process requirements of a highly functional and sustainable BI capability. Data and tools must be governed and architected together to create effective decision support.  Otherwise, you end up with unsupported applications producing powerful independent presentations of untrustworthy data.

We have seen this movie before and we know how it ends.


Mr. Robinson is currently a Business Intelligence and Analytics consultant with Booz Allen Hamilton. He has previously held practice and consulting leadership positions with Ernst & Young, Oracle, Cox Automotive (AutoTrader.com), and Home Depot.com.

Saturday, November 29, 2014

You say you want the real BI requirements?

We often preach how important effective and accurate business requirements are to the success of BI projects. We also lament that gathering these requirements can be difficult. There are any number of reasons for this. They range from analysts claiming “users can’t tell me what they want” to statements from customers like “just give me the data and I’ll know what to do with it.” That story usually does not end well.

Others like to play the Agile card and claim that documented requirements are not necessary and that they will get to the right place through prototyping and iterative development. That may happen in some cases, but it becomes hard to predict how long this will take and what the costs might be using this method exclusively.

Let me make a couple of suggestions that might help make your requirements more effective. The first has to do with process. As business analysts, we are taught to discover, design, and document business processes so that we may find ways to enable them with information technology. But, you may say (and I have), BI has nothing in common with a traditional business process like order-to-cash. In analysis, the result of the preceding question determines the next one, right? This lack of predictability makes requirements gathering difficult if one goes about it using conventional methods.

Simply documenting capabilities is another approach, but it is a cop-out. How worthless to a developer is a spreadsheet that says “the System Shall…” about 150 times?

The fact is that we are in the business of supporting decision making, and decision making IS a process. Think about the purchase decisions we all make. We gather information from a variety of sources, some of it structured, like price lists, supplemented by unstructured information like reviews, and often aided by collaboration with friends over social media. This is a defined process that we enable with technology.

Business analysts should never begin requirements gathering by trying to mock up reports or by defining the necessary data. Instead, find out which decisions are to be supported. Discover how they are made now, and then work to improve the process with BI technology. The data acquisition, integration, presentation, and collaboration requirements will then fall right out, and you will end up with a document that takes a systemic approach to a better decision support capability.


I promised a second tip, so here it is. When you set up requirements interviews with your customers, ask them to produce the spreadsheets they use now. Even if your customers have trouble telling you what they need, the spreadsheets usually speak volumes. They are the best evidence of how data should be managed and presented, as they are usually used to do what current reporting systems cannot. Oh, and don’t be satisfied with just replicating what the spreadsheets are doing. Think of them as prototypes that are limited by the technology that you are there to improve.

Monday, October 13, 2014

Confusing BI with traditional IT applications development – 25 years later

Apologies for the delay between posts.  The long weekend afforded me the time to get back at it.

When I began my career as a BI practitioner over 25 years ago, I came to it trained academically in Decision Sciences and from a professional background as a reformed financial analyst and IT business analyst / project manager. This made me well qualified to take on early BI projects as, in the early days, most BI work was financial in nature. What I discovered pretty quickly is that traditional IT methods and skill sets did not translate very well to BI projects. In fact, I had to unlearn much of what I had picked up managing transaction processing application development in order to succeed with BI.

Part of this was the direct result of the technology. ROLAP might have used the same DBMS technology as I was accustomed to, but data modeling took on a whole new meaning with de-normalized, read-only star schemas. MOLAP? That was another animal altogether. Beyond that, the goal of an enterprise data warehouse required an eye toward not only the current project, but laying the groundwork for inclusion of additional subject areas, regardless of how one went about that <insert Inmon vs Kimball religious debate here>.
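For readers who have not worked with dimensional models, here is a minimal sketch of the star schema pattern referred to above, expressed with pandas so it runs anywhere; the table and column names are invented for illustration, not taken from any particular project.

import pandas as pd

# A toy star schema: a fact table of measures keyed to two small dimension tables.
# Table and column names are hypothetical.
dim_date = pd.DataFrame({
    "date_key": [20150101, 20150102],
    "fiscal_quarter": ["Q1", "Q1"],
})
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "category": ["Hardware", "Software"],
})
fact_sales = pd.DataFrame({
    "date_key": [20150101, 20150101, 20150102],
    "product_key": [1, 2, 2],
    "sales_amount": [100.0, 250.0, 75.0],
})

# A typical read-only query: join facts to dimensions, then aggregate by dimension attributes.
report = (
    fact_sales
    .merge(dim_date, on="date_key")
    .merge(dim_product, on="product_key")
    .groupby(["fiscal_quarter", "category"], as_index=False)["sales_amount"]
    .sum()
)
print(report)

The same shape scales from this spreadsheet-sized example to a warehouse with millions of fact rows; the de-normalized dimensions exist precisely to keep read-only queries this simple.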

Far more important, however, was the need to ditch standard waterfall methodology in favor of iterative development cycles that would often start with a proof of concept, perhaps part of a vendor bake-off, followed by a prototype, followed by iterative builds of the application.  The good news was that the user interfaces came out of the box; some even had Excel sitting in front of them, so little development or customization was needed there.  This alone was a departure from what we were accustomed to.  All of the effort needed to be focused on the design and optimization of the underlying data structures, along with the extraction jobs from source applications that fed them.

This in turn created a need for an entirely new kind of business analysis discipline, one that defined flexible environments for data navigation and investigation rather than pre-defined use cases of deterministic user interactions and transactions. Environments are defined by events, attributes, and dimensions instead of the business rules that characterize traditional requirements for transaction processing and simple reporting systems. I elaborate on this in my previous post on Agile BI.

Project managers, for their part, had to practice agility long before Agile became fashionable. They learned quickly that their customers would not tolerate lengthy, expensive data warehouse development cycles. They demanded the rapid value the technology vendors promised. This mandated breaking big projects up into smaller deliverables and iterative development that allowed users to experience capability they had never seen before and discover opportunities to create additional value even as they refined their original requirements. Scope creep had to be assumed and managed, along with expectations. Entirely new processes around data governance and end-user computing management needed to be developed.

BI developers came to understand that they needed to develop a certain tolerance for ambiguity in requirements and a need for flexibility in their products, at times even predicting what the next set of requirements might be given their knowledge of and experience with the underlying data. This was a huge advantage on any BI project.

QA folks, for their part, also needed to rethink their craft. For one thing, it became necessary to work very closely with the BAs (if the BAs did not assume this responsibility altogether). Assuring data quality is a very different task from testing transactions against a set of business rules. It puts an emphasis on automated comparisons with source data and on working with users who have tribal, experiential knowledge of the data as part of the user acceptance testing process.
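To make the idea of automated comparisons with source data concrete, here is a minimal reconciliation sketch in Python; the column names, tolerance, and table shapes are assumptions for illustration, not a prescribed QA method.

import pandas as pd

# Minimal reconciliation check: compare row counts and a control total between a
# source extract and the warehouse table loaded from it.
def reconcile(source, warehouse, measure, tolerance=0.001):
    issues = []
    if len(source) != len(warehouse):
        issues.append(f"row count mismatch: {len(source)} source vs {len(warehouse)} warehouse")
    source_total = source[measure].sum()
    warehouse_total = warehouse[measure].sum()
    if abs(source_total - warehouse_total) > tolerance * abs(source_total):
        issues.append(f"control total mismatch on {measure}: {source_total} vs {warehouse_total}")
    return issues

# Hypothetical extract and load results.
source = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
warehouse = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 29.5]})

for issue in reconcile(source, warehouse, "amount"):
    print("DATA QUALITY:", issue)

Checks like this do not replace the tribal knowledge of experienced users during acceptance testing, but they catch the mechanical load failures before those users ever see the data.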

So why bring all of this up now if it was all learned a generation ago? For one thing, I have come to understand that there are still application development shops out there, run by folks without direct BI experience, that are just starting to take on BI responsibilities from their users. For another, recent technologies such as Hadoop have created a need to rethink aspects of the BI development discipline, and the Agile movement has given the false impression that it translates literally to BI projects without modification.


In my next post, I will comment on what characteristics to look for when staffing BI project teams. Until then, all comments welcome.