Let me begin this series of posts with some historical context:
Ever since
the earliest days of what we refer to as self-service computing, there has been
controversy in Corporate America around the relative roles of IT and the user
(Business) departments in the development and administration of applications
that support knowledge workers. This balance of power has been disrupted by
several waves of technology, but the basic issues are always the same and
persist today. In the beginning, there was pure mainframe computing, not really
accessible to end users unless they were willing to write third-generation
language (e.g., COBOL) programs on punch cards, feed them into readers,
and then wait for a job to complete before collecting results from a printer. Then came
time sharing - the original clouds - which made things a bit more accessible
and immediate, but IT was still in firm control. Even then, some business
departments like Finance started hiring tech-oriented staff to configure and
administer highly customizable applications like general ledgers and the
early ERP systems. These also featured proprietary programming languages that completely
blurred the line between configuration and full-on application development.
Self-service
computing became a wave, and quickly gained the most traction within decision
support applications that were renamed ‘Business Intelligence’. IT, fearful
that ‘amateurs’ were putting critical processes at risk, moved to take over
that activity. It did so over the objections of business-side management, which valued
its control over these functions and priorities. Turf wars ensued.
Things only
got more complicated as higher level “4th generation” data handling,
planning, and modeling languages, such as SAS and FOCUS, were marketed to end-user
programmers and to the management that paid for them. At this point, there was
no turning back on the expectation that application development could occur outside
IT. One common reaction was for IT to create entirely separate environments,
based on somewhat smaller and cheaper hardware, that used data extracted from the
mainframes to perform dedicated modeling, planning, and reporting activity in
real time. This enabled dedicated end-user computing organizations that needed
care and feeding from IT but otherwise operated independently, as long as they
stuck to supporting non-mission-critical applications. (In practice, more than
a few slipped through.) One immediate consequence was a massive proliferation of
often redundant data extracts. IT responded by building data warehouses in the
hope of regaining some order and control over the source data, if not what
happened downstream.
From there,
the PC revolution came along with spreadsheets, databases and more powerful
higher level languages. Another layer of individual autonomy was created with all
the attendant risks and opportunities. Over time, the PCs became smaller and
more interconnected through wired and wireless networks until they became
completely mobile, not to mention owned by the users themselves. Things became
chaotic and constrained at the same time: these devices cannot natively
support the level of collaboration we want, nor the capacity to process the huge
volumes of transaction data we now collect.
IT was and
is still saddled with the responsibility of maintaining data integrity,
security, and availability across a set of processes operating over devices,
applications, and networks largely outside of their direct control. Currently,
this trend is being extended to the main operational applications, from which
internal data is still primarily sourced, as they are relocated to SaaS
(software-as-a-service) clouds. In a sense, the technology has come full circle,
but the basic dichotomy of how to balance control, responsibility, resources,
and workload between IT and users remains.
The other
major factor driving demand for true end-user BI is the service that we have
come to expect as consumers via the Internet from the likes of Google, which
understand natural language and make an enormous variety and volume of the
world's knowledge and data searchable and available within seconds. We then
use social networks to share that knowledge and collaborate with friends and
colleagues. We want that same power and ease of use in the workplace. As BI
professionals, we strive to provide that capability, which we refer to these days
as self-service BI; and the software and cloud-service vendors are trying to sell
it to us.
In the next
post, I will detail some of the best practices that I have learned from
organizations that have put successful self-service BI capabilities in place
across many diverse technology architectures.