From Business Intelligence to Enterprise IT Architecture, Part 6

Originally published 1 September 2010

This ongoing series of articles describes a new enterprise IT architecture that emerges from a reevaluation of modern business intelligence needs and technologies. The second article introduced the Business Integrated Insight (BI2) architecture as an evolution of data warehousing / business intelligence, while parts 3 through 5 drilled into the information layer, the Business Information Resource (BIR). This article examines the process layer of the architecture—the Business Function Assembly (BFA).

The BI2 Architecture

Way back in part 2 of this series, I described the Business Integrated Insight (BI2) architecture shown in Figure 1, the three layers of which are:
  1. Business Information Resource (BIR): The information, all the information and nothing but the information required by the business (described in parts 3, 4 and 5 of this series).

  2. Business Function Assembly (BFA): All (and I mean all) of the processes carried out in the business from conception through to death and beyond, using the information in the BIR.

  3. Personal Action Domain (PAD): Each and every interaction of people with the processes of the business, and through the processes with the BIR.
Having read my previous articles on the BIR (you have, haven’t you?) and comparing the structures of the BIR and BFA layers in Figure 1, I’m sure you can guess that I’m about to discuss this layer also in terms of axes—in this case, two. You are correct, but first, I need to do some myth-busting.

The Business Function Assembly

Part 2 of this series introduced the Business Function Assembly (BFA). As we saw there, a fundamental precept of the BFA is that everything, literally everything, that is done within a business is part of a process. This thought is probably business-as-usual for developers and users of operational systems. In fact, to suggest otherwise to these folks might well be considered heresy.

Traditional processes and business models focused on the strictly defined and regimented sets of procedures and steps required to automate manual processes. Whether talking about an automobile assembly line or the tasks involved in assessing and paying out on an insurance claim, the old approach of work-study (as described by F. W. Taylor in the early 20th century) and, subsequently, workflow software emphasized the definition of individual tasks and their linking together into optimized procedures or processes. Such processes were analyzed in depth and defined in advance, implemented in workflows and presented to users to follow slavishly and without question. Operational or production systems, of course, still benefit enormously from such approaches, gaining efficiency, productivity and reliability.

However, for this discussion, the most important aspect of such workflows is the inclusion within them of decision points—steps in the process where the workflow bifurcates (or trifurcates or more) on the basis of a decision. This decision, however simple or automated, is nonetheless a decision. Hold that thought!
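To make the idea of a decision point concrete, here is a minimal sketch in Python of the insurance-claim example: a workflow whose steps are plain functions, with one step where the flow bifurcates. The claim fields, the threshold and the branch names are all hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical claim record; the fields are illustrative only.
@dataclass
class Claim:
    amount: float
    approved: bool = False

def assess(claim: Claim) -> str:
    # The decision point: the workflow bifurcates on the outcome.
    # However simple or automated, this is nonetheless a decision.
    return "auto_approve" if claim.amount < 1000 else "manual_review"

def auto_approve(claim: Claim) -> Claim:
    claim.approved = True
    return claim

def manual_review(claim: Claim) -> Claim:
    # Placeholder for a human-in-the-loop step.
    return claim

# Each branch of the bifurcation maps to its own continuation.
BRANCHES: dict[str, Callable[[Claim], Claim]] = {
    "auto_approve": auto_approve,
    "manual_review": manual_review,
}

def run_workflow(claim: Claim) -> Claim:
    branch = assess(claim)          # the decision is made...
    return BRANCHES[branch](claim)  # ...and the process resumes on one path
```

A real workflow engine would add state persistence, timeouts and compensation, but the essential shape — defined steps, connectors and a branching decision — is the same.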

Myth #1: Decision making is a “no process zone.”
In the informational world, the idea that decision makers work within a process is anathema to many. Where, they ask, is there space in a process for creative thinking? How can users be innovative if confined to the fixed steps of a workflow? Such thinking, I believe, may demonstrate a somewhat outdated understanding of what processes and workflows imply.

Decision making has historically been seen as a stand-alone activity. Use cases typically revolve around managers analyzing sales performance, business analysts seeking to understand historical trends or predict future directions, and even executives tracking key performance indicators (KPIs). The focus is on the analysis and hopefully creative thinking that follows. Very seldom is the question asked: What happens next? The simple answer is that an existing process either continues as it was or is changed. The change may be simple and within the existing process structure (such as reassigning salespeople to different accounts) or it may involve a fundamental change in the process itself—implementing a new leads analysis step within the sales process. The point is that there was an existing process that came to a decision point, an analysis was performed, a decision made and the process resumed. In times of yore, the time between the decision point and process resumption could often be quite extended. In today's business environment, speedy decision making is a high priority. Operational business intelligence (BI) pushes this to the limit, reducing the time gap to minutes in many cases; and where decision making is moved from people to software algorithms, response times can drop below a second, as seen in automated stock trading, for example.

This business need for ever timelier decision making in ever more situations raises a fundamental architectural question for application development: Is there a valid boundary to be drawn between the operational and informational environments? While there certainly continues to be a substantial set of decisions with relaxed time frames (and we'll come back to them in a moment), my experience suggests that many decisions that were previously considered to be tactical—requiring resolution within a day or two—have gradually moved into that gray area between operational and informational applications where resolution occurs between half a minute and a couple of hours. From an implementation viewpoint, such “near-operational” decisions pose a greater challenge to data warehouse developers than to the providers of operational systems. In the latter case, with decision points occurring naturally in the workflow, implementations based on a service oriented architecture (SOA) or similar approaches can simply call whatever service is required to instantiate the decision point, assuming for the moment that the relevant information is available. The process flows naturally through the decision point from the before to the after situation. The data warehouse developer, on the other hand, is faced with the prospect of taking a previously asynchronous procedure—populating the warehouse and downstream data marts—and somehow bridging the gap between the operational processes before and after the decision point.
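The operational side of this contrast can be sketched in code. In an SOA-style implementation, the decision point is just another service call made synchronously within the process, so the flow passes naturally from the before to the after situation. The interface and service names below are assumptions for illustration, not any particular SOA product's API.

```python
from typing import Protocol

class DecisionService(Protocol):
    # Hypothetical interface: anything that can resolve a decision point.
    def decide(self, context: dict) -> str: ...

class RuleBasedPricing:
    """Illustrative stand-in for a service an operational workflow might call."""
    def decide(self, context: dict) -> str:
        # Assumes the relevant information is already in the context --
        # precisely the assumption that breaks down when cross-application
        # or reconciled data is needed.
        return "discount" if context.get("loyalty_years", 0) >= 5 else "standard"

def order_process(context: dict, pricing: DecisionService) -> dict:
    # The process flows through the decision point: call whatever service
    # instantiates the decision, then resume on the chosen path.
    context["price_plan"] = pricing.decide(context)
    return context
```

The data warehouse developer has no such luxury: the warehouse population path is asynchronous, so there is no single synchronous call site through which the before and after halves of the process can be joined.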

Of course, the operational system’s assumption that the relevant information is available at the decision point is invalid in cases where current, cross-application or reconciled data is required to make the decision. Hence, the emergence of operational BI and an increasingly blurred boundary between operational and informational systems. From an architectural point of view, the increasingly prevalent blurring of this boundary was a key driver for the concept of BI2.

Now, back to longer-running decisions. Is there any process involved in them? Or, perhaps more accurately, is there any process around them? Realistically, almost all business decisions of any significance demand the involvement of a number of people. Even the traditional business analyst, often seen sitting alone for hours in front of a screen full of spreadsheets, graphs, tables and pie charts, usually collaborates extensively with other people in the business both before and after this PC-centric phase. The prior work involves meetings with business people to understand their needs and investigations with peers and IT to source the required data. Subsequent work involves reviews with peers and managers, as well as presentations at meetings, all of which often drives substantial rework on the analysis, both revisiting business requirements and acquiring further data. All of this work relies on a process, however lightweight or ad hoc in nature.

Similar considerations also apply to decision making at a more senior level. Such work is highly collaborative, involving the creation of specialized teams, research into market trends, gathering existing documentation from a wide variety of sources, meetings, rework of initial and interim analyses, and so on. As anyone who has been involved in such decision making knows, this is a process (and sometimes, an exceedingly tedious process at that!). Not only is it a process, but it is a process that is repeatable, at least in part. The next time a similar decision is required, a similar procedure—perhaps with subtle differences—will be followed, including some or all of the same people. In this, we can see a workflow with steps and connectors, and even decision points, but of a more collaborative and variable nature than traditional operational workflows.

With a little further thought, we can see that a similar process applies to the case of our business analyst above and, in fact, to all decision making. Decision making, rather than being a “no process zone”, is more correctly described as an adaptive workflow. So, let’s restate the myth as Modern Premise #1: Everything a business needs to do today, whether operational or informational in nature, has a process or workflow of varying flexibility underlying it.
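The adaptive workflow idea can be illustrated with a minimal sketch: a sequence of steps that participants can reshape between (or during) runs, rather than a workflow fixed at design time. The step names here are invented for illustration, loosely following the analyst example above.

```python
from typing import Callable

# Each step transforms a shared state dict and passes it on.
Step = Callable[[dict], dict]

class AdaptiveWorkflow:
    """Minimal sketch of a workflow whose steps can vary between runs."""
    def __init__(self, steps: list[Step]):
        self.steps = list(steps)

    def insert_step(self, index: int, step: Step) -> None:
        # The defining property: the process itself is editable by
        # its participants, not only by its original designers.
        self.steps.insert(index, step)

    def run(self, state: dict) -> dict:
        for step in self.steps:
            state = step(state)
        return state

# Illustrative steps from the analyst example above.
def gather_requirements(s: dict) -> dict: s["requirements"] = True; return s
def analyze(s: dict) -> dict: s["analyzed"] = True; return s
def present(s: dict) -> dict: s["presented"] = True; return s
```

The next time a similar decision is required, the same workflow object can be rerun with subtle differences — a rework step inserted after review, say — which is exactly the "repeatable, at least in part" quality described above.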

Myth #2: Decision makers are always seeking information, and the more the better.
The vast majority of data warehousing and BI tools focus almost exclusively on data and information. Vendors sell them based on how much data they can handle, the analytic functions they provide and the types of decisions they can support. IT buyers tend to evaluate these tools in similar terms. You have to step beyond the pure BI market to performance or process management to find tools that position BI function within a process.

However, my experience of interviewing business users during the requirements gathering phases of data warehouse projects is that it often takes considerable effort to get most users to define their information needs in any detail. Again and again, these decision makers describe what they're trying to do, what they hope to achieve and the steps they envisage in getting there. Surely this is process thinking, pure and simple?

Similarly, the drive to analyze and store ever increasing volumes of information comes as much from vendors and IT as it does from the business users themselves. For sure, there is a small cadre of statisticians and business analysts who understand the value of “big data” and know how to use it to best effect. But, the vast majority of managers and other day-to-day users of business intelligence tools work with relatively small quantities of data, and only when there is no other option do they consider increasing the volumes of data analyzed.

Which leads us to Modern Premise #2: Business users are focused on results and the actions they need to take to get them; process trumps information—always.

Myth #3: Business and IT processes are fundamentally separate in nature.
Business processes started out as written checklists and simple manuals that users referred to until they knew the procedures they needed. And they rarely took out the book except to check on a seldom-used procedure. As the procedures were documented, the opportunity arose to automate them using tools to help define, manage and edit them as needed. Then IT took over and created an application that supported the process. This is all very well as long as any possible process change has been embedded in the supporting application. Of course, that’s seldom the case. The business people, and especially the marketing and sales folks, often come up with bright ideas to include in the process, or perhaps just get bored with the way it is! When this occurs, IT processes are fired up. We have requirements gathering, change management, coding, testing, etc. And when the code is delivered, the business people run the new process. This is fine, except that sometimes the business folks change their minds faster than IT can deliver. Hence, workflow tools and business process managers appeared, leading on to SOA where users can change the workflow as needed and swap out one service and swap in another to react flexibly and quickly to market changes.
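The swap-out/swap-in idea at the end of that story can be sketched in a few lines: a registry of named services where replacing an implementation is a runtime action rather than a code release. The registry class and the shipping-rule services are hypothetical, named only to illustrate the pattern.

```python
from typing import Callable

class ServiceRegistry:
    """Sketch of the SOA swap-in/swap-out idea: services bound by name."""
    def __init__(self):
        self._services: dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, impl: Callable[[dict], dict]) -> None:
        # Re-registering a name replaces the service: a business action,
        # not an IT change-management cycle.
        self._services[name] = impl

    def call(self, name: str, payload: dict) -> dict:
        return self._services[name](payload)

# Two interchangeable implementations of the same "shipping" service.
def flat_shipping(order: dict) -> dict:
    return {**order, "shipping": 5.0}

def free_over_50(order: dict) -> dict:
    return {**order, "shipping": 0.0 if order["total"] > 50 else 5.0}
```

When marketing decides to promote free shipping, the workflow is untouched; only the binding changes — which is the shift of a former IT activity into business hands discussed next.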

If you look closely at what has occurred here, a set of activities—(re)defining the workflow and replacing functional components of the “application”—has just been moved from IT’s responsibility to that of the business. Former IT process just became business process. Of course, the part where IT designs and writes the services remains firmly in the IT process area—except when we look at what’s happening in the social networking, collaborative world of Web 2.0 and beyond. Here the users are taking more and more control of what they want to see, right down to the definition and delivery of the services themselves.

Of course, IT will always have to run the processes for managing the infrastructure, deploying applications and so on. Really? I notice a lot of substantial hardware and software sitting beneath users’ desks, being managed by the users themselves.

The point here is not whether this is good or bad. The reality is that users have been becoming more sophisticated in their ability to use computers since PCs were introduced in the 1980s. And to some extent, computers have become friendlier. Many activities that once demanded trained IT personnel are now easily handled by business users. The time has come to re-examine the business-IT boundary. By the way, the same argument applies to data, but here we’re focusing on process. And here comes Modern Premise #3: Business and IT processes are all part of the same process environment; they all exist to serve business ends and need to be fully interlinked and interchangeable.


I’ve deliberately spent some time on fundamental, and indeed possibly revolutionary, process thinking here because my experience is that data-oriented people like many readers of BeyeNETWORK don’t think too much about this aspect of IT. I believe that this needs to be rectified. If data warehousing professionals don’t step up to the needs of process-orientation, the process-oriented approach of operational systems will encroach into BI without any consideration for the unique needs of decision making. That would do a great disservice to the business.

So, looking deeper into the structure of the Business Function Assembly will have to wait until Part 7. And I promise, yes, we will be looking at the two dimensions of the BFA layer next.


  • Barry Devlin
    Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

    Barry's interest today covers the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with an holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry’s latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

    Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

    Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog.
