Blog: Mike Ferguson

Mike Ferguson

Welcome to my blog on the UK Business Intelligence Network. I hope to help you stay in touch with hot topics and reality on the ground in the UK and European business intelligence markets and to provide content, opinion and expertise on business intelligence (BI) and its related technologies. I would also relish hearing your own valuable experiences. Let's hear what's going on in BI in the UK.

About the author

Mike Ferguson is Managing Director of Intelligent Business Strategies Limited, a leading information technology analyst and consulting company. As lead analyst and consultant, he specializes in enterprise business intelligence, enterprise business integration, and enterprise portals. He can be contacted at +44 1625 520700 or via e-mail at


As BI professionals we would have to have our heads buried in the sand to miss the storm building on the horizon. That storm is one of events. Lots of them. Think about RFID tags, credit card transactions, telephone call data records (CDRs) or even partial CDRs for pre-paid customers. Also click streams, business process messages, and SOA, where the whole thing runs on event messaging. Where is this taking us? We are headed straight into event-driven automatic analysis, and data volumes like we have never seen before.

In my hotel room recently I was watching CNN and a discussion about how Hong Kong Airport is starting to roll out RFID baggage tags added to our luggage labels. In a year or two it will just be the norm in every airport. Retail supply chains and logistics operations are all heading down the RFID road and have been for some time. Anti-money laundering (AML) and risk management in banks are applications that demand automated action. Customer-level risk management is starting to require credit reduction or 'close off' across all credit risk products as soon as one goes delinquent. Fraud detection is already a real-time automatic analysis application in banking.

In my opinion it is time we started to prepare for event-driven BI. Many of us are already doing event-driven data integration, but that is just the beginning. The consequences of going event-driven are significant. With the demand for ever lower data latency we may not have time to move data into a warehouse and out to a data mart before analysis. We may have to analyse as soon as the data arrives in a warehouse (this capability is already supported in Teradata and IBM DB2, for example). This puts pressure on DBMS scalability. In some cases we may have to analyse before data reaches a warehouse (i.e. Business Activity Monitoring). This can be done by invoking scoring or predictive analytics services from a data integration flow, or even from scoring model workflows that are event driven.
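To make the idea concrete, here is a minimal sketch of scoring events in-flight from a data integration flow, before anything lands in the warehouse. The record shape, the rule-based scoring function (a toy stand-in for a real deployed predictive model) and the threshold are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str

def fraud_score(tx: Transaction) -> float:
    """Toy stand-in for a deployed predictive model: higher = more suspicious."""
    score = 0.0
    if tx.amount > 5000:
        score += 0.5
    if tx.country not in ("GB", "IE"):
        score += 0.3
    return min(score, 1.0)

def score_in_flight(events: Iterable[Transaction],
                    on_alert: Callable[[Transaction, float], None],
                    threshold: float = 0.7) -> None:
    """Score each event as it passes through the flow; raise an alert
    immediately rather than waiting for the data to reach a mart."""
    for tx in events:
        s = fraud_score(tx)
        if s >= threshold:
            on_alert(tx, s)

alerts = []
score_in_flight(
    [Transaction("c1", 9000.0, "RU"), Transaction("c2", 20.0, "GB")],
    on_alert=lambda tx, s: alerts.append((tx.customer_id, s)),
)
```

The same pattern applies whether the "flow" is an ETL pipeline step, a message queue consumer or an SOA service invocation: the scoring call sits in the path of the event, not downstream of the warehouse load.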

In my opinion this means we are heading for an era where automatic analysis will start to come into its own. Cognos has recently acquired Celequest in this area. Other large BI vendors will no doubt follow the Cognos lead and head into this market. Well before the Celequest acquisition this market had been growing, and it is vibrant today with vendors like Actimize, Fair Isaac, SAS, SeeWhy, SPSS, ThinkAnalytics and Tibco, to name a few, all competing for business.

In the hype of BI 2.0, data mining has been given a new lease of life. Back in the '90s data mining was seen as only for the few, and we had visions of power users with double PhDs in statistical analysis. Today we are way beyond that. The bright power users are still there, but the real value comes once their models are deployed to be the 'look-outs' for patterns in data and for specific events occurring all over the enterprise. Event-driven, in-place, real-time scoring in the database is already happening at several of my clients and is growing, especially as SOA takes hold in organisations. The implication is lots of instances of automated analyses running concurrently in the BI system alongside 'classic' reporting and analysis. Workload management in DBMS products and BI servers will become even more critical to allow this new workload to run alongside increasing numbers of concurrent users. This storm is coming. Time to get ready and re-visit your BI architecture to see if you can support it.

Posted February 11, 2007 11:44 PM
Permalink | 1 Comment |

Over the last decade or more I have reviewed many different BI systems for clients. During that time, and still today, one thing has cropped up time and again irrespective of vertical industry: the problem of inconsistent metrics. You see it in different data marts, different BI tools and of course across spreadsheets that may access BI systems. This problem has plagued enterprises for years and, despite many efforts, is still there today. It is particularly acute in the Excel spreadsheets that fly around the enterprise as people supply intelligence in spreadsheet form to others. Historically this problem has often arisen when companies have built different BI systems at the line-of-business level rather than at an enterprise level. You might think that this is a crazy idea, but time and again it has happened as pressure from line-of-business executives has grown to deliver some kind of BI to support decision making in their particular area.

Take any large bank, for example. Here you may find separate product-based risk management BI systems for mortgage products, card products, loan products and savings products. Product-based risk management BI systems are a classic set-up in many financial services organisations, and it is only recently that many financial institutions have seen the need to re-visit risk management at the customer level to see risk exposure across all products owned by a customer.

Another example is when an IT department has built one or more BI systems and then given the user base total freedom to select their own BI tools. This is the "Field of Dreams" approach to building a data warehouse, i.e. the "build it and they will come" strategy. The user base in this example may therefore be chock full of different BI tools chosen by different user departments to access the data stores that IT has built. A third example is the parallel build approach, whereby companies have spawned different BI teams around a large enterprise in order to speed up BI system development. These teams often end up building separate data marts as stand-alone initiatives, and so no common data definitions get used across BI systems.

Looking at these examples, there has clearly been plenty of opportunity over the years for inconsistency to creep into BI implementations. This has not happened deliberately. It is more an inadvertent consequence of stand-alone developments and a lack of communication across BI development teams and line-of-business departments.

The consequence today is that it is not uncommon to find different BI tools in an enterprise each offering up the same metric under different data names. You may also find that what looks like the same metric has different formulae in different instances of it, because of different user interpretations. Worse still are different metrics with the same name and different formulae. This can cause significant confusion among business users, often meaning that they are not clear on what BI metrics mean. In situations like this it is not uncommon to see users resorting to creating their own spreadsheets with their own version of the metrics they need and their own version of the formulae. Then of course they start emailing spreadsheets to each other, and so the rot sets in and inconsistency starts to spread, undermining all the investment in BI. I sense a few heads nodding out there - would I be right?

So what can you do about it? Fundamentally, a major problem with many BI tools is that they offer up what is marketed as "flexibility" by giving end users the chance to create their own metrics without policing this. It is rare to see a BI tool either prevent users from creating duplicates of metrics that already exist, or at least warn them that a metric already exists so they avoid re-inventing it. This lack of policing is at its worst among the Excel spreadsheet "junkies" out there.

Amidst a climate of increasingly stringent compliance regulations, this problem is beginning to seriously worry executives such as CFOs, who often carry ownership of the compliance problem. What they want is common metrics definitions and for all tools to share access to those definitions. In particular, they want reporting tools and spreadsheets to have access to such common metrics definitions and to prevent authorised users from inadvertently creating the same metrics again and again without knowledge of what already exists. Is it any wonder, therefore, that common metadata is becoming increasingly important year on year?

What we want are "metrics cops" that prevent users from creating inconsistent metrics definitions. If a user tries to create a metric, perhaps with a new name but with the same formula as one that already exists elsewhere, a "metrics cop" would intercept the request and inform the user that a metric with that formula already exists and that they should re-use the commonly defined metric if they need it in a report. Equally, if two users each try to create a metric with the same name but different formulae, a metrics cop should once again intervene and prevent this from happening, so that clarity, consistency and compliance are all maintained. It was therefore with interest that I looked at the recent e-Spreadsheet 9 announcement from Actuate, which has the capability to generate spreadsheets for many different users while guaranteeing metrics re-use across all spreadsheets. Not only that, but if the same metric is needed in different spreadsheets, this product inserts the same formula in all spreadsheets that need it, thereby guaranteeing consistency. Several other BI vendors are beginning to look at this problem, but often only across their own BI tools. As we move towards shared metadata across all BI systems to enforce compliance, the need for "metrics cop" functionality in BI tools such as reporting and OLAP is becoming a must for many enterprises. In addition, performance management tools and applications need to enforce this too.
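The policing logic itself is simple enough to sketch. Here is a hypothetical "metrics cop" as a small registry that enforces both rules: no new name for an existing formula, and no conflicting formula for an existing name. The class, the metric names and the formulae are all invented for illustration; no real BI tool's API is implied:

```python
class MetricsCop:
    """A registry that blocks duplicate or conflicting metric definitions."""

    def __init__(self):
        self._by_name = {}     # metric name -> formula as entered
        self._by_formula = {}  # normalised formula -> existing metric name

    @staticmethod
    def _normalise(formula: str) -> str:
        # Crude normalisation: ignore case and whitespace differences.
        return "".join(formula.lower().split())

    def register(self, name: str, formula: str) -> str:
        key = self._normalise(formula)
        if name in self._by_name:
            # Same name, different formula: intervene.
            if self._normalise(self._by_name[name]) != key:
                raise ValueError(
                    f"'{name}' already exists with a different formula; "
                    "refusing to create a conflicting metric")
            return name  # identical redefinition is harmless
        if key in self._by_formula:
            # Same formula, new name: point the user at the existing metric.
            raise ValueError(
                f"this formula is already defined as "
                f"'{self._by_formula[key]}'; re-use that metric instead")
        self._by_name[name] = formula
        self._by_formula[key] = name
        return name

cop = MetricsCop()
cop.register("gross_margin", "(revenue - cogs) / revenue")
```

A real implementation would of course normalise formulae far more cleverly (parsing rather than stripping whitespace) and would warn rather than hard-fail where governance policy allows, but the two interception points are the essence of the idea.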

Therefore look closely to see how BI vendors stack up in terms of policing metric creation in their BI tools and performance management applications, to help prevent your users from inadvertently contributing to metrics inconsistency, reconciliation and compliance problems. Also look to see how BI tools can discover what metrics already exist in your BI systems and how they can import metrics definitions from other technologies where they are defined. Finally, check how BI platforms allow you to share common metrics definitions across third-party BI tools. This is especially important when it comes to Excel.

Posted October 10, 2006 9:18 PM
Permalink | No Comments |

In response to the question from David Jackson (great question, by the way) on federated MDM, I have several thoughts on this kind of approach. The question is how these non-overlapping views are managed at different levels. In other words, are these virtual views rendered on demand by an EII tool, for example, or are they straightforward relational DBMS views on a persistent master data store (sometimes referred to as a hub) that has been established in the enterprise? That is the first question. The second is: are the views using the same data names as the underlying master data, i.e. the common data names that would be associated with master data?

Looking at the first example, if these non-overlapping MDM views are virtual and created 'on the fly' from disparate data sources by federated query EII tools, then the EII technology would need to support global IDs and be capable of mapping the disparate IDs associated with the master data in disparate systems to those global IDs. I am somewhat sceptical of this approach, mainly because of the limitations that some EII products place on enterprises. Staying with an EII federated query approach, the next question is what happens if you want to update these non-overlapping virtual views of disparate master data that have been rendered by EII. In that sense, the EII product has to support heterogeneous distributed transaction processing across DBMS and non-DBMS sources. This is supported by some EII tools but certainly not all of them. EII is still primarily used in a read-only capacity. I would be very interested in any experiences of companies using EII on its own to manage master data. Please share with us what you are doing out there!! In that sense the registry-approach MDM products (e.g. Purisma) may be a more robust way of dynamically assembling data from disparate systems on demand, but again the question is how the data is maintained, i.e. what is the system of entry (SOE)? Is the MDM system the SOE as well as a system of record, or are the line-of-business operational systems still the SOEs?
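To illustrate what the registry style involves, here is a minimal sketch: the hub persists only cross-references from (source system, local ID) to a global ID, and assembles a view on demand by pulling attributes back from the line-of-business systems. The system names, IDs and attributes are all invented; this is not modelled on any particular MDM product:

```python
from typing import Optional

class MdmRegistry:
    """Registry-style MDM hub: stores ID cross-references only,
    leaving the attribute data in the source systems."""

    def __init__(self):
        self._xref = {}  # (system, local_id) -> global_id
        self._next = 1

    def register(self, system: str, local_id: str,
                 same_as: Optional[str] = None) -> str:
        """Record a local ID; pass same_as to match it to an
        existing global ID (the matching itself is out of scope here)."""
        key = (system, local_id)
        if key in self._xref:
            return self._xref[key]
        if same_as is None:
            gid = f"G{self._next:04d}"
            self._next += 1
        else:
            gid = same_as
        self._xref[key] = gid
        return gid

    def assemble(self, gid: str, sources: dict) -> dict:
        """Assemble a view of one party on demand by merging records
        from every source system cross-referenced to this global ID."""
        view = {}
        for (system, local_id), g in self._xref.items():
            if g == gid and system in sources:
                view.update(sources[system].get(local_id, {}))
        return view

reg = MdmRegistry()
gid = reg.register("crm", "C-42")
reg.register("billing", "B-9", same_as=gid)
```

Note what the sketch deliberately leaves out: the matching logic that decides two local records are the same party, and the SOE question raised above, i.e. which system is allowed to write the data that `assemble` reads back.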

If the master data is already integrated and persisted in an MDM data hub, then providing views of it is potentially achievable via straightforward relational views. Again, however, the question of update comes to mind with view updateability. This is a very well documented topic that stretches back to the '80s, with writings from leading relational authorities such as Dr E.F. Codd and Chris Date. So the processes around maintaining non-overlapping views, which system or systems remain master data SOEs, and the technical approach taken to implement all this need to be considered.

For me the bigger question is the data names in these MDM views. You could argue that views (virtual or otherwise) allow you to render master data using different data names in every view. To be fair, David's question stated non-overlapping views. In my opinion the data names in these views should remain the common enterprise-wide data names and data definitions associated with the master data. After all, this is MASTER data that should retain common data names and definitions if at all possible. I accept that when subsets of master data are pushed out to disparate applications, the data, once consumed by the receiving application, may end up being described using application-specific data definitions (because that is how data in the application-specific data model is defined). But if we are to create views of master data at different levels of the enterprise, in my opinion we should insist on common enterprise-wide data names in these non-overlapping MDM views to uphold consistency and common understanding. Any portals that present this data, or new applications and processes that consume it, should if at all possible retain those common definitions. Again, in my opinion, common understanding and enterprise-wide data definitions (i.e. master metadata) are king.

In fact, I would argue that without common data definitions (sometimes referred to as a shared business vocabulary), a federated MDM approach using non-overlapping views would fail, because it is the common metadata definitions that hold the whole thing together. This brings up another point: master data should be marked up using common data names wherever it goes, and metadata management is just as fundamental to success in any MDM strategy as the data content itself. A shared business vocabulary (SBV), and understanding the mappings between SBV common definitions for master data and the disparate definitions for it in disparate systems, is absolutely key.
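Those SBV mappings can be as plain as a lookup table per source system. A toy sketch, with invented system names, field names and records, showing how a source record gets re-labelled with common data names on its way out:

```python
# Illustrative shared business vocabulary (SBV) mapping: each
# system-specific field name maps to exactly one common data name.
SBV_MAPPINGS = {
    "crm":     {"cust_nm": "customer_name", "post_cd": "postcode"},
    "billing": {"acct_name": "customer_name", "zip": "postcode"},
}

def to_sbv(system: str, record: dict) -> dict:
    """Re-label a source record with common SBV data names so that
    master data carries the same names wherever it goes. Fields with
    no mapping are passed through unchanged."""
    mapping = SBV_MAPPINGS[system]
    return {mapping.get(field, field): value
            for field, value in record.items()}

to_sbv("billing", {"acct_name": "ACME Ltd", "zip": "SK10 1AA"})
```

The value is not in the code, which is trivial, but in the governance behind the table: someone has to own those mappings, keep them complete, and make every consuming tool use them.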

Let me know what you think, and thank you David for a truly excellent question.


Posted September 6, 2006 10:33 AM
Permalink | No Comments |

Having travelled to San Francisco just over a week ago to speak at the Shared Insights (formerly DCI) Data Warehouse and Business Intelligence conference, the experience of going through Manchester and Heathrow Airport security was something I don’t think I’ll forget for a long time. I started on my travels on the Saturday morning just after the mid-week scare in the UK when everything went critical. It was the first time in my life that I have travelled on business without my laptop due to the policy of absolutely no hand baggage. I have to say I felt almost naked without my laptop, but I didn’t want to run the risk of losing it or having it crushed as it travelled through the checked baggage system given the fact that I have reached many a destination airport only to find that my luggage did not. So I took the safe route and took my presentation on EII In-Depth on a memory stick and packed that in my suitcase.

I have to say as an airline passenger I don't think I have ever felt so safe, given the fact that security searched everyone (and were fairly pleasant in the process) at Manchester and again at Heathrow. Belts, shoes, jackets, watches and just about everything else were x-rayed. Nothing other than your travel documents and prescribed medicine was allowed on the plane, and so I watched every video British Airways had on offer! Even when we got on board, there was a one-hour delay while details of everyone who boarded were sent to the U.S. for checking before we were allowed to take off. When I got to the U.S. and picked up my suitcase, I was relieved to find that I had retrieved my PowerPoint presentation on the memory stick I packed in my suitcase and could fulfil my obligation of presenting at the conference. You might think I should have e-mailed the presentation in advance – well, I did that too, but as many of you may agree, it is best to have a backup. Talk about secure – having got through passport control, I was then selected at random for another check by U.S. customs, who looked through my luggage, asked me a few questions and then sent me on my way with a "Welcome back to the U.S., sir." By the time I got to my hotel, all I could think of was the seriously secure BI presentation that travelled with me through security on my memory stick. It was as if I was being protected so well in order to get me safely to the conference to do my presentation. I then dutifully turned up at the conference, having requested a laptop in advance to present from – which Shared Insights happily provided. Of course, I asked them if they had the presentation I emailed in advance, and the answer was NO! So the moral of this story is always take a backup, and thanks to airport security for looking after my memory stick!!

An article on EII In-Depth will be the feature article in the UK Business Intelligence Network Newsletter. Be sure to subscribe (it is free!) so you can be one of the first to read all about EII. The newsletter will also feature a great article about the globalisation of business intelligence.

Let me know what you think and what you are doing with enterprise information integration (EII) and business intelligence here in the UK.

Posted August 25, 2006 5:48 PM
Permalink | No Comments |

Welcome to my blog on the UK Business Intelligence Network. It seems strange that I should be writing a UK blog from Italy! Rapallo, to be precise. What can I say? If it's Friday and you can't figure out why your ETL job is not working, or if your metrics won't calculate correctly on your OLAP server, just leave it, get out of the office and come to this stunning place for a weekend. You won't regret it. Just fly to Genoa and get a cab - it's about a 30-minute drive.

Anyway, I hope to help you stay in touch with hot topics and reality on the ground in the UK and European business intelligence markets and to provide content, opinion and expertise on business intelligence (BI) and its related technologies. I would also relish it if you could share your own valuable experiences on the discussion forums and surveys. Let's hear what's going on in BI in the UK. If you've got a gripe, let's hear it. If you think a product is cool, let's hear that too, and most of all, let's hear what you're using BI for in your business and what you want to see covered. I'll try to explore established and newly emerging technologies in BI, including data integration, BI platform tools such as reporting, OLAP, dashboard builders and predictive analytics, and performance management software such as scorecarding and planning. I'll also look at data warehouse appliances, data visualisation, portals, operational BI and BI applications.

I'll start the ball rolling by looking at the platform market. Consolidation is upon us in the business intelligence market, with large BI vendors and software giants battling it out for market share. Business Objects, Cognos, Hyperion and SAS are the largest of the independent BI vendors competing with IBM, Microsoft, Oracle and SAP for the BI platform crown in most enterprises. Of course, there is plenty of action elsewhere from many other BI vendors. An example of that is the increasing number of data warehouse appliance offerings on the market from start-ups and established vendors, including (in alphabetical order) Calpont, DATAllegro, Greenplum, IBM BCU, Netezza and SAP BI Accelerator - with more to come, I am sure.

Much talk over the last few years has been about the move to single-vendor BI platforms. In the UK, I hear a lot of talk about it. Companies building BI systems for the first time are definitely going for single-supplier BI platform solutions. However, companies with long-established best-of-breed BI systems still need to be convinced, though they will most likely slowly move to fewer suppliers if there is value in doing so. The impact of Microsoft in the BI platform market with SQL Server 2005 BI platform tooling (SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, SQL Server Notification Services and SQL Server Report Builder) appears to be showing in BI platform pricing. Certainly many companies talking to me have been raising the issue of pricing. Emerging open source BI initiatives (e.g., Jasper, Pentaho, BIRT and Palo, to name a few) are also bound to add to that pricing pressure, forcing BI platform pricing down over the next 18 months. If this does happen, vendors will no doubt compensate by shifting the value to performance management tools and applications, vertical BI applications, and the exploding areas of data integration and enterprise data management.

Enterprise data management seems to be taking on a life of its own, with data integration now breaking free of the BI platform and going enterprise-wide. Here we have ETL, data quality, EII, data modelling and metadata management all coming together into a single tool set. Much is being written about EAI, ETL and EII (or, as I like to call it, EIEIO!!), and I'll look into these areas in upcoming blogs and articles.

Posted August 1, 2006 10:13 PM
Permalink | 1 Comment |