Blog: Mike Ferguson

Mike Ferguson

Welcome to my blog on the UK Business Intelligence Network. I hope to help you stay in touch with hot topics and reality on the ground in the UK and European business intelligence markets and to provide content, opinion and expertise on business intelligence (BI) and its related technologies. I also would relish it if you too can share your own valuable experiences. Let's hear what's going on in BI in the UK.

About the author

Mike Ferguson is Managing Director of Intelligent Business Strategies Limited, a leading information technology analyst and consulting company. As lead analyst and consultant, he specializes in enterprise business intelligence, enterprise business integration, and enterprise portals. He can be contacted at +44 1625 520700 or via e-mail at

October 2006 Archives

As I attend the massive IBM Information on Demand conference here in Los Angeles, IBM today announced its new IBM Information Server, which combines data replication, data discovery, data federation, data integration, data quality, metadata management and more. This is a very powerful product, able to support ETL for data warehousing systems, EII, operational data integration and data management. I will write more about this announcement in upcoming blog posts.

Posted October 16, 2006 8:03 PM

Over the last decade or more I have reviewed many different BI systems for clients. During that time, one problem has continually cropped up, irrespective of vertical industry: inconsistent metrics. You see it in different data marts, different BI tools and, of course, across the spreadsheets that access BI systems. This problem has plagued enterprises for years and, despite many efforts, is still with us today. It is particularly acute in the Excel spreadsheets that fly around the enterprise as people supply intelligence in spreadsheet form to others. Historically, the problem has often arisen when companies have built BI systems at the line-of-business level rather than at an enterprise level. You might think this a crazy idea, but time and again it has happened as pressure from line-of-business executives has grown to deliver some kind of BI to support decision making in their particular area.

Take any large bank, for example. Here you may find separate product-based risk management BI systems for mortgage products, card products, loan products and savings products. Product-based risk management BI systems are a classic set-up in many financial services organisations, and it is only recently that many financial institutions have seen the need to revisit risk management at the customer level, to assess risk exposure across all products owned by a customer.

Another example is when an IT department has built one or more BI systems and then given the user base total freedom to select their own BI tools. This is the “Field of Dreams” approach to building a data warehouse, i.e. the “build it and they will come” strategy. The user base in this example may therefore be chock-full of different BI tools, chosen by different user departments to access the data stores that IT has built. A third example is the parallel-build approach, whereby companies have spawned different BI teams around a large enterprise in order to speed up BI system development. These teams often end up building separate data marts as stand-alone initiatives, and so no common data definitions get used across BI systems.

Looking at these examples, there has clearly been plenty of opportunity over the years for inconsistency to creep into BI implementations. This has not happened deliberately. It is more an inadvertent consequence of stand-alone developments and a lack of communication across BI development teams and line-of-business departments.

The consequence today is that it is not uncommon to find different BI tools in an enterprise each offering up the same metric under different names. You may also find that what looks like the same metric has different formulae in different instances, because of different user interpretations of it. Worse still are different metrics with the same name and different formulae. This causes significant confusion among business users, who are often left unclear on what BI metrics mean. In situations like this it is not uncommon to see users resorting to creating their own spreadsheets with their own versions of the metrics they need and their own versions of the formulae. Then, of course, they start emailing spreadsheets to colleagues, and so the rot sets in: inconsistency spreads, undermining all the investment in BI. I sense a few heads nodding out there – would I be right?

So what can you do about it? Fundamentally, a major problem with many BI tools is that they offer up what is marketed as “flexibility” by giving end users the chance to create their own metrics without policing this. It is rare to see a BI tool either prevent users from creating metrics that duplicate those that already exist, or at least warn them that a metric already exists so as to avoid re-inventing it. This lack of policing is at its worst among the Excel spreadsheet “junkies” out there.

Amidst a climate of increasingly stringent compliance regulations, this problem is beginning to seriously worry executives such as CFOs, who often carry ownership of the compliance problem. What they want are common metrics definitions, with all tools sharing access to those definitions. In particular, they want reporting tools and spreadsheets to have access to such common metrics definitions, and to prevent authorised users from inadvertently creating the same metrics again and again without knowledge of what already exists. Is it any wonder, therefore, that common metadata is becoming more important year on year?

What we want are “metrics cops” that prevent users from creating inconsistent metrics definitions. If a user tries to create a metric, perhaps with a new name but with the same formula as one that already exists elsewhere, a “metrics cop” would intercept the request and inform the user that a metric with the same formula already exists, and that they should re-use the commonly defined metric if they need it in a report. Equally, if two users each try to create a metric with the same name but different formulae, a metrics cop should once again intervene and prevent this from happening, so that clarity, consistency and compliance are all maintained. It was therefore with interest that I looked at the recent e-Spreadsheet 9 announcement from Actuate, which has the capability to generate spreadsheets for many different users while guaranteeing metrics re-use across all spreadsheets. Not only that, but if the same metric is needed in different spreadsheets, the product inserts the same formula in all spreadsheets that need it, thereby guaranteeing consistency. Several other BI vendors are beginning to look at this problem, but often only across their own BI tools. As we move towards shared metadata across all BI systems to enforce compliance, “metrics cop” functionality in BI tools such as reporting and OLAP is becoming a must for many enterprises. Performance management tools and applications also need to enforce this.
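To make the idea concrete, here is a minimal sketch of what a “metrics cop” check might look like inside a shared metrics registry. All names here (MetricRegistry, MetricConflict) are illustrative, not taken from any real BI product, and the formula comparison is deliberately crude (case and whitespace only):

```python
class MetricConflict(Exception):
    """Raised when a new metric definition conflicts with an existing one."""

class MetricRegistry:
    def __init__(self):
        self._by_name = {}      # metric name -> formula as entered
        self._by_formula = {}   # normalised formula -> metric name

    @staticmethod
    def _normalise(formula):
        # Crude normalisation: ignore case and whitespace differences.
        return "".join(formula.lower().split())

    def register(self, name, formula):
        key = self._normalise(formula)
        # Same name, different formula: refuse outright.
        if name in self._by_name and self._normalise(self._by_name[name]) != key:
            raise MetricConflict(
                f"'{name}' is already defined with a different formula: "
                f"{self._by_name[name]}")
        # Same formula under another name: point the user at the existing metric.
        if key in self._by_formula and self._by_formula[key] != name:
            raise MetricConflict(
                f"This formula already exists as '{self._by_formula[key]}'; "
                f"re-use that metric instead of creating '{name}'")
        self._by_name[name] = formula
        self._by_formula[key] = name

registry = MetricRegistry()
registry.register("gross_margin", "(revenue - cogs) / revenue")

# A duplicate formula under a new name is intercepted:
try:
    registry.register("profit_ratio", "(revenue - COGS) / revenue")
except MetricConflict as e:
    print(e)
```

A real implementation would, of course, normalise formulae far more rigorously (parsing rather than string comparison) and sit behind every tool that can define metrics, including spreadsheet generators.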

Therefore, look closely at how BI vendors stack up in terms of policing metric creation in their BI tools and performance management applications, to help prevent your users from inadvertently contributing to metrics inconsistency, reconciliation and compliance problems. Also look at how BI tools can discover what metrics already exist in your BI systems, and how they can import metrics definitions from other technologies where they are defined. Finally, check how BI platforms allow you to share common metrics definitions across third-party BI tools. This is especially important when it comes to Excel.

Posted October 10, 2006 9:18 PM

In my blog on “MDM – Straightforward Implementation or Iceberg Project,” I highlighted the difference between master data integration and enterprise master data management (MDM). A key difference between the two is that in enterprise MDM the MDM system is the single system of entry (SOE) as well as the system of record (SOR), whereas in master data integration the master data hub persists integrated master data and acts as the SOR while existing operational applications remain the SOEs. I also highlighted the significant effort involved in transitioning from master data integration to enterprise MDM, and it is here that I want to focus in this blog.

The reason for singling out this transition is that another technology often being implemented in IT presents an opportunity for companies to start switching off the screens, form fields and so on associated with operational application SOEs, and to move towards a single-SOE enterprise MDM system. That technology is enterprise portal technology. Example enterprise portal products include BEA AquaLogic Interaction Portal, IBM WebSphere Portal Server, Microsoft Office SharePoint Portal, Oracle Fusion Portal and SAP NetWeaver Portal.

As companies implement enterprise portal software, one of the tasks that needs to get done is the integration of applications into a single, personalised and integrated user interface served up by the portal technology. For many operational applications that are master data SOEs today, integrating them into a portal often means that their user interface needs to be redeveloped as a portlet-based user interface, with multiple portlets appearing on portal pages served up to users.

If existing operational master data SOE applications are slated to have their UIs redeveloped for the portal, then MDM developers should seize the opportunity to request user interface changes to those systems in order to decommission application-specific master data entry screens and master data attributes on line-of-business application forms. They can then introduce equivalent master data forms or screens, delivered as portlets in the portal, that maintain master data directly in the MDM data hubs rather than updating the local data stores of line-of-business operational applications. Note that portlets associated with MDM systems can co-exist on a portal page alongside operational application portlets, so users still see that they can maintain master data as they did before. The difference is that data entry on some portlets maintains transaction data held in the application data store, while data entry on other portlets maintains master data in the MDM system.

By doing this, the transition to enterprise MDM can take place gradually, “piggybacking” on the budgeted and planned redevelopment of application user interfaces as they are integrated into enterprise portals. This approach enables two things to happen at once: application UI redevelopment for integration into a portal, and a gradual switch to using MDM maintenance portlets as the single-SOE method of maintaining data in the MDM system. The enterprise MDM system can then synchronise changes to master data with other applications that make use of this data.
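The flow above can be sketched as a simple publish/subscribe arrangement: an MDM hub acts as both system of entry and system of record, and downstream line-of-business applications receive change notifications to keep local read-only copies in sync. This is a minimal illustration under my own assumptions; the class and record names (MdmHub, "C042") are hypothetical, and real MDM products would use messaging middleware rather than in-process callbacks:

```python
class MdmHub:
    """Toy MDM hub: single system of entry and system of record."""

    def __init__(self):
        self._records = {}      # record id -> master data attributes
        self._subscribers = []  # downstream applications to notify

    def subscribe(self, callback):
        # Downstream applications register to receive master data changes.
        self._subscribers.append(callback)

    def upsert(self, record_id, attributes):
        # The MDM maintenance portlet calls this directly, so the hub
        # is the single SOE as well as the SOR.
        self._records.setdefault(record_id, {}).update(attributes)
        # Synchronise the change with every subscribed application.
        for notify in self._subscribers:
            notify(record_id, dict(self._records[record_id]))

# A line-of-business application keeps a read-only local copy in sync.
local_copy = {}
hub = MdmHub()
hub.subscribe(lambda rid, attrs: local_copy.__setitem__(rid, attrs))

# A change entered via the MDM portlet propagates automatically.
hub.upsert("C042", {"name": "Acme Ltd", "country": "UK"})
```

The key property is that the operational application no longer accepts master data entry itself; it only consumes changes pushed from the hub.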

So don’t miss the portal opportunity as a vehicle to transition to enterprise MDM!

Posted October 4, 2006 3:50 PM