Blog: Mike Ferguson

Welcome to my blog on the UK Business Intelligence Network. I hope to help you stay in touch with hot topics and reality on the ground in the UK and European business intelligence markets, and to provide content, opinion and expertise on business intelligence (BI) and its related technologies. I would also relish it if you could share your own valuable experiences. Let's hear what's going on in BI in the UK. Copyright 2011

Pervasive Rush To Take On The Challenge of Scalable Data Integration

At the Boulder BI Brain Trust (BBBT) last week, I sat in on a session given by Pervasive Software Chief Technology Officer (CTO) and Executive Vice President Mike Hoskins. The session started out covering Pervasive's financial performance ($47.2 million revenue in fiscal 2010, with 38 consecutive quarters of profitability) before getting into the technology itself. Headquartered in Austin, Pervasive offers its PSQL embedded database, a data and application exchange (Pervasive Business Xchange), and its Pervasive Data Integrator and Pervasive Data Quality products, which can connect to a wide range of data sources using the Pervasive Universal Connect suite of connectors. They also offer a number of data solutions. Pervasive has had success embedding its technology in ISV offerings and in SaaS solutions on the cloud. However, what caught my eye in what was a very good session was their new scalable data integration engine, DataRush.

Wed, 18 Aug 2010 08:06:42 -0700
MicroStrategy Takes BI Mobile - What are The Implications of Mobile BI for BI Platforms?

Having just got back from the MicroStrategy World Conference in beautiful Cannes, I thought I would cover what was announced this week at the event. CEO Michael Saylor launched MicroStrategy Mobile for iPhone, iPad and BlackBerry, describing it as "the most significant launch in MicroStrategy history". In his opening keynote he talked about mobile as "the 5th major wave of computing", starting with mainframes, then mini-computers, then personal computers, the desktop internet and now the mobile internet. Their vision here is a good one: BI all the time, everywhere and for everyone. Mobile device access to BI has been around for a while in some offerings, but I was impressed with the work MicroStrategy have put into the mobile user interface on touch-sensitive 'gesture' devices like the Apple iPhone and iPad. They have taken advantage of the full set of Apple gestures and also added BI-specific gestures, including Drill Down and Page By. They have also released an Objective-C software development kit (SDK) for MicroStrategy Mobile. This allows developers to build custom widgets and embed them in the MicroStrategy Mobile application, or to embed MicroStrategy Mobile in your own application.


Thu, 08 Jul 2010 08:57:15 -0700
BITunes on the Cloud? - The Emergence Of Subscription Based On-Demand BI

I recently tried out the BIRT On-Demand platform as a service (PaaS) solution (which is very easy to use). It was only a matter of minutes before I was up and running with a Mashboard. A few weeks back in New Orleans I used Dundas Dashboard to quickly build a dashboard from pre-built components. Similarly, Microsoft SQL Server 2008 R2 has the ability in ReportBuilder 3.0 to quickly build up a library of components that can be dragged and dropped into a report.

Fri, 02 Jul 2010 07:41:40 -0700
Microsoft Opens Up Collaborative and Self-Service BI

Just over a week ago I was invited to attend an analyst briefing at the Microsoft BI conference in New Orleans, which was running alongside the Microsoft TechEd conference. The conference itself was very well attended, with several thousand delegates. Several things were on show at this event, including SharePoint 2010, SQL Server 2008 R2, Office 2010, PowerPivot and PerformancePoint Services 2010. Also on show was SQL Server Data Warehousing Edition (also known as the Madison project), the massively parallel edition of SQL Server that will be shipped later this year.

The one thing that stood out for me was the seismic shift towards collaborative BI. As my friend Colin White so aptly put it in the analyst briefing, "Microsoft have brought BI to collaboration rather than collaboration to BI". This is an important point because what it says is that there is little point adding collaborative features to a BI platform if these are not the services associated with a mainstream collaborative platform. There is far more value in integrating a BI platform with the company's collaboration software to tap into things like collaborative workspaces, presence awareness, unified communication, shared calendars and so on. In Microsoft's case this is of course the SharePoint product, which has become viral in most organisations.

It is no surprise therefore that Microsoft's BI initiative is built around three main components and not just SQL Server. These are:

  • Office
  • SharePoint
  • Microsoft SQL Server 2008 R2

Note that SQL Server 2008 R2 includes StreamInsight, Microsoft's complex event processing (CEP) engine, and Microsoft Master Data Services.

While there we were taken through an excellent demo to show the power of collaboration and what it can do when integrated with BI. It even included the Microsoft RoundTable device which, although it has been available for some four years, I had never actually encountered before.

What the demo showed me was the speed with which BI and BI 'components' can be spread among a community of users. My conclusion is that integration of SQL Server 2008 R2 with SharePoint 2010 takes this to another level, in that the rate at which business intelligence can be shared is almost 'Twitter speed'. For those of you using Twitter, you will know that as soon as something of interest breaks, re-tweets can spread it across masses of people in a matter of minutes. This is the feeling I got during the demo. It fuels mass sharing, mass reuse and mass development of BI applications and artifacts, in particular reports and dashboards. It certainly fits with Microsoft's vision of BI for everyone.

Several new features open up the flood gates for collaborative BI, sharing intelligence with others without the need for IT. For example:

BI reports can be managed by SharePoint in document libraries, and you can preview reports before opening them.

Also, Microsoft is fueling development by business users on the back of what power users have done, thereby bypassing IT. This is because there is now a capability whereby Microsoft ReportBuilder 3.0 can access PowerPivot workflows uploaded to SharePoint sites. You can also export to Excel from PowerPivot. Power users using PowerPivot (originally referred to as Gemini) can take data from different data sources (including newly supported Atom feeds) and merge and join that data. Relationships between tables can be managed inside PowerPivot. PowerPivot power users can then create workflows that process this data and can upload these to SharePoint sites. ReportBuilder 3.0 (or any BI client) can then treat the PowerPivot workflow as a data source. Not only that, but ReportBuilder can create report parts which are sharable in a report part gallery so that other users can reuse them, simply dragging and dropping the report parts onto a new report for rapid development without having to know the detail underneath.
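For readers less familiar with PowerPivot, the merge-and-join step described above is conceptually just a join across sources on a shared key. A minimal sketch using an in-memory SQLite database rather than PowerPivot itself, with invented table and column names:

```python
# Illustrative only: the kind of cross-source merge/join PowerPivot performs,
# sketched with SQL over an in-memory SQLite database. Names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales  (sku TEXT, units INTEGER);   -- internal extract
    CREATE TABLE prices (sku TEXT, price REAL);      -- e.g. from an external feed
    INSERT INTO sales  VALUES ('A1', 10), ('B2', 5), ('A1', 3);
    INSERT INTO prices VALUES ('A1', 2.5), ('B2', 4.0);
""")

# The 'relationship' between the two tables is the join key (sku)
rows = con.execute("""
    SELECT s.sku, SUM(s.units * p.price) AS revenue
    FROM sales s JOIN prices p ON p.sku = s.sku
    GROUP BY s.sku ORDER BY s.sku
""").fetchall()
print(rows)  # [('A1', 32.5), ('B2', 20.0)]
```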

Hopefully by now you have got the picture - power users building their own workflows in PowerPivot, publishing them to SharePoint, other users using them as data sources in reports, report parts created, and a gallery of parts to be shared across a community of users.  Powerful stuff, and we are not done yet.

In SharePoint 2010 there is a new site template called Business Intelligence Center. What you can now do is create a new site in SharePoint using the Business Intelligence Center template. This template includes chart web parts and Excel Services workbook access. It also includes a PerformancePoint library so that you can start building your dashboard very rapidly, including access to reports and report parts. With this mechanism, Microsoft is opening up dashboard development to the masses and also allowing 'social' performance management, whereby dashboards and/or dashboard components can be rated. All this, integrated with SharePoint and Office, is in my opinion going to take self-service BI development to another level, such that it could easily have a 'popcorn effect' with masses of BI being produced rapidly and IT nowhere in sight. There is no doubt that it opens up the flood gates for business innovation and sharing, with personalised dashboard development using PerformancePoint Services 2010 integrated with SharePoint 2010.

A Question of Governance?

My only concern with this is the issue of governance. What Microsoft have done is to put mass development in the hands of the business. If you think you have seen anything on self-service BI, just wait until SharePoint 2010, Office 2010 and SQL Server 2008 R2 move into production in your shop. You ain't seen nothing yet.

However, I see very little with respect to data governance. What about business glossaries? What about metadata lineage? In a world of increasing regulation and legislation to prevent corporate catastrophes, can anything be audited? Can it be tracked back to where the data came from? How has the data been transformed by the power users? What does the data mean? I have as yet seen little from Microsoft in the form of metadata management and data governance, despite the fact that Master Data Services is also delivered as part of this SQL Server release. While there is no doubt that this is coming (confirmed by the Microsoft guys I spoke with on the exhibition floor), my only fear is that it will be too late. Will the horses have already bolted, with self-service BI unstoppable and off down a track without lineage to help users know that the data is trusted?

Equally, scorecard and dashboard development is bottom-up. Everyone (with authority) can create their own scorecards and dashboards rapidly, but there appears to be no framework whereby these can be slotted into multi-level strategy management, unlike, say, SAP with SAP Strategy Management. So what is the answer? Is it that all bets are off and we just let the business figure out the best way to manage on the back of socially rated scorecards and dashboards? What happened to business strategy? Many companies set a strategy at executive level and want enterprise-wide business strategy execution. This latter approach is top-down. What Microsoft is fueling is bottom-up. My opinion is we need both, not one or the other.

Freedom Versus Governance - A Delicate Balancing Act

It is pretty clear then that, setting aside the new SQL Server Data Warehousing Edition, this is very much a collaborative BI release by Microsoft. It is a major leap forward in what business users can do for themselves. We have two forces at work here: freedom versus governance. We have to get the balance right. Too much freedom and we could have chaos, with no ability to audit what has been done or whether the BI is trusted. Too much governance and we put innovation in a straitjacket or kill it altogether. All I would say is that IT had better get a data governance program underway soon to control data all the way out to data marts and cubes. If that is done then there is no doubt that the business can be empowered to innovate, which is what should happen. Without a data governance program, however, I think it is really going to be hard to get alignment with what the business is doing given the sheer speed of development that is now possible with this release. Let's hope governance, innovation and collaboration are a winning combination.

Follow me on Twitter

Mon, 21 Jun 2010 08:50:12 -0700
Chasm Not Crossed as A Sensor Data Tsunami Comes Over The Horizon

Just over a week ago I spent a day at SensorExpo in Chicago presenting on Complex Event Processing (CEP), discussing how CEP engines, predictive analytics and business rules can be used to analyse event data in motion to facilitate business optimisation. This was a very busy conference. I estimated at least 2000-3000 people on the exhibition floor, with maybe 400 at the conference. I found around 100 vendors with all kinds of sensor devices on show exhibiting their products and services. To my surprise, however, I had only heard of two of the vendors: IBM and Texas Instruments. The floor was heaving with people looking to instrument their business operations to measure everything from movement, temperature, energy consumption, stress, heat, fluid volumes and pipeline flows to RFIDs. There were analog devices and digital devices. When talking to the vendors, the big common denominator was that they are all trying to collect the data from the sensor networks to analyse it. Yet other than IBM there was not a single BI vendor in sight. Not even a single complex event processing (CEP) vendor. I was shocked, because this market is clearly booming. What was even more surprising was that I could not find an IT professional anywhere. 99.9% of all delegates and speakers were engineers.

Attending some of the case studies, I found some fantastic applications of the use of sensor networks and RFIDs. One was healthcare, with sensors all over hospitals, and equipment and patients all tagged with RFIDs; the return on investment in this case was fraud prevention on equipment and process improvement for patients. Another session I attended was on monitoring stress in all the bridges in the US - over 700,000 of them. Some of the stats being quoted by the speakers were staggering: "We are emitting 3 events per minute from every sensor on a 24x7 basis. After 6 months operating like this we have over 20 PETABYTES of data." You read that right: 20 petabytes. A lot of the technical focus at the conference was on energy harvesting to prolong sensor battery life, but the business message was clear as a bell. Process optimisation and cost reduction come from instrumenting business operations. Manufacturing production lines, supply chains, product distribution. You name it, they're measuring it.

So I have to ask, where are all the BI vendors, the analytical DBMSs, the CEP products, the dashboards, the predictive analytics? The volume of data coming over the horizon from the adoption of sensor networks and RFIDs is nothing short of massive. What is also clear is that this is already going on in enterprises, and IT is blissfully unaware of it in the main. Clearly IT BI professionals have got to get in touch with their engineering colleagues, and engineers have got to be made aware of mainstream data integration, analytical database and BI platform technologies, as well as CEP of course. I don't think I have ever seen a chasm between IT and business not even explored, never mind crossed. Yet the value of CEP and mainstream DW/BI to this market is nothing short of enormous. It is symptomatic that even though this market is heaving with engineers, it has yet to be tied into mainstream IT to exploit far more robust software than is being used on this data at present. What an opportunity. What a huge opportunity. It most certainly is going to re-define large databases set up for analysis of historical event data. CEP will obviously go there. It has to get out of just being in the financial markets and wake up to the ton of data in motion being emitted by the growing number of devices. An article I read recently said that sensors empower an 'Internet of Things'. Well, those things are coming over the horizon emitting a tsunami of data. It is time CEP and DW/BI vendors woke up, smelt the coffee and became aware of this rapidly growing market. CIOs had better take heed too, because they are going to have to integrate it into mainstream IT.

Mon, 21 Jun 2010 08:46:29 -0700
Data Federation - Information Services Patterns - The On-Demand Information Services Pattern

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is the On-Demand Information Services pattern. This is as follows:


Pattern Description

This pattern uses data virtualization to provide on-demand integrated data to applications, reporting tools, processes and portals via a web services interface. Structured and semi-structured data sources are supported, including RDBMSs, any web service (internal or external), web syndication feeds, flat files, XML, packaged applications and non-relational databases.
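As a rough illustration (not any particular vendor's product), the on-demand nature of this pattern can be sketched in Python: the integration of a relational source and a feed-style source happens only when the service is invoked, and the result is rendered for the consumer. All names here are invented:

```python
# A minimal sketch of the on-demand information services pattern: a service
# function integrates a relational source and a feed-style source at request
# time and returns the result as JSON. All names are illustrative.
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")

# Stand-in for a semi-structured source such as a syndicated web feed
news_feed = {1: ["Acme wins new contract"]}

def customer_service(customer_id: int) -> str:
    """On-demand: fetch, integrate and render only when the service is called."""
    cid, name = con.execute(
        "SELECT id, name FROM customer WHERE id = ?", (customer_id,)
    ).fetchone()
    return json.dumps({"id": cid, "name": name, "news": news_feed.get(cid, [])})

print(customer_service(1))
```

In a real deployment the function would of course sit behind a web service endpoint rather than be called directly.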


Pattern Diagram



Pattern Example Use Case

A company needs to provide different kinds of information services targeted at different role-based user communities for access via their enterprise portal. These services include:


  • Internal operational and analytical information services
  • Information services that integrate structured and semi-structured information, including internal and external syndicated web feeds
  • Information as a Service (IaaS) services that render information in various XML formats (e.g. XBRL) for consumption by external users and applications


Reasons For Using It

Rapid development of re-usable information services for consumption by portals, composite applications, processes and reporting tools.


Fri, 18 Dec 2009 03:34:47 -0700
Data Federation - Master Data Patterns - The Virtual MDM Pattern

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is the Virtual MDM master data pattern. This is as follows:


Pattern Description

This pattern uses data virtualization to provide one or more on-demand integrated views of master data entities such as customer, product, asset, employee etc. even though the master data is fractured across multiple underlying systems. Applications, processes, portals, reporting tools and data integration workflows needing master data can acquire it on-demand via a web service interface or via a query interface such as SQL.
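A minimal sketch of the idea, using an in-memory SQLite database to stand in for a data virtualization server, with invented schemas for the underlying systems (here, a CRM table and an ERP table holding fractured customer master data):

```python
# A sketch of the Virtual MDM pattern: master data fractured across two
# 'systems' is presented as one integrated, on-demand view via SQL.
# All schema names are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE crm_customer (cust_id TEXT, name TEXT);
    CREATE TABLE erp_customer (kunnr TEXT, name1 TEXT);
    INSERT INTO crm_customer VALUES ('C1', 'Acme Ltd');
    INSERT INTO erp_customer VALUES ('C2', 'Globex plc');

    -- The virtual integrated view: consumers query this, not the sources
    CREATE VIEW customer_master AS
        SELECT cust_id AS customer_id, name  AS customer_name FROM crm_customer
        UNION
        SELECT kunnr   AS customer_id, name1 AS customer_name FROM erp_customer;
""")

rows = con.execute(
    "SELECT customer_id, customer_name FROM customer_master ORDER BY customer_id"
).fetchall()
print(rows)  # [('C1', 'Acme Ltd'), ('C2', 'Globex plc')]
```

A production data virtualization server adds matching and survivorship logic that a plain UNION view does not, but the consumption model is the same: one view, many underlying systems.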


Pattern Diagram


Pattern Example Use Case

A manufacturer needs to make sure that changes to its customer data are made available to its marketing, e-commerce, finance and distribution systems as well as its business intelligence systems to keep business operations, reporting and analysis running smoothly. A shipping group of companies needs to perform a routine maintenance upgrade on a particular type of asset. However, its assets are managed by different systems in multiple lines of business. In order to budget for this upgrade it needs to have a single view of assets to fully understand maintenance costs. 


Reasons For Using It

To obtain a single integrated view of master data for consistency across business operations, quickly and at a relatively low cost.

Fri, 11 Dec 2009 09:24:33 -0700
Data Federation - DW Patterns - The Virtual Data Source

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is the Data Warehouse Virtual Data Source pattern. This is as follows:

Pattern Description

This pattern uses virtual views of federated data to create virtual data source components for use in ETL processing. The purpose of this pattern is twofold: firstly, to protect ETL workflows from structural changes to operational data sources; secondly, to create re-usable virtual data source 'components' for accessing disintegrated master and transactional data. The virtual data source pattern effectively 'ring fences' just the data associated with a customer or a product, for example, meaning that ETL workflows can be built for customer data, product data, asset data, order data and so on. This helps ETL designers to create ETL jobs dedicated to a particular type of data, e.g. the customer ETL job, the product ETL job, the orders ETL job. This simple design of data consolidation workflows, each dedicated to a type of data, allows these jobs to be re-used if the same data is needed elsewhere, e.g. customer data needed in two different data marts. It also guarantees that the same data is made available again and again via the same virtual data source.
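The shielding effect can be sketched with an in-memory SQLite database standing in for a data virtualization server (all schema names invented): the ETL job reads only from the virtual source view, so when a source column is renamed, only the view mapping changes and the ETL job is untouched:

```python
# A sketch of the virtual data source pattern: the ETL job reads from a
# stable view ('v_customer_source'); a source schema change is absorbed by
# remapping the view, not by rewriting the ETL job. Names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src_cust (cno TEXT, cname TEXT)")
con.execute("INSERT INTO src_cust VALUES ('C1', 'Acme Ltd')")
con.execute("""CREATE VIEW v_customer_source AS
               SELECT cno AS customer_id, cname AS customer_name FROM src_cust""")

def etl_load_customers():
    # The ETL job only ever sees customer_id / customer_name
    return con.execute(
        "SELECT customer_id, customer_name FROM v_customer_source").fetchall()

before = etl_load_customers()

# A new source system release renames a column; only the view is remapped
con.executescript("""
    DROP VIEW v_customer_source;
    ALTER TABLE src_cust RENAME COLUMN cno TO customer_number;
    CREATE VIEW v_customer_source AS
        SELECT customer_number AS customer_id, cname AS customer_name FROM src_cust;
""")

assert etl_load_customers() == before == [('C1', 'Acme Ltd')]
```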


Pattern Diagram


Pattern Example Use Case

Mergers and acquisitions and new system releases often cause changes to operational systems' data structures. This pattern can be used to shield ETL jobs that populate data warehouses and master data hubs from structural changes to source systems, simply by changing the mappings in the virtual source views.


Reasons For Using It

Reasons for using this pattern include the ability to manage change more easily, lower ETL development and maintenance costs and modular design of data integration workflows associated with consolidating data.

Fri, 04 Dec 2009 03:51:23 -0700
External Data Feeds BI To the Front Office

Everywhere I look at the moment I see my clients talking about needing to benchmark themselves against the market, to understand customer and prospect sentiment on social networking sites and to understand competitors in much more detail. It is not just me that has recognised this need. It also seems that young startup companies have seen this gap in the market. Over the last few days I have spent some time talking to Andrew Yates, CEO of Artesian, and Christian Koestler, CEO of Lixto, about their solutions in this area.

Artesian is focused on monitoring media news, competitor intelligence and market intelligence that can be fed into front-office processes, in particular to sales force automation. Integration with is provided, as is delivery to mobile devices for mobile salespeople on the road. Their intention on media intelligence, for example, is to track coverage across all media channels contextually matched to commercial triggers or specific areas of interest. What I like about Artesian is the fact that they have looked at how to drive revenue from intelligence derived from web content by plugging it into front-office processes. Also, by adopting social software attached to front-office systems like's new Chatter offering, it becomes possible to collaborate over this intelligence. I would like to see this solution integrate with Microsoft SharePoint and IBM Lotus Connections for wider use in large enterprises. However, seeing the need to focus attention on content that has real value in the front office is a real strength of this young startup.

Lixto has an integrated development environment that allows you to build analytic applications pulling data from web sites, such as competitor price information, new competitor marketing campaign data and other information, that can be loaded into their customisable analytic applications to monitor competitors, for example.

Extracting insight from external data is definitely on the increase, with YellowBrix and Mark Logic also in on the act. IBM jumped into the market back in October with their announcement of IBM Cognos Content Analytics. This market is heating up, and it seems to me that the start-ups are out there with competitive offerings.

Fri, 27 Nov 2009 10:16:09 -0700
Data Federation - DW Patterns - Virtual Data Mart

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is a Data Warehouse Virtual Data Mart pattern. This is as follows:

Pattern Description

This pattern uses data virtualization to create one or more virtual data marts on top of a BI system, thereby providing multiple summarised views of detailed historical data in a data warehouse. Different groups of users can then run ad hoc reports and analyses on these virtual data marts without interfering with each other's analytical activity.
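Conceptually, a virtual data mart is just a summarised view defined over the warehouse. A minimal sketch with an in-memory SQLite database and invented schema names:

```python
# A sketch of the virtual data mart pattern: a summarised view defined over
# a detailed warehouse fact table, one per user group, with no extra data
# stores created. Schema names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dw_sales_fact (region TEXT, product TEXT, amount REAL);
    INSERT INTO dw_sales_fact VALUES
        ('North', 'A', 100.0), ('North', 'B', 50.0), ('South', 'A', 75.0);

    -- A 'virtual data mart' for the regional sales team: summarised by region
    CREATE VIEW vm_sales_by_region AS
        SELECT region, SUM(amount) AS total FROM dw_sales_fact GROUP BY region;
""")

rows = con.execute(
    "SELECT region, total FROM vm_sales_by_region ORDER BY region").fetchall()
print(rows)  # [('North', 150.0), ('South', 75.0)]
```

Each user group gets its own view definition; the detailed data stays in one place.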


Pattern Diagram


Virtual DM Pattern.JPG


Pattern Example Use Case

Multiple 'power user' business analysts in the risk management department of a bank often need their own analytical environment to conduct specific in-depth analyses in order to create the best scoring and predictive models. This pattern facilitates the creation of multiple virtual data marts without the need to hold data in many different data stores.


Reasons For Using It

Reduces the proliferation of data marts and also prevents inadvertent 'personal' ETL development by power users, who have a tendency to want to extract their own data to create their own data marts. It is often the case that each power user wants a detailed subset of data from a data warehouse that overlaps with the data subsets required by other power users. This pattern avoids inadvertent inconsistent ETL processing on extracts of the same data by each and every power user. It also avoids the duplication of the same data in every data mart, improves power user business analyst productivity, reduces the time to create data marts and reduces the total cost of ownership.




Fri, 27 Nov 2009 07:01:32 -0700
Data Federation - DW Patterns - The Holistic Data View Pattern

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is a Data Warehouse Holistic Data View pattern. This is as follows.


Pattern Description

This pattern, also known as the schema extension pattern, uses data virtualization to create a holistic, complete view of business activity by combining the most up-to-date operational transaction activity in one or more operational systems with the corresponding detailed historical data from data warehouses and data marts.
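The mechanics can be sketched with a union view over an in-memory SQLite database standing in for the data virtualization server (all table names invented): current operational transactions that have not yet reached the warehouse are combined with warehouse history:

```python
# A sketch of the holistic data view pattern: a view that unions today's
# operational transactions with historical warehouse data so consumers see
# one complete picture. Table and column names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ops_txn (cust TEXT, amount REAL);   -- not yet in the warehouse
    CREATE TABLE dw_txn  (cust TEXT, amount REAL);   -- historical data
    INSERT INTO ops_txn VALUES ('C1', 500.0);
    INSERT INTO dw_txn  VALUES ('C1', 100.0), ('C1', 200.0);

    CREATE VIEW holistic_txn AS
        SELECT cust, amount FROM ops_txn
        UNION ALL
        SELECT cust, amount FROM dw_txn;
""")

# e.g. total risk exposure for a customer, across current and historical activity
exposure = con.execute(
    "SELECT SUM(amount) FROM holistic_txn WHERE cust = 'C1'").fetchone()[0]
print(exposure)  # 800.0
```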


Pattern Diagram



Holistic Data View Pattern.JPG


Pattern Example Use Case

Front-office staff in a call centre operator or a branch of a bank may need to view current risk exposure for a customer they are on the phone to while also looking at a risk exposure trend for that customer across all loan products held. A second use case is regulatory compliance reporting whereby operational and historical data may both be needed for compliance reporting.  


Reasons For Using It

This pattern allows companies to quickly show a holistic view of business activity that includes the more recent transactional activity combined with historical activity. This data can be presented for analysis and reporting even if the latest transactional data has not yet reached the data warehouse. 


Look out for the next data federation data warehouse patterns, on the virtual data mart and the virtual data source, coming soon.



Mon, 16 Nov 2009 10:08:04 -0700
Informatica 9 Raises The Bar for Data Management Platforms

As you probably know, Informatica announced Informatica 9 yesterday in a blaze of publicity with, I am led to believe, over 10,000 people registered to view the announcement. So I thought I would make a few comments on what was announced.

The three main strands of the announcement were:

  • Relevant data through business-IT collaboration
  • Trustworthy data through pervasive data quality
  • Timely data through open SOA-based services


Relevant data through business-IT collaboration includes new browser-based analyst tools for analysts to directly specify their business requirements, automatic generation of implementation details from business specifications, and a common metadata repository allowing business analysts and IT developers to collaborate and share specification and implementation artifacts with each other.


Pervasive data quality allows data quality rules to be specified once and reused repeatedly, ensuring consistency across applications. In addition, role-based tools are offered to allow stakeholders to take ownership of their own data quality requirements. Data quality scorecards, simple analyst tools and productive developer tools are also available to empower business users, business analysts, data stewards and IT developers to be directly involved in measuring and improving data quality.
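To make the 'specify once, reuse everywhere' idea concrete (this is an illustrative Python sketch, not Informatica's actual API), data quality rules can be thought of as functions registered once and then applied to any record set, with pass rates feeding a scorecard:

```python
# An illustrative sketch of reusable data quality rules: each rule is
# defined once and applied to records from any application. All rule and
# field names are invented for the example.
rules = {
    "postcode_present": lambda rec: bool(rec.get("postcode")),
    "email_has_at": lambda rec: "@" in rec.get("email", ""),
}

def score(records):
    """Return the pass rate per rule: a simple data quality scorecard."""
    return {
        name: sum(rule(r) for r in records) / len(records)
        for name, rule in rules.items()
    }

crm = [{"postcode": "G1 1AA", "email": "a@b.com"}, {"postcode": "", "email": "nope"}]
print(score(crm))  # {'postcode_present': 0.5, 'email_has_at': 0.5}
```

The same `rules` dictionary can be applied to records from an ERP extract or a web form feed, which is the consistency benefit being claimed.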


SOA data services include support for:

  • Information catalog services to enable users to discover relevant data, be it on-premise or in the internet cloud
  • Logical data objects
  • Multi-modal data provisioning services to deliver data in multiple formats using various protocols such as web services and SQL
  • Policy-based data services governance


In my opinion, the differentiators include policy-based data services governance (which is unique) and the Business Analyst tools and collaboration support. The web-based Business Analyst tools look a very compelling story, although I would have liked to see more integration with Microsoft and IBM Lotus collaborative tools and workspaces.


Data federation and consolidation on the same platform, off the same metadata, with auto-generation, is a very strong capability. IBM has the same function, but auto-generation in their case is out of two separate tools (InfoSphere Data Architect generates data federation logical objects and mappings, while InfoSphere FastTrack generates ETL jobs for DataStage; both IBM tools use common metadata, however). I would have liked to see Informatica go the extra mile and auto-generate XSLTs for XML message translation by ESB/message broker products. I don't see this support, but equally I don't see it anywhere else either as yet. In addition, I would have liked to have seen MapReduce functionality in the announcement to handle Big Data integration. No doubt this is coming.


With respect to data services, I don't see the ability to publish data services to an Enterprise Service Repository so that these services can be managed centrally in a common place with all other types of service, although UDDI support was announced. Some competitors can publish services to ESRs, e.g. IBM with the InfoSphere Services Director. Informatica's approach to cloud data integration also appears seamless, but more information is needed. I understand a new announcement is coming soon, although they have already announced support for running PowerCenter on Amazon's EC2 cloud. In terms of competition, Microsoft can already run SSIS against the SQL Azure cloud to integrate cloud data. In addition, IBM also has multi-modal support on InfoSphere Information Server beyond SQL and web services: they also support Java RMI and REST as well as SOAP, SQL and XQuery.


I would also have liked to see Informatica stick their neck out and acquire a data modelling tool rather than just integrate with everyone else's products. However, overall, this is a strong announcement with another cloud announcement to come. There is no doubt that integrated data management platforms are here now, with Informatica and IBM leading the way with Eclipse-based tool suites. SAP BusinessObjects and SAS DataFlux are clearly not far behind. Expect more from Oracle and Microsoft in 2010.

Looking at the trend here, it is clear that companies need to look seriously at moving from separate data management tools from many different suppliers, each with its own metadata, to single platforms with integrated shared metadata.

Wed, 11 Nov 2009 08:26:35 -0700
Information Led Transformation - Will IBM's New Strategy Be A Success?

So here I am in Las Vegas at IBM Information On Demand, IBM's global information management conference. The up-coming theme that will be launched here is IBM's new Information Led Transformation (ILT) initiative, which opens up IBM's major play in the business optimization market. IBM is pouring enormous amounts of money into this space, stating that this market is growing twice as fast as any other initiative, including business automation. Their estimate of the market size is $105Bn. The objective of Information Led Transformation is micro-optimisation, whereby every business optimization is carried out in real time (or should I say right time) at all points of impact. That means optimising all decisions and process activities based on the current situation as it happens, by leveraging event processing, predictive analytics and rules engines for automated action management, based on a base of trusted information delivered on-demand and in-context, where it is needed and when it is needed. IBM ILT will leverage:

  • IBM's InfoSphere Information Server platform
  • InfoSphere Streams event processing
  • Change data capture
  • In-memory data in SolidDB and Cognos TM1
  • Cognos Performance Management and Analytics
  • SPSS Predictive Analytics
  • Automated decisions via the iLog rules engine and other technologies such as WebSphere Business Events and Cognos Now!
  • Collaborative decision making via Lotus Connections
  • Process optimisation using the WebSphere BPM technologies and ESB/Message Broker

On top of this, IBM will deliver solutions (both cross-industry and vertical). We are entering an era of business automation in pursuit of business optimisation, whereby BI is integrated into processes, and event-driven automated decision making and action taking keep the business running optimally at all points of operation.
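A minimal sketch may make the idea concrete. The names, data and thresholds below are all invented for illustration: an incoming business event is scored by a stand-in predictive model, a business rule is applied to the score, and an action is chosen automatically. A real deployment would use an event-processing engine, a scoring service and a rules engine (the technologies listed above) rather than plain functions.

```python
# Toy event-driven automated decision making: score each event with a
# "model", apply a rule to the score, and pick an action automatically.

def churn_risk(event):
    """Stand-in for a predictive model: return a risk score in [0, 1]."""
    return 0.9 if event["complaints"] >= 3 else 0.2

def decide(event):
    """Apply a simple business rule to the model score."""
    if churn_risk(event) > 0.5:
        return "offer_retention_discount"
    return "no_action"

# Simulated event stream arriving in real time
events = [
    {"customer": "C001", "complaints": 4},
    {"customer": "C002", "complaints": 0},
]
actions = {e["customer"]: decide(e) for e in events}
print(actions)  # {'C001': 'offer_retention_discount', 'C002': 'no_action'}
```

The point of the sketch is the shape of the loop, not the rule itself: every event is evaluated as it happens, and the action is taken at the point of impact rather than in a batch report after the fact.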

In addition, ILT has 4,000 IBM consultants already in place to chase business.  Time will tell how successful this initiative is. It is very ambitious, but real-time use of intelligence and predictive analytics on an event-driven and on-demand basis is definitely the right direction. The challenge here is bringing all these technologies together and getting IT groups to play ball. In addition, many businesses need to learn how to optimise their business. Trusted data (via enterprise data governance and MDM) will be fundamental to that, as will the need for companies to make an inventory of their business events. Unless companies learn what to look for in different parts of their business, they will not be able to maximise the benefits of business optimisation.

]]> Fri, 23 Oct 2009 15:32:57 -0700
Data Federation - The Data Discovery Pattern Following on from my last blog, the next data federation pattern I would like to discuss is the Data Discovery pattern. It is as follows.


Pattern Description

This pattern uses data virtualization to query structured data held in multiple underlying core operational and analytical databases and file systems to answer business questions.  It uses a search-like user interface that returns results showing where data associated with the items being searched on can be found, e.g. a search could be done on a customer name, an order and a sales representative name. The data discovery pattern allows users to query virtual views of data held in multiple systems via the data virtualization server. Through this mechanism users can find relationships between different data items across systems and view the data as if it were in a single system, to discover answers to business questions.
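As a toy sketch of the mechanism, assuming invented source systems and field names: a single search call runs against virtual views over several sources and reports where matching data was found.

```python
# Simulated source systems behind the data virtualization server.
SOURCES = {
    "crm":     [{"customer": "Acme Ltd", "sales_rep": "J. Smith"}],
    "orders":  [{"customer": "Acme Ltd", "order_id": 1042}],
    "billing": [{"customer": "Globex", "invoice": 77}],
}

def discover(term):
    """Return, per source system, the records mentioning `term`."""
    hits = {}
    for system, rows in SOURCES.items():
        matched = [row for row in rows if term in row.values()]
        if matched:
            hits[system] = matched
    return hits

found = discover("Acme Ltd")
print(sorted(found))  # ['crm', 'orders'] - the systems holding this customer's data
```

The user never needs to know that CRM and order data live in different systems; the virtualization layer resolves the search across all of them and reports where the matches sit.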


[Figure: Data Discovery pattern]


Pattern Example Use Case

Call centres are receiving a lot of enquiries from customers as to why their orders are not being fulfilled. Data can be queried using a customer name, the products ordered and the sales representative who took the order. The results returned show all occurrences of data about the order, the customer and the sales representative across multiple systems. Using the virtual views, this data can be analysed across systems to see what the reason for the delay in delivery is, e.g. the order exceeds the customer's credit limit, or the order cannot be fulfilled because inventory levels are too low. 
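A hypothetical sketch of this use case, with invented field names and thresholds: virtual views of customer, order and inventory data (simulated here as dicts) are joined to diagnose why an order is delayed.

```python
# Simulated virtual views over two source systems.
customers = {"C001": {"credit_limit": 5000, "balance": 4800}}
inventory = {"P10": 3}  # units in stock

def delay_reason(order):
    """Check the order against the credit and inventory views."""
    cust = customers[order["customer"]]
    if cust["balance"] + order["value"] > cust["credit_limit"]:
        return "order exceeds credit limit"
    if inventory.get(order["product"], 0) < order["quantity"]:
        return "inventory level too low"
    return "no delay expected"

print(delay_reason({"customer": "C001", "product": "P10",
                    "quantity": 2, "value": 300}))  # order exceeds credit limit
print(delay_reason({"customer": "C001", "product": "P10",
                    "quantity": 5, "value": 100}))  # inventory level too low
```

In a real deployment the two lookups would be queries against virtual views served by the data virtualization server, not in-memory dicts; the value of the pattern is that the call centre agent gets one answer without knowing which system held each piece of the puzzle.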


Reasons For Using It

This pattern has the effect of broadening access to enterprise data to a much larger user base: users who are confident using a search box interface but who are not aware of where the data they need is located, and who do not have the time and/or the skills to use BI tools.

]]> Fri, 23 Oct 2009 12:31:21 -0700
Data Federation Patterns Having seen a significant increase in demand from my clients to start programs to create information services, I thought it might be useful to look at one way of doing that: through the use of data federation software. Then I realised that it would be better to look at all the popular ways of using this technology. On that basis, this blog starts a series from me on popular patterns that companies can use to get maximum value out of data federation software.

To ease understanding, the patterns discussed have been classified into the following categories:

  • Business intelligence and performance management patterns
  • Data warehousing patterns
  • Master data patterns
  • Information services patterns
  • Operational patterns
  • Data management patterns

For those of you not sure what data federation is please refer to my 2006 article on the subject.


Business Intelligence and Performance Management Patterns

Popular business intelligence (BI) and performance management patterns for data virtualization software are:

  • The BI/Performance Management Integration pattern
  • The Data Discovery pattern

The BI/Performance Management Integration Pattern

This pattern uses data virtualization to integrate multiple underlying line of business (LoB) BI systems with enterprise-level performance management scorecards and dashboards, so that detailed low-level LoB metrics in the underlying BI systems can be used in calculating higher-level enterprise key performance indicators in those scorecards and dashboards. This is essentially an aggregation pattern. There are two options associated with this pattern. The first is to map the data structures in multiple underlying BI system data stores to the virtual view(s) needed by performance management.


Pattern Diagram (Option 1)



The second is to map the virtual view(s) to underlying BI web services that retrieve the necessary data from the BI systems as required. These BI web services will typically be BI tool reports and queries published as web services on the BI platform being used. The data virtualization server simply calls the appropriate BI tool(s) via a web service interface to run the report/query and get the data needed to calculate the key performance indicators (KPIs) that appear in the performance management scorecard(s).

Pattern Diagram (Option 2)

[Figure: BI/Performance Management Integration pattern (Option 2)]
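A sketch of option 2 under assumed names and invented figures: the virtualization layer calls each line of business BI "service" (plain functions here; published BI reports invoked as web services in practice) and aggregates the returned metrics into an enterprise-level KPI for the scorecard.

```python
# Each function stands in for a published LoB BI report/query.
def mortgage_exposure():
    return 120.0  # exposure in GBP millions (invented figure)

def card_exposure():
    return 45.0

def loan_exposure():
    return 60.0

LOB_SERVICES = [mortgage_exposure, card_exposure, loan_exposure]

def enterprise_exposure():
    """Aggregate the LoB metrics into a corporate KPI."""
    return sum(service() for service in LOB_SERVICES)

print(enterprise_exposure())  # 225.0 - compared against target in the scorecard
```

Option 1 differs only in where the numbers come from: instead of calling BI services, the virtualization server would query virtual views mapped directly onto the underlying BI data stores, with the same aggregation on top.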

Pattern Example Use Case

A manufacturer with different lines of business may want to monitor the total cost of shrinkage across all product lines to compare against targets.  A bank may have different BI systems monitoring risk exposure for each of its product lines (e.g. mortgages, credit cards, loans) and want to monitor corporate exposure across all product lines to see if it is in line with targets.

Reasons For Using It

Many companies with multiple line of business (LoB) BI systems cannot answer enterprise-level questions. Doing so requires enterprise key performance indicators to be calculated by aggregating LoB metrics held in multiple BI systems.


In my next blog we will look at the Data Discovery pattern.



]]> Wed, 07 Oct 2009 04:21:50 -0700