Blog: Mike Ferguson

Mike Ferguson

Welcome to my blog on the UK Business Intelligence Network. I hope to help you stay in touch with hot topics and reality on the ground in the UK and European business intelligence markets and to provide content, opinion and expertise on business intelligence (BI) and its related technologies. I would also relish it if you, too, would share your own valuable experiences. Let's hear what's going on in BI in the UK.

About the author

Mike Ferguson is Managing Director of Intelligent Business Strategies Limited, a leading information technology analyst and consulting company. As lead analyst and consultant, he specializes in enterprise business intelligence, enterprise business integration, and enterprise portals. He can be contacted at +44 1625 520700 or via e-mail.

As a member of the Boulder BI Brain Trust (BBBT), I sat in on a session given by Pervasive Software Chief Technology Officer (CTO) and Executive Vice President Mike Hoskins last week.  The session started out covering Pervasive's financial performance ($47.2 million in revenue in fiscal 2010 and 38 consecutive quarters of profitability) before getting into the technology itself. Headquartered in Austin, Pervasive offer their PSQL embedded database, a data and application exchange (Pervasive Business Xchange), and their Pervasive Data Integrator and Pervasive Data Quality products, which can connect to a wide range of data sources using the Pervasive Universal Connect suite of connectors.  They also offer a number of data solutions.  Pervasive has had success in embedding its technology in ISV offerings and in SaaS solutions on the cloud.  However, what caught my eye in what was a very good session was their new scalable data integration engine, DataRush.

More ....

Posted August 18, 2010 8:06 AM

Having just got back from the MicroStrategy World Conference in beautiful Cannes, I thought I would cover what was announced this week at the event.  CEO Michael Saylor launched MicroStrategy Mobile for iPhone, iPad and BlackBerry, describing it as "the most significant launch in MicroStrategy history".  In his opening keynote he talked about mobile as "the 5th major wave of computing", starting with mainframes, then mini-computers, then personal computers, then the desktop internet and now the mobile internet.  Their vision here is a good one - BI all the time, everywhere and for everyone. Mobile device access to BI has been around for a while in some offerings, but I was impressed with the work MicroStrategy have put into the mobile user interface on touch-sensitive 'gesture' devices like the Apple iPhone and iPad.   They have taken advantage of the full set of Apple gestures and also added BI-specific gestures, including Drill Down and Page By.  They have also released an Objective-C software development kit (SDK) for MicroStrategy Mobile.  This allows developers to build custom widgets and embed them in the MicroStrategy Mobile application, or to embed MicroStrategy Mobile in their own applications.

More ........

Posted July 8, 2010 8:57 AM
As I research more and more into the world of Cloud-based BI, it is becoming pretty evident where we are headed. In my opinion we are moving down the road to an iTunes model for BI.   Yesterday I spent some time with Actuate in London looking at their BIRT On-Demand platform as a service (PaaS) solution (which is very easy to use). It was only a matter of minutes before I was up and running with a Mashboard.  A few weeks back in New Orleans I used Dundas Dashboard to quickly build a dashboard from pre-built components. Similarly, Microsoft SQL Server 2008 R2 has the ability in ReportBuilder 3.0 to quickly build up a library of components that can be dragged and dropped into a report.

More .........

Posted July 2, 2010 7:41 AM

Just over a week ago I was invited to attend an analyst briefing at the Microsoft BI conference in New Orleans, which was running alongside the Microsoft TechEd conference.  The conference itself was very well attended, with several thousand delegates.  Several things were on show at this event, including SharePoint 2010, SQL Server 2008 R2, Office 2010, PowerPivot and PerformancePoint Services 2010. Also on show was SQL Server Data Warehousing Edition (also known as the Madison project), the massively parallel edition of SQL Server that will be shipped later this year.

The one thing that stood out for me was the seismic shift towards collaborative BI.   As my friend Colin White so aptly put it in the analyst briefing, "Microsoft have brought BI to collaboration rather than collaboration to BI".  This is an important point, because what it says is that there is little point adding collaborative features to a BI platform if these are not the services associated with a mainstream collaborative platform.  There is far more value in integrating a BI platform with the company's collaboration software to tap into things like collaborative workspaces, presence awareness, unified communication, shared calendars and so on.  In Microsoft's case this is of course the SharePoint product, which has spread virally through most organisations.

It is no surprise therefore that Microsoft's BI initiative is built around 3 main components and not just SQL Server.  These are:

  • Office
  • SharePoint
  • Microsoft SQL Server 2008 R2

Note that SQL Server 2008 R2 includes StreamInsight, Microsoft's complex event processing (CEP) engine, and Microsoft Master Data Services.

While there, we were taken through an excellent demo showing the power of collaboration and what it can do when integrated with BI.  It even included the Microsoft RoundTable device, which, although it has been available for some four years, I had never actually encountered before.

What the demo showed me was the speed with which BI and BI 'components' can be spread among a community of users. My conclusion is that the integration of SQL Server 2008 R2 with SharePoint 2010 takes this to another level, in that the rate at which business intelligence can be shared is almost 'Twitter speed'.  For those of you using Twitter, you will know that as soon as something of interest breaks, re-tweets can spread it across masses of people in a matter of minutes.  This is the feeling I got during the demo.  It fuels mass sharing, mass reuse and mass development of BI applications and artifacts, in particular reports and dashboards. It certainly fits with Microsoft's vision of BI for everyone.

Several new features open up the floodgates for collaborative BI, allowing users to share intelligence with others without the need for IT. For example:

BI reports can be managed by SharePoint in document libraries. You can also preview reports before opening them.

Microsoft is also fueling development by business users on the back of what power users have done, thereby bypassing IT.  This is because there is now a capability whereby Microsoft ReportBuilder 3.0 can access PowerPivot workbooks uploaded to SharePoint sites.  You can also export to Excel from PowerPivot.  Power users using PowerPivot (originally code-named Gemini) can take data from different data sources (including newly supported Atom feeds) and merge and join that data. Relationships between tables can be managed inside PowerPivot.  PowerPivot power users can then create workbooks that process this data and upload them to SharePoint sites.  ReportBuilder 3.0 (or any BI client) can then treat the PowerPivot workbook as a data source.  Not only that, but ReportBuilder can create report parts which are sharable in a report part gallery, so that other users can reuse them by simply dragging and dropping the report parts onto a new report for rapid development, without having to know the detail underneath.
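As a rough illustration of how this works (the server, library and workbook names here are hypothetical, and exact options may vary by release), a ReportBuilder data source simply points at the published workbook's SharePoint URL using the Analysis Services provider:

    Data source type:  Microsoft SQL Server Analysis Services
    Connection string: Data Source=http://intranet/PowerPivotGallery/SalesAnalysis.xlsx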

Hopefully by now you have got the picture - power users building their own workbooks in PowerPivot, publishing them to SharePoint, other users using them as data sources in reports, report parts being created, and a gallery of parts to be shared across a community of users.  Powerful stuff, and we are not done yet.

In SharePoint 2010 there is a new site template called Business Intelligence Center.  What you can now do is create a new site in SharePoint using the Business Intelligence Center template. This template includes chart web parts and Excel Services workbook access. It also includes a PerformancePoint library so that you can start building your dashboard very rapidly, including access to reports and report parts. With this mechanism, Microsoft is opening up dashboard development to the masses and also allowing 'social' performance management, whereby dashboards and/or dashboard components can be rated.  All this, integrated with SharePoint and Office, is in my opinion going to take self-service BI development to another level; it could easily have a 'popcorn effect', with masses of BI being produced rapidly and IT nowhere in sight.  There is no doubt that it opens up the floodgates for business innovation and sharing, with personalised dashboard development using PerformancePoint Services 2010 integrated with SharePoint 2010.

A Question of Governance?

My only concern with this is the issue of governance.  What Microsoft have done is to put mass development in the hands of the business.  If you think you have seen anything on self-service BI, just wait until SharePoint 2010, Office 2010 and SQL Server 2008 R2 move into production in your shop. You ain't seen nothing yet.

However, I see very little with respect to data governance. What about business glossaries? What about metadata lineage?  In a world of increasing regulation and legislation to prevent corporate catastrophes, can anything be audited? Can data be traced back to where it came from? How has the data been transformed by the power users? What does the data mean?  I have as yet seen little from Microsoft in the form of metadata management and data governance, despite the fact that Master Data Services is also delivered as part of this SQL Server release.  While there is no doubt that this is coming (confirmed by the Microsoft people I spoke with on the exhibition floor), my only fear is that it will be too late.  Will the horses have already bolted, with self-service BI unstoppable and off down a track without lineage to help users know that the data is trusted?

Equally, scorecard and dashboard development is bottom-up. Everyone (with authority) can create their own scorecards and dashboards rapidly, but there appears to be no framework whereby these can be slotted into multi-level strategy management, unlike, say, SAP with SAP Strategy Management.  So what is the answer? Is it that all bets are off and we just let the business figure out the best way to manage on the back of socially rated scorecards and dashboards?  What happened to business strategy?  Many companies set a strategy at executive level and want enterprise-wide business strategy execution.   That approach is top-down.  What Microsoft is fueling is bottom-up.  My opinion is that we need both, not one or the other.

Freedom Versus Governance - A Delicate Balancing Act

It is pretty clear then that, setting aside the new SQL Server Data Warehousing Edition, this is very much a collaborative BI release by Microsoft.  It is a major leap forward in what business users can do for themselves.  We have two forces at work here: freedom versus governance.  We have to get the balance right.  Too much freedom and we could have chaos, with no ability to audit what has been done or whether the BI is trusted. Too much governance and we put innovation in a straitjacket or kill it altogether.   All I would say is that IT had better get a data governance program underway soon to control data all the way out to data marts and cubes. If that is done then there is no doubt that the business can be empowered to innovate, which is what should happen. Without a data governance program, however, I think it is going to be really hard to get alignment with what the business is doing, given the sheer speed of development that is now possible with this release.  Let's hope governance, innovation and collaboration are a winning combination.

Follow me on Twitter

Posted June 21, 2010 8:50 AM

Just over a week ago I spent a day at SensorExpo in Chicago presenting on complex event processing (CEP), discussing how CEP engines, predictive analytics and business rules can be used to analyse event data in motion to facilitate business optimisation.  This was a very busy conference.  I estimated at least 2000-3000 people on the exhibition floor, with maybe 400 at the conference sessions.  I found around 100 vendors exhibiting all kinds of sensor devices, products and services.  To my surprise, however, I had only heard of two of the vendors: IBM and Texas Instruments.  The floor was heaving with people looking to instrument their business operations to measure everything from movement, temperature, energy consumption, stress and heat to fluid volumes, pipeline flows and RFIDs.  There were analog devices and digital devices.  When talking to the vendors, the big common denominator was that they are all trying to collect the data from the sensor networks to analyse it.  Yet other than IBM there was not a single BI vendor in sight - not even a single CEP vendor.   I was shocked, because this market is clearly booming.   What was even more surprising was that I could not find an IT professional anywhere; 99.9% of all delegates and speakers were engineers.
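To make the in-motion idea concrete, here is a minimal sketch of the kind of continuous rule a CEP engine evaluates over a stream of sensor events. The field names and thresholds are hypothetical, and a real CEP engine would express this declaratively rather than in hand-written Python:

    from collections import deque

    # Hypothetical sliding-window rule: alert if the average stress reading
    # from a sensor exceeds a threshold over its last 10 events.
    WINDOW_SIZE = 10
    STRESS_THRESHOLD = 75.0
    windows = {}  # sensor_id -> recent readings

    def on_event(event):
        """Process one in-motion sensor event, e.g. {'sensor_id': 'B-17', 'stress': 81.2}."""
        window = windows.setdefault(event['sensor_id'], deque(maxlen=WINDOW_SIZE))
        window.append(event['stress'])
        if len(window) == WINDOW_SIZE and sum(window) / WINDOW_SIZE > STRESS_THRESHOLD:
            print("ALERT: sustained high stress on sensor " + event['sensor_id'])

    # Feed a few events through the rule
    for reading in [70, 72, 74, 76, 78, 80, 82, 84, 86, 88]:
        on_event({'sensor_id': 'B-17', 'stress': float(reading)})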

Attending some of the case studies, I found some fantastic applications of sensor networks and RFIDs: healthcare, with sensors all over hospitals, and equipment and patients all tagged with RFIDs.  The return on investment in this case came from fraud prevention on equipment and process improvement for patients.  Another session I attended was on monitoring stress in all the bridges in the US - over 700,000 of them.  Some of the stats being quoted by the speakers were staggering: "We are emitting 3 events per minute from every sensor on a 7x24 basis. After 6 months of operating like this we have over 20 petabytes of data."  You read it right: 20 petabytes.   A lot of the technical focus at the conference was on energy harvesting to prolong sensor battery life, but the business message was clear as a bell.  Process optimisation and cost reduction come from instrumenting business operations: manufacturing production lines, supply chains, product distribution.  You name it, they're measuring it.

So I have to ask: where are all the BI vendors, the analytical DBMSs, the CEP products, the dashboards, the predictive analytics?  The volume of data coming over the horizon from the adoption of sensor networks and RFIDs is nothing short of massive.  What is also clear is that this is already going on in enterprises and, in the main, IT is blissfully unaware of it.  Clearly IT BI professionals have got to get in touch with their engineering colleagues, and engineers have got to be made aware of mainstream data integration, analytical database and BI platform technologies, as well as CEP of course.  I don't think I have ever seen such a chasm between IT and business, not even explored, never mind crossed.  Yet the value of CEP and mainstream DW/BI to this market is nothing short of enormous.   It is symptomatic that even though this market is heaving with engineers, it has yet to be tied into mainstream IT to exploit far more robust software than is being used on this data at present.  What an opportunity. What a huge opportunity.  It most certainly is going to redefine large databases set up for analysis of historical event data.  CEP will obviously go there; it has to get beyond the financial markets and wake up to a ton of data in motion being emitted by the growing number of devices.  An article I read recently said that sensors empower an "Internet of Things".  Well, those things are coming over the horizon emitting a tsunami of data. It is time CEP and DW/BI vendors woke up, smelt the coffee and became aware of this rapidly growing market.  CIOs had better take heed too, because they are going to have to integrate it into mainstream IT.

Posted June 21, 2010 8:46 AM

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is an On-Demand Information Services pattern. This is as follows:


Pattern Description

This pattern uses data virtualization to provide on-demand integrated data to applications, reporting tools, processes and portals via a web services interface. Structured and semi-structured data sources are supported, including RDBMSs, any web service (internal or external), web syndication feeds, flat files, XML, packaged applications and non-relational databases.
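As a rough illustration only (the service URL and field names below are hypothetical, not taken from any particular product), a consuming application might request an integrated customer view from such an on-demand information service like this:

    import json
    import urllib.request

    # Hypothetical on-demand information service exposed by the data
    # virtualization layer; it integrates CRM, billing and web feed data
    # behind a single request.
    SERVICE_URL = "http://dv-server/services/customer360?customer_id=C1001"

    with urllib.request.urlopen(SERVICE_URL) as response:
        customer_view = json.loads(response.read())

    # The caller sees one integrated record, not the underlying sources.
    print(customer_view["name"], customer_view["open_orders"], customer_view["recent_news"])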


Pattern Diagram



Pattern Example Use Case

A company needs to deliver different kinds of information services targeted at different role-based user communities for access via its enterprise portal.  These services include:


·         Internal operational and analytical information services

·         Information services that integrate structured and semi-structured information including internal and external syndicated web feeds

·         Information as a Service (IaaS) services that render information in various XML formats (e.g. XBRL) for consumption by external users and applications


Reasons For Using It

Rapid development of re-usable information services for consumption by portals, composite applications, processes and reporting tools.


Posted December 18, 2009 3:34 AM

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is a Virtual Master Data Management (MDM) pattern. This is as follows:


Pattern Description

This pattern uses data virtualization to provide one or more on-demand integrated views of master data entities such as customer, product, asset and employee, even though the master data is fractured across multiple underlying systems. Applications, processes, portals, reporting tools and data integration workflows needing master data can acquire it on demand via a web service interface or via a query interface such as SQL.
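A minimal sketch of the idea, assuming two hypothetical source extracts (a CRM system and a billing system) that each hold a fragment of the customer master record, with Python simply standing in for the data virtualization layer:

    import pandas as pd

    # Hypothetical fragments of customer master data held in two systems.
    crm = pd.DataFrame({
        "customer_id": ["C1001", "C1002"],
        "name": ["Acme Ltd", "Globex Plc"],
    })
    billing = pd.DataFrame({
        "customer_id": ["C1001", "C1002"],
        "billing_address": ["1 High St, Leeds", "9 Market Sq, York"],
    })

    def customer_master_view(customer_id):
        """On-demand integrated view of one customer, assembled at query time
        rather than persisted in a physical MDM hub."""
        merged = crm.merge(billing, on="customer_id", how="left")
        return merged[merged["customer_id"] == customer_id]

    print(customer_master_view("C1001"))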


Pattern Diagram


Pattern Example Use Case

A manufacturer needs to make sure that changes to its customer data are made available to its marketing, e-commerce, finance and distribution systems as well as its business intelligence systems to keep business operations, reporting and analysis running smoothly. A shipping group of companies needs to perform a routine maintenance upgrade on a particular type of asset. However, its assets are managed by different systems in multiple lines of business. In order to budget for this upgrade it needs to have a single view of assets to fully understand maintenance costs. 


Reasons For Using It

To obtain a single integrated view of master data, for consistency across business operations, quickly and at relatively low cost.

Posted December 11, 2009 9:24 AM

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is a Data Warehouse Virtual Data Source pattern. This is as follows:

Pattern Description

This pattern uses virtual views of federated data to create virtual data source components for use in ETL processing. The purpose of this pattern is twofold: firstly, to protect ETL workflows from structural changes to operational data sources; secondly, to create re-usable virtual data source 'components' for accessing master and transactional data that is fragmented across systems. The virtual data source pattern effectively 'ring-fences' just the data associated with, say, a customer or a product, meaning that ETL workflows can be built for customer data, product data, asset data, order data and so on.  This helps ETL designers to create ETL jobs dedicated to a particular type of data, e.g. the customer ETL job, the product ETL job, the orders ETL job. This simpler design of data consolidation workflows dedicated to a type of data allows these jobs to be re-used if the same data is needed elsewhere, e.g. customer data needed in two different data marts. It also guarantees that the same data is made available again and again via the same virtual data source.
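A very rough sketch of the idea (source tables and mappings are hypothetical, with Python standing in for the virtualization and ETL tooling): the ETL job depends only on the virtual source, so a structural change in a source system is absorbed by changing the mapping in one place.

    import pandas as pd

    # Hypothetical raw extracts from two operational systems.
    crm_accounts = pd.DataFrame({"acct_no": ["C1001"], "acct_name": ["Acme Ltd"]})
    order_headers = pd.DataFrame({"acct_no": ["C1001"], "order_value": [2500.0]})

    def virtual_customer_source():
        """Virtual data source 'component' that ring-fences customer data.
        If a source system's structure changes, only this mapping changes;
        the ETL jobs that consume it are untouched."""
        customers = crm_accounts.rename(columns={"acct_no": "customer_id",
                                                 "acct_name": "customer_name"})
        orders = order_headers.rename(columns={"acct_no": "customer_id"})
        return customers.merge(orders, on="customer_id", how="left")

    def customer_etl_job(target_rows):
        # The 'customer ETL job' consumes the virtual source, not the raw systems.
        target_rows.extend(virtual_customer_source().to_dict("records"))

    warehouse_customer_dim = []
    customer_etl_job(warehouse_customer_dim)
    print(warehouse_customer_dim)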


Pattern Diagram


Pattern Example Use Case

Mergers and acquisitions and new system releases often cause changes to operational systems' data structures. This pattern can be used to shield the ETL jobs that populate data warehouses and master data hubs from structural changes to source systems, simply by changing the mappings in the virtual source views.


Reasons For Using It

Reasons for using this pattern include the ability to manage change more easily, lower ETL development and maintenance costs and modular design of data integration workflows associated with consolidating data.

Posted December 4, 2009 3:51 AM

Everywhere I look at the moment I see my clients talking about needing to benchmark themselves against the market, to understand customer and prospect sentiment on social networking sites, and to understand competitors in much more detail. It is not just me that has recognised this need; young startup companies have also seen this gap in the market. Over the last few days I have spent some time talking to Andrew Yates, CEO of Artesian, and Christian Koestler, CEO of Lixto, about their solutions in this area.

Artesian are focused on monitoring media news, competitor intelligence and market intelligence that can be fed into front-office processes - in particular sales force automation. Integration with is provided, as is delivery to mobile devices for salespeople on the road. Their intention on media intelligence, for example, is to track coverage across all media channels, contextually matched to commercial triggers or specific areas of interest.  What I like about Artesian is the fact that they have looked at how to drive revenue from intelligence derived from web content by plugging it into front-office processes. Also, by adopting social software attached to front-office systems, like's new Chatter offering, it becomes possible to collaborate over this intelligence. I would like to see this solution integrate with Microsoft SharePoint and IBM Lotus Connections for wider use in large enterprises. However, seeing the need to focus attention on content that has real value in the front office is a real strength of this young startup.

Lixto has an integrated development environment that allows you to build analytic applications that pull data from web sites - competitor price information, new competitor marketing campaign data and other such information - which can then be loaded into their customisable analytic applications to monitor competitors, for example.

Extracting insight from external data is definitely on the increase, with YellowBrix and Mark Logic also in on the act. IBM jumped into the market back in October with their announcement of IBM Cognos Content Analytics. This market is heating up, and it seems to me that the start-ups are out there with competitive offerings.

Posted November 27, 2009 10:16 AM

Following on from my last blog on data federation, the next data federation pattern I would like to discuss is a Data Warehouse Virtual Data Mart pattern. This is as follows:

Pattern Description

This pattern uses data virtualization to create one or more virtual data marts on top of a BI system, thereby providing multiple summarised views of detailed historical data in a data warehouse. Different groups of users can then run ad hoc reports and analyses on these virtual data marts without interfering with each other's analytical activity.
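As a minimal illustration (table and column names are hypothetical), a virtual data mart is essentially a summarised view computed over the warehouse detail rather than a separately loaded physical store:

    import pandas as pd

    # Hypothetical detailed fact data held in the data warehouse.
    warehouse_transactions = pd.DataFrame({
        "customer_id": ["C1", "C1", "C2"],
        "product": ["Loan", "Card", "Loan"],
        "exposure": [10000.0, 2500.0, 7500.0],
    })

    def risk_virtual_data_mart():
        """A 'virtual data mart' for the risk team: a summarised view built
        on demand from warehouse detail, with no separate physical data store."""
        return (warehouse_transactions
                .groupby(["customer_id", "product"], as_index=False)["exposure"]
                .sum())

    print(risk_virtual_data_mart())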


Pattern Diagram




Pattern Example Use Case

Multiple 'power user' business analysts in the risk management department of a bank often need their own analytical environment to conduct specific in-depth analyses in order to create the best scoring and predictive models. This pattern facilitates the creation of multiple virtual data marts without the need to hold data in many different data stores.


Reasons For Using It

This pattern reduces the proliferation of data marts and also prevents inadvertent 'personal' ETL development by power users, who have a tendency to want to extract their own data to create their own data marts. It is often the case that each power user wants a detailed subset of data from a data warehouse that overlaps with the data subsets required by other power users. This pattern avoids inadvertent inconsistent ETL processing on extracts of the same data by each and every power user. It also avoids duplication of the same data in every data mart, improves power user business analyst productivity, reduces the time to create data marts and reduces the total cost of ownership.




Posted November 27, 2009 7:01 AM