
Andy Hayler

Welcome to my blog!

About the author

Andy Hayler is one of the world’s foremost experts on master data management. Andy started his career with Esso as a database administrator and, among other things, invented a “decompiler” for ADF, enabling a dramatic improvement in support efforts in that area. He became the youngest-ever IT manager for Esso Exploration before moving to Shell. As Technology Planning Manager of Shell UK he conducted strategy studies that resulted in significant savings for the company. Andy then became Principal Technology Consultant for Shell International, engaging in major software evaluation and procurement projects at the enterprise level. He then set up a global information management consultancy business, which he grew from scratch to 300 staff. Andy was the architect of a global master data and data warehouse project for Shell’s downstream business, which delivered USD 140M of annual business benefits.

Andy founded Kalido, which under his leadership was the fastest growing business intelligence vendor in the world in 2001.  Andy was the only European named in Red Herring’s “Top 10 Innovators of 2002”.  Kalido was a pioneer in modern data warehousing and master data management.

He is now founder and CEO of The Information Difference, a boutique analyst and market research firm, advising corporations, venture capital firms and software companies.   He is a regular keynote speaker at international conferences on master data management, data governance and data quality. He is also a respected restaurant critic and author (www.andyhayler.com).  Andy has an award-winning blog www.andyonsoftware.com.  He can be contacted at Andy.hayler@informationdifference.com.

 

July 2009 Archives

I have spent some time recently building up an MDM course. Normally such things are done at conferences, so unless you happen to be at some distant venue, they pass you by. However, this one is different. It is an on-line course, marketed by a new company called eLearning (who have some well-known founders). The company reckons that it is harder and harder for people to justify trips to conferences for education, and this is certainly true at the moment from what I have observed and heard about technology conferences. Hence its model is to sell courses on-line.

The course is “MDM Fundamentals and Best Practice”, and you can see more about it here. It is actually quite a lot of work to put together such a course, but I am pleased with the result, and you can now get over three hours of my views on MDM for a very fair price indeed, all from the comfort of your desk or living room, and at a pace that you can control. Of course you do miss out on the trip to Las Vegas or similar, but you can’t have everything.


Posted July 31, 2009 6:54 PM

We have now completed our survey of data quality. Based on 193 responses from IT and business staff around the world, there were some very interesting findings. Amongst these was the finding that 81% of respondents felt that data quality is much more than just customer name and address, which is the focus of most of the vendors in the market. Moreover, customer name and address data ranked only third in the list of data domains that survey respondents found most important. Both product and financial data were felt to be more important, yet product data is the focus of barely a handful of vendors (Silver Creek, Inquera, Datactics), while of all the dozens of data quality vendors out there, few indeed focus on financial data. Name and address is of course a common issue, and it is conveniently well structured, with plenty of well-established algorithms available to attack it. Yet surely the vendor community is missing something when customers rate other data types higher in importance?
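To give a flavour of why name and address data is so tractable, here is a minimal sketch of the kind of fuzzy matching those algorithms build on, written in Python using only the standard library. The normalisation rules and the 0.85 duplicate threshold are my own illustrative assumptions, not taken from any particular vendor’s product.

# A minimal sketch of fuzzy name matching; rules and threshold are illustrative assumptions.
import difflib
import re

def normalise(name: str) -> str:
    """Lower-case, strip punctuation and collapse whitespace."""
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return " ".join(name.split())

def name_similarity(a: str, b: str) -> float:
    """Return a similarity score between 0 and 1 for two customer names."""
    return difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()

def is_probable_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Flag two names as likely duplicates if they score above the threshold."""
    return name_similarity(a, b) >= threshold

print(name_similarity("Smith & Co. Ltd", "Smith and Co Limited"))  # prints a similarity score
print(is_probable_duplicate("J. Smith", "J Smith"))                # True

Real matching engines go much further than this (phonetic keys, address standardisation, reference data), but that is rather the point: this domain is well trodden, while product and financial data are not.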

Another recurring theme is the lack of attention given to measuring the costs of poor data quality. Many respondents make no effort to measure this at all, and then complain that it is hard to make a business case for data quality. “Well duh”, as Homer Simpson might say. Estimates given by survey respondents seemed very low compared to our experience, and also to anecdotes given in the very same survey. One striking example was this: “Poor data quality and consistency has led to the orphaning of $32 million in stock just sitting in the warehouse that can’t be sold since it’s lost in the system.” This company at least has no difficulty in justifying a data quality initiative. The survey had plenty of other interesting insights too.
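It takes very little arithmetic to get started. Here is a purely hypothetical back-of-the-envelope model in Python; the only real number in it is the $32 million orphaned stock figure quoted above, and the other line items and amounts are invented for illustration.

# All figures below except the orphaned stock anecdote are hypothetical.
cost_items = {
    "orphaned stock lost in the system (survey anecdote)": 32_000_000,
    "manual rework of bad records (hypothetical)": 1_500_000,
    "misdirected shipments and returns (hypothetical)": 750_000,
}

annual_cost = sum(cost_items.values())
print(f"Estimated annual cost of poor data quality: ${annual_cost:,}")
# Estimated annual cost of poor data quality: $34,250,000

Even a crude figure like this is usually enough to anchor a business case discussion.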

The full survey and analysis, all 33 pages of it, can be purchased from here.


Posted July 17, 2009 2:00 PM