Blog: Rick Barton

Rick Barton

Hello and welcome to my blog. I am delighted to blog for the BeyeNETWORK, and I'm really looking forward to sharing some of my thoughts with you. My main focus will be data integration, although I don't plan to restrict myself just to this topic. Data integration is a very exciting space at the moment, and there is a lot to blog about. But there is also a big, wide IT and business world out there that I'd like to delve into when it seems relevant. I hope this blog will stimulate, occasionally educate and, who knows, possibly even entertain. In turn, I wish to expand my own knowledge and hope to achieve this through feedback from you, the community, so if you can spare the time please do get involved.

Rick

About the author

Rick is the director (and founder) at edexe. He has more than 20 years of IT delivery experience, ranging from complex technical development through to strategic DI consultancy for FTSE 100 companies. With more than 10 years of DI experience from hands-on development to architecture and consulting and everything in between, Rick’s particular passion is how to successfully adopt and leverage DI tools within an enterprise setting. Rick can be contacted directly at rick.barton@edexe.com.

Last week I attended the Data Migration Matters conference (http://datamigrationmatters.co.uk/) in London. What I learned is that while there are differences in approach between integration and migration, there are also common factors, two of which I will cover in this blog.

The first is customer involvement.  The data in an organisation is utilised by the business, defined by the business and ultimately informs business decisions, so any project, IT or otherwise, that makes changes to data needs business buy-in and involvement.

The second is understanding the data.  Many projects have failed because of incorrect assumptions and gaps in knowledge that only manifest themselves when a project is in full flight.  It is imperative that the source data is mapped out and understood prior to coding.

In some ways these two requirements go hand in hand.  To understand and make sense of the data, you need the business to add their experience of using it to the mix.  To involve the business, you have to deliver technical information in a way that lets non-technical users interpret the data.

This is where data profiling and quality tools come into their own.  These tools analyse the source data and present the user with a high-level view of it, enabling the user to see patterns, statistics and relationships at both file and field level.
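As a rough illustration of the kind of field-level statistics such tools produce, here is a minimal sketch in Python. The field names and sample records are invented for the example; a real profiling tool would read directly from the source system and compute far richer metrics.

```python
from collections import Counter

# Invented sample of source records, purely for illustration.
records = [
    {"cust_id": "1001", "postcode": "SW1A 1AA", "dob": "1970-03-12"},
    {"cust_id": "1002", "postcode": "",         "dob": "1985-11-02"},
    {"cust_id": "1002", "postcode": "M1 2AB",   "dob": "1985-11-02"},  # duplicate id
    {"cust_id": "1003", "postcode": "N/A",      "dob": ""},            # placeholder value
]

def profile_field(records, field):
    """Simple field-level statistics: record count, populated count,
    distinct values and the most common values."""
    values = [r.get(field, "") for r in records]
    # Treat empty strings and obvious placeholders as "not populated".
    populated = [v for v in values if v not in ("", "N/A")]
    return {
        "count": len(values),
        "populated": len(populated),
        "distinct": len(set(populated)),
        "top_values": Counter(populated).most_common(3),
    }

for field in ("cust_id", "postcode", "dob"):
    print(field, profile_field(records, field))
```

Even this toy profile surfaces the kind of issues discussed below: the duplicate `cust_id`, the empty and placeholder postcodes, and the missing date of birth.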

Profiling information is often a revelation for business users.  Operationally the data works, so it is deemed fit for purpose; however, when profiled it is not uncommon to see genuine problems with the data, such as duplicate records, missing fields and records, and often just plain incorrect values.  The ability to drill down to the actual data is also imperative, in order to show that the statistics marry up to real data on real systems within the organisation.
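To sketch what that drill-down might look like, the step from a statistic ("this key field has duplicates") to the actual offending records is essentially a grouping. Again, the field names and sample data here are invented for illustration:

```python
from collections import defaultdict

# Invented sample records; a real tool would drill into the live source.
records = [
    {"cust_id": "1001", "surname": "Smith"},
    {"cust_id": "1002", "surname": "Jones"},
    {"cust_id": "1002", "surname": "Jonse"},  # same id, conflicting surname
]

def drill_down_duplicates(records, key):
    """Group records by key and return only the groups containing
    more than one record, i.e. the actual duplicates."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return {k: v for k, v in groups.items() if len(v) > 1}

dupes = drill_down_duplicates(records, "cust_id")
print(dupes)  # the business can now see the real conflicting records
```

Showing the business the concrete records behind each statistic, rather than just the numbers, is what makes the profile credible.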

It is often at this point, when the illusion of "perfect data" evaporates, that the business buys into the project and begins to understand why the ownership of the data and the associated business rules fall squarely within their domain.  It is surprising how showing people their own data "warts and all" can have a profound effect on their involvement in a project. 

How often have we heard the phrase "if it isn't broken, don't fix it"?  For many users, their data isn't broken, so it is perhaps hard for them to understand why IT makes such a fuss during a data migration.

The truth is that for many organisations their data sits, to varying degrees, somewhere between broken and fixed, and it is only when it is utilised en masse, say for reporting or migration, that problems suddenly begin to appear.


Posted May 17, 2010 9:30 AM