Blog: Rick Barton

Rick Barton

Hello and welcome to my blog. I am delighted to blog for the BeyeNETWORK, and I'm really looking forward to sharing some of my thoughts with you. My main focus will be data integration, although I don't plan to restrict myself just to this topic. Data integration is a very exciting space at the moment, and there is a lot to blog about. But there is also a big, wide IT and business world out there that I'd like to delve into when it seems relevant. I hope this blog will stimulate, occasionally educate and, who knows, possibly even entertain. In turn, I wish to expand my own knowledge and hope to achieve this through feedback from you, the community, so if you can spare the time, please do get involved. Rick

About the author

Rick is the director (and founder) at edexe. He has more than 20 years of IT delivery experience, ranging from complex technical development through to strategic DI consultancy for FTSE 100 companies. With more than 10 years of DI experience from hands-on development to architecture and consulting and everything in between, Rick’s particular passion is how to successfully adopt and leverage DI tools within an enterprise setting. Rick can be contacted directly at rick.barton@edexe.com.

I promised in last week's blog, and in one of my earlier entries, that I would explain how a data warehouse (DW) can be enhanced to enable faster and more effective delivery (as opposed to the typical 3+ months), and I'm finally delivering on that promise in this entry.

Essentially, the answer is to use a data virtualisation tool to provide a "fast track" data delivery mechanism that rapidly integrates new data into a virtual schema, complementing the existing DW rather than replacing it.
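
To make the idea concrete, here is a minimal sketch using nothing but Python's built-in sqlite3 module. Two in-memory SQLite databases stand in for two source systems, and an ordinary view plays the role of the virtual schema a data virtualisation tool would manage for you; the database and table names are illustrative, not taken from any particular product. The point is that the integrated view is defined rather than loaded, so no warehouse tables or ETL code have to be built before the business can query the data.

```python
import sqlite3

# Two in-memory databases stand in for two physical source systems.
hub = sqlite3.connect(":memory:")
hub.execute("ATTACH DATABASE ':memory:' AS crm")
hub.execute("ATTACH DATABASE ':memory:' AS sales")

hub.execute("CREATE TABLE crm.customers (id INTEGER, name TEXT)")
hub.execute("INSERT INTO crm.customers VALUES (1, 'Acme Ltd'), (2, 'Widgets plc')")
hub.execute("CREATE TABLE sales.orders (customer_id INTEGER, amount REAL)")
hub.execute("INSERT INTO sales.orders VALUES (1, 250.0), (2, 75.0)")

# The "virtual schema": a view that joins the two sources. No data is copied;
# the join runs against the sources when the view is queried.
hub.execute("""
    CREATE TEMP VIEW customer_orders AS
    SELECT c.name, o.amount
    FROM crm.customers AS c
    JOIN sales.orders AS o ON o.customer_id = c.id
""")

for row in hub.execute("SELECT * FROM customer_orders ORDER BY name"):
    print(row)   # ('Acme Ltd', 250.0) then ('Widgets plc', 75.0)
```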

So what problems will this approach solve? I have listed some of the more obvious ones below:

  • The business needs data next week, but the delivery lifecycle requires 3 months
  • The business has a one-off data feed, but adding it to the warehouse is the only delivery option
  • The business has a proliferation of uncontrolled data marts, built because users could not wait for the delivery lifecycle
  • The DW is not up to date enough for the business need
  • DW and ETL designs are just plain wrong because the business rules they were built on are wrong
  • The DW and ETL processes are bloated with unnecessary data (often those one-off feeds!)

All these problems lead to increased costs and lost competitive advantage, so how can virtualisation help?

  • The tools are simple for end users to use, so new views are quick to create
  • Complete subject areas can be virtualised, such that new dimensions can be added very quickly
  • Queries can be run in real time
  • Business rules and data relationships can be prototyped and understood prior to instantiation into the DW schema and ETL code (see the sketch after this list)
  • The tables and the data do not physically exist, so when you are done with them there is nothing to clean up
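
To illustrate the prototyping point above, here is a minimal sketch in the same sqlite3 style (again, the table names and the "active customer" rule are purely illustrative). The candidate business rule lives only in a view definition, so when the business refines the rule you simply redefine the view; only once the rule has settled down does it need to be instantiated into the DW schema and ETL code.

```python
import sqlite3

hub = sqlite3.connect(":memory:")
hub.execute("ATTACH DATABASE ':memory:' AS sales")   # stand-in for a source system
hub.execute("CREATE TABLE sales.orders (customer_id INTEGER, amount REAL, order_date TEXT)")
hub.executemany("INSERT INTO sales.orders VALUES (?, ?, ?)",
                [(1, 250.0, "2009-09-28"), (2, 40.0, "2009-06-01")])

# First cut of the rule: an "active" customer is anyone with an order this year.
hub.execute("""
    CREATE TEMP VIEW active_customers AS
    SELECT DISTINCT customer_id FROM sales.orders
    WHERE order_date >= '2009-01-01'
""")
print(hub.execute("SELECT COUNT(*) FROM active_customers").fetchone())  # (2,)

# The business refines the rule: orders in the last 90 days AND worth more than 100.
# Nothing physical was built, so revising the rule is just redefining the view.
hub.execute("DROP VIEW active_customers")
hub.execute("""
    CREATE TEMP VIEW active_customers AS
    SELECT DISTINCT customer_id FROM sales.orders
    WHERE order_date >= date('2009-10-03', '-90 days') AND amount > 100
""")
print(hub.execute("SELECT COUNT(*) FROM active_customers").fetchone())  # (1,)
```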

However, virtualisation should not be seen as a replacement for a warehouse or for ETL. Federated queries can put load on source systems, so balancing business needs against source system impact is key, which is why I said at the top of the post that this is a complement to the existing DW.
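
As a rough illustration of that trade-off, most virtualisation tools let you cache (or selectively materialise) a virtual view so that repeated user queries hit a local copy rather than the live sources. The sketch below shows the idea in the same sqlite3 terms; the caching approach and names are illustrative, and a real tool would handle the scheduling and refresh for you.

```python
import sqlite3

hub = sqlite3.connect(":memory:")
hub.execute("ATTACH DATABASE ':memory:' AS src")   # stand-in for a live source system
hub.execute("CREATE TABLE src.orders (customer_id INTEGER, amount REAL)")
hub.execute("INSERT INTO src.orders VALUES (1, 250.0), (2, 75.0), (3, 120.0)")

# Federated view: every query against it is pushed down to the source at run time.
hub.execute("CREATE TEMP VIEW big_orders AS SELECT * FROM src.orders WHERE amount > 100")
print(hub.execute("SELECT COUNT(*) FROM big_orders").fetchone())        # hits the source

# Cached copy: hit the source once, then serve users locally until the next refresh.
hub.execute("CREATE TABLE big_orders_cache AS SELECT * FROM big_orders")
print(hub.execute("SELECT COUNT(*) FROM big_orders_cache").fetchone())  # hits the cache
```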

What I will say, though, is that virtualisation can give users the flexibility they need and give the DW team the breathing space to ensure that DW and ETL changes are correct and focused on genuine long-term additions to the warehouse.

For those who are interested in this approach, Composite Software has recently published a paper on how virtualisation can complement the DW. It's available from their website (www.compositesoftware.com).


Posted October 3, 2009 9:44 AM