Ash Parikh (Informatica) interviewed by Linda L. Briggs (TDWI):
True data virtualization should be built from the ground up, ideally on a comprehensive data integration platform that truly understands the complexity of data warehouse architectures. It must be based on lean integration principles so that the data integration process can be optimized to reduce delays and minimize waste. It must support a model-driven and service-oriented approach for defining and delivering data services that can be fully reused for any consuming application and across any project, without any rework. That means there must be no re-building of data integration logic and no re-deployment.
I’m still trying to visualize what data virtualization means. If you have any clear non-BS definition or example I’d really appreciate it.
Original title and link: Best Practices for Data Virtualization ( ©myNoSQL)
Business intelligence applications are moving from the traditional connection to an OLAP data source built on relational database systems to linking to and consuming data from a variety of disparate sources, including social networks. The ability of a modern BI application to use data mashups, providing agility when integrating multiple types of data sources, has led many to promote NoSQL as the next big thing within BI. Does this mean we have seen the end of the SQL-style RDBMS in the BI area? There are many pros and cons to both systems, but I believe there is still a place for both in the BI arena.
Recently I’ve started to read about data virtualization: a common access layer to heterogeneous data sources.
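To make that "common access layer" definition concrete, here is a minimal sketch of the idea: consumers query one layer, which federates over heterogeneous backends behind a uniform interface. All class and method names here (`DataSource`, `VirtualLayer`, `fetch`, `query`) are hypothetical, and the in-memory dicts stand in for real RDBMS and document-store adapters.

```python
# Minimal sketch of a data virtualization layer (names are hypothetical).
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Uniform interface every backend adapter implements."""
    @abstractmethod
    def fetch(self, entity: str) -> list[dict]:
        ...

class RelationalSource(DataSource):
    """Stands in for an RDBMS adapter; real code would issue SQL."""
    def __init__(self, tables: dict[str, list[dict]]):
        self.tables = tables
    def fetch(self, entity: str) -> list[dict]:
        return self.tables.get(entity, [])

class DocumentSource(DataSource):
    """Stands in for a NoSQL/document-store adapter."""
    def __init__(self, collections: dict[str, list[dict]]):
        self.collections = collections
    def fetch(self, entity: str) -> list[dict]:
        return self.collections.get(entity, [])

class VirtualLayer:
    """Single access point: one query fans out across all registered sources."""
    def __init__(self):
        self.sources: list[DataSource] = []
    def register(self, source: DataSource) -> None:
        self.sources.append(source)
    def query(self, entity: str) -> list[dict]:
        rows: list[dict] = []
        for source in self.sources:
            rows.extend(source.fetch(entity))
        return rows

layer = VirtualLayer()
layer.register(RelationalSource({"customers": [{"id": 1, "name": "Ann"}]}))
layer.register(DocumentSource({"customers": [{"id": 2, "name": "Bo"}]}))
print(layer.query("customers"))
```

The point of the sketch is only the shape of the architecture: consuming applications see one `query` call, while the per-source integration logic lives in adapters that can be reused across projects.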
Original title and link: NoSQL and its Role in the BI Arena (NoSQL databases © myNoSQL)