
The Evolving Database Landscape

Matthew Aslett of the 451 Group published an updated version of the database landscape graphic that is included in the group's reports:

[Figure: the updated database landscape graphic]

Very similar to, but more complete than, The Database World in a Venn Diagram from Infochimps.

Original title and link: The Evolving Database Landscape (NoSQL database©myNoSQL)

via: http://blogs.the451group.com/information_management/2012/11/02/updated-database-landscape-graphic/


The Database World in a Venn Diagram

Infochimps put together a comprehensive Venn diagram of the database world in the TechCrunch article Big Data Right Now: Five Trendy Open Source Technologies.

[Figure: the database world Venn diagram]

Original title and link: The Database World in a Venn Diagram (NoSQL database©myNoSQL)


OLAP and Reporting That Feels Like 2012

Question on Hacker News:

after researching how to continuously aggregate and mine our tracking data (200GB and growing fast) for almost a week, I’m still stuck. Is it just me or didn’t I just find the right product yet? I must admit that I’m a generalist developer, no DBA - but to me it looks like all the products I’ve looked into just don’t “feel” right. JasperReports, InfiniDB, Pentaho, just to name a few… it’s all - how can I say - crusty, unintuitive, 1999. All the products look very corporate, there are basically no how-tos, no prices, no shiny, bold Open Source products that fit my bill. I wouldn’t even mind using a good commercial product that does what I want, but even the advertisements are “from DBA to DBA”. Lots of term-dropping like ETLs, M/ROLAP, BI, Case-Based-Reasoning - but nothing that looks reasonably simple and straightforward. Maybe I’m spoiled by the DX (Developer Experience) that Backbone and Rails give me, but is there really nobody that has done something simpler and more straightforward? Like “these are my dimensions, these are my facts - now generate my Cube here so I can go data mining”? Now I know this is a huge field and I might sound naive but I’d like to know if there are others sharing my pain or having something closer to a solution.

As usual, lots of links and commentary. Two answers stood out while reading the thread though:

I’ve been in this field for more than a decade and I couldn’t agree more. The tools, and even the underlying theory, are stuck in the past, and every project I work on is hampered by the technology not meeting the expectations of (business) users, who demand a much more intuitive way of working, because that’s what they experience every day on consumer devices.

and

Almost all OLAP products provide this simple functionality, but it’s almost impossible to find where it is because it is hidden under layers and layers of “added value” in the form of tools to extract, transform and present that data in flashy executive dashboards.
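
The “these are my dimensions, these are my facts, now generate my cube” workflow the poster is asking for is easy to sketch in a few lines of Python with pandas. The events table below is hypothetical, a stand-in for the poster's tracking data, and at 200GB you would obviously need something other than an in-memory DataFrame; the point is only the mental model of dimensions, measures, and roll-ups:

```python
# A minimal sketch of the "dimensions and facts" workflow using pandas.
# The events table is hypothetical; it stands in for the tracking data
# described in the question.
import pandas as pd

# Fact table: one row per tracked event.
events = pd.DataFrame({
    "country": ["US", "US", "DE", "DE", "US"],                        # dimension
    "browser": ["chrome", "firefox", "chrome", "chrome", "firefox"],  # dimension
    "revenue": [1.0, 2.5, 0.5, 3.0, 1.5],                             # fact (measure)
})

# "Generate my cube": aggregate the measure over every combination
# of the chosen dimensions.
cube = pd.pivot_table(
    events,
    values="revenue",
    index="country",
    columns="browser",
    aggfunc="sum",
    margins=True,  # adds the roll-up totals a cube would give you
)
print(cube)
```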

Original title and link: OLAP and Reporting That Feels Like 2012 (NoSQL database©myNoSQL)


Why In-Memory Analytics Is Like Digital Photography

A great article about a category of products in search of market share.

Original title and link: Why In-Memory Analytics Is Like Digital Photography (NoSQL database©myNoSQL)

via: http://timoelliott.com/blog/2011/09/why-in-memory-analytics-is-like-digital-photography-an-industry-transformation.html


Paper: Graph Based Statistical Analysis of Network Traffic

Published by a group from Los Alamos National Lab (Hristo Djidjev, Gary Sandine, Curtis Storlie, Scott Vander Wiel):

We propose a method for analyzing traffic data in large computer networks such as big enterprise networks or the Internet. Our approach combines graph theoretical representation of the data and graph analysis with novel statistical methods for discovering pattern and time-related anomalies. We model the traffic as a graph and use temporal characteristics of the data in order to decompose it into subgraphs corresponding to individual sessions, whose characteristics are then analyzed using statistical methods. The goal of that analysis is to discover patterns in the network traffic data that might indicate intrusion activity or other malicious behavior.
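
The statistical machinery is in the paper itself, but the decomposition step the abstract describes can be illustrated with a toy sketch in Python: group timestamped (source, destination) records into sessions, splitting whenever the idle gap on a pair exceeds a threshold. The records and the 30-second gap are made up for the demo; this shows the general idea only, not the authors' algorithm:

```python
# Toy sketch: decompose timestamped traffic records into per-pair
# "sessions" split on idle gaps. Illustrates the general idea only;
# the paper's actual decomposition and statistics are more involved.
from collections import defaultdict

IDLE_GAP = 30.0  # seconds of silence that ends a session (assumed)

# (src, dst, timestamp) records, assumed sorted by timestamp
traffic = [
    ("10.0.0.1", "10.0.0.2", 0.0),
    ("10.0.0.1", "10.0.0.2", 1.2),
    ("10.0.0.3", "10.0.0.2", 2.0),
    ("10.0.0.1", "10.0.0.2", 90.0),  # long gap -> new session
]

sessions = defaultdict(list)  # (src, dst) -> list of sessions
for src, dst, ts in traffic:
    pair = (src, dst)
    if sessions[pair] and ts - sessions[pair][-1][-1] <= IDLE_GAP:
        sessions[pair][-1].append(ts)  # continue the current session
    else:
        sessions[pair].append([ts])    # start a new session

# Each session is a small subgraph slice whose characteristics
# (duration, packet count, ...) can then be analyzed statistically.
for pair, sess in sessions.items():
    for s in sess:
        print(pair, "duration:", s[-1] - s[0], "packets:", len(s))
```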

The embedded PDF and download link after the break.


What Do You Want to Do Analytically?

Curt Monash offers a 4-point evaluation guide to answer the question in the title:

There’s no perfect solution to those difficulties, but a good way to start the evaluation is by assessing:

  • The nature and value of your decisions that analytics could reasonably affect.
  • Your realistic scope for automation of analytic decisions.
  • The number and training of your “full-time analysts” — statisticians, SQL jocks who can program, SQL jocks who can’t really program, full-time users of BI tools, whatever.
  • The number and training of your “part-time analysts” — normal business users who can get something out of a dashboard, and perhaps even drill down into it.

Original title and link: What Do You Want to Do Analytically? (NoSQL database©myNoSQL)

via: http://www.dbms2.com/2011/06/26/what-to-think-about-before-you-make-a-technology-decision/


Infobright Rough Query: Approximating Query Results

Very interesting idea in the latest Infobright release:

The most interesting of the group might be Rough Query, which speeds the process of finding the needle in a multi-terabyte haystack by quickly pointing users to a relevant range of data, at which point they can drill down with more-complex queries. So, in theory, a query that might have taken 20 minutes before might now take just a few minutes because Rough Query works in seconds by using only the in-memory data and the subsequent search is against a much smaller data set.

Curt Monash provides more context about Rough Queries in his post:

To understand Infobright Rough Query, recall the essence of Infobright’s architecture:

Infobright’s core technical idea is to chop columns of data into 64K chunks, called data packs, and then store concise information about what’s in the packs. The more basic information is stored in data pack nodes, one per data pack. If you’re familiar with Netezza zone maps, data pack nodes sound like zone maps on steroids. They store maximum values, minimum values, and (where meaningful) aggregates, and also encode information as to which intervals between the min and max values do or don’t contain actual data values.

I.e., a concise, imprecise representation of the database is always kept in RAM, in something Infobright calls the “Knowledge Grid.” Rough Query estimates query results based solely on the information in the Knowledge Grid — i.e., Rough Query always executes against information that’s already in RAM.
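
The zone-map idea itself is simple enough to sketch. The Python toy below keeps only (min, max, count) per chunk and bounds a range-count query from that metadata alone; the pack size and column values are made up, and this is the concept in miniature, not Infobright's actual Knowledge Grid:

```python
# Toy sketch of the data-pack / zone-map idea behind Rough Query:
# store min/max/count per chunk, then bound a range-count query using
# only that in-memory metadata. Not Infobright's actual implementation.
PACK_SIZE = 4  # Infobright uses 64K values per pack; 4 keeps the demo readable

column = [3, 7, 1, 9, 22, 25, 21, 24, 40, 41, 45, 44]

# Build "data pack nodes": (min, max, count) per chunk.
packs = [
    (min(chunk), max(chunk), len(chunk))
    for chunk in (column[i:i + PACK_SIZE] for i in range(0, len(column), PACK_SIZE))
]

def rough_count(lo, hi):
    """Bounds for COUNT(*) WHERE lo <= value <= hi, using metadata only."""
    at_least = at_most = 0
    for pmin, pmax, n in packs:
        if pmax < lo or pmin > hi:
            continue                 # irrelevant pack: no rows can match
        at_most += n                 # suspect pack: some rows may match
        if lo <= pmin and pmax <= hi:
            at_least += n            # relevant pack: every row matches
    return at_least, at_most

print(packs)                # [(1, 9, 4), (21, 25, 4), (40, 45, 4)]
print(rough_count(20, 30))  # (4, 4): the middle pack fully matches
print(rough_count(5, 23))   # (0, 8): two packs are merely "suspect"
```

Packs entirely inside the range count fully, packs entirely outside are skipped, and "suspect" packs contribute only to the upper bound; that is the kind of fast, approximate answer that lets a user narrow down where to drill in with a precise query.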

Rough Query is not meant for BI or reporting, but rather for the initial investigations data scientists would perform against big data.

Original title and link: Infobright Rough Query: Approximating Query Results (NoSQL database©myNoSQL)

via: http://gigaom.com/cloud/infobright-wants-to-make-big-data-faster-way-faster/


The Data Processing Platform for Tomorrow

In the blue corner we have IBM, with Netezza as the analytic database, Cognos for BI, and SPSS for predictive analytics. In the green corner we have EMC, with Greenplum and the partnership with SAS[1]. And in the open source corner we have Hadoop and R.

Update: there’s also another corner, one I don’t know how to color, where Teradata and its recently acquired Aster Data partner with SAS.

Who is ready to bet on which of these platforms will be processing the most data in the years to come?


  1. GigaOm has a good article on this subject here  

Original title and link: The Data Processing Platform for Tomorrow (NoSQL database©myNoSQL)