A very informative post on the Socialize blog about their data flow and the data analysis stack used to process it. The post is missing an architecture diagram, so I took the time to reconstruct one based on the details in the article:
[Diagram: Socialize architecture]
The traditional solution is to use aggregate functions in the RDBMS, such as count(), to compute the results, but this presents a few problems at large scale:

- Aggregating rows in the database puts unnecessary load on the server.
- Data may be spread across multiple sharded databases, so a single database's aggregate would be inaccurate.
- Data may live in other datastores, such as a NoSQL store, or even in flat log files.
- Data is stored in inconsistent formats across the many sources.
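The sharding problem above is exactly what a MapReduce-style merge step addresses: each shard produces a partial aggregate, and the partials are combined afterward. A minimal, hypothetical sketch in Python (the shard data and the "event" field name are invented for illustration, not taken from Socialize's system):

```python
from collections import Counter
from functools import reduce

# Hypothetical event shards: in practice these could live in separate
# sharded databases, a NoSQL store, or flat log files.
shard_a = [{"event": "click"}, {"event": "like"}, {"event": "click"}]
shard_b = [{"event": "like"}, {"event": "comment"}]

def map_phase(shard):
    # Emit a partial count per event type for one shard.
    return Counter(row["event"] for row in shard)

def reduce_phase(partials):
    # Merge the per-shard partial counts into one global aggregate.
    return reduce(lambda a, b: a + b, partials, Counter())

totals = reduce_phase(map_phase(s) for s in [shard_a, shard_b])
print(dict(totals))  # {'click': 2, 'like': 2, 'comment': 1}
```

No single shard's count() is correct on its own; only the merged result is, which is why a map/reduce pass over all sources beats querying any one database.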
Original title and link: Polyglot Persistence Architecture at Socialize: Splunk for MapReduce & Big Data Analysis ( ©myNoSQL)