Since the GA announcement a couple of weeks ago, I’ve been noticing quite a few data-related posts on the Google Compute Engine blog:
If you look at these, you’ll notice a theme: covering data from every angle; Cassandra/DSE from DataStax for OLTP, DataTorrent for stream processing, Qubole for Hadoop, MapR for their Hadoop-like solution. I can see this continuing for a while and making Google Compute Engine a strong competitor to Amazon Web Services.
One question remains though: will they be able to come up with a good integration strategy for all these third-party tools?
Original title and link: Google Compute Engine and Data
A simple optimization of top-k queries that can make a huge difference. The default behavior is:

- sifting through all the data (necessary),
- sorting it all (necessary),
- writing all the results to disk (unnecessary: keeping only the top `limit` results from each map is enough), and
- having the reducer process all the data again (unnecessary: the previous step already cuts the data down to `limit * number_of_partitions` rows).
For reference, a top-k query looks like:
SELECT * FROM T ORDER BY a DESC LIMIT 10
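The map-side pruning described above can be sketched in plain Python (this is not Hive internals; the function names are hypothetical and chosen for illustration): each map task keeps only its own top `limit` rows, and the reducer merges those small partial lists instead of the full dataset.

```python
import heapq

def map_side_top_k(partition, k):
    """Keep only the k largest values from one map partition,
    instead of writing the whole sorted partition to disk."""
    return heapq.nlargest(k, partition)

def reduce_top_k(partial_results, k):
    """Merge the per-partition top-k lists; the reducer now sees
    only k * number_of_partitions rows, not all the data."""
    merged = (row for part in partial_results for row in part)
    return heapq.nlargest(k, merged)

# Example: three map partitions, equivalent to ... ORDER BY a DESC LIMIT 3
partitions = [[5, 1, 9, 7], [2, 8, 3], [6, 4, 10]]
partials = [map_side_top_k(p, 3) for p in partitions]
print(reduce_top_k(partials, 3))  # [10, 9, 8]
```

The reducer's input shrinks from the full table to at most `k` rows per partition, which is where the speedup comes from.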
Original title and link: Hive Top-K Optimization (©myNoSQL)
Derrick Harris for GigaOm:
Two key members of the Facebook team that created the Hadoop query language Hive are launching their own big data startup called Qubole on Thursday. […] Qubole is also optimized to run on cloud-based resources that typically don’t offer performance on a par with their physical counterparts. Thusoo said the product incorporates a specially-designed cache system that lets queries run five times faster than traditional Hadoop jobs in the cloud, and users have the option to change the types of instances their jobs are running on if the situation requires.
Running on Amazon infrastructure.
Original title and link: Qubole: New On-Demand Hadoop Service by Hive Creators (©myNoSQL)