NoSQL meets Bitcoin and brings down two exchanges

Most of Emin Gün Sirer’s posts end up linked here, as I usually enjoy the way he combines a real-life story with something technical, all of it ending with a pitch for HyperDex.

The problem here stemmed from the broken-by-design interface and semantics offered by MongoDB. And the situation would not have been any different if we had used Cassandra or Riak. All of these first-generation NoSQL datastores were early because they are easy to build. When the datastore does not provide any tangible guarantees besides “best effort,” building it is simple. Any masters student in a top school can build an eventually consistent datastore over a weekend, and students in our courses at Cornell routinely do. What they don’t do is go from door to door in the valley, peddling the resulting code as if it could or should be deployed.

Unfortunately, in this case the jump from the real problem, which was caused purely by incompetence, to declaring “first-generation NoSQL databases” bad and pitching HyperDex’s features is both too quick and incorrect1.


  1. 1) ACID guarantees wouldn’t have solved the issue; 2) all three NoSQL databases mentioned actually offer a solution for this particular scenario. 
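
To put footnote 2 in concrete terms: the usual fix for this class of bug is an atomic, conditional update on the server instead of a client-side read-modify-write. Here is a minimal sketch using PyMongo; the database, collection, and field names are invented for illustration. (Cassandra's lightweight transactions offer a comparable conditional-update primitive.)

```python
# Sketch: guard a withdrawal with an atomic, conditional update instead of a
# client-side read-modify-write. Database/collection/field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
accounts = client.exchange.accounts

def withdraw(user_id, amount):
    # The filter only matches while the balance is still sufficient, and $inc is
    # applied atomically on the server, so two concurrent withdrawals cannot
    # both succeed against the same funds.
    result = accounts.update_one(
        {"_id": user_id, "balance": {"$gte": amount}},
        {"$inc": {"balance": -amount}},
    )
    return result.modified_count == 1  # False: insufficient funds or a lost race
```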

Original title and link: NoSQL meets Bitcoin and brings down two exchanges (NoSQL database©myNoSQL)

via: http://hackingdistributed.com/2014/04/06/another-one-bites-the-dust-flexcoin/


GigOM Interviews Aerospike at Structure Data 2014 on Application Scalability [sponsor]

An interview from Structure Data 2014, featuring Aerospike:


Aerospike Technical Marketing Director, Young Paik explains how you can add rocket fuel to your big data application by running the Aerospike database on top of Hadoop for lightning fast user-profile lookups.

Original title and link: GigOM Interviews Aerospike at Structure Data 2014 on Application Scalability [sponsor] (NoSQL database©myNoSQL)


Apache Hadoop 2.4.0 released with operational improvements

Hadoop 2.4.0 continues that momentum, with additional enhancements to both HDFS & YARN:

  • Support for Access Control Lists in HDFS
  • Native support for Rolling Upgrades in HDFS
  • Smooth operational upgrades with protocol buffers for HDFS FSImage
  • Full HTTPS support for HDFS
  • Support for Automatic Failover of the YARN ResourceManager (a.k.a Phase 1 of YARN ResourceManager High Availability)
  • Enhanced support for new applications on YARN with Application History Server and Application Timeline Server
  • Support for strong SLAs in YARN CapacityScheduler via Preemption

Original title and link: Apache Hadoop 2.4.0 released with operational improvements (NoSQL database©myNoSQL)

via: http://hortonworks.com/blog/apache-hadoop-2-4-0-released/


Your Big Data Is Worthless if You Don’t Bring It Into the Real World

Building on the (exact) same premise as last week’s FT.com article Big data: are we making a big mistake?, Mikkel Krenchel and Christian Madsbjerg write for Wired:

Not only did Google Flu Trends largely fail to provide an accurate picture of the spread of influenza, it will never live up to the dreams of the big-data evangelists. Because big data is nothing without “thick data,” the rich and contextualized information you gather only by getting up from the computer and venturing out into the real world. Computer nerds were once ridiculed for their social ineptitude and told to “get out more.” The truth is, if big data’s biggest believers actually want to understand the world they are helping to shape, they really need to do just that.

While the authors actually mean the above literally, I think the valid point the article could have made is that looking at a data set alone without considering:

  1. possibly missing data,
  2. context data and knowledge,
  3. and field know-how

can lead to incorrect conclusions, the most obvious examples being the causal fallacy and correlation-causation confusion.
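
As a toy illustration of the correlation-causation trap, here is a sketch with entirely made-up numbers: two series that merely trend over the same years come out almost perfectly correlated, with no causal link between them.

```python
# Toy example: two unrelated quantities that both trend upward over time end up
# highly correlated. All numbers are made up, purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2014)
ice_cream_sales = 100 + 5.0 * (years - 2000) + rng.normal(0, 2, len(years))
drowning_deaths = 20 + 1.2 * (years - 2000) + rng.normal(0, 1, len(years))

r = np.corrcoef(ice_cream_sales, drowning_deaths)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1.0, yet neither series causes the other
```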

✚ Somewhat related to the “possibly missing data” point, the article How politics makes us stupid brings up some other very interesting points.

Original title and link: Your Big Data Is Worthless if You Don’t Bring It Into the Real World (NoSQL database©myNoSQL)

via: http://www.wired.com/2014/04/your-big-data-is-worthless-if-you-dont-bring-it-into-the-real-world/


Why are you using MySQL?

Mark Callaghan puts out a great explanation of why pitching new databases to large MySQL users will almost always fail:

Leaving out quality of service, a simple definition for scalability is that a given workload requires A people, B hardware units and C lines of automation code. For something to scale better than MySQL it should reduce some of A, B and C. For many web-scale deployments the cost of C has mostly been paid and migrating to something new means a large cost for C. Note that B represents many potential bottlenecks. The value of B might be large to get more IOPs for IO-bound workloads with databases that are much bigger than RAM. It might be large to get more RAM to keep everything cached. Unfortunately, some deployments are not going to fully describe that context (some things are secret). The value of A is influenced by the features in C and the manageability features in the DBMS but most web-scale companies don’t disclose the values of B and A.
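
Callaghan's A/B/C framing is easy to turn into a toy model. A minimal sketch, with every number invented purely for illustration, of why the sunk cost of C tends to dominate the migration decision:

```python
# Toy model of Callaghan's A/B/C framing: A people, B hardware units, C lines of
# automation code. For the incumbent MySQL deployment the cost of C is already
# sunk; a migration has to pay it again. Every number here is invented.

def yearly_cost(people, hardware_units, person_cost=150_000, unit_cost=5_000):
    return people * person_cost + hardware_units * unit_cost

def migration_cost(automation_loc, cost_per_line=10):
    # Rewriting or porting the automation (C) is the big one-time hit.
    return automation_loc * cost_per_line

mysql = yearly_cost(people=10, hardware_units=2_000)       # C already paid for
candidate = yearly_cost(people=9, hardware_units=1_900)    # saves a bit of A and B...
payback_years = migration_cost(automation_loc=500_000) / (mysql - candidate)
print(round(payback_years, 1))  # ...but repaying the C rewrite takes ~7.7 years here
```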

I can still see some good reasons why a new database “vendor” should continue to try to get big users to take a look at their product:

  1. it’s the only way to learn:

    1. what’s unique for these users
    2. what’s at the top of their concerns, and
    3. how they address them

    While you won’t be able to make them switch, if your product addresses their issues, it will be prepared for the next big user.

  2. there are still chances for smaller, greenfield, internal projects to start using your product

It’s common sense to build a product using tools you are already familiar with: it helps reduce risk and also cut time to market. What happens, though, is that there are always people who don’t follow this rule and start new products using tools they are not that familiar with. There are also products that grow faster than the team’s know-how evolves. These are just two quick examples where a new product that has learned from big users can help.

Original title and link: Why are you using MySQL? (NoSQL database©myNoSQL)

via: http://smalldatum.blogspot.com/2014/04/why-arent-you-using-x-version-2.html


Scalable Atomic Visibility with RAMP Transactions

We’ve developed three new algorithms—called Read Atomic Multi-Partition (RAMP) Transactions—for ensuring atomic visibility in partitioned (sharded) databases: either all of a transaction’s updates are observed, or none are.

Still digesting Peter Bailis’s post and the accompanying Scalable Atomic Visibility with RAMP Transactions paper.
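
In the meantime, here is my much-simplified, single-process reading of the RAMP-Fast idea (based on the paper, not on any reference implementation): committed versions carry the transaction's timestamp plus the set of sibling items written in the same transaction, and readers use that metadata to detect fractured reads and fetch the exact missing versions in a second round.

```python
# A much-simplified, single-process illustration of the RAMP-Fast read logic as I
# understand it from the paper. Each committed version carries its transaction's
# timestamp and the set of sibling items written in the same transaction; readers
# use that metadata to detect fractured reads and repair them.
from collections import namedtuple

Version = namedtuple("Version", ["value", "ts", "siblings"])

versions = {}        # item -> {ts: Version}
latest_commit = {}   # item -> ts of the latest committed version

def write_transaction(ts, writes):
    """writes: dict of item -> value, installed with sibling metadata."""
    items = set(writes)
    for item, value in writes.items():
        versions.setdefault(item, {})[ts] = Version(value, ts, items - {item})
    for item in items:               # 'commit' step: advance the visible pointer
        latest_commit[item] = ts

def read_all(items):
    # Round 1: read the latest committed version of each item.
    result = {i: versions[i][latest_commit[i]] for i in items}
    # Detect fractured reads: if some version's siblings include item i with a
    # higher timestamp than the version we read for i, we are missing an update.
    required = {i: result[i].ts for i in items}
    for v in result.values():
        for sib in v.siblings:
            if sib in required:
                required[sib] = max(required[sib], v.ts)
    # Round 2: fetch the exact missing versions by timestamp.
    for i in items:
        if required[i] > result[i].ts:
            result[i] = versions[i][required[i]]
    return {i: v.value for i, v in result.items()}
```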

Original title and link: Scalable Atomic Visibility with RAMP Transactions (NoSQL database©myNoSQL)

via: http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/


Hydra takes on Hadoop

A good interview on InfoQ comparing Hadoop with AddThis’s open source Hydra:

What use case(s) is Hydra better suited for compared to Hadoop? When would Hadoop be a better choice?

Hydra is better at data exploration. You can follow a number of interesting leads from the results of a single, probably rather fast, map job. Queries on the resultant tree usually take on the order of seconds (or milliseconds).

Non-programmers can produce functioning products with a small amount of guidance. The web UI provides most everything that might be needed; it might be as simple as pressing clone on an existing job, changing the tree to use a couple different features and hitting go. In minutes they have a new URL endpoint to show your impressive new KPI on your company home page.

Hadoop has a few advantages though. It has stronger native support for very large, one-off joins. Technically speaking this just means more implicit sorting of files. Sorting huge numbers of things is expensive so we try pretty hard to avoid it, and as a result first order support for it is a little lacking. On the other hand, you might find that you don’t really need the full, perfect join and are instead content with a Bloom-filter-based probabilistic hybrid — in which case Hydra will once again save you some sweet cycles.
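
The Bloom-filter-based join mentioned at the end is a fairly standard trick, independent of Hydra; a rough sketch of the general technique (not Hydra's actual implementation):

```python
# Rough sketch of a Bloom-filter "probabilistic join": build a compact filter over
# the keys of the smaller side, then stream the larger side and keep only rows
# whose key *might* match. False positives are possible, false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 20, num_hashes=5):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

def bloom_join(small_side_keys, large_side_rows, key_of):
    bf = BloomFilter()
    for key in small_side_keys:
        bf.add(key)
    # Only the surviving rows need the (expensive) exact join afterwards.
    return [row for row in large_side_rows if bf.might_contain(key_of(row))]
```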

Original title and link: Hydra takes on Hadoop (NoSQL database©myNoSQL)

via: http://www.infoq.com/news/2014/04/hydra


Intel kills a Hadoop and feeds another

I seriously doubt you could have missed the 2nd part of this, but here’s the shortest executive summary:

  1. Intel has killed its own distribution of Hadoop. Is there anyone who would disagree that this is a good idea?
  2. Intel has invested $740 million in Cloudera (for an 18% stake). That's not a typo: 740 million.

The main questions:

  1. where will Cloudera put the $900 million raised in the last round(s)?
  2. why did Intel invest so much?

These questions were also asked by Dan Primack for CNN Money, and after looking at the deal from different angles he comes up empty.

So let’s check other sources:

  1. TechCrunch initially speculated that much of the investment went to existing shareholders.

    The post was later updated with a comment from Cloudera’s VP of marketing stating that the majority of the money went to the company. But no word on how it will be used.

  2. Reuters writes that Intel made the investment to ensure their leading position in server processors:

    Intel hopes that encouraging more companies to leap into Big Data analysis will lead to higher sales of its high-end Xeon server processors. The chipmaker believes that hitching its wagon to Cloudera’s version of Hadoop, instead of pushing its own version, will make that happen faster.

    Still no word on how Cloudera will be using the money.

  3. Derrick Harris for GigaOm writes that the deal makes a lot of sense for both companies1:

    Cloudera needs capital and Intel’s huge sales force to keep up its engineering efforts and grow the company internationally.

    As part of the deal, Cloudera will be an early adopter of Intel gear and will optimize its Hadoop software to run on Intel’s latest technologies. Intel will port some of its work into the Cloudera distribution and will maintain its own Hadoop engineering team that will work alongside Cloudera’s engineers to help unite the two companies’ goals.

  4. Jeff Kelly for SiliconAngle emphasizes the same channel advantages:

    Cloudera’s biggest reseller partner is Oracle. Based on my reading of the Intel announcement, the deal is not an official reseller partnership, but Intel will “market and promote CDH and Cloudera Enterprise to its customers as its preferred Hadoop platform.” Not quite as nice as having the Intel salesforce closing deals for it, but Cloudera stands to gain significant new business from the arrangement.


So how about this short list of how Cloudera will use this round:

  1. a part goes for international expansion
  2. a larger part goes to early shareholders
  3. the largest part goes into acquisitions

As for Intel, what if this investment also sealed an exclusive deal for a Hadoop-centric, Cloudera-supported, Intel-powered appliance?


  1. Insert snarky comment here about a $740m deal that would not make sense to one of the parties. How about not making sense to either of them? 

Original title and link: Intel kills a Hadoop and feeds another (NoSQL database©myNoSQL)


Scaling the Facebook data warehouse to 300 PB

A fascinating read, raising interesting observations on several levels:

  1. At Facebook, data warehouse means Hadoop and Hive.

    Our warehouse stores upwards of 300 PB of Hive data, with an incoming daily rate of about 600 TB.

  2. I don’t see how in-memory solutions, like SAP HANA, will see their market expanding.

    In the Enterprise Data Warehouses and the first Hadoop squeeze, Rob Klopp predicted a squeeze of the EDW market under the pressure of in-memory DBMS and Hadoop. I still think that in-memory will become just a custom engine in the Hadoop toolkit and existing EDW products.

    As for the oft-mentioned argument that “not everybody is Facebook”, I think the part that gets swept under the rug is that today’s data size is the smallest you’ll ever have.

    In the last year, the warehouse has seen a 3x growth in the amount of data stored. Given this growth trajectory, storage efficiency is and will continue to be a focus for our warehouse infrastructure.

  3. At Facebook’s scale, balancing availability and cost is again a challenge (a back-of-the-envelope look at the replication numbers follows this list). But there’s no mention of network-attached storage.

    There are many areas we are innovating in to improve storage efficiency for the warehouse – building cold storage data centers, adopting techniques like RAID in HDFS to reduce replication ratios (while maintaining high availability), and using compression for data reduction before it’s written to HDFS.

  4. For the nuts and bolts of effectively optimizing compression, read the rest of the post, which covers the optimizations Facebook brought to the ORCFile format.

    There seem to be two competing formats at play: ORCFile (with support from Hortonworks and Facebook) and Parquet (with support from Twitter and Cloudera). Unfortunately I don’t have any good comparison of the two. And I couldn’t find one (why?).
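
Back to the replication-ratio point from item 3, the back-of-the-envelope numbers are simple. A sketch assuming plain 3x replication versus a Reed-Solomon style (10, 4) encoding of the kind HDFS RAID uses; Facebook's exact parameters may differ, and the 300 PB figure is treated as logical data purely for illustration:

```python
# Back-of-the-envelope storage overhead: plain 3x replication vs. a Reed-Solomon
# style RAID-in-HDFS encoding. The (10, 4) parameters are illustrative.
def raw_multiplier_replication(replicas=3):
    return float(replicas)                 # 3 copies -> 3.0x raw bytes per logical byte

def raw_multiplier_reed_solomon(data_blocks=10, parity_blocks=4):
    return (data_blocks + parity_blocks) / data_blocks   # RS(10, 4) -> 1.4x

logical_pb = 300
print(f"3x replication: {logical_pb * raw_multiplier_replication():.0f} PB raw")
print(f"RS(10, 4):      {logical_pb * raw_multiplier_reed_solomon():.0f} PB raw")
```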

Original title and link: Scaling the Facebook data warehouse to 300 PB (NoSQL database©myNoSQL)

via: https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/


7 quick facts about R

Based on a slide deck by David Smith:

  1. R is the highest paid IT skill (Dice.com survey, January 2014)
  2. R is the most-used data science language after SQL (O’Reilly survey, January 2014)
  3. R is used by 70% of data miners (Rexer survey, October 2013)
  4. R is #15 of all programming languages (RedMonk language rankings, January 2014)
  5. R is growing faster than any other data science language (KDNuggets survey, August 2013)
  6. R is the #1 Google Search for Advanced Analytics software (Google Trends, March 2014)
  7. R has more than 2 million users worldwide (Oracle estimate, February 2012)

I can see a couple of actionable items based on this list:

  1. if you’re interested in data science, you should consider R
  2. if you are already using R, ask for a raise

Original title and link: 7 quick facts about R (NoSQL database©myNoSQL)


Three opinions about the future of Hadoop and Data Warehouse

Building on the same Gartner data and the same Hadoop Summit talk, Matt Asay1 and Timo Elliott2 place Hadoop on the data warehouse map.

Matt Asay writes in the ReadWrite article that Hadoop is not replacing existing data warehouses, but it’s taking all new projects:

Hadoop (and its kissing cousin, the NoSQL database) isn’t replacing legacy technology so much as it’s usurping its place in modern workloads. This means enterprises will end up supporting both legacy technology and Hadoop/NoSQL to manage both existing and new workloads […]

Of course, given “the effective price of core Hadoop distribution software and support services is nearly zero” at this point, as Jeff Kelly highlights, more and more workloads will gravitate to Hadoop. So while data warehouse vendors aren’t dead—they’re not even gasping for breath—they risk being left behind for modern data workloads if they don’t quickly embrace Hadoop and other 21st Century data infrastructure.

On his blog, Timo Elliott makes sure there’s some SAP in that future picture and uses their Hadoop partner, Hortonworks, to depict it:

No. Ignoring the many advantages of Hadoop would be dumb. But it would be just as dumb to ignore the other revolutionary technology breakthroughs in the DW space. In particular, new in-memory processing opportunities have created a brand-new category that Gartner calls “hybrid transactional/analytic platforms” (HTAP)

[Image: Hadoop modern data architecture]

The future I’d like to see is the one where:

  1. there is an integrated data platform. Note that in this ideal world, integrated does not mean any form of ETL
  2. it supports and runs, in isolation, different workloads, from online transactions and bulk uploads to various forms of analytics
  3. data is stored on dedicated media (spinning disks, flash, memory) depending on the workloads that touch it
  4. data would move between these storage media automatically, but the platform would allow fine-tuning for maintaining the SLAs of the different components

  1. Matt Asay is VP of business development and corporate strategy at MongoDB 

  2. Timo Elliott is an Innovation Evangelist for SAP 

Original title and link: Three opinions about the future of Hadoop and Data Warehouse (NoSQL database©myNoSQL)


When is MongoDB the Right Tool for the Job?

This puts me in a quandary, because my recent stint on the job market has shown that just about everybody is using MongoDB, and I’ve just never been in any situation that I have needed to use it.

I also can’t foresee any situation where there is a solid technical reason for choosing MongoDB over its competitors either, and the last thing I want to do is lead people astray or foist my preconceptions onto them.

[Image: laughing hysterically]

Then the top comment on reddit.

Original title and link: When is MongoDB the Right Tool for the Job? (NoSQL database©myNoSQL)

via: http://daemon.co.za/2014/04/when-is-mongodb-the-right-tool/