Big data: All content tagged as Big data in NoSQL databases and polyglot persistence

SQL or Hadoop: What Tools Should I Use to Process My Data?

A great decision flowchart created by Aaron Cordova to help answer the question: what tools should I use to process my data?

[Figure: the “SQL or Hadoop” decision flowchart. Credit: Aaron Cordova]

Original title and link: SQL or Hadoop: What Tools Should I Use to Process My Data? (NoSQL database©myNoSQL)


Oracle Big Data Appliance Released, Features Cloudera Distribution of Hadoop: What You Need to Know

Oracle Big Data Appliance hardware specification

Klint Finley for ServicesANGLE:

18 Oracle Sun servers with a total of:

  • 864 GB main memory;
  • 216 CPU cores;
  • 648 TB of raw disk storage;
  • 40 Gb/s InfiniBand connectivity between nodes and other Oracle engineered systems; and,
  • 10 Gb/s Ethernet data center connectivity.
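
Dividing those rack totals by the 18 nodes gives a rough per-node profile, which is handy when comparing against commodity Hadoop boxes. A quick back-of-the-envelope sketch, assuming the totals are spread evenly across nodes (my assumption, not Oracle’s stated configuration):

    # Rough per-node resources, assuming the published totals are
    # spread evenly across all 18 nodes.
    NODES = 18
    print(f"RAM per node:   {864 // NODES} GB")  # 48 GB
    print(f"Cores per node: {216 // NODES}")     # 12
    print(f"Disk per node:  {648 // NODES} TB")  # 36 TB raw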

Joab Jackson for PCWorld Business Center:

The package includes 40Gb/s InfiniBand connectivity among the nodes, a rarity among Hadoop deployments, many of which use Ethernet to connect the nodes. Lumpkin said InfiniBand would speed data transfers within the system. Multiple racks can be tethered together in a cluster configuration. There is no theoretical limit to how many racks can be clustered together, though configurations of more than eight racks would require additional switches, Lumpkin said.

Oracle Big Data Appliance software specification

  • Cloudera’s Distribution including Apache Hadoop
  • Cloudera Manager
  • Open source distribution of R
  • Oracle NoSQL Database Community Edition
  • Oracle Big Data Connectors
  • Oracle Linux

Joab Jackson for PCWorld Business Center:

Along with the release, Oracle also released Oracle Big Data Connectors, a set of drivers for exchanging data between the Big Data Appliance and other Oracle products, such as the Oracle Database 11g, the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle Exalytics In-Memory Machine.

Derrick Harris for GigaOm:

However, Oracle isn’t blind to the fact that not everyone will be gung ho about buying an appliance. Its custom-built Big Data Connectors are available as separate products for those customers wanting to connect existing Hadoop clusters to Oracle database environments or R statistical-analysis environments.
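
Neither report goes into API specifics, so as a rough illustration of the kind of bulk hand-off these connectors automate, here is a minimal sketch that loads a CSV exported from Hadoop into an Oracle table using the cx_Oracle driver. The connection string, table, and file names are hypothetical, and this is not the connectors’ actual interface:

    # Sketch: bulk-load a CSV (e.g. pulled out of HDFS) into an Oracle table.
    # Connection string, table, and columns are hypothetical.
    import csv
    import cx_Oracle

    conn = cx_Oracle.connect("scott/tiger@dbhost:1521/orcl")
    cur = conn.cursor()

    with open("clickstream_export.csv") as f:
        rows = [(r[0], r[1], int(r[2])) for r in csv.reader(f)]

    # executemany batches the inserts, which matters at these data volumes
    cur.executemany(
        "INSERT INTO clickstream (user_id, url, hits) VALUES (:1, :2, :3)", rows)
    conn.commit()
    conn.close()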

Klint Finley for ServicesANGLE:

According to Oracle’s announcement “The integrated Oracle and Cloudera architecture has been fully tested and validated by Oracle, who will also collaborate with Cloudera to provide support for Oracle Big Data Appliance.”

Oracle Big Data Appliance Services

George Lumpkin, Oracle’s vice president of data warehousing product management:

Oracle will provide first-line support for the appliance and all software (including the Hadoop distribution and Cloudera Manager) through its case-tracking support infrastructure. But when particularly tough support cases arise, Oracle will tap Cloudera’s expertise.

What’s more, Oracle will refer customers to Cloudera for Hadoop training and consulting engagements.

Oracle Big Data Appliance Positioning

George Lumpkin, Oracle’s vice president of data warehousing product management:

We are positioning this as something that runs alongside other Oracle-based systems. Big data is more than just a cluster of hardware running Hadoop. It is an overall information architecture for enabling companies to analyze data and make decisions.

Doug Henschen for InformationWeek:

Oracle highlighted the Big Data Appliance as a complement to a growing family of “engineered systems” that now includes Exadata, Exalogic, and the Exalytics In-Memory Machine.

Merv Adrian (Gartner analyst) cited by Informationweek:

But what’s more remarkable is the fact that Oracle is finally looking beyond its core database. Oracle’s TimesTen and Essbase databases, which were recently upgraded for use in the Exalytics appliance, and BerkeleyDB, which was Oracle’s development starting point for the new NoSQL database, are examples of that shift.

Oracle is suddenly beginning to act as a data-management portfolio company, not just a company with a big brother and a bunch of starving siblings.

Joab Jackson for PCWorld Business Center:

Oracle is positioning the appliance for managing and analyzing large sets of data that may be too large, or otherwise unsuitable for keeping in databases, such as telemetry data, click-stream data or other log data. “You may not want to keep the data in a database, but you do want to store it and analyze it,” Lumpkin said. The appliance is intended for those organizations that want to undertake Big Data-style analysis but may not have the in-house expertise to assemble large Hadoop or NoSQL-based systems.

Pricing

Kurt Dunn, Cloudera’s chief operating officer, told InformationWeek:

Oracle has put together a very comprehensive product that is priced very well.

Brian Proffitt for ITworld:

The cost of the Big Data Appliance is what will really stand out. At $500,000, this may not seem like a bargain, but in reality it is. Typically, commoditized Hadoop systems run at about $4,000 a node. To get this much data storage capacity and power, you would need about 385 nodes… which puts the price tag at around $1.54 million—three times the price of Oracle’s Cloudera-based offering (which, I should add, excludes things like support costs and power).

Doug Henschen for InformationWeek:

The hardware and software combined will sell for $450,000, with an annual support fee of 12% for both hardware and software. That’s highly competitive, working out to less than $700 per terabyte, in line with the low costs big data practitioners expect from deployments built on commodity hardware.
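
The two reports quote slightly different list prices ($500,000 vs. $450,000), but the arithmetic behind both claims is easy to check. A quick sanity check using the figures exactly as reported (reading the 12% support fee as applying to the list price is my assumption):

    # Sanity-check the quoted pricing claims.
    diy = 385 * 4_000                  # commodity nodes needed x cost per node
    print(f"DIY cluster:    ${diy:,}")                  # $1,540,000 -> "$1.54 million"

    appliance = 450_000                # Henschen's reported price
    print(f"Per raw TB:     ${appliance / 648:,.0f}")   # ~$694 -> "less than $700/TB"
    print(f"Annual support: ${appliance * 0.12:,.0f}")  # $54,000 at the 12% fee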

Oracle - Cloudera Partnership

I wrote earlier my take on what this partnership means to both Oracle and Cloudera.

Doug Henschen for InformationWeek:

But by releasing the product early in the year in partnership with Cloudera, which has more customers and years in the market than any other Hadoop software and services provider, Oracle has made it clear that it is wasting no time and taking no chances with unproven technology.

“Cloudera brings us a couple of very important missing pieces, including its management software and assistance for a deeper second- and third-tier level of support,” said George Lumpkin, Oracle’s vice president of product management, data warehousing.

Speculations about the future of the Oracle - Cloudera partnership

Brian Proffitt for ITworld:

Students of Linux history will well remember that’s exactly what happened when Oracle partnered with Red Hat to introduce commoditized Oracle offerings… and then Larry Ellison and crew decided to roll their own Oracle Enterprise Linux in 2006 when they decided to cut Red Hat out of the stack.

This is strong historical evidence that Oracle will do the same with Cloudera, because frankly the big data market is too big for Oracle not to want to own. Big Data Appliance customers should note this, and be very prepared that future versions may not be tied to Cloudera at all, but rather Oracle’s version of Hadoop.

A few people suggested on Twitter that this partnership is a sign of a possible Oracle acquisition of Cloudera. TechCrunch’s Leena Rao links to an old post by Matt Asay suggesting this acquisition.

Media coverage of Oracle Big Data Appliance

Original title and link: Oracle Big Data Appliance Released, Features Cloudera Distribution of Hadoop: What You Need to Know (NoSQL database©myNoSQL)


Cloudera Distribution of Hadoop Powers Oracle’s Big Data Appliance

The announcement of the Oracle Big Data Appliance has been out for only a couple of hours and has already hit all the media sites. Before looking at the details of the announcement, let’s try to understand what it means for the parties involved.

What does it mean for Oracle?

  • Oracle enters a very busy Hadoop market associated with the best-known company in the Hadoop ecosystem
  • With this partnership, Oracle didn’t have to make a huge investment in software development or services
  • Not having to build its own distribution of Hadoop, Oracle could focus on developing the Oracle Big Data Connectors
  • Oracle will delegate everything Hadoop to Cloudera, so it won’t have to deal with a very fast-evolving open source project
  • Oracle seems to have changed its message about Hadoop being used only for basic ETL.

What does it mean for Cloudera?

  • Cloudera gets access to a pool of customers (many of them possibly very large customers)
  • Cloudera will not need a big sales force to reach these potential customers; even if Cloudera already knew about them, Oracle’s sales force will do the job
  • If Oracle mentions Cloudera’s name in every sales pitch, Cloudera will see a huge publicity bump that will sooner or later lead to more customers

Truth is, I was expecting yet another distribution of Hadoop. And even though Oracle’s Big Data Appliance doesn’t feature the official Apache Hadoop distribution, I think that by choosing an existing distribution, Oracle did the right thing. For them and for their customers.

Original title and link: Cloudera Distribution of Hadoop Powers Oracle’s Big Data Appliance (NoSQL database©myNoSQL)


Hadoop, Big Data Apps, Data Science Tools, Cloud Collision: Wikibon Big Data Predictions for 2012

Jeff Kelly for Wikibon Blog:

  1. 2012 Will Be the Year of Big Data Applications.
  2. Analytic Platform Vendors Add Improved Functionality, Social Capabilities for Data Scientists.
  3. The Cloud and Big Data Collide.
  4. Big Data Appliances Gain Steam.
  5. Industry Responds to Big Data Skills Gap with Training and Education Resources.
  6. The Big Data Privacy Discussion Begins in Earnest.

According to TRIGG, that’s 6 Ts out of 6.

Original title and link: Hadoop, Big Data Apps, Data Science Tools, Cloud Collision: Wikibon Big Data Predictions for 2012 (NoSQL database©myNoSQL)

via: http://wikibon.org/blog/big-data-in-2012-hadoop-big-data-apps-data-science-tools-cloud-collision-and-more/


NoSQL Databases and Big Data Market: A Quick Look at Technology vs Funding Status

What are your first thoughts if you overlay the following graphics?

[Figure: Gartner’s Hype Cycle for Cloud Computing, 2011]

Original title and link: NoSQL Databases and Big Data Market: A Quick Look at Technology vs Funding Status (NoSQL database©myNoSQL)


Data Jujitsu and Data Karate

David F. Carr in an article about DJ Patil and his work on Big Data at LinkedIn:

That is what he means by data jujitsu, where jujitsu is the art of using an opponent’s leverage and momentum against him. In data jujitsu, you try to use the scope of the problem to create the solution—without investing disproportionate resources at the early experimental stage. That’s as opposed to data karate, which would be a direct frontal assault to hack your way through the problem.

Original title and link: Data Jujitsu and Data Karate (NoSQL database©myNoSQL)

via: http://www.informationweek.com/thebrainyard/news/strategy/231900611/web-20-expo-linkedins-big-data-lessons-learned


Make Data Available - Open Data Manual

From the Open Data Manual:

Open data needs to be ‘technically’ open as well as legally open. Specifically, the data needs to be:

  1. Available — at no more than a reasonable cost of reproduction, preferably for free download on the Internet. Summary: publish your information on the Internet wherever possible.
  2. In bulk. The data should be available as a whole (a web API or service may also be very useful, but it is not a substitute for bulk access).
  3. In an open, machine-readable format. Machine-readability is important because it facilitates reuse; for example, tables of figures in a PDF can be read easily by humans but are very hard for a computer to use, which greatly limits the ability to reuse that data. A short sketch of what this looks like follows the list.
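
To make point 3 concrete, here is a minimal sketch publishing the same small table, invented for illustration, as both CSV and JSON, formats any program can parse directly, unlike figures locked in a PDF:

    # Publish one small table in two machine-readable formats.
    # The dataset is invented for illustration.
    import csv
    import json

    rows = [
        {"year": 2010, "budget_usd": 1200000},
        {"year": 2011, "budget_usd": 1350000},
    ]

    # CSV: the simplest bulk, machine-readable format
    with open("budget.csv", "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["year", "budget_usd"])
        w.writeheader()
        w.writerows(rows)

    # JSON: equally easy to parse, with self-describing field names
    with open("budget.json", "w") as f:
        json.dump(rows, f, indent=2)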

Sir Tim Berners-Lee’s linked open data star scheme provides an unambiguous way to categorize open data. And while I’m on the subject of open data, there’s also the Open Data Protocol, which is meant to enable the creation of HTTP-based data services.

Original title and link: Make Data Available - Open Data Manual (NoSQL databases © myNoSQL)


Strategies for Exploiting Large-scale Data

In a guest post hosted on the Cloudera blog, Bob Gourley[1] enumerates the characteristics of working with Big Data from a federal agency’s perspective.

I think these can be generalized to all businesses and problems that require big data:

Federal IT leaders are increasingly sharing lessons learned across agencies. But approaches vary from agency to agency.

For a long time each business worked in its own silo.

Yesterday, tools and algorithms represented the competitive advantage. Today the competitive advantage is in data. Sharing algorithms, experience, and ideas is safe.

federal thought leaders across all agencies are confronted with more data from more sources, and a need for more powerful analytic capabilities

If you are not confronted with this problem, it is just because you haven’t realized it yet. If you think single sources of data are good enough, your business might be at risk.

Large-scale distributed analysis over large data sets is often expected to return results almost instantly.

Name a single manager or a business or a problem solver that wouldn’t like to get immediate answers.

  • Most agencies face challenges that involve combining multiple data sets — some structured, some complex — in order to answer mission questions.

  • increasingly seeking automated tools, more advanced models and means of leveraging commodity hardware and open source software to conduct distributed analysis over distributed data stores

Ditto

considering ways of enhancing the ability of citizens to contribute to government understanding by use of crowd-sourcing type models

In his Strata talk, Werner Vogels mentioned using Amazon Mechanical Turk to add human-based processing for data control, data validation and correction, and data enrichment.
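
For a flavor of what that looks like in practice, here is a minimal sketch that posts a single data-validation task to Mechanical Turk through boto3; the question text, reward, and sandbox endpoint are illustrative choices, not anything from Vogels’ talk:

    # Sketch: ask three workers to validate one record via Mechanical Turk.
    # Task content and reward are illustrative.
    import boto3

    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        # Sandbox endpoint, so experiments cost nothing:
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    question = """
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
        <crowd-form>
          <p>Is "1600 Pensylvania Ave" a correctly spelled US address?</p>
          <crowd-input name="answer" required></crowd-input>
        </crowd-form>
      ]]></HTMLContent>
      <FrameHeight>300</FrameHeight>
    </HTMLQuestion>"""

    hit = mturk.create_hit(
        Title="Validate an address",
        Description="Check one record for spelling errors",
        Reward="0.05",               # USD, passed as a string
        MaxAssignments=3,            # three workers, so answers can be cross-checked
        LifetimeInSeconds=3600,
        AssignmentDurationInSeconds=120,
        Question=question,
    )
    print("HIT id:", hit["HIT"]["HITId"])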


  1. Bob Gourley: editor of CTOvision.com and a former Defense Intelligence Agency (DIA) CTO, @bobgourley  

Original title and link: Strategies for Exploiting Large-scale Data (NoSQL databases © myNoSQL)


Data Privacy and Data Marketplaces Future

Data gathered and sold by RapLeaf can be very specific. According to documents reviewed by the Journal, RapLeaf’s segments recently included a person’s household income range, age range, political leaning, and gender and age of children in the household, as well as interests in topics including religion, the Bible, gambling, tobacco, adult entertainment and “get rich quick” offers. In all, RapLeaf segmented people into more than 400 categories, the documents indicated.

Obscure data ownership + cryptic TOS + unregulated data marketplaces = 1984

Original title and link: Data Privacy and Data Marketplaces Future (NoSQL databases © myNoSQL)

via: http://online.wsj.com/article/SB10001424052702304410504575560243259416072.html


Everything Drives Storage

James Governor about storage and EMC:

It seems like every computing revolution drives storage volumes […]. But everything drives storage. Virtualisation drives storage (which helps explain both the rationalisation, and the huge success, of EMC’s VMware acquisition). The cloud drives storage. Big Data drives storage (obviously). Data Center consolidation drives storage. The Web drives storage.

… and they don’t believe in memory.

Original title and link: Everything Drives Storage (NoSQL databases © myNoSQL)

via: http://www.redmonk.com/jgovernor/2011/01/28/emc-summit-on-cloud-storage-big-data-and-developers/


MapReducing Big Data with Riak and Luwak

The recording of Basho’s webinar on Riak Map/Reduce and Luwak is now available.

Before watching, you should read Baseball Batting Average, Using Riak Map/Reduce and Fixing the count.
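
For readers who haven’t seen Riak’s map/reduce interface before, here is a minimal sketch that submits a job to Riak’s HTTP /mapred endpoint. The bucket name and the “hits” field are hypothetical; Riak.reduceSum is one of Riak’s built-in JavaScript reduce functions:

    # Sketch: map/reduce over a Riak bucket via the HTTP API.
    # Bucket name and the "hits" field are hypothetical.
    import json
    import requests

    job = {
        "inputs": "batting_stats",   # run over every key in this bucket
        "query": [
            {"map": {
                "language": "javascript",
                # parse each stored JSON value, emit one number per object
                "source": "function(v) { return [JSON.parse(v.values[0].data).hits]; }",
            }},
            # built-in reduce phase that sums the mapped numbers
            {"reduce": {"language": "javascript", "name": "Riak.reduceSum"}},
        ],
    }

    resp = requests.post(
        "http://localhost:8098/mapred",          # Riak's default HTTP port
        headers={"Content-Type": "application/json"},
        data=json.dumps(job),
    )
    print(resp.json())   # e.g. [4096]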

Original title and link: MapReducing Big Data with Riak and Luwak (NoSQL databases © myNoSQL)