pig: All content tagged as pig in NoSQL databases and polyglot persistence

Choosing Technologies: The Library of Congress and the Twitter Archive

Remember when everyone was suggesting solutions for Twitter architecture? Now the Library of Congress is trying to figure out what technologies to use to store the Twitter archive:

The project is still very much under construction, and the team is weighing a number of different open source technologies in order to build out the storage, management and querying of the Twitter archive. While the decision hasn’t been made yet on which tools to use, the library is testing the following in various combinations: Hive, ElasticSearch, Pig, Elephant-bird, HBase, and Hadoop.

Note that in terms of storage only HBase is mentioned—Twitter’s main tweet storage is MySQL though.

Original title and link: Choosing Technologies: The Library of Congress and the Twitter Archive (NoSQL databases © myNoSQL)

via: http://blogs.forbes.com/oreillymedia/2011/06/13/the-library-of-congress-twitter-archive-one-year-later/


Experimenting with Hadoop using Cloudera VirtualBox Demo

CDH Mac OS X VirtualBox VM

If you don’t count the download, you’ll get this up and running in 5 minutes tops. At the end you’ll have Hadoop, Sqoop, Pig, Hive, HBase, ZooKeeper, Oozie, Hue, Flume, and Whirr all configured and ready to experiment with.

Making it easy for users to experiment with these tools increases the chances for adoption. Adoption means business.

Original title and link: Experimenting with Hadoop using Cloudera VirtualBox Demo (NoSQL databases © myNoSQL)

via: http://www.cloudera.com/blog/2011/06/cloudera-distribution-including-apache-hadoop-3-demo-vm-installation-on-mac-os-x-using-virtualbox-cdh/


Apixio Using Hadoop, Pig and Cassandra for Advanced Analytics on Medical Records

Apixio uses Hadoop and Pig for analysing medical records and Cassandra for serving search queries. All production machines are Amazon EC2 instances.

Bob Rogers, Apixio’s chief scientist, explained the importance of machine learning and unstructured-data analysis in the medical field. He said because of the proliferation of ontologies — area-specific terminology for everything from billing to scan results — any sort of search engine must be able to create degrees of association between the various ontologies, as well as common language.

It sounds like the perfect setup for Brisk.

Original title and link: Apixio Using Hadoop, Pig and Cassandra for Advanced Analytics on Medical Records (NoSQL databases © myNoSQL)

via: http://www.nytimes.com/external/gigaom/2011/04/01/01gigaom-apixio-is-bringing-big-data-to-medical-records-in-95148.html


Hadoop and NoSQL Databases at Twitter

Three presentations covering the various NoSQL usages at Twitter:

  1. Kevin Weil talking about data analysis using Scribe for logging, base analysis with Pig/Hadoop, and specialized data analysis with HBase, Cassandra, and FlockDB on InfoQ

  2. Ryan King’s presentation from last year’s QCon SF NoSQL track on Gizzard, Cassandra, Hadoop, and Redis on InfoQ

  3. Dmitriy Ryaboy on Hadoop from Devoxx 2010:

Judging by the powered-by-NoSQL page and my records, Twitter seems to be the largest adopter of NoSQL solutions. Here is an updated list of who is using Cassandra and HBase:

  • Twitter: Cassandra, HBase, Hadoop, Scribe, FlockDB, Redis
  • Facebook: Cassandra, HBase, Hadoop, Scribe, Hive
  • Netflix: Amazon SimpleDB, Cassandra
  • Digg: Cassandra
  • SimpleGeo: Cassandra
  • StumbleUpon: HBase, OpenTSDB
  • Yahoo!: Hadoop, HBase, PNUTS
  • Rackspace: Cassandra

Many more are probably missing from the list; leave a comment if you know of others.

Original title and link: Hadoop and NoSQL Databases at Twitter (NoSQL databases © myNoSQL)


Cloudera’s Distribution for Apache Hadoop version 3 Beta 4

New version of Cloudera’s Hadoop distro — complete release notes available here:

CDH3 Beta 4 also includes new versions of many components. Highlights include:

  • HBase 0.90.1, including much improved stability and operability.
  • Hive 0.7.0rc0, including the beginnings of authorization support, support for multiple databases, and many other new features.
  • Pig 0.8.0, including many new features like scalar types, custom partitioners, and improved UDF language support.
  • Flume 0.9.3, including support for Windows and improved monitoring capabilities.
  • Sqoop 1.2, including improvements to usability and Oracle integration.
  • Whirr 0.3, including support for starting HBase clusters on popular cloud platforms.

Plus many scalability improvements contributed by Yahoo!.

Cloudera’s CDH is the most popular Hadoop distro, bringing together many components of the Hadoop ecosystem. Yahoo! remains the main innovator behind Hadoop.

Original title and link: Cloudera’s Distribution for Apache Hadoop version 3 Beta 4 (NoSQL databases © myNoSQL)

via: http://www.cloudera.com/blog/2011/02/cdh3-beta-4-now-available


Pig Latin and JSON on Amazon Elastic Map Reduce

In order to not have to learn everything about setting up Hadoop, and still have the ability to leverage the power of Hadoop’s distributed data processing framework, and not have to learn how to write map reduce jobs and … (this could go on for a while so I’ll just stop here). For all these reasons, I chose to use Amazon’s Elastic MapReduce infrastructure and Pig.

I will talk you through how I was able to do all this [take my log data stored on S3 (which is in compressed JSON format) and run queries against it] with a little help from the Pig community and a lot of late nights. I will also provide an example Pig script detailing a little about how I deal with my logs (which are admittedly slightly abnormal).
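The gist of the approach is to register Elephant-bird’s JSON loader and let Pig treat each log record as a map. A minimal sketch, not Eric’s actual script; the bucket, jar path, and field names are hypothetical:

```pig
-- Elephant-bird's JsonLoader parses each JSON record into a Pig map[].
-- Jar location, bucket names, and JSON field names are illustrative only.
REGISTER 's3://my-bucket/jars/elephant-bird.jar';

logs = LOAD 's3://my-bucket/logs/*.gz'
       USING com.twitter.elephantbird.pig.load.JsonLoader()
       AS (json: map[]);

-- Project fields out of the JSON map, then aggregate server errors by path
events  = FOREACH logs GENERATE (chararray) json#'path'   AS path,
                                (int)       json#'status' AS status;
errors  = FILTER events BY status >= 500;
by_path = GROUP errors BY path;
counts  = FOREACH by_path GENERATE group AS path, COUNT(errors) AS n;

STORE counts INTO 's3://my-bucket/output/error-counts';
```

Because the input and output both live on S3, a script like this can be handed to an Elastic MapReduce job flow without managing any Hadoop cluster yourself.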

Sadly, such a useful tool in the Hadoop ecosystem doesn’t make the headlines.

Original title and link: Pig Latin and JSON on Amazon Elastic Map Reduce (NoSQL databases © myNoSQL)

via: http://eric.lubow.org/2011/hadoop/pig-queries-parsing-json-on-amazons-elastic-map-reduce-using-s3-data/


Pig Latin Adds Macros as Part of Becoming Turing Complete

Since direct integration of data flow and control flow is neither reasonable nor desirable, a heuristic is needed to productively combine the two. […] Compared to an approach that integrates control flow and data flow, such as PL/SQL, embedding in an existing scripting language is a much lower development and maintenance effort. It will also be much easier for users, who will be able to use existing development tools (IDEs, debuggers, etc.) to work with their scripts.

The first proposal, macro expansion, has already been committed and will be included in the next Pig Latin release.
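
Based on the committed proposal, a macro is defined with DEFINE … RETURNS and expanded inline at the call site. A small sketch with illustrative names:

```pig
-- Macro sketch: count rows per key. Macro, relation, and field names
-- are made up for illustration.
DEFINE count_by_key(rel, key) RETURNS counted {
    grouped  = GROUP $rel BY $key;
    $counted = FOREACH grouped GENERATE group AS $key, COUNT($rel) AS n;
};

users       = LOAD 'users.tsv' AS (country: chararray, name: chararray);
per_country = count_by_key(users, country);
```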

Original title and link: Pig Latin Adds Macros as Part of Becoming Turing Complete (NoSQL databases © myNoSQL)

via: http://wiki.apache.org/pig/TuringCompletePig


The Backstory of Yahoo and Hadoop

We currently have nearly 100 people working on Apache Hadoop and related projects, such as Pig, ZooKeeper, Hive, Howl, HBase and Oozie. Over the last 5 years, we’ve invested nearly 300 person-years into these projects. […] Today Yahoo runs on over 40,000 Hadoop machines (>300k cores). They are used by over a thousand regular users from our science and development teams. Hadoop is at the center of our research in search, advertising, spam detection, personalization and many other topics.

I assume it’s no surprise to anyone that I’m a big fan of Yahoo!’s open source initiatives.

Original title and link: The Backstory of Yahoo and Hadoop (NoSQL databases © myNoSQL)

via: http://developer.yahoo.com/blogs/hadoop/posts/2011/01/the-backstory-of-yahoo-a


Apache Pig 0.8: What is New

Dmitriy Ryaboy1 has a guest post on the Cloudera blog covering the new features in Apache Pig 0.8.

Summarized:

  • Support for user defined functions (UDF) in scripting languages
  • Generic UDFs: allows invocation of static java methods
  • PigUnit: as the name suggests, a testing tool for Pig scripts
  • PigStats: once again the name should give you a hint of what it does: better visibility into Pig jobs through a series of stats, XML-based metadata injected into Map-Reduce jobs, and listeners for the Pig process
  • Scalar values: simplifying access to single-row relations
  • Monitoring: the ability to start a monitoring thread for long-running executions
  • HBaseStorage: works with HBase 0.20 releases only
  • Custom Map-Reduce: the data flow can embed custom Map-Reduce jobs
  • Automatic merging of small files
  • Custom partitioners
The Pig 0.8 release includes a large number of bug fixes and optimizations, but at the core it is a feature release. It’s been in the works for almost a full year and the amount of time spent on 0.8 really shows.

You can also check Dmitriy’s presentations about the NoSQL ecosystem at Twitter: Twitter, Pig, and HBase and HBase and Pig: The Hadoop ecosystem at Twitter


  1. Dmitriy Ryaboy: Twitter engineer, @squarecog  

Original title and link: Apache Pig 0.8: What is New (NoSQL databases © myNoSQL)

via: http://www.cloudera.com/blog/2010/12/new-features-in-apache-pig-0-8/


Pig and Cheminformatics

Rajarshi Guha about Pig Latin:

While the implementation of such code [SMARTS matching and pharmacophore searching] is pretty straightforward, it’s still pretty heavyweight compared to say, performing SMARTS matching in a database via SQL. On the other hand, being able to perform these tasks in Pig Latin, lets us write much simpler code that can be integrated with other non-cheminformatics code in a flexible manner.
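
The integration he describes amounts to exposing a cheminformatics routine as a Pig UDF and using it in an ordinary FILTER. Purely illustrative, with a hypothetical jar, class, and pattern (not from the post):

```pig
-- Hypothetical Java UDF wrapping a SMARTS substructure matcher
-- (e.g. built on the CDK); all names here are invented.
REGISTER 'cheminfo-udfs.jar';
DEFINE Matches my.org.pig.SmartsMatch();

mols = LOAD 'molecules.smi' AS (id: chararray, smiles: chararray);
hits = FILTER mols BY Matches(smiles, 'c1ccccc1');  -- benzene ring pattern
```

The point is exactly the one Guha makes: once the matcher is a UDF, it composes with any other Pig processing in the same script.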

Extensibility over compactness.

Original title and link: Pig and Cheminformatics (NoSQL databases © myNoSQL)

via: http://blog.rguha.net/?p=748


Quick Reference: Hadoop Tools Ecosystem

Just a quick reference of the continuously growing Hadoop tools ecosystem.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.

hadoop.apache.org

HDFS

A distributed file system that provides high throughput access to application data.

hadoop.apache.org/hdfs/

MapReduce

A software framework for distributed processing of large data sets on compute clusters.

Amazon Elastic MapReduce

Amazon Elastic MapReduce is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

aws.amazon.com/elasticmapreduce/

Cloudera Distribution for Hadoop (CDH)

Cloudera’s Distribution for Hadoop (CDH) sets a new standard for Hadoop-based data management platforms.

cloudera.com/hadoop

ZooKeeper

A high-performance coordination service for distributed applications. ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

hadoop.apache.org/zookeeper/

HBase

A scalable, distributed database that supports structured data storage for large tables.

hbase.apache.org

Avro

A data serialization system. Similar to ☞ Thrift and ☞ Protocol Buffers.

avro.apache.org

Sqoop

Sqoop (“SQL-to-Hadoop”) is a straightforward command-line tool with the following capabilities:

  • Imports individual tables or entire databases to files in HDFS
  • Generates Java classes to allow you to interact with your imported data
  • Provides the ability to import from SQL databases straight into your Hive data warehouse

cloudera.com/downloads/sqoop/

Flume

Flume is a distributed, reliable, and available service for efficiently moving large amounts of data soon after the data is produced.

archive.cloudera.com/cdh/3/flume/

Hive

Hive is a data warehouse infrastructure built on top of Hadoop that provides tools to enable easy data summarization, ad-hoc querying, and analysis of large datasets stored in Hadoop files. It provides a mechanism to put structure on this data, and it also provides a simple query language called Hive QL, which is based on SQL and enables users familiar with SQL to query this data. At the same time, this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers to do more sophisticated analysis which may not be supported by the built-in capabilities of the language.

hive.apache.org

Pig

A high-level data-flow language and execution framework for parallel computation. Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

pig.apache.org
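
For orientation, the canonical word-count looks like this in Pig Latin (input and output paths are placeholders):

```pig
-- Word count: the "hello world" of Pig's data-flow style.
lines  = LOAD 'input.txt' AS (line: chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'wordcount-output';
```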

Oozie

Oozie is a workflow/coordination service to manage data processing jobs for Apache Hadoop. It is an extensible, scalable and data-aware service to orchestrate dependencies between jobs running on Hadoop (including HDFS, Pig and MapReduce).

yahoo.github.com/oozie

Cascading

Cascading is a Query API and Query Planner used for defining and executing complex, scale-free, and fault tolerant data processing workflows on a Hadoop cluster.

cascading.org

Cascalog

Cascalog is a tool for processing data on Hadoop with Clojure in a concise and expressive manner. Cascalog combines two cutting edge technologies in Clojure and Hadoop and resurrects an old one in Datalog. Cascalog is high performance, flexible, and robust.

github.com/nathanmarz/cascalog

HUE

Hue is a graphical user interface to operate and develop applications for Hadoop. Hue applications are collected into a desktop-style environment and delivered as a Web application, requiring no additional installation for individual users.

archive.cloudera.com/cdh3/hue

You can read more about HUE on ☞ Cloudera blog.

Chukwa

Chukwa is a data collection system for monitoring large distributed systems. Chukwa is built on top of the Hadoop Distributed File System (HDFS) and Map/Reduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring and analyzing results to make the best use of the collected data.

incubator.apache.org/chukwa/

Mahout

A Scalable machine learning and data mining library.

mahout.apache.org

Integration with Relational databases

Integration with Data Warehouses

The only list I have is MapReduce, RDBMS, and Data Warehouse, but I’m afraid it is quite old. Maybe someone could help me update it.

Anything else? Once we validate this list, I’ll probably have to move it to the NoSQL reference.

Original title and link: Quick Reference: Hadoop Tools Ecosystem (NoSQL databases © myNoSQL)


Cloudera: All Your Big Data Are Belong to Us

Matt Asay (GigaOm):

Where Cloudera shines, however, is in taking these different contributions and making Hadoop relevant for enterprise IT, where data mining has waxed and waned over the years. […] Cloudera, in other words, is banking on the complexity of Hadoop to drive enterprise IT to its own Cloudera Enterprise tools.

Additionally, I think what Cloudera is “selling” is a good set of tools — Hadoop, HBase, Hive, Pig, Oozie, Sqoop, Flume, Zookeeper, Hue — put together based on their expertise in the field.

Original title and link: Cloudera: All Your Big Data Are Belong to Us (NoSQL databases © myNoSQL)

via: http://cloud.gigaom.com/2010/09/14/cloudera-all-your-big-data-are-belong-to-us/