LinkedIn, Data Processing, and Pig

Probably one of the nicest taglines for Pig:

If Perl is the duct tape of the internet, and Hadoop is the kernel of the data center as computer, then Pig is the duct tape of Big Data.

And some advice on how to use Pig (a Pig Latin sketch of these checks appears at the end of this post):

When I write Pig Latin code beyond a dozen lines, I check it in stages:

  • Write Pig Latin in TextMate (Saved in a git repo, otherwise I lose code)
  • Paste the code into the Grunt shell – Did it parse?
  • DESCRIBE the final output and each complex step – Did it still parse? Is the schema what I expected?
  • ILLUSTRATE the output – Does it still parse? Is the schema ok? Is the example data ok?
  • SAMPLE / LIMIT / DUMP the output – Does it still parse? Is the schema ok? Is the sampled/limited data sane?
  • STORE the final output and see if the job completes.
  • cat output_dir/part-00000 (followed by a quick ctrl-c to stop the flood) – Is the stored output on HDFS ok?

via: http://blog.linkedin.com/2010/07/01/linkedin-apache-pig/
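
The same checks, written out as a short Pig Latin script: this is a minimal sketch (not from the LinkedIn post), and the relation names, input path, and schema are invented purely for illustration.

    -- Hypothetical tab-separated page-view log; path and schema are made up.
    raw = LOAD 'input/page_views.tsv' USING PigStorage('\t')
          AS (member_id:long, url:chararray, views:int);

    -- A "complex step": group and aggregate per URL.
    by_url = GROUP raw BY url;
    counts = FOREACH by_url GENERATE group AS url, SUM(raw.views) AS total_views;

    -- Did it still parse? Is the schema what I expected?
    DESCRIBE counts;

    -- Run a fabricated example data set through the pipeline.
    ILLUSTRATE counts;

    -- Eyeball a handful of real rows before committing to the full job.
    few = LIMIT counts 10;
    DUMP few;

    -- Finally, run the whole job and store the result on HDFS,
    -- then cat a part file to check the stored output.
    STORE counts INTO 'output/url_view_counts';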