Probably one of the nicest taglines for Pig:
> If Perl is the duct tape of the internet, and Hadoop is the kernel of the data center as computer, then Pig is the duct tape of Big Data.
And some advice on how to use Pig:
When I write Pig Latin code beyond a dozen lines, I check it in stages:
- Write the Pig Latin in TextMate (saved in a git repo, so I don't lose code)
- Paste the code into the Grunt shell – Did it parse?
- DESCRIBE the final output and each complex step – Did it still parse? Is the schema what I expected?
- ILLUSTRATE the output – Does it still parse? Is the schema ok? Is the example data ok?
- SAMPLE / LIMIT / DUMP the output – Does it still parse? Is the schema ok? Is the sampled/limited data sane?
- STORE the final output and see if the job completes.
- cat output_dir/part-00000 (followed by a quick ctrl-c to stop the flood) – Is the stored output on HDFS ok?
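The stages above can be sketched end-to-end in the Grunt shell. This is a minimal illustration, not a real script: the input file `logs.tsv` and its `user`/`bytes` fields are hypothetical stand-ins for whatever data you are actually working with.

```pig
-- Hypothetical input: a tab-separated file of (user, bytes) records.
logs    = LOAD 'logs.tsv' AS (user:chararray, bytes:long);
by_user = GROUP logs BY user;
totals  = FOREACH by_user GENERATE group AS user, SUM(logs.bytes) AS total_bytes;

DESCRIBE totals;       -- did it parse, and is the schema what I expected?
ILLUSTRATE totals;     -- walk example records through each step
sampled = LIMIT totals 10;
DUMP sampled;          -- eyeball a few real rows before committing
STORE totals INTO 'output_dir';
```

After the `STORE` job completes, the final `cat output_dir/part-00000` check confirms the data actually landed on HDFS the way the earlier stages suggested it would.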