Most Pig tutorials you will find assume that you are working with
data where you know all the column names ahead of time, and that the
column names themselves are just labels rather than composites of
labels and data. When working with HBase, for example, it’s actually
not uncommon for both of those assumptions to be false. Because HBase
is a columnar database, it’s very common to be working with rows that
have thousands of columns. In that circumstance, it’s also common for
the column names themselves to encode dimensions, such as date
and counter type.
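To make the composite-name idea concrete, here is a minimal sketch in Python of the kind of flattening such a UDF performs. The "date:counter" naming scheme and the function name are my own illustration, not taken from the tutorial:

```python
# Sketch of the core logic a Pig Python (Jython) UDF could wrap:
# flatten an HBase column-family map whose column names encode
# dimensions -- here, hypothetically, "YYYYMMDD:counter_type".

def flatten_family(family):
    """Turn {composite_column_name: value} into a list (a Pig bag)
    of (date, counter_type, value) tuples."""
    rows = []
    for name, value in family.items():
        date, counter_type = name.split(":", 1)  # recover the encoded dimensions
        rows.append((date, counter_type, value))
    return rows

# Example: two counters stored in one row's column family.
counters = {"20130215:pageviews": 42, "20130215:clicks": 7}
print(sorted(flatten_family(counters)))
# -> [('20130215', 'clicks', 7), ('20130215', 'pageviews', 42)]
```

In a real Jython UDF you would also declare the output schema of the bag (Pig provides an `outputSchema` decorator for this) so Pig knows the layout of the tuples the UDF returns.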
Original title and link: Flatten Entire HBase Column Families With Pig and Python UDFs ( ©myNoSQL)
Alexander Dean’s tutorial published in SDJ:
In this article you will learn how to write a user-defined function
(“UDF”) to work with the Apache Hive platform. We will start gently
with an introduction to Hive, then move on to developing the UDF and
writing tests for it. We will write our UDF in Java, but use Scala’s
SBT as our build tool and write our tests in Scala with Specs2.
As far as I know, it’s quite easy to write UDFs for Pig and Hive in any language that has a JVM implementation (Python with Jython, Ruby with JRuby, Groovy).
Original title and link: Writing Hive UDFs With Java - a Tutorial ( ©myNoSQL)