flume: All content tagged as flume in NoSQL databases and polyglot persistence

Membase and Cloudera with Flume and Sqoop

James Phillips (Membase):

On the technology integration front, we have built and are making available to customers two mechanisms for integrating Membase and Cloudera Distribution for Hadoop (CDH). The first is a Membase NodeCode module that can stream data from Membase to CDH in real-time. As new operational data enters Membase, it can be massaged in real time and pumped into a CDH cluster for processing. The second is a Sqoop-derived batch loader utility that enables loading of data from Membase to CDH, and vice versa.

Real-time integration using Flume. Batch integration using Sqoop. Sounds like Cloudera’s tools are delivering.
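
The post doesn’t show how the Sqoop-derived Membase loader is actually invoked, but a plain Sqoop import/export pair gives a feel for the batch side of such an integration. This is only a sketch: the JDBC connection string, table names, and paths below are made up, not taken from the announcement.

    # Pull a table from a (hypothetical) relational source into HDFS,
    # using 4 parallel map tasks; -P prompts for the password.
    sqoop import \
      --connect jdbc:mysql://db.example.com/ordersdb \
      --table orders \
      --username etl -P \
      --target-dir /user/hadoop/orders \
      -m 4

    # And the reverse direction: push results from HDFS back into the source system.
    sqoop export \
      --connect jdbc:mysql://db.example.com/ordersdb \
      --table orders_summary \
      --export-dir /user/hadoop/orders_summary \
      --username etl -P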

Original title and link: Membase and Cloudera with Flume and Sqoop (NoSQL databases © myNoSQL)

via: http://www.infoq.com/news/2010/10/membase-cdh-integration


Flume Cookbook: Flume and Apache Logs

Part of the ☞ Flume cookbook:

In this post, we present a recipe that describes the common use case of using a Flume node to collect Apache 2 web server logs in order to deliver them to HDFS.

In case you want to (initially) skip Flume‘s user guide, you could start with this intro to Flume and then see how Flume and Scribe compare.
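
For a feel of what such a recipe boils down to, here is a sketch of a Flume (0.9.x) data flow spec for this use case; the node names, port, and HDFS path are illustrative, not taken from the cookbook post.

    Agent on the web server (tails the Apache access log and forwards events to a collector):

        webserver : tail("/var/log/apache2/access.log") | agentBESink("collector01", 35853) ;

    Collector (receives events on port 35853 and writes them to HDFS):

        collector01 : collectorSource(35853) | collectorSink("hdfs://namenode/flume/apache/", "access") ;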

Original title and link: Flume Cookbook: Flume and Apache Logs (NoSQL databases © myNoSQL)

via: http://www.cloudera.com/blog/2010/09/using-flume-to-collect-apache-2-web-server-logs/


Cloudera: All Your Big Data Are Belong to Us

Matt Asay (GigaOm):

Where Cloudera shines, however, is in taking these different contributions and making Hadoop relevant for enterprise IT, where data mining has waxed and waned over the years. […] Cloudera, in other words, is banking on the complexity of Hadoop to drive enterprise IT to its own Cloudera Enterprise tools.

Additionally, I think what Cloudera is “selling” is a good set of tools — Hadoop, HBase, Hive, Pig, Oozie, Sqoop, Flume, Zookeeper, Hue — put together based on their expertise in the field.

Original title and link: Cloudera: All Your Big Data Are Belong to Us (NoSQL databases © myNoSQL)

via: http://cloud.gigaom.com/2010/09/14/cloudera-all-your-big-data-are-belong-to-us/


Short Intro to Flume

The Flume ☞ user guide is 70 screens long, so use these slides and the Flume and Scribe comparison as short intros:

Notes:

  • What problems are solved by Flume?
      • data collection in all formats
      • flexible reliability guarantees allowing careful performance tuning
      • quick iteration on new collection strategies
  • Flume is built around the concept of flows: each flow corresponds to a type of data source and is composed of chained nodes (see the sketch after this list)
  • A Flume node receives data from a source, optionally processes it using one or more decorators, and outputs it via a sink
  • Nodes receiving data close to its source are called agents; nodes aggregating data and writing it to its final destination are called collectors
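
A minimal flow, written in Flume’s configuration language, is just source | sink, with decorators wrapped in braces between the two. The node name, file path, and decorator choice below are made up for illustration and should be checked against the user guide:

    logger01 : tail("/var/log/app.log") | { delay(250) => console } ;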

Original title and link for this post: Short Intro to Flume (published on the NoSQL blog: myNoSQL)


Using Hadoop for Fraud Detection and Prevention

Hadoop can be vital for solving the fraud detection problem because:

  • Sampling does not work for rare events since the chance of missing a positive fraud case leads to significant deterioration of model quality.
  • Hadoop can solve much harder problems by leveraging multiple cores across thousands of machines and searching through much larger problem domains.
  • Hadoop can be combined with other tools to manage moderate to low response latency requirements.

Nicely summarized by a commenter:

So the main point is “Cloudera has developed a tool, Flume, that can load billions of events into HDFS within a few seconds and analyze them using MapReduce.”?

And the suggestion to use ALL logs?

Or is there anything deeper that I am missing?

Original title and link for this post: Using Hadoop for Fraud Detection and Prevention (published on the NoSQL blog: myNoSQL)

via: http://www.cloudera.com/blog/2010/08/hadoop-for-fraud-detection-and-prevention/


NoSQL Databases and Data Warehousing

I didn’t know data warehousing strictly imposes a relational model:

From a philosophical standpoint, my largest problem with NoSQL databases is that they don’t respect relational theory. In short, they aren’t meant to deal with sets of data, but lists. Relational algebra was created to deal with the large sets of data and have them interact. Reporting and analytics rely on that.

I’d bet people building and using Hive, Pig, Flume and other data warehousing tools would disagree with Eric Hewitt.

NoSQL Databases and Data Warehousing originally posted on the NoSQL blog: myNoSQL

via: http://info.livelogic.net/customer-intelligence-project-success/bid/49227/Distributed-Hash-Tables-and-Concentrated-Geekdom


How Do Flume and Scribe Compare?

I read this ☞ post about Cloudera’s Flume with much interest. Flume sounds like a very interesting tool, not to mention that from Cloudera’s business perspective it makes a lot of sense:

We’ve seen our customers have great success using Hadoop for processing their data, but the question of how to get the data there to process in the first place was often significantly more challenging.

Just in case you didn’t have the time to read about Flume yet, here’s a short description from the ☞ GitHub project page:

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. The system is centrally managed and allows for intelligent dynamic management. It uses a simple extensible data model that allows for online analytic applications.

In a way this sounded a bit familiar. I thought I had seen something similar before: ☞ Scribe:

Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups. If the central scribe server isn’t available the local scribe server writes the messages to a file on local disk and sends them when the central server recovers. The central scribe server(s) can write the messages to the files that are their final destination, typically on an nfs filer or a distributed filesystem, or send them to another layer of scribe servers.

So my question is: how do Flume and Scribe compare? What are the major differences, and which scenarios are good for one or the other?

If you have the answer to any of these questions, please drop a comment or send me an email.

Update: Looks like I failed to find this ☞ useful thread, but thanks to this comment the mistake is now corrected:

1. Flume allows you to configure your Flume installation from a central point, without having to ssh into every machine, update a configuration variable and restart a daemon or two. You can start, stop, create, delete and reconfigure logical nodes on any machine running Flume from any command line in your network with the Flume jar available.

2. Flume also has centralised liveness monitoring. We’ve heard a couple of stories of Scribe processes silently failing, but lying undiscovered for days until the rest of the Scribe installation starts creaking under the increased load. Flume allows you to see the health of all your logical nodes in one place (note that this is different from machine liveness monitoring; often the machine stays up while the process might fail).

3. Flume supports three distinct types of reliability guarantees, allowing you to make tradeoffs between resource usage and reliability. In particular, Flume supports fully ACKed reliability, with the guarantee that all events will eventually make their way through the event flow.

4. Flume’s also really extensible - it’s really easy to write your own source or sink and integrate most any system with Flume. If rolling your own is impractical, it’s often very straightforward to have your applications output events in a form that Flume can understand (Flume can run Unix processes, for example, so if you can use shell script to get at your data, you’re golden).

— Henry Robinson
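
As a hedged illustration of point 3: Flume’s agent-side sinks came in three variants matching those reliability levels (the collector host and port below are made up):

    agentBESink("collector01", 35853)    (best effort: events can be lost if the collector is down)
    agentDFOSink("collector01", 35853)   (disk failover: events are spooled to local disk and resent later)
    agentE2ESink("collector01", 35853)   (end-to-end: events are acknowledged only once they reach the final destination)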

In the same thread, I’m reading about another tool ☞ Chukwa:

Chukwa is a Hadoop subproject devoted to large-scale log collection and analysis. Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and MapReduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring, and analyzing results, in order to make the best use of this collected data.


5 Years Old Hadoop Celebration at Hadoop Summit, Plus New Tools

I didn’t realize Hadoop has been on the market for so long: 5 years. In just a couple of hours, the celebration will start at ☞ Hadoop Summit in Santa Clara.

Yahoo!, the most active contributor to Hadoop, will ☞ open source today two new tools: Hadoop with Security and Oozie, a workflow engine.

Hadoop Security integrates Hadoop with Kerberos, providing secure access and processing of business-sensitive data. This enables organizations to leverage and extract value from their data and hardware investment in Hadoop across the enterprise while maintaining data security, allowing new collaborations and applications with business-critical data.

Oozie is an open-source workflow solution to manage jobs running on Hadoop, including HDFS, Pig, and MapReduce. Oozie — a name for an elephant tamer — was designed for Yahoo!’s rigorous use case of managing complex workflows and data pipelines at global scale. It is integrated with Hadoop Security and is quickly becoming the de-facto standard for ETL (extraction, transformation, loading) processing at Yahoo!.
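
To give a sense of what an Oozie workflow looks like, here is a minimal, hand-written sketch of a workflow.xml with a single MapReduce action; the class names and paths are placeholders, not something taken from the announcement:

    <workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.1">
      <start to="mr-step"/>
      <action name="mr-step">
        <map-reduce>
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <configuration>
            <property><name>mapred.mapper.class</name><value>com.example.DemoMapper</value></property>
            <property><name>mapred.reducer.class</name><value>com.example.DemoReducer</value></property>
            <property><name>mapred.input.dir</name><value>/user/demo/input</value></property>
            <property><name>mapred.output.dir</name><value>/user/demo/output</value></property>
          </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail"><message>MapReduce step failed</message></kill>
      <end name="end"/>
    </workflow-app>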

Update: It looks like the news doesn’t stop here, with Cloudera making ☞ a big announcement accompanying the new release of Cloudera’s Distribution for Hadoop, CDHv3 Beta 2:

The additional packages include HBase, the popular distributed columnar storage system with fast read-write access to data managed by HDFS, Hive and Pig for query access to data stored in a Hadoop cluster, Apache Zookeeper for distributed process coordination and Sqoop for moving data between Hadoop and relational database systems. We’ve adopted the outstanding workflow engine out of Yahoo!, Oozie, and have made contributions of our own to adapt it for widespread use by general enterprise customers. We’ve also released – this is a big deal, and I’m really pleased to announce it – our continuous data loading system, Flume, and our Hadoop User Environment software (formerly Cloudera Desktop, and henceforth “Hue”) under the Apache Software License, version 2.

Also worth mentioning: going forward, Cloudera will have a commercial offering, ☞ Cloudera Enterprise:

Cloudera Enterprise combines the open source CDHv3 platform with critical monitoring, management and administrative tools that our enterprise customers have told us they need to put Hadoop into production. We’ve added dashboards for critical IT tasks, including monitoring cluster status and activity, keeping track of data flows into Hadoop in real time based on the services that Flume provides, and controlling access to data and resources by users and groups. We’ve integrated access controls with Active Directory and other LDAP implementations so that IT staff can control rights and identities in the same way as they do for other business platforms they use. Cloudera Enterprise is available by annual subscription and includes maintenance, updates and support.