Extracting and Tokenizing 30TB of Web Crawl Data

All the code for this five-step process of extracting and tokenizing Common Crawl's 30TB of data is available on GitHub (a rough sketch of the per-page steps follows the list):

  1. Distributed copy to get the data into a Hadoop cluster
  2. Filtering for text/html content
  3. Using boilerpipe to extract the visible text
  4. Using Apache Tika's LanguageIdentifier to keep only English content
  5. Tokenizing with the Stanford parser
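
The post links to the code rather than inlining it, so the following is only a sketch of what the per-page part of the pipeline (steps 3–5) might look like in Java, using boilerpipe's DefaultExtractor, Tika's LanguageIdentifier, and the PTBTokenizer that ships with the Stanford parser. The class and method names other than those library calls are illustrative; the actual code on GitHub may be organized differently.

    import java.io.StringReader;
    import java.util.ArrayList;
    import java.util.List;

    import de.l3s.boilerpipe.extractors.DefaultExtractor;
    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.process.CoreLabelTokenFactory;
    import edu.stanford.nlp.process.PTBTokenizer;
    import org.apache.tika.language.LanguageIdentifier;

    /**
     * Per-page processing for steps 3-5: strip boilerplate, keep English pages,
     * tokenize. In the real jobs this logic would live inside Hadoop mappers.
     */
    public class PageTokenizer {

        /** Returns the tokens of an English page, or null if the page is skipped. */
        public static List<String> process(String html) throws Exception {
            // Step 3: boilerpipe strips navigation, ads, etc., leaving visible text.
            String text = DefaultExtractor.INSTANCE.getText(html);
            if (text == null || text.trim().isEmpty()) {
                return null;
            }

            // Step 4: Tika guesses the language from character n-grams;
            // keep only pages it confidently identifies as English.
            LanguageIdentifier lang = new LanguageIdentifier(text);
            if (!"en".equals(lang.getLanguage()) || !lang.isReasonablyCertain()) {
                return null;
            }

            // Step 5: tokenize with PTBTokenizer (shipped with the Stanford parser).
            List<String> tokens = new ArrayList<String>();
            PTBTokenizer<CoreLabel> tokenizer = new PTBTokenizer<CoreLabel>(
                    new StringReader(text), new CoreLabelTokenFactory(), "");
            while (tokenizer.hasNext()) {
                tokens.add(tokenizer.next().word());
            }
            return tokens;
        }

        public static void main(String[] args) throws Exception {
            String html = "<html><body><p>Common Crawl makes web crawl data "
                    + "freely available to everyone.</p></body></html>";
            System.out.println(process(html));
        }
    }

Steps 1 and 2 are Hadoop-level concerns (a distributed copy into the cluster and a MIME-type filter over the crawl records) and are not shown in the sketch.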

Original title and link: Extracting and Tokenizing 30TB of Web Crawl Data (NoSQL database © myNoSQL)

via: http://matpalm.com/blog/2011/12/10/common_crawl_visible_text/