The Outer Limits of Data Warehouse Technology

The story of adopting Hadoop (through Zettaset) at Zions Bancorporation:

The quest for a solution began in 2009 with an investigation of Zion’s existing Microsoft and Oracle technologies, as well as other technologies within the firm and new solutions on the market, Wood relates. After developing a list of six potential vendors, he says, he and his team quickly focused on two Hadoop-based solutions. The team, Wood explains, recognized the potential in Hadoop for “making security decisions proactively rather than reactively, based on mining business intelligence and combining it with event data from security devices.”

Original title and link: The Outer Limits of Data Warehouse Technology (NoSQL database©myNoSQL)


How to Hadoop: Maximizing the value of big data

Brian Christian1 (Zettaset) suggests two roads for adopting Hadoop:

The first, building the capability internally, seems to hold out the promise of flexibility and control for organizations that employ it. While this has sometimes been the case for some large companies, a variety of studies indicate that even among Fortune 500 companies, less than 20 percent that began Hadoop development succeeded in deploying a solution.

The second approach entails working with a big-data, Hadoop-focused third party to develop a bespoke solution. In addition to eliminating the requirement of enormous equipment and human capital investment, this approach also enables organizations, their executives, and IT staff to focus on their core value propositions rather than being forced to become Hadoop specialists.

It would be easy if the decision were just about CAPEX vs. OPEX, or on-premises vs. managed deployments. But there are many variables that must be considered when going the Big Data route. Eventually, pretty much everyone will do something around Big Data, but those at the forefront still have to figure out many important aspects.

  1. Brian Christian is CEO of Zettaset, which delivers a fault-tolerant and highly available solution for big data aggregation 

Original title and link: How to Hadoop: Maximizing the value of big data (NoSQL database©myNoSQL)


8 Most Interesting Companies for Hadoop’s Future

Filtering and augmenting a Q&A on Quora:

  1. Cloudera: Hadoop distribution, Cloudera Enterprise, Services, Training
  2. Hortonworks: Apache Hadoop major contributions, Services, Training
  3. MapR: Hadoop distribution, Services, Training
  4. HPCC Systems: massive parallel-processing computing platform
  5. HStreaming: real-time data processing and analytics capabilities on top of Hadoop
  6. DataStax: DataStax Enterprise, Apache Cassandra based platform accepting real-time input from online applications, while offering analytic operations, powered by Hadoop
  7. Zettaset: Enterprise Data Analytics Suite built on Hadoop
  8. Hadapt: analytic platform based on Apache Hadoop and relational DBMS technology

I’ve left aside names like IBM, EMC, and Informatica, which are doing a lot of integration work.

Original title and link: 8 Most Interesting Companies for Hadoop’s Future (NoSQL database©myNoSQL)