graphdb: All content tagged as graphdb in NoSQL databases and polyglot persistence

Getting started with Neo4j 2.0

Very good introductory post by Jim Webber about Neo4j and some of the new features in the 2.0 release:

In this article we’ve seen how Neo4j 2.0 and the new version of the Cypher query language can be used to store and query a range of retail data from product catalogue to customer purchases. We also saw how straightforward it was to quickly gain insight from that data, despite the domain being highly and intricately connected.
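To make that concrete, here is a minimal sketch, assuming Neo4j 2.0’s embedded Java API, of the kind of retail model and Cypher queries the post describes. It is not Jim Webber’s actual code; the labels, property names, and store path are illustrative assumptions.

    import org.neo4j.cypher.javacompat.ExecutionEngine;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;

    public class RetailGraphSketch {
        public static void main(String[] args) {
            GraphDatabaseService db =
                    new GraphDatabaseFactory().newEmbeddedDatabase("/tmp/retail.db");
            ExecutionEngine cypher = new ExecutionEngine(db);

            // Store a customer, a product from the catalogue, and the purchase between them.
            cypher.execute("CREATE (c:Customer {name: 'Alice'})-[:BOUGHT]->"
                    + "(p:Product {sku: 'SKU-1', name: 'Espresso machine'})");

            // Query across the connections: what did Alice buy?
            try (Transaction tx = db.beginTx()) {
                System.out.println(cypher.execute(
                        "MATCH (:Customer {name: 'Alice'})-[:BOUGHT]->(p:Product) "
                        + "RETURN p.name").dumpToString());
                tx.success();
            }

            db.shutdown();
        }
    }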

Original title and link: Getting started with Neo4j 2.0 (NoSQL database©myNoSQL)

via: http://jimwebber.org/2014/01/starting-graph-databases-with-neo4j-2-0/


Neo4j trick: using remote shell combined with Neo4j embedded

Stefan Armbruster:

In cases where Neo4j is used in embedded mode, there is often a demand for having a maintenance channel to the database, e.g. for fixing wrong data. Nothing simpler than that, there’s an easy way to enable the remote shell together with embedded mode

Really nice trick!
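For reference, a minimal sketch of the pattern described in the post, assuming Neo4j 2.x’s embedded API with the neo4j-shell module on the classpath; the store directory and port are illustrative assumptions.

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;
    import org.neo4j.shell.ShellSettings;

    public class EmbeddedWithRemoteShell {
        public static void main(String[] args) {
            GraphDatabaseService db = new GraphDatabaseFactory()
                    .newEmbeddedDatabaseBuilder("/tmp/app.db")
                    .setConfig(ShellSettings.remote_shell_enabled, "true")
                    .setConfig(ShellSettings.remote_shell_port, "1337")
                    .newGraphDatabase();

            // The application keeps using `db` in embedded mode as usual, while a
            // maintenance channel is now available from another terminal:
            //   bin/neo4j-shell -port 1337

            Runtime.getRuntime().addShutdownHook(new Thread(db::shutdown));
        }
    }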

Original title and link: Neo4j trick: using remote shell combined with Neo4j embedded (NoSQL database©myNoSQL)

via: http://blog.armbruster-it.de/2014/01/using-remote-shell-combined-with-neo4j-embedded/


MySQL is a great Open Source project. How about open source NoSQL databases?

In a post titled “Some myths on Open Source, the way I see it”, Anders Karlsson writes about MySQL:

As far as code, adoption and reaching out to create an SQL-based RDBMS that anyone can afford, MySQL / MariaDB has been immensely successful. But as an Open Source project, something being developed together with the community where everyone work on their end with their skills to create a great combined piece of work, MySQL has failed. This is sad, but on the other hand I’m not so sure that it would have as much influence and as wide adoption if the project would have been a “clean” Open Source project.

The article offers a very black-and-white perspective on open source versus commercial code. But that’s not why I’m linking to it.

The above paragraph made me think about how many of the most popular open source NoSQL databases would die without the companies (or people) that created them.

Here’s my list: MongoDB, Riak, Neo4j, Redis, Couchbase, etc. And I could continue for quite a while considering how many there are out there: RavenDB, RethinkDB, Voldemort, Tokyo, Titan.

Actually, if you reverse the question, the list gets extremely short: Cassandra, CouchDB (still struggling, though), and HBase. All of these were at some point driven by the community. Probably the only special case is LevelDB.

✚ As a follow-up to Anders Karlsson’s post, Robert Hodges posted Why I Love Open Source on The Scale-Out Blog.

Original title and link: MySQL is a great Open Source project. How about open source NoSQL databases? (NoSQL database©myNoSQL)

via: http://karlssonondatabases.blogspot.com/2014/01/some-myths-on-open-source-way-i-see-it.html


Pigs can build graphs too for graph analytics

Extremely interesting and intriguing usage and extension of Pig at Intel:

Pigs eat everything and Pig can ingest many data formats and data types from many data sources, in line with our objectives for Graph Builder. Also, Pig has native support for local file systems, HDFS, and HBase, and tools like Sqoop can be used upstream of Pig to transfer data into HDFS from relational databases. One of the most fascinating things about Pig is that it only takes a few lines of code to define a complex workflow comprised of a long chain of data operations. These operations can map to multiple MapReduce jobs and Pig compiles the logical plan into an optimized workflow of MapReduce jobs. With all of these advantages, Pig seemed like the right tool for graph ETL, so we re-architected Graph Builder 2.0 as a library of User-Defined Functions (UDFs) and macros in Pig Latin.
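To give a feel for what a library of UDFs means in practice, here is a minimal sketch of a Pig EvalFunc written in Java. It is not Intel’s Graph Builder code; the field positions and edge label are illustrative assumptions.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;
    import org.apache.pig.data.TupleFactory;

    // Turns a (user, friend) record into a (source, target, label) edge tuple
    // that downstream relations can group, deduplicate, and load.
    public class ExtractEdge extends EvalFunc<Tuple> {
        private static final TupleFactory TF = TupleFactory.getInstance();

        @Override
        public Tuple exec(Tuple input) throws IOException {
            if (input == null || input.size() < 2) {
                return null; // skip malformed records
            }
            Tuple edge = TF.newTuple(3);
            edge.set(0, input.get(0)); // source vertex id
            edge.set(1, input.get(1)); // target vertex id
            edge.set(2, "knows");      // edge label
            return edge;
        }
    }

In a Pig Latin script such a UDF would be registered with REGISTER and applied inside a FOREACH ... GENERATE statement, with Pig compiling the surrounding workflow into MapReduce jobs as the quote describes.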

Original title and link: Pigs can build graphs too for graph analytics (NoSQL database©myNoSQL)

via: http://blogs.intel.com/intellabs/2013/12/17/pigs-can-build-graphs-too/


Neo4j 2.0 released - A graph browser and query language improvements

I might be wrong, but the Neo4j team seems to have gone back to making a big announcement in December. And it is a big one, as the version number suggests: Neo4j got a new data browser, and Cypher, Neo4j’s graph query language, received significant improvements.

The official announcement contains more details about what’s new in Neo4j. There’s also an interview with Michael Hunger on InfoQ about the new version.

Lastly, there’s also a slide deck about the changes and improvements in Cypher.


Graph Databases Power Marvel Universe's Social Network

During the presentation, Olson used the spandex-clad archer Hawkeye as an example. Throughout his career, the character has alternated between being a villain, a hero, and a covert operative. Additionally, other characters have assumed the mantle of Hawkeye at different points in time, while the man under the mask himself, Clint Barton, has adopted other identities as well.

Ignore it if you are not into comics. Or if you are a DC fan.

Original title and link: Graph Databases Power Marvel Universe’s Social Network (NoSQL database©myNoSQL)

via: http://www.neotechnology.com/neo4j-powers-marvel-universes-social-network/


Why relationships are cool… Relationship in RDBMS vs graph databases

I have to agree with Patrick Durusau on this:

I have been trying to avoid graph “intro” slides and presentations.

There are only so many times you can stand to hear “…all the world is a graph…” as though that’s news. To anyone.

This presentation by Luca is different from the usual introduction to graphs presentation.

Original title and link: Why relationships are cool… Relationship in RDBMS vs graph databases (NoSQL database©myNoSQL)


Updated conclusions about the graph database benchmark - Neo4j can perform much better

As I expected (and as a lot of people quickly confirmed), the results in the graph database benchmark that showed Neo4j being outperformed by MySQL, Vertica, and VoltDB could have been much improved:

Our conclusions from this are that, like any of the complex systems we tested, properly tuning Neo4j can be tricky and getting optimal performance may require some experimentation with parameters. Whether a user of Neo4j can expect to see runtimes on graphs like this measured in milliseconds or seconds depends on workload characteristics (warm / cold cache) and whether setup steps can be amortized across many queries or not.

Looking at the 3 improvements mentioned in the post:

  1. Excluding connection time. I think the change in the benchmark is actually about not accounting for the initialization of the database rather than timing connections. The performance of establishing connections is still pretty important (check Mark Callaghan’s posts about the work at Facebook to improve MySQL’s connection performance).
  2. Warm cache. A benchmark should measure both cold- and warm-cache behavior, as these are two scenarios that any application will face; the sketch after this list shows the difference.
  3. Simpler algorithm. This one is quite tricky. While an application should definitely take the approach that best fits its database, it’s also a matter of knowledge and complexity. You could argue that the more approaches you can use, the better the results you can get. Or, vice versa, that the more approaches are possible, the more time you’ll spend figuring out which one to use instead of getting things done (think Python vs. Perl).
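To illustrate point 2, here is a minimal sketch, not the benchmark’s actual harness, that times the same Cypher query against a cold and then a warm cache using Neo4j 2.0’s embedded API; the store path and query are illustrative assumptions.

    import org.neo4j.cypher.javacompat.ExecutionEngine;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;

    public class WarmVsColdTiming {
        public static void main(String[] args) {
            GraphDatabaseService db =
                    new GraphDatabaseFactory().newEmbeddedDatabase("/tmp/benchmark.db");
            ExecutionEngine engine = new ExecutionEngine(db);
            String query = "MATCH (a)-[:KNOWS*1..2]->(b) RETURN count(b)";

            long cold = time(db, engine, query); // first run: caches are empty
            long warm = time(db, engine, query); // second run: caches are populated
            System.out.printf("cold: %d ms, warm: %d ms%n", cold, warm);

            db.shutdown();
        }

        private static long time(GraphDatabaseService db, ExecutionEngine engine, String query) {
            long start = System.currentTimeMillis();
            try (Transaction tx = db.beginTx()) {
                engine.execute(query).dumpToString(); // force full consumption of the result
                tx.success();
            }
            return System.currentTimeMillis() - start;
        }
    }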

Original title and link: Updated conclusions about the graph database benchmark - Neo4j can perform much better (NoSQL database©myNoSQL)

via: http://istc-bigdata.org/index.php/benchmarking-graph-databases-updates/


Purely awesome - Chess Games and Neo4j

I wasn’t able to follow the post. I got lost in the superb presentation built for it. Chess game replays. Dynamic graphs. Pure awesomeness.

This is by far the most entertaining blog post presentation I’ve seen since I started reading and writing about NoSQL.


Original title and link: Purely awesome - Chess Games and Neo4j (NoSQL database©myNoSQL)

via: http://gist.neo4j.org/?6506717


On the topic of importing data into Neo4j

This post, authored by Rik van Bruggen, mentions the use of the Talend ETL tool, which brought an import job down from more than an hour to a couple of minutes:

This is where it got interesting. The spreadsheet import mechanism worked ok - but it really wasn’t great. It took more than an hour to get the dataset to load - so I had to look for alternatives. Thanks to my French friend and colleague Cédric, I bumped into the Talend ETL (Extract - Transform - Load) tools. I found out that there was a proper neo4j connector that was developed by Zenika, a French integrator that really seems to know their stuff.

There’s also a short video demoing Talend.

✚ I’ve mentioned what I see as the complexity of importing data into graph databases in On Importing Data into Neo4j.

Original title and link: On the topic of importing data into Neo4j (NoSQL database©myNoSQL)

via: http://blog.neo4j.org/2013/07/fun-with-music-neo4j-and-talend.html


On Importing Data into Neo4j

For operations where massive amounts of data flow in or out of a Neo4j database, the interaction with the available APIs should be more considerate than with your usual, ad-hoc, local graph queries.

I’ll tell you the truth: when thinking about importing large amounts of data into a graph database I don’t feel very comfortable. And it’s not about the amount. It’s about the complexity of the data. Nodes. Properties of nodes. Relationships and their properties. And direction.

I hope this series started by Michael Hunger will help me learn more about graph database ETL.
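One of the tools that comes up in this context is Neo4j’s batch insertion API, which trades transactional safety for speed during an initial import. Here is a minimal sketch, not code from the series; the store path, labels, and property names are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;
    import org.neo4j.graphdb.DynamicLabel;
    import org.neo4j.graphdb.DynamicRelationshipType;
    import org.neo4j.unsafe.batchinsert.BatchInserter;
    import org.neo4j.unsafe.batchinsert.BatchInserters;

    public class ImportSketch {
        public static void main(String[] args) {
            BatchInserter inserter = BatchInserters.inserter("/tmp/import.db");
            try {
                // Nodes: properties plus labels.
                Map<String, Object> alice = new HashMap<>();
                alice.put("name", "Alice");
                long aliceId = inserter.createNode(alice, DynamicLabel.label("Person"));

                Map<String, Object> bob = new HashMap<>();
                bob.put("name", "Bob");
                long bobId = inserter.createNode(bob, DynamicLabel.label("Person"));

                // Relationships: a type, a direction (alice -> bob), and their own properties.
                Map<String, Object> since = new HashMap<>();
                since.put("since", 2013);
                inserter.createRelationship(aliceId, bobId,
                        DynamicRelationshipType.withName("KNOWS"), since);
            } finally {
                inserter.shutdown(); // nothing is durable before this call completes
            }
        }
    }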

Original title and link: On Importing Data into Neo4j (NoSQL database©myNoSQL)

via: http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/


Titan: Data Loading and Transactional Benchmark

The Aurelius team describes an advanced benchmark of Titan, a massive-scale property graph database that supports real-time traversals and updates. The benchmark, sponsored by Pearson, was developed and run over 5 months:

The 10 terabyte, 121 billion edge graph was loaded into the cluster in 1.48 days at a rate of approximately 1.2 million edges a second with 0 failed transactions. These numbers were possible due to new developments in Titan 0.3.0 whereby graph partitioning is achieved using a domain-based byte order partitioner.
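For a sense of the programming model, here is a heavily simplified sketch of inserting vertices and an edge into Titan through the Blueprints API it exposed at the time. It is not the benchmark’s loader; the config path, property keys, and edge label are illustrative assumptions.

    import com.thinkaurelius.titan.core.TitanFactory;
    import com.thinkaurelius.titan.core.TitanGraph;
    import com.tinkerpop.blueprints.Vertex;

    public class TitanLoadingSketch {
        public static void main(String[] args) {
            // Opens a Titan graph backed by Cassandra, per the referenced properties file.
            TitanGraph graph = TitanFactory.open("conf/titan-cassandra.properties");

            Vertex a = graph.addVertex(null);
            a.setProperty("name", "alice");
            Vertex b = graph.addVertex(null);
            b.setProperty("name", "bob");

            graph.addEdge(null, a, b, "follows");

            // Commit one tiny transaction; a bulk load would batch many edges per commit.
            graph.commit();
            graph.shutdown();
        }
    }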

✚ The answer to why Titan is built on Cassandra can be found in this interview between Aurelius CTO Matthias Broecheler and DataStax co-founder Matt Pfeil:

[…] we don’t have to worry about things like replication, backup, and snapshots because all of that stuff is handled by Cassandra. We really just focus on: “How do you distribute a graph?”, “How do you represent a graph efficiently in a big table model?”, “How do you do things like edge compression and other things that are very graph specific in order to make the database fast?” And, lastly, “How do you build intelligent index structures so that the graph traversals, which are the core of any graph database, are as fast as possible?”

Original title and link: Titan: Data Loading and Transactional Benchmark (NoSQL database©myNoSQL)

via: http://www.planetcassandra.org/blog/post/educating-the-planet-with-pearson