
Neo4j Tips & Tricks: Handling Long Transactions

An answer to the question: is write performance influenced by the size of transactions? (The more popular form of the question is: why does my write performance drop off when performing many operations in a single transaction?)

The reason is that Neo4j keeps a transaction’s operations in memory until commit, so with enough uncommitted operations the JVM runs out of memory and starts paging to disk.

There are two solutions:

  1. split your transaction into batches of roughly 30,000 operations each, committing after every batch (you obviously give up the ability to roll back the whole run; see the first sketch below)
  2. skip transactions entirely and use the BatchInserter, which writes directly to the persistence layer rather than keeping everything in memory (see the second sketch below).
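
Both approaches are easy to get wrong, so here are two minimal sketches against Neo4j’s embedded Java API. They assume a Neo4j 3.x-era API (GraphDatabaseService, org.neo4j.unsafe.batchinsert); package and method names have shifted between major versions, and the class names, batch size, and property values here are illustrative, not from the original post.

First, periodic commits: close and reopen the transaction every BATCH_SIZE operations so pending state never piles up on the heap.

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Transaction;

    public class BatchedWrites {

        private static final int BATCH_SIZE = 30_000; // commit roughly every 30k operations

        // Inserts `total` nodes, committing after each batch so the
        // transaction state never accumulates in memory. If something
        // fails mid-way, only the current batch is rolled back.
        public static void insertNodes(GraphDatabaseService db, int total) {
            Transaction tx = db.beginTx();
            try {
                for (int i = 0; i < total; i++) {
                    Node node = db.createNode();
                    node.setProperty("index", i);
                    if ((i + 1) % BATCH_SIZE == 0) {
                        tx.success(); // mark the batch as committable
                        tx.close();   // commit it and release its memory
                        tx = db.beginTx();
                    }
                }
                tx.success(); // commit the final partial batch
            } finally {
                tx.close();
            }
        }
    }

Second, the BatchInserter, which skips the transaction machinery entirely and writes straight to the store files. It is single-threaded and non-transactional, with no recovery if the process dies mid-load, so it is only suitable for offline bulk loads.

    import java.io.File;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import org.neo4j.graphdb.Label;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.unsafe.batchinsert.BatchInserter;
    import org.neo4j.unsafe.batchinsert.BatchInserters;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            BatchInserter inserter = BatchInserters.inserter(new File("data/graph.db"));
            try {
                Map<String, Object> aliceProps = new HashMap<>();
                aliceProps.put("name", "Alice");
                long alice = inserter.createNode(aliceProps, Label.label("Person"));

                Map<String, Object> bobProps = new HashMap<>();
                bobProps.put("name", "Bob");
                long bob = inserter.createNode(bobProps, Label.label("Person"));

                inserter.createRelationship(alice, bob,
                        RelationshipType.withName("KNOWS"),
                        Collections.emptyMap());
            } finally {
                // shutdown() flushes everything to the store files;
                // skipping it leaves the database unusable.
                inserter.shutdown();
            }
        }
    }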

via: http://tobym.posterous.com/neo4j-write-performance-fast-then-massive-slo