PostgreSQL: All content tagged as PostgreSQL in NoSQL databases and polyglot persistence
Not much information is available yet on the project page, but it looks like a bidirectional integration of PostgreSQL and Hadoop.
The Postgres Plus Connector for Hadoop provides developers easy access to massive amounts of SQL data for integration with or analysis in Hadoop processing clusters. Now large amounts of data managed by PostgreSQL or Postgres Plus Advanced Server can be accessed by Hadoop for analysis and manipulation using Map-Reduce constructs.
When speaking about PostgreSQL and Hadoop, the first thing that comes to my mind is Daniel Abadi's HadoopDB, which not long ago became the technology behind his startup, a startup that has already raised $9.5 million.
Original title and link: Postgres Plus Connector for Hadoop in Private Beta ( ©myNoSQL)
- part 1: goals and building blocks
- part 2: geo data, PostGIS, and TileStache
- part 3: client side and MongoDB
Original title and link: Tutorial: Building Interactive Maps With Polymaps, TileStache, and MongoDB ( ©myNoSQL)
Three articles about dbShards:
highscalability.com: Product: DbShards - Share Nothing. Shard Everything
What Kind Of Customer Are You Targeting With DbShards? Who Ends Up Using Your Product And Why?
The primary customers for dbShards fit into two categories:
- fast-growing Web or online applications (e.g., Gaming, Facebook apps, social network sites, analytics)
- any application involved in high volume data collection and analysis (e.g., Device Measurement). Any application that requires high rates of read/write transaction volumes with a growing data set is a good candidate for the technology.
I’ve checked the customers page and I don’t see any company listed there that corresponds to the first category above. As for the second category, read on.
insert performance with dbShards + MySQL + InnoDB is 1500-3000 inserts per shard per second, scaling almost linearly with the number of shards. I forgot to ask how many shards this had been tested for.
I assume you are aware of some numbers for NoSQL databases. Not to mention the 750k qps NoSQLized MySQL.
dbShards has good join performance when – you guessed it! – everything being joined is co-located shard-by-shard, because the tables were distributed on the same shard key and/or replicated across each shard. Cory can’t imagine why you’d want to do an inner join under any other circumstances.
While there’s no surprise in the above quote, I’m not sure how to reconcile it with the fact that dbShards targets data analysis clients.
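The co-location point in the quote above can be made concrete: when two tables are distributed on the same shard key, matching rows hash to the same shard, so an inner join on that key never crosses shard boundaries. A minimal sketch of hash-based routing (illustrative only, not dbShards' actual routing logic):

```python
# Hypothetical shard routing: tables distributed on the same shard key
# co-locate matching rows, so joins on that key stay local to one shard.

NUM_SHARDS = 4

def shard_for(key):
    """Route a shard-key value to a shard index (illustrative hashing)."""
    return hash(key) % NUM_SHARDS

# Both tables are distributed on user_id, the shard key.
users = [{"user_id": 7, "name": "ann"}, {"user_id": 12, "name": "bob"}]
orders = [{"user_id": 7, "total": 30}, {"user_id": 12, "total": 99}]

# Every matching pair of rows lands on the same shard, so an inner
# join on user_id needs no cross-shard traffic.
for u, o in zip(users, orders):
    assert shard_for(u["user_id"]) == shard_for(o["user_id"])
```

Joining on any column other than the shard key would require rows from different shards, which is why cross-shard joins are the expensive case.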
dbms2.com: dbShards update
dbShards’ replication scheme works like this:
- A write initially goes to two places at once — to the DBMS and a dbShards agent, both running on the same server.
- The dbShards agent streams to the dbShards agent on the replica server, and receipt of the streamed write is acknowledged.
- At that point the commits start. (Cory seemed to say that the commit on the primary server happens first, but I’m not sure why.)
In essence, two-phase database commit is replaced by two-phase log synchronization.
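The write path described above can be sketched as a toy walk-through (names and structure are illustrative, not the real dbShards agent protocol):

```python
# Toy sketch of the dbShards write path as described above:
# write to DBMS + local agent, stream the log entry to the replica's
# agent, wait for the ack, then commit. Hypothetical, not the real API.

class Agent:
    """A stand-in for a dbShards agent holding an append-only write log."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def receive(self, write):
        self.log.append(write)  # durably log the streamed write
        return "ack"            # acknowledge receipt back to the primary

def replicated_write(write, db, primary_agent, replica_agent):
    # 1. The write goes to two places at once on the primary server:
    #    the DBMS itself and the local dbShards agent.
    db.append(write)
    primary_agent.log.append(write)

    # 2. The primary agent streams the write to the replica's agent
    #    and waits for the receipt to be acknowledged.
    ack = replica_agent.receive(write)
    assert ack == "ack"

    # 3. Only after the replica acknowledges the log entry do the
    #    commits start (primary first, per the description).
    return "committed"

db = []
primary, replica = Agent("primary"), Agent("replica")
status = replicated_write({"id": 1, "v": "x"}, db, primary, replica)
# Both agents hold the same log entry before the commit proceeds,
# which is the "two-phase log synchronization" in a nutshell.
```

The contrast with classic two-phase commit is that what gets synchronized before the commit is the log entry, not the commit vote itself.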
Could anyone explain how these are different?
I know all this may come out as too negative. But while I think dbShards has a decent set of features, some of the statements out there are not doing it any favors.
Good lessons on building high availability services from Disqus, the commenting service:
Interesting to note that MongoDB is not mentioned anywhere in the talk, even though Disqus is powered by MongoDB. Either MongoDB scaling and high availability weren’t a concern (no pun intended, but I doubt that), or MongoDB is not a central piece of the Disqus architecture.
Original title and link: Disqus: Scaling the World’s Largest Django Application (NoSQL databases © myNoSQL)
This is becoming a “trend”:
That’s because you are basically taking your data and vomiting it on the hard drive without any consideration as to whether the data you are writing is sensible or simply dreamed up by magic pixies.
If you missed it, make sure you watch MongoDB is Web Scale.