MySQL: All content tagged as MySQL in NoSQL databases and polyglot persistence

The SSL performance overhead in MongoDB and MySQL

From “How to use MongoDB with SSL”:

As you can see, the SSL overhead is clearly visible: connections are about 0.05ms slower than plain ones. The median for the inserts with SSL is 0.28ms, while plain connections have a median of around 0.23ms. So there is a performance loss of about 25%. These are all just rough numbers; your mileage may vary.
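If you want to reproduce numbers like these yourself, a minimal harness looks something like the following. This is my own sketch with PyMongo, not code from the post; it assumes a mongod started with SSL enabled (e.g. --sslMode allowSSL plus a server certificate), and the connection details are placeholders:

```python
import time
import pymongo  # pip install pymongo

# Plain vs. SSL connections to the same mongod; assumes the server was
# started with SSL enabled and has a certificate configured.
plain = pymongo.MongoClient("localhost", 27017)
secure = pymongo.MongoClient("localhost", 27017, ssl=True)

def median_insert_ms(client, n=1000):
    # Time n single-document inserts and return the median in milliseconds.
    coll = client.test.ssl_bench
    samples = []
    for i in range(n):
        start = time.perf_counter()
        coll.insert_one({"i": i})
        samples.append((time.perf_counter() - start) * 1000)
    return sorted(samples)[n // 2]

print("plain:", median_insert_ms(plain), "ms")
print("ssl:  ", median_insert_ms(secure), "ms")
```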

Then two posts on the “MySQL Performance Blog”: SSL Performance Overhead in MySQL and MySQL encryption performance, revisited:

Some of you may recall my security webinar from back in mid-August; one of the follow-up questions that I was asked was about the performance impact of enabling SSL connections. My answer was 25%, based on some 2011 data that I had seen over on yaSSL’s website, but I included the caveat that it is workload-dependent, because the most expensive part of using SSL is establishing the connection.

These two articles dive much deeper and more scientifically into the impact of using SSL with MySQL. The results are interesting, and the recommendations are well worth the time spent reading them.

Original title and link: The SSL performance overhead in MongoDB and MySQL (NoSQL database©myNoSQL)


MySQL Error: Too many connections

Just a strange number from MySQL:

By default 151 is the maximum permitted number of simultaneous client connections in MySQL 5.5

And then the issue related to it:

MySQL uses one thread per client connection, and many active threads are a performance killer. Usually a high number of concurrent connections executing queries in parallel can cause a significant slowdown and increase the chances of deadlocks. Prior to MySQL 5.5 it didn’t scale well, although MySQL has been getting better and better since then. Still, if you have hundreds of active connections doing real work (this doesn’t count sleeping [idle] connections), then memory usage will grow: each connection requires per-thread buffers, implicit in-memory tables require more memory on top of the global buffers, and there are the tmp_table_size/max_heap_table_size allocations each connection may use, although these are not allocated immediately per new connection.
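For reference, checking and raising the limit at runtime is straightforward. Here is a sketch using mysql-connector-python; the credentials and the new value are placeholders, not recommendations:

```python
import mysql.connector  # pip install mysql-connector-python

# Connect as a user with the SUPER privilege (credentials are placeholders).
conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Check the current limit: 151 by default in MySQL 5.5.
cur.execute("SHOW VARIABLES LIKE 'max_connections'")
print(cur.fetchone())  # ('max_connections', '151')

# Raise it at runtime; to survive a restart, also set max_connections in my.cnf.
cur.execute("SET GLOBAL max_connections = 500")
```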

Facebook has been doing a lot of work in this area. Just one example: Making MySQL accept connections faster; I strongly encourage you to read it if you have a busy MySQL server.

Original title and link: MySQL Error: Too many connections (NoSQL database©myNoSQL)

via: http://www.mysqlperformanceblog.com/2013/11/28/mysql-error-too-many-connections/


Benchmarking graph databases... with unexpected results

A team from MIT CSAIL set out to benchmark a graph database (Neo4j) against three relational databases with different models: row-based (MySQL), in-memory (VoltDB), and column-based (Vertica). The results are interesting, to say the least:

We can see that relational databases outperform Neo4j on PageRank by up to two orders of magnitude. This is because PageRank involves full scanning and joining of the nodes and edges tables, something that relational databases are very good at. Finding shortest paths involves starting from a source node and successively exploring its outgoing edges, a very different access pattern from PageRank. Still, we see from Figure 1(b) that relational databases match or outperform Neo4j in most cases. In fact, Vertica is more than twice as fast as Neo4j. The only exception is VoltDB over the Twitter dataset.
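To make the access-pattern argument concrete, here is one PageRank iteration written as the scan, join, and aggregate a relational engine would run over nodes and edges tables. This is a toy Python sketch of my own, not code from the benchmark:

```python
from collections import defaultdict

# Toy graph stored relationally: nodes(id -> rank) and edges(src, dst).
nodes = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
edges = [(1, 2), (2, 3), (3, 1), (4, 1), (2, 4)]
DAMPING = 0.85

def pagerank_iteration(ranks, edges):
    # Pass 1: full scan of the edges table to compute out-degrees
    # (in SQL: SELECT src, COUNT(*) FROM edges GROUP BY src).
    out_degree = defaultdict(int)
    for src, _ in edges:
        out_degree[src] += 1
    # Pass 2: join edges against current ranks, aggregate by destination
    # (in SQL: edges JOIN nodes ON src, GROUP BY dst). This is exactly
    # the scan/join/aggregate workload relational engines are built for.
    incoming = defaultdict(float)
    for src, dst in edges:
        incoming[dst] += ranks[src] / out_degree[src]
    n = len(ranks)
    return {v: (1 - DAMPING) / n + DAMPING * incoming[v] for v in ranks}

for _ in range(20):  # iterate toward convergence
    nodes = pagerank_iteration(nodes, edges)
print(nodes)
```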

Being beaten at your own game is not a good thing. I hope this is just a fluke in the benchmark (misconfiguration) or a result particular to those data sets.

Original title and link: Benchmarking graph databases… with unexpected results (NoSQL database©myNoSQL)

via: http://istc-bigdata.org/index.php/benchmarking-graph-databases/


Google moves from MySQL to MariaDB

Jack Clark for The Register, quoting Google senior systems engineer Jeremy Cole’s talk at XLDB:

“We’re running primarily on [MySQL] 5.1, which is a little outdated, and so we’re moving to MariaDB 10.0 at the moment.”

I’m wondering how much of this decision is technical and how much is political. While Jack Clark points to the previous “disagreements” between Google and Oracle, when I say political decisions I mean more than this: access to the various bits of the code (e.g. tests, security issues), control over the future of the product, etc.

Original title and link: Google moves from MySQL to MariaDB (NoSQL database©myNoSQL)

via: http://www.theregister.co.uk/2013/09/12/google_mariadb_mysql_migration/


Benchmarking the performance impact of Foreign Keys in MySQL Cluster 7.3

FOREIGN KEY support in MySQL Cluster is a big step forward. […] It is implemented natively at the Data Node level, where NDB stores its data. It is well known that FOREIGN KEYs come with an overhead. E.g., when writing a record into a child table, the referenced key’s existence must be checked in the parent table. Since data is distributed across multiple Data Nodes, the child record and parent record may be on different nodes or shards (Node Groups). Hence there is extra work to be done in terms of internal triggers and network communication, the latter being the more costly. The performance impact must be taken into account when doing capacity planning of the cluster. The question is how big the impact is, and that is what we will look at next.
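For context, the code path being benchmarked can be set up with a minimal schema like this. It is a sketch of my own assuming mysql-connector-python and a running Cluster 7.3 SQL node; all names are illustrative:

```python
import mysql.connector  # pip install mysql-connector-python

# Connect to a SQL node of the cluster (connection details are placeholders).
conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()

# Parent and child tables on the NDB engine. Because rows are partitioned
# across Data Nodes, the parent row referenced by a child insert may live
# on a different node, turning each FK check into network round trips.
cur.execute("CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB")
cur.execute("""
    CREATE TABLE child (
        id INT PRIMARY KEY,
        parent_id INT,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
    ) ENGINE=NDB
""")

cur.execute("INSERT INTO parent VALUES (1)")
cur.execute("INSERT INTO child VALUES (10, 1)")  # triggers the existence check
conn.commit()
```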

These micro-benchmark numbers are looking good. But here are a couple of questions I couldn’t answer after reading the post:

  1. how was the data distributed inside the cluster? Those results could have been achieved with most of the foreign keys actually living on the same machine.
  2. how does the impact on performance vary with the size of the cluster? (put differently, how effective is the routing of FK checks, and what’s the impact on cluster network traffic and locks?)

Original title and link: Benchmarking the performance impact of Foreign Keys in MySQL Cluster 7.3 (NoSQL database©myNoSQL)

via: http://johanandersson.blogspot.com/2013/06/benchmarking-performance-impact-of.html


Nokia’s Big Data Ecosystem: Hadoop, Teradata, Oracle, MySQL

Nokia’s big data ecosystem consists of a centralized, petabyte-scale Hadoop cluster that is interconnected with a 100-TB Teradata enterprise data warehouse (EDW), numerous Oracle and MySQL data marts, and visualization technologies that allow Nokia’s 60,000+ users around the world to tap into the massive data store. Multi-structured data is constantly being streamed into Hadoop from the relational systems, and hundreds of thousands of Scribe processes run every day to move data from, for example, servers in Singapore to a Hadoop cluster in the UK. Nokia is also a big user of Apache Sqoop and Apache HBase.

In the coming years you’ll hear more and more stories (sales pitches, really) about single unified platforms solving all these problems at once. But the platforms that will survive and thrive are those that accomplish two things:

  1. keep the data gates open: in and out.
  2. work with other platforms to make this efficient for users.

Original title and link: Nokia’s Big Data Ecosystem: Hadoop, Teradata, Oracle, MySQL (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2013/04/customer-spotlight-nokias-big-data-ecosystem-connects-cloudera-teradata-oracle-and-others/


MySQL 5.6, InnoDB and fast storage: 240k QPS

Mark Callaghan runs some benchmarks against MySQL 5.6.11:

Using MySQL 5.6.11 and InnoDB with a few hacks the peak throughput was about 240,000 QPS and 210,000 block reads/second. The test server has 32 cores (16 physical cores, 32 logical cores with HT enabled). This is a great result that can probably be even better. Contention on fil_system->mutex was the bottleneck and I think that can be improved (see feature request #69276). I wonder if 400,000 block reads/second is possible?

Original title and link: MySQL 5.6, InnoDB and fast storage: 240k QPS (NoSQL database©myNoSQL)

via: http://mysqlha.blogspot.com/2013/05/mysql-56-innodb-and-fast-storage.html


Wikipedia Adopts MariaDB

The technical details of Wikipedia’s migration from MySQL to MariaDB:

As a read-heavy site, Wikipedia aggressively uses edge caching. Approximately 90% of pageviews are served entirely from the edge while at the application layer, we utilize both memcached and redis in addition to MySQL. Despite that, the MySQL databases serving English Wikipedia alone reach a daily peak of ~50k queries/second. Most are read queries served by load-balanced slaves, depending on consistency requirements. 80% of the English Wikipedia query load (up to 40k qps) are typically handled by just two database servers at any given time. Our most common query type (40% of all) has a median execution time of ~0.2ms and a 95th percentile time of ~50ms. To successfully use MariaDB in production, we need it to keep up with the level of performance obtained from Facebook’s MySQL fork, and to behave consistently as traffic patterns change.

As you can see in this post, the only “political” point made is tucked in among the real reasons:

Equally important, as supporters of the free culture movement, the Wikimedia Foundation strongly prefers free software projects; that includes a preference for projects without bifurcated code bases between differently licensed free and enterprise editions. We welcome and support the MariaDB Foundation as a not-for-profit steward of the free and open MySQL related database community.

A slightly different take from Wikipedia Migrates to MariaDB.

Original title and link: Wikipedia Adopts MariaDB (NoSQL database©myNoSQL)

via: https://blog.wikimedia.org/2013/04/22/wikipedia-adopts-mariadb/


MySQL in the Cloud: Discontinuing of Xeround Cloud Database Public Service

Cloud and MySQL related:

We are deeply sorry to announce that Xeround’s public cloud offering will be discontinued soon. All Xeround FREE database instances will be terminated on May 8th, and the paid plans terminated on May 15th.

This was announced on May 1st.

✚ This only means more business for Amazon RDS.

Original title and link: MySQL in the Cloud: Discontinuing of Xeround Cloud Database Public Service (NoSQL database©myNoSQL)

via: http://xeround.com/blog/2013/05/discontinuing-of-xeround-cloud-database-public-service


Wikipedia Migrates to MariaDB... but facts are facts

Jon Buys:

There was, and continues to be, concern over Oracle’s treatment of the open source competitor to their own Oracle database. I personally have wondered what motivation, if any, Oracle has to maintain MySQL. They may simply be milking the revenue stream created by MySQL AB until the well goes dry. Since MariaDB is surpassing MySQL in performance and community goodwill, that day may come sooner rather than later.

A couple of little-known things:

  1. Oracle has been home to InnoDB since 2005. InnoDB was and continues to be the default, recommended engine for MySQL, both before and after Oracle acquired MySQL through Sun Microsystems.
  2. Oracle has been home to Sleepycat’s Berkeley DB since 2006. Those products are definitely not dead, even if, community-wise, Oracle may not have put much effort into extending them.

Facts are facts.

Original title and link: Wikipedia Migrates to MariaDB… but facts are facts (NoSQL database©myNoSQL)

via: http://ostatic.com/blog/wikipedia-migrates-to-mariadb


Amazon Web Services Annual Revenue Estimation

Over the weekend, Christopher Mims published an article in which he derives a figure for Amazon Web Services’ annual revenue: $2.4 billion:

Amazon is famously reticent about sales figures, dribbling out clues without revealing actual numbers. But it appears the company has left enough hints to, finally, discern how much revenue it makes on its cloud computing business, known as Amazon Web Services, which provides the backbone for a growing portion of the internet: about $2.4 billion a year.

There’s no way to decompose this number into the revenue of each AWS solution. For the data space I’d be interested in:

  1. S3 revenues. This is the space Basho’s Riak CS competes in.

    After writing my first post about Riak CS, I’ve learned that in Japan, the same place where Riak CS powers Yahoo!’s new cloud storage, Gemini Mobile Technologies has been offering local ISPs a similar S3 service built on top of Cassandra.

  2. Redshift is pretty new and, while I’m not aware of immediate competitors (what am I missing?), I don’t think it accounts for a significant part of this revenue, even if some of the early users, like Airbnb, report getting very good performance and costs from it.

    Redshift is powered by ParAccel, which, over the weekend, was acquired by Actian.

  3. Amazon Elastic MapReduce. This is another interesting space, one Microsoft wants a share of with its Azure HDInsight, developed in collaboration with Hortonworks.

    In this space there’s also the MapR and Google Compute Engine combination, which seems to be extremely performant.

  4. Interestingly, Amazon is also making money from some of the competitors of its Amazon DynamoDB and RDS services. That’s the advantage of owning the infrastructure.

Original title and link: Amazon Web Services Annual Revenue Estimation (NoSQL database©myNoSQL)


Using Redis to Optimize MySQL Queries

I somehow missed this post from the Flickr team describing their use of (app-enforced) capped sorted sets in Redis as a sort of reduced, optimized secondary index for MySQL:

[…] the bottleneck was not in generating the list of photos for your most recently active contact, it was just in finding who your most recently active contact was (specifically if you have thousands or tens of thousands of contacts). What if, instead of fully denormalizing, we just maintain a list of your recently active contacts? That would allow us to optimize the slow query, much like a native MySQL index would; instead of needing to look through a list of 20,000 contacts to see which one has uploaded a photo recently, we only need to look at your most recent 5 or 10 (regardless of your total contacts count)!

This is the first time I’m encountering this approach, where a NoSQL database is used not to directly serve the final data (usually in a denormalized format), but rather to optimize access to the master data. Basically, this is a metadata-layer optimizer. Neat!
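The pattern itself is easy to sketch. Here is my own illustration with redis-py, not Flickr’s code; the key name, cap size, and the idea of scoring by timestamp are all assumptions for the example:

```python
import time
import redis  # pip install redis

r = redis.Redis()
CAP = 10  # keep only the N most recently active contacts (cap enforced by the app)

def record_upload(user_id, contact_id):
    """Called when a contact uploads a photo: bump them in the capped set."""
    key = f"recent_contacts:{user_id}"
    r.zadd(key, {contact_id: time.time()})  # score = last-activity timestamp
    r.zremrangebyrank(key, 0, -(CAP + 1))   # trim everything below the top CAP

def recent_contacts(user_id, n=5):
    """The n most recently active contacts, newest first. These ids then
    feed a cheap, indexed MySQL query instead of a 20,000-contact scan."""
    return r.zrevrange(f"recent_contacts:{user_id}", 0, n - 1)
```

The ZREMRANGEBYRANK call is a no-op while the set holds fewer than CAP members and otherwise drops the oldest entries, which is what keeps the “index” small enough to stay fast.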

Original title and link: Using Redis to Optimize MySQL Queries (NoSQL database©myNoSQL)

via: http://code.flickr.net/2013/03/26/using-redis-as-a-secondary-index-for-mysql/