MySQL: All content tagged as MySQL in NoSQL databases and polyglot persistence
- MySQL works well enough most of the time that it’s worth using. Twitter values stability over features so they’ve stayed with older releases.
- MySQL doesn’t work for ID generation and graph storage.
- MySQL is used for smaller datasets of < 1.5TB, which is the size of their RAID array, and as a backing store for larger datasets.
- Typical database server config: HP DL380, 72GB RAM, 24 disk RAID10. Good balance of memory and disk.
In my summary of the talk I’ve noted:
- Use MySQL when it works, something else when not - fortunately MySQL often does work
- MySQL is used by Twitter because it’s robust, replication works and it’s easy to use and run
- MySQL doesn’t work well for graphs or auto_increment ID generation, and replication lag is a problem
- MySQL replication improvements, like a crash-safe, multi-threaded slave, are what they need
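On the ID generation point: Twitter eventually moved this out of MySQL entirely with its Snowflake service. A minimal Python sketch of the idea — not Twitter’s actual implementation — composes a timestamp, a worker id, and a per-millisecond sequence into a single time-ordered 64-bit ID, with no central auto_increment counter:

```python
import time

EPOCH_MS = 1288834974657  # custom epoch; the exact value is an assumption


class IdGenerator:
    """Snowflake-style IDs: 41 bits of milliseconds, 10 bits of worker id,
    12 bits of per-millisecond sequence. Unique per worker, roughly
    time-ordered across workers."""

    def __init__(self, worker_id):
        assert 0 <= worker_id < 1024
        self.worker_id = worker_id
        self.last_ms = -1
        self.sequence = 0

    def next_id(self):
        ms = int(time.time() * 1000)
        if ms == self.last_ms:
            self.sequence = (self.sequence + 1) & 0xFFF
            if self.sequence == 0:  # sequence exhausted: wait for the next ms
                while ms <= self.last_ms:
                    ms = int(time.time() * 1000)
        else:
            self.sequence = 0
        self.last_ms = ms
        return ((ms - EPOCH_MS) << 22) | (self.worker_id << 12) | self.sequence


gen = IdGenerator(worker_id=1)
a, b = gen.next_id(), gen.next_id()
print(a < b)  # → True: IDs increase monotonically on one worker
```

No coordination between database shards is needed, which is exactly what auto_increment cannot offer.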
But Twitter is also one of the most prominent examples of polyglot persistence. While MySQL is an important piece of the Twitter architecture, it is not the only storage or data processing engine.
The following other data solutions get mentioned in Jeremy’s talk:
- Cassandra is used for high-velocity writes and lower-velocity reads. The advantages are that Cassandra can run on cheaper hardware than MySQL, it scales out more easily, and they like its schemaless design.
- Hadoop is used to process unstructured and large datasets, hundreds of billions of rows.
- Vertica is being used for analytics and large aggregations and joins so they don’t have to write MapReduce jobs.
Yet that’s not the whole story. Twitter uses Cassandra and Memcached for real-time URL fetchers, and it also experimented with Gizzard for Redis. After acquiring BackType, Twitter gained and then open sourced Storm, a Hadoop-like real-time data processing tool. And who knows what’s in the Twitter labs right now.
I’m embedding below Jeremy Cole’s “Big and Small Data at @Twitter”:
Q: Copy MySQL to RAM as a poor man’s memcached replacement?
A: Using the MEMORY storage engine on a read-only slave to serve your reads is exactly what you want and a sane setup. Forget “dumping it to disk” or other strange things.
You can even run the slave as a second instance on your existing server if you can’t afford a dedicated slave. Properly tuning the MySQL parameters for mostly-read workloads will also bring a significant performance improvement.
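The pattern is the same whatever the engine: copy a read-mostly table into RAM and serve reads from there while writes keep going to the durable master. A rough Python analogy using SQLite — one connection standing in for the InnoDB master, an in-memory one for the MEMORY-engine slave; table and column names are invented:

```python
import sqlite3

# "Master" stands in for the durable InnoDB table.
master = sqlite3.connect(":memory:")
master.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
master.executemany("INSERT INTO users VALUES (?, ?)",
                   [(1, "ada"), (2, "lin")])

# "Replica" stands in for a MEMORY-engine read slave.
replica = sqlite3.connect(":memory:")
replica.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# "Replication": bulk-copy the current rows into the in-memory replica.
replica.executemany("INSERT INTO users VALUES (?, ?)",
                    master.execute("SELECT id, name FROM users"))

# Reads now hit the in-memory copy; writes still go to the master.
print(replica.execute("SELECT name FROM users WHERE id = 2").fetchone()[0])
# → lin
```

In real MySQL the copy step is just replication into a slave whose tables use `ENGINE=MEMORY`, so it stays continuously up to date instead of being bulk-loaded.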
Original title and link: MySQL MEMORY as Poor Man’s Memcached Replacement ( ©myNoSQL)
Q: Considering performance only, can a MySQL Cluster beat a custom data sharding MySQL solution?
A: I would say that MySQL Cluster could achieve higher throughput per host than sharded MySQL+InnoDB, provided that:
- Queries are simple
- All data fits in-memory
In terms of latency, MySQL Cluster should be more stable than sharded MySQL, and actual latencies for purely in-memory data could be similar. As queries become more complex and data is stored on disk, the comparison becomes less clear-cut.
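For context, “custom data sharding” usually means a routing layer like the following living in the application. A sketch assuming a simple hash-on-key scheme; the shard names are invented:

```python
import hashlib

# A toy shard router: map a user id to one of N MySQL shards.
# In a real deployment each entry would be a connection pool; here
# they are just labels. Hash-based routing makes resharding painful,
# which is part of the trade-off against MySQL Cluster's transparent
# data distribution.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]


def shard_for(user_id: int) -> str:
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]


print(shard_for(42) == shard_for(42))  # → True: routing is deterministic
```

Cross-shard queries and joins then become the application’s problem — one reason the comparison above only holds while queries stay simple.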
Make sure you read the complete answer as it covers some more MySQL Sharding vs MySQL Cluster pros and cons.
Original title and link: MySQL Sharding vs MySQL Cluster ( ©myNoSQL)
Michael Shallop describes in this post how he “built and re-built” a geospatial table, replacing several tables in MySQL with MongoDB:
The mongo geospatial repository will be replacing several tables in the legacy mySQL system – as you may know, mongodb comes with full geospatial support so executing queries against a collection (table) built in this manner is shocking in terms of its response speeds — especially when you compare those speeds to the traditional mySQL algorithms for extracting geo-points based on distance ranges for lat/lon coordinates. The tl;dr for this paragraph is: no more hideous trigonometric mySQL queries!
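The “hideous trigonometric queries” he means are the haversine-style distance formulas you otherwise have to inline into SQL and evaluate for every row. The same math in Python, for a sense of what a geospatial index spares you from:

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km —
    the kind of math a pre-geo-index MySQL query had to inline
    and compute per row, with no index to prune candidates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))


# New York to London: roughly 5,570 km
print(round(haversine_km(40.7128, -74.0060, 51.5074, -0.1278)))
```

A geospatial index instead prunes candidates spatially first, which is where the response-speed difference he describes comes from.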
But what actually caught my attention was this paragraph:
What I learned in this exercise was that the key to architecting a mongo collection requires you to re-think how data is stored. Mongo stores data as a collection of documents. The key to successful thinking, at least in terms of mongo storage, is denormalization of your data objects.
This made me realize that MongoDB adoption benefits hugely from the fact that its data model and querying are the closest to those of relational databases, requiring no radical mind shift from developers who have touched a database at least once. It is like knowing one programming language and learning a second that follows almost the same paradigms.
The same cannot be said about key-value stores, multi-dimensional maps, MapReduce algorithms, or graph databases. Any of these requires one to dismiss pretty much everything learned in the relational model and completely remodel the world. It’s a tougher job, but when used right, the reward pays off.
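Denormalization in document terms: instead of joining a parent table against a child table at read time, you embed the child rows in the parent document. A sketch with plain Python dicts standing in for MongoDB documents; the collection and field names are invented:

```python
# Relational shape: two tables, joined at query time.
places = [{"id": 1, "name": "Cafe Mono"}]
reviews = [{"place_id": 1, "stars": 5}, {"place_id": 1, "stars": 4}]

# Document shape: one denormalized document per place — reads need
# no join, at the cost of duplicated data if children are shared.
place_doc = {
    "name": "Cafe Mono",
    "reviews": [{"stars": 5}, {"stars": 4}],
}

# A single document lookup answers what previously took a join:
avg = sum(r["stars"] for r in place_doc["reviews"]) / len(place_doc["reviews"])
print(avg)  # → 4.5
```

This is the re-thinking the quoted paragraph describes: you model for how the data is read, not for normalized storage.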
Original title and link: MongoDB, Data Modeling, and Adoption ( ©myNoSQL)
As revealed by Ryan Carver in a web pulp TV interview:
- Besides MySQL, the stack also contains Redis and MongoDB.
- Redis is used for stashing Resque data, Vanity metrics, etc.
- MongoDB is used for storing CDN logs, basic analytics data, traffic-tracking data, etc.
- Typekit has a unique type of revenue-share deal with its Type foundry partners, distributing revenues based on the popularity/usage of font faces.
- MongoDB is particularly used for such usage-based data collection and calculation along with its built-in MapReduce framework for reporting.
- Ryan thinks on-the-fly report generation is technically very much possible with MapReduce.
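The shape of such a usage-counting MapReduce job — emit one count per font serve, then reduce by summing per key — can be sketched in Python; the font names and serving-log records are invented for illustration:

```python
from collections import defaultdict

# Simulated serving log: one record per font serve (invented data).
log = [
    {"font": "Adelle"}, {"font": "Proxima Nova"},
    {"font": "Adelle"}, {"font": "Adelle"},
]

# map: emit (font, 1) per record; reduce: sum counts per key —
# structurally what MongoDB's mapReduce would run server-side.
counts = defaultdict(int)
for key, value in ((rec["font"], 1) for rec in log):
    counts[key] += value

# Usage shares, e.g. as input to a revenue-share calculation.
total = sum(counts.values())
shares = {font: n / total for font, n in counts.items()}
print(shares["Adelle"])  # → 0.75
```

The exact revenue-share logic Typekit uses is not described in the interview; this only shows the counting shape.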
About Typekit infrastructure:
- Ryan says Typekit currently has about a dozen servers in total, hosted on Slicehost.
- Typekit plans to move to EC2 in the near future because of its easy scaling and flexibility.
- They are currently preparing the cloud migration with Chef, rebuilding Typekit’s operations infrastructure.
Now go watch the whole interview.
Original title and link: Typekit Architecture Includes Redis, MongoDB, and MySQL ( ©myNoSQL)
A great new article from Todd Hoff dissects the DataSift architecture:
In terms of data store, DataSift architecture includes:
- MySQL (Percona server) on SSD drives
- HBase cluster (currently ~30 Hadoop nodes, 400TB of storage)
- Memcached (cache)
- Redis (still used for some internal queues, but probably going to be phased out soon)
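Memcached’s role in a stack like this is the usual cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A sketch with a dict standing in for memcached and a function standing in for the MySQL query — all names are invented:

```python
cache = {}      # stands in for memcached
db_calls = 0    # instrument how often we hit the "database"


def query_db(key):
    """Stand-in for the MySQL/Percona lookup."""
    global db_calls
    db_calls += 1
    return f"row-for-{key}"


def get(key):
    if key in cache:          # cache hit: no database round-trip
        return cache[key]
    value = query_db(key)     # cache miss: fetch and populate
    cache[key] = value
    return value


get("user:1"); get("user:1"); get("user:2")
print(db_calls)  # → 2: the second read of user:1 came from the cache
```

A real deployment adds expiry and invalidation on writes, which this sketch omits.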
Leave whatever you were doing and go read it now.
Original title and link: DataSift Using MySQL, HBase, Memcached to Deal With Twitter Firehose ( ©myNoSQL)