aws: All content tagged as aws in NoSQL databases and polyglot persistence
The topic of running MongoDB on Amazon Web Services using Elastic Block Storage came up again in Engine Yard's 10 tips for running MongoDB:
you should know that the performance of Amazon’s Elastic Block Storage (EBS) can be inconsistent.
Following up on that, Mahesh P-Subramanya aptly added:
Indeed! I’d actually take it a step further and say Do not use EBS in any environment where reliability and/or performance characteristics of your disk-access are important. Or, to put it differently, asynchronous backups - OK, disk-based databases - Not So Much.
Interestingly though, some presentations from earlier this year (MongoDB in the Amazon Cloud and Running MongoDB on the Cloud) left me, and others, with the impression that EBS should not be dismissed so quickly.
Original title and link: MongoDB and Amazon Elastic Block Storage (EBS) ( ©myNoSQL)
This is how things are rolling these days: while Microsoft talks about offering Hadoop integration with Project Isotope in 2012, Amazon is announcing the immediate availability of new, beefier instances (Cluster Compute Eight Extra Large, cc2.8xlarge) and reduced prices for some of the existing instances.
Original title and link: Hadoop: Amazon Elastic MapReduce and Microsoft Project Isotope ( ©myNoSQL)
We have evaluated various options to back up data inside HBase and built a solution. This post will explain the options and also provide the solution for anyone to download and implement on their own HBase installations.
After considering these options, we developed a simple tool that backs up data to Amazon S3 and restores it when needed. Another requirement was to take a full backup over the weekend and a daily incremental backup.
In a recovery scenario, the tool should first initialize a clean environment with all tables created and populated with the latest full backup data, and then apply all incremental backups sequentially. However, with this method deletes are not captured, which may leave some stale data in the tables; this is a known disadvantage of this approach to backup and restore.
Internally, the backup program uses the HBase Export and Import tools, which run as MapReduce jobs.
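As a rough sketch of what that means (the exact invocations are internal to the tool; the bucket names and timestamps below are made up), the underlying HBase 0.90 MapReduce jobs look like this:
# Illustrative only: the raw Export/Import jobs the tool wraps.
# Full export of one table to S3 (1 version, unrestricted time range):
hbase org.apache.hadoop.hbase.mapreduce.Export tab1 s3://S3BucketABC/tab1_full 1
# Incremental export: only cells written between starttime and endtime
# (milliseconds since epoch). Deletes leave no cells in that window,
# which is why this approach cannot capture them:
hbase org.apache.hadoop.hbase.mapreduce.Export tab1 s3://S3BucketABC/tab1_incr 1 1322697600000 1322699400000
# Restore: import the full backup first, then each incremental in order:
hbase org.apache.hadoop.hbase.mapreduce.Import tab1 s3://S3BucketABC/tab1_full
hbase org.apache.hadoop.hbase.mapreduce.Import tab1 s3://S3BucketABC/tab1_incr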
Top 10 Features of the backup tool
- Exports the complete data for a given set of tables to an S3 bucket.
- Incrementally exports data for a given set of tables to an S3 bucket.
- List all complete as well as incremental backup repositories.
- Restore a table from backup based on the given backup repository.
- Runs as MapReduce jobs.
- In case of connection failures, retries with increasing delays.
- Handles special characters (such as _) that would otherwise break the export and import activities.
- Enhances the existing Export and Import tools with detailed logging that reports failures, rather than just exiting with a program status of 1.
- Uses a human-readable time format (YYYY.MM.DD 24HH:MINUTE:SECOND:MILLISECOND TIMEZONE) for taking, listing, and restoring backups, rather than system tick time or Unix epoch time (time represented as a number instead of a readable format).
- Takes all parameters from the command line, which allows a cron job to run the tool at regular intervals (see the sample crontab below).
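For example, a hypothetical crontab matching the weekend-full/daily-incremental requirement described above (the script paths are placeholders; backup_full.sh is shown later in this post, and backup_incremental.sh would follow the same pattern):
# Hypothetical schedule; adjust paths and times to your installation.
# Weekly full backup, Sunday at 01:00:
0 1 * * 0 /mnt/hbackup/bin/backup_full.sh >> /var/log/hbackup_full.log 2>&1
# Daily incremental backup at 01:00, Monday through Saturday:
0 1 * * 1-6 /mnt/hbackup/bin/backup_incremental.sh >> /var/log/hbackup_incr.log 2>&1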
Setting up the tool
- Download the package from hbackup.install.tar. This package includes the necessary jar files and the source code.
- Set up a configuration file: download the hbase-site.xml file and add the required settings to it (the sketch after these steps shows a commonly assumed example).
- Set up the classpath with all the jars, including the hbackup-1.0-core.jar file, bundled inside the downloaded hbackup.install.tar. Make sure hbackup-1.0-core.jar is at the beginning of the classpath. In addition, add to the CLASSPATH the configuration directory that holds the hbase-site.xml file.
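The post does not spell out which settings need to be added to hbase-site.xml. Since the tool bundles the jets3t jar and writes to s3:// URLs, a reasonable assumption is that it goes through Hadoop's S3 filesystem, which in that era took its credentials from properties like these (an assumption on my part, not something the post confirms):
<!-- Assumed S3 credential properties for Hadoop's s3:// filesystem. -->
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>YOUR_AWS_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
</property>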
Running the tool
Usage: the tool runs in 4 modes: backup.full, backup.incremental, backup.history, and restore.
mode=backup.full tables="comma separated tables" backup.folder=S3-Path date="YYYY.MM.DD 24HH:MINUTE:SECOND:MILLSECOND TIMEZONE"
mode=backup.full tables=tab1,tab2,tab3 backup.folder=s3://S3BucketABC/ date="2011.12.01 17:03:38:546 IST"
mode=backup.full tables=tab1,tab2,tab3 backup.folder=s3://S3BucketABC/
Incremental backup usage:
mode=backup.incremental tables="comma separated tables" backup.folder=S3-Path duration.mins=Minutes
Example backing up the changes that occurred in the last 30 minutes:
mode=backup.incremental backup.folder=s3://S3BucketABC/ duration.mins=30 tables=tab1,tab2,tab3
Listing usage, which shows all past archives (incremental ones end with a distinguishing suffix):
mode=backup.history backup.folder=S3-Path
Restore usage, which adds back the rows archived at the given date; first apply the full backup, then apply the incremental backups sequentially:
mode=restore backup.folder=S3-Path/ArchiveDate tables="comma separated tables"
Example:
mode=restore backup.folder=s3://S3-Path/DAY_MON_HH_MI_SS_SSS_ZZZ_YYYY tables=tab1,tab2,tab3
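Putting the modes together, a hypothetical recovery session would list the archives and then replay them in order (the archive folder names below are placeholders to be picked from the backup.history listing):
# 1. List available archives to find the latest full and subsequent incrementals:
mode=backup.history backup.folder=s3://S3BucketABC/
# 2. Restore the latest full backup into the clean environment:
mode=restore backup.folder=s3://S3BucketABC/<full-archive-date> tables=tab1,tab2,tab3
# 3. Apply each later incremental archive in chronological order:
mode=restore backup.folder=s3://S3BucketABC/<incremental-archive-date> tables=tab1,tab2,tab3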
Sample scripts to run the backup tool
$ cat setenv.sh
for file in `ls /mnt/hbase/lib`
do
  export CLASSPATH=$CLASSPATH:/mnt/hbase/lib/$file;
done
export CLASSPATH=/mnt/hbase/hbase-0.90.4.jar:$CLASSPATH
export CLASSPATH=/mnt/hbackup/hbackup-1.0-core.jar:/mnt/hbackup/java-xmlbuilder-0.4.jar:/mnt/hbackup/jets3t-0.8.1a.jar:/mnt/hbackup/conf:$CLASSPATH
$ cat backup_full.sh
. /mnt/hbackup/bin/setenv.sh
dd=`date "+%Y.%m.%d %H:%M:%S:000 %Z"`
echo Backing up for date $dd
for table in `echo table1 table2 table3`
do
  /usr/lib/jdk/bin/java com.bizosys.oneline.maintenance.HBaseBackup mode=backup.full backup.folder=s3://mybucket/ tables=$table "date=$dd"
  sleep 10
done
List of backups:
$ cat list.sh
. /mnt/hbackup/bin/setenv.sh
/usr/lib/jdk/bin/java com.bizosys.oneline.maintenance.HBaseBackup mode=backup.history backup.folder=s3://mybucket
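The post only shows full-backup and listing scripts; a backup_incremental.sh following the same pattern (hypothetical, with a daily 1440-minute window to match the stated requirement) might look like:
$ cat backup_incremental.sh
. /mnt/hbackup/bin/setenv.sh
# Hypothetical: back up everything written in the last 24 hours (1440 minutes).
/usr/lib/jdk/bin/java com.bizosys.oneline.maintenance.HBaseBackup mode=backup.incremental backup.folder=s3://mybucket/ duration.mins=1440 tables=table1,table2,table3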
Original title and link: Backing Up HBase to Amazon S3 ( ©myNoSQL)
Google has just announced a new (lab) product: Google Cloud SQL, Google's Database-as-a-Service version of Amazon RDS. Based on the initial information, Google Cloud SQL could be characterized as a very basic/intro version of Amazon RDS.
Main features listed in the announcement:
- Managed environment
- High reliability and availability: your data is replicated synchronously to multiple data centers. Machine, rack, and data center failures are handled automatically to minimize end-user impact. It also supports asynchronous replication.
- Familiar MySQL database environment with JDBC support (for Java-based App Engine applications) and DB-API support (for Python-based App Engine applications). It even supports data import and export.
- Simple and powerful integration with Google App Engine.
- Command line tool
- SQL prompt in the Google APIs Console
The service is free for now; Google promises 30 days' notice before it starts charging, though without giving any hints about the pricing model.
Original title and link: Google Launches Google Cloud SQL a Relational Database as a Service ( ©myNoSQL)
I’ve been posting a lot about deployments in the cloud and especially about deploying MongoDB in the Amazon cloud:
- MongoDB on Amazon EC2 with EBS Volumes
- MongoDB on EC2
- MongoDB in the Amazon Cloud
- Setting Up MongoDB Replica Sets on Amazon EC2
- MongoDB and Amazon: Why EBS?
- Amazon EBS vs SSD: Price, Performance, QoS
- Multi-tenancy and Cloud Storage Performance
In this video, Jared Rosoff covers topics like the scaling and performance characteristics of running MongoDB in the cloud, and he also shares some best practices for using Amazon EC2.
While many will find this new service useful, it is a bit of a disappointment that Amazon took the safe route and went with pure Memcached. The only notable feature of Amazon ElastiCache is automatic failure detection and recovery. Compared with Membase (and the soon to be released Couchbase 2.0), it is missing clustering, replication, support for virtual nodes, etc. And even though it advertises push-button scaling, ElastiCache will lose cached data when adding or removing instances.
The pace at which Amazon is launching new services is indeed impressive. I'm wondering which NoSQL database will be the first to get official Amazon support.
Original title and link: Memcached in the Cloud: Amazon ElastiCache ( ©myNoSQL)