security: All content tagged as security in NoSQL databases and polyglot persistence

Enterprise-class NoSQL

What is distinctive about an enterprise-class NoSQL database is its support for additional enterprise-scale application requirements, namely: ACID (atomic, consistent, isolated, and durable) transactions, government-grade security and elasticity, as well as automatic failover.

What is distinctive about an enterprise-class NoSQL database is what my company is selling.

If that were true, I doubt we would have any other databases around, considering MarkLogic’s age and perfect fit.

Snarky comments aside, enterprise requirements are so complicated, numerous, political, and sometimes non-technical that I don’t think anyone will ever be able to come up with a definition or a checklist (however long) of what’s enterprise-grade.

Original title and link: Enterprise-class NoSQL (NoSQL database©myNoSQL)

via: http://www.information-age.com/technology/information-management/123458126/putting-enterprise-nosql-acid-ambiguity-out


Project Rhino goal: at-rest encryption for Apache Hadoop

Although network encryption has been provided in the Apache Hadoop platform for some time (since Hadoop 2.0.2-alpha/CDH 4.1), at-rest encryption, the encryption of data stored on persistent storage such as disk, is not. To meet that requirement in the platform, Cloudera and Intel are working with the rest of the Hadoop community under the umbrella of Project Rhino — an effort to bring a comprehensive security framework for data protection to Hadoop, which also now includes Apache Sentry (incubating) — to implement at-rest encryption for HDFS (HDFS-6134 and HADOOP-10150).

Looks like I got this wrong: Apache Sentry will become part of Project Rhino.
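
For a sense of what at-rest encryption means in its simplest, application-level form (encrypting bytes before they ever reach persistent storage), here is a minimal Python sketch. This is not how the transparent HDFS-level encryption proposed in HDFS-6134 works; the key handling and file names are made up for illustration.

```python
# Minimal illustration of at-rest encryption: encrypt bytes *before* they are
# written to persistent storage, so data on disk is never plaintext. This is
# application-side encryption, not the transparent HDFS-level encryption
# Project Rhino is targeting; paths and key handling are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key lives in a key manager, not next to the data
cipher = Fernet(key)

plaintext = b"card=4111111111111111,amount=42.00"
with open("record.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))   # only ciphertext ever touches the disk

with open("record.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == plaintext
```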

Original title and link: Project Rhino goal: at-rest encryption for Apache Hadoop (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2014/06/project-rhino-goal-at-rest-encryption/


A quick guide to using Sentry authorization in Hive

A guide to Apache Sentry:

Sentry brings in fine-grained authorization support for both data and metadata in a Hadoop cluster. It is already being used in production systems to secure the data and provide fine-grained access to its users. It is also integrated with the version of Hive shipping in CDH (upstream contribution is pending), Cloudera Impala, and Cloudera Search.
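
To make “fine-grained” a bit more concrete, here is a rough sketch of the kind of file-based policy the guide walks through. The group, role, database, and table names below are invented; the authoritative format is the one described in the linked guide.

```python
# Rough sketch of a Sentry file-based policy (all names are hypothetical).
# Groups map to roles; roles are privileges scoped as
# server -> database -> table -> action.
sentry_policy = """
[groups]
analysts = read_sales_role

[roles]
read_sales_role = server=server1->db=sales->table=orders->action=select
"""

# In a real deployment this text lives in a policy file read by the Hive or
# Impala Sentry binding (typically stored on HDFS), not in application code.
with open("sentry-policy.ini", "w") as f:
    f.write(sentry_policy)
```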

Original title and link: A quick guide to using Sentry authorization in Hive (NoSQL database©myNoSQL)

via: https://blogs.apache.org/sentry/entry/getting_started


The SSL performance overhead in MongoDB and MySQL

How to use MongoDB with SSL:

As you can see the SSL overhead is clearly visible being about 0.05ms slower than a plain connection. The median for the inserts with SSL is 0.28ms. Plain connections have a median at around 0.23ms. So there is a performance loss of about 25%. These are all just rough numbers. Your mileage may vary.
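
For a rough idea of how such numbers are produced, here is a minimal pymongo sketch that times inserts over a TLS connection. The host and certificate path are placeholders, and the exact option names (tls vs. ssl) depend on the driver version.

```python
# Rough sketch of measuring insert latency over a TLS-encrypted MongoDB
# connection. Host and certificate path are placeholders; run the same loop
# against a plain (non-TLS) mongod to compare medians.
import time
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017/",
    tls=True,                          # `ssl=True` in older pymongo versions
    tlsCAFile="/etc/ssl/mongodb-ca.pem",
)
coll = client.benchmarks.inserts

timings = []
for i in range(1000):
    start = time.perf_counter()
    coll.insert_one({"seq": i, "payload": "x" * 100})
    timings.append(time.perf_counter() - start)

timings.sort()
print("median insert latency: %.3f ms" % (timings[len(timings) // 2] * 1000))
```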

Then two posts on the MySQL Performance Blog: “SSL Performance Overhead in MySQL” and “MySQL encryption performance, revisited”:

Some of you may recall my security webinar from back in mid-August; one of the follow-up questions that I was asked was about the performance impact of enabling SSL connections. My answer was 25%, based on some 2011 data that I had seen over on yaSSL’s website, but I included the caveat that it is workload-dependent, because the most expensive part of using SSL is establishing the connection.

These two articles dive much deeper and more scientifically into the impact of using SSL with MySQL. The results are interesting and the recommendations are well worth the time spent reading them.
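
Since, as the quote above notes, the most expensive part of using SSL is establishing the connection, a comparable quick experiment on the MySQL side is to time connection setup with and without SSL. Here is a minimal sketch using mysql-connector-python; the host, credentials, and CA path are placeholders.

```python
# Rough sketch for comparing connection-establishment cost with and without
# SSL, since that is where most of the SSL overhead lives. Credentials, host,
# and CA path are placeholders.
import time
import mysql.connector

def time_connections(n=100, **conn_kwargs):
    total = 0.0
    for _ in range(n):
        start = time.perf_counter()
        conn = mysql.connector.connect(
            host="db.example.com", user="bench", password="secret", **conn_kwargs
        )
        total += time.perf_counter() - start
        conn.close()
    return total / n

plain = time_connections(ssl_disabled=True)
tls = time_connections(ssl_ca="/etc/ssl/mysql-ca.pem")
print("plain: %.2f ms, ssl: %.2f ms" % (plain * 1000, tls * 1000))
```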

Original title and link: The SSL performance overhead in MongoDB and MySQL (NoSQL database©myNoSQL)


The future of Riak’s Security

The proposal for Riak’s security, discussed there in the open:

Thus, I propose we add authentication/authorization/TLS and auditing to Riak, to make Riak more resilient to unauthorized access. In general, I took the design cues from PostgreSQL. Another goal was to make this applicable to riak_core, so any reliance on KV primitives or features are intentionally avoided.

Andrew Thomson, the author of the proposal, mentions PostgreSQL as a source of inspiration. Besides the usual topics (authentication, authorization, and auditing), the document has an Open questions section. If you care about the future of Riak’s security, go and help out.

Original title and link: The future of Riak’s Security (NoSQL database©myNoSQL)


Hadoop Security and Cloudera’s new Role Based Access Control Sentry project

Security is an enterprise feature

At Hadoop Summit, Merv Adrian (VP at Gartner) showed data about Hadoop’s adoption in the enterprise space over the last two years, and the numbers weren’t great (actually, they weren’t even good).

Hadoop vendors are becoming more aggressive in adding features that would make Hadoop enterprise-ready. In some sectors (e.g. government, financial, and health services), data security is regulated, which makes security features a top priority for adopting Hadoop in those spaces.

The state of Hadoop Security

Tony Baer1 has a nice guest post on ZDNet summarizing the current state of Hadoop security.

There’s a mix of activity on the open source and vendor proprietary sides for addressing the void. There are some projects at incubation stage within Apache, or awaiting Apache approval, for providing LDAP/Active Directory linked gateways (Knox), data lifecycle policies (Falcon), and APIs for processor-based encryption (Rhino). There’s also an NSA-related project for adding fine-grained data security (Accumulo) based on Google BigTable constructs. And Hive Server 2 will add the LDAP/AD integration that’s currently missing.

What’s interesting to note is that many big vendors have been focusing on adding proprietary security and auditing features to Hadoop.

Cloudera’s post introducing Sentry also provides a short overview of security in Hadoop by looking at four areas:

  1. Perimeter security: network security, firewall, and Kerberos authentication
  2. Data security: encryption and masking, currently available through a combination of recent work in the Hadoop community and vendor solutions.
  3. Access security: fine-grained ACLs
  4. Visibility: monitoring access and auditing

Sentry: Role-based Access Control for Hadoop

Cloudera has announced Sentry, a fine-grained, role-based access control solution meant to simplify and augment the coarse-grained HDFS-level authorization currently available in Hadoop.

Sentry architecture

Sentry comprises a core authorization provider and a binding layer. The core authorization provider contains a policy engine, which evaluates and validates security policies, and a policy provider, which is responsible for parsing the policy. The binding layer provides a pluggable interface that can be leveraged by a binding implementation to talk to the policy engine. (Note that the policy provider and the binding layer both provide pluggable interfaces.)

At this time, we have implemented a file-based provider that can understand a specific policy file format.
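
As a mental model of the pieces described above (and only that; Sentry’s real interfaces are Java, and the class names here are made up), a sketch could look like this:

```python
# Conceptual sketch of the components described above: a policy provider that
# supplies the parsed policy, a policy engine that evaluates requests against
# it, and a binding that a query engine (Hive, Impala, ...) calls into. This
# only mirrors the post's description; it is not Sentry's actual API.

class PolicyProvider:
    """Holds the parsed policy: role -> set of (db, table, action)."""
    def __init__(self, privileges_by_role):
        # In Sentry's file-based provider this mapping would come from
        # parsing the policy file; here it is passed in directly.
        self.privileges_by_role = privileges_by_role

class PolicyEngine:
    """Evaluates and validates requests against the provided policy."""
    def __init__(self, provider):
        self.provider = provider

    def allowed(self, roles, db, table, action):
        return any((db, table, action) in self.provider.privileges_by_role.get(r, set())
                   for r in roles)

class HiveBinding:
    """Pluggable binding layer: translates a Hive request into a policy check."""
    def __init__(self, engine):
        self.engine = engine

    def authorize(self, user_roles, db, table, action):
        if not self.engine.allowed(user_roles, db, table, action):
            raise PermissionError(f"{action} on {db}.{table} denied")

# Hypothetical usage: analysts may only SELECT from sales.orders.
engine = PolicyEngine(PolicyProvider({"analyst_role": {("sales", "orders", "select")}}))
HiveBinding(engine).authorize(["analyst_role"], "sales", "orders", "select")
```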

According to the post, right now only Impala and Hive have bindings for Sentry. This makes me wonder how Sentry is deployed in a Hadoop cluster so other layers could take advantage of the Sentry ACLs. I would expect such a security feature to be implemented very close to HDFS, so that it would work with all types of access to the stored data.

For more details about Sentry, read the official post With Sentry, Cloudera Fills Hadoop’s Enterprise Security Gap.

There are also numerous rewrites of the announcement around the web.


  1. Tony Baer is a principal analyst covering Big Data at Ovum. 

Original title and link: Hadoop Security and Cloudera’s new Role Based Access Control Sentry project (NoSQL database©myNoSQL)


Built for handling credit card numbers

Jordan Novet in a GigaOm post about the initial lack of security features in Hadoop:

Hadoop, the much-hyped software for processing large amounts of data on commodity hardware, has its roots in indexing tons of webpages for a search engine, not handling credit card numbers. And it wasn’t developed from the start with security in mind.

  1. I really don’t know how many popular and widely used storage solutions have their roots in handling credit card numbers.
  2. I really don’t know how many popular and widely used storage solutions have been developed from the start with security in mind.
  3. I don’t know of any NoSQL database that failed to be adopted because it was/is lacking security features1.

Bottom line: security features are critical for some users, and security should never be taken lightly. While it’s true that every missing feature limits adoption, projects need to weigh very carefully where their focus goes. Basically, if you don’t have a long list of customers handling credit card numbers, don’t focus your new database on this feature.


  1. In the early days, most NoSQL databases completely lacked any security features. But if they didn’t get the adoption they dreamed of, that wasn’t caused by this. 

Original title and link: Built for handling credit card numbers (NoSQL database©myNoSQL)


Hadoop, Security, and DataStax Enterprise

But the eWeek article demonstrates that the same concerns [nb: about security] exist where Hadoop implementations are concerned. The article says: “It [Hadoop] was not written to support hardened security, compliance, encryption, policy enablement and risk management.”

The story goes like this: in the early days of NoSQL, when no NoSQL database had any sort of security features, the people behind the projects answered: “it’s too early. we’re focusing on more important features. and you can still work around the missing security by placing your database behind firewalls”. Today, when more and more NoSQL databases are adding security features, the story these same people are telling is quite different: “ohhh, security is critical. we don’t really see how you could run a database without these features”.

Security is always critical. And exactly the same can be said about maintaining a solid, coherent story for your users.

Original title and link: Hadoop, Security, and DataStax Enterprise (NoSQL database©myNoSQL)

via: http://www.datastax.com/2013/04/hadoop-security-and-the-enterprise


Hadoop Security Design Paper

Speaking of the buzz around Dataguise’s field-level encryption for Apache Hadoop and their 10 best practices for securing sensitive data in Hadoop, after the break you can find the “Hadoop Security Design” paper written by a team at Yahoo.


Dataguise Presents 10 Best Practices for Securing Sensitive Data in Hadoop

  1. Start Early! Determine the data privacy protection strategy during the planning phase of a deployment, preferably before moving any data into Hadoop. This will prevent the possibility of damaging compliance exposure for the company and avoid unpredictability in the roll out schedule.

  2. Identify what data elements are defined as sensitive within your organization. Consider company privacy policies, pertinent industry regulations and governmental regulations.

  3. Discover whether sensitive data is embedded in the environment, assembled or will be assembled in Hadoop.

  4. Determine the compliance exposure risk based on the information collected.

  5. Determine whether business analytic needs require access to real data or if desensitized data can be used. Then, choose the right remediation technique (masking or encryption). If in doubt, remember that masking provides the most secure remediation while encryption provides the most flexibility, should future needs evolve.

  6. Ensure the data protection solutions under consideration support both masking and encryption remediation techniques, especially if the goal is to keep both masked and unmasked versions of sensitive data in separate Hadoop directories.

  7. Ensure the data protection technology used implements consistent masking across all data files (Joe becomes Dave in all files) to preserve the accuracy of data analysis across every data aggregation dimension (see the sketch after this list).

  8. Determine whether tailored protection for specific data sets is required and consider dividing Hadoop directories into smaller groups where security can be managed as a unit.

  9. Ensure the selected encryption solution interoperates with the company’s access control technology and that both allow users with different credentials to have the appropriate, selective access to data in the Hadoop cluster.

  10. Ensure that when encryption is required, the proper technology (Java, Pig, etc.) is deployed to allow for seamless decryption and ensure expedited access to data.
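
On point 7, one common way to get consistent masking across all files is deterministic pseudonymization, for example deriving the replacement from a keyed hash of the original value. Here is a minimal sketch; the key and the replacement list are made up, and real products use far more sophisticated, format-preserving techniques.

```python
# Toy illustration of *consistent* masking: the same input always maps to the
# same replacement in every file, without storing a lookup table. The secret
# key and the replacement name list are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-keep-me-out-of-the-cluster"
REPLACEMENTS = ["Dave", "Anna", "Liam", "Mara", "Noel", "Rita"]

def mask_name(name: str) -> str:
    digest = hmac.new(SECRET_KEY, name.lower().encode(), hashlib.sha256).digest()
    return REPLACEMENTS[int.from_bytes(digest[:4], "big") % len(REPLACEMENTS)]

assert mask_name("Joe") == mask_name("Joe")   # deterministic across files and runs
print(mask_name("Joe"), mask_name("Susan"))
```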

Wait… where’s point 11, buy Dataguise?

Original title and link: Dataguise Presents 10 Best Practices for Securing Sensitive Data in Hadoop (NoSQL database©myNoSQL)

via: http://www.businesspress24.com/pressrelease1213023.html


Field-Level Encryption for Apache Hadoop From Dataguise

Dataguise says the latest version of its data-protection product enables users to encrypt sensitive data right down to specific fields within an open source Apache Hadoop database.

DG for Hadoop 4.3 also makes use of the traditional Dataguise “masking” capability across single or multiple Hadoop clusters to camouflage sensitive data.

$25,000 apiece (hopefully not per piece of encrypted data, though).

Apache Accumulo is known for offering a BigTable-inspired open source implementation with cell-level access control.
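
As a hint of what cell-level access control means in the Accumulo/BigTable model, here is a drastically simplified Python sketch: each cell carries a visibility requirement, and a read only returns the cells whose requirement is satisfied by the reader’s authorizations. Real Accumulo visibilities are boolean expressions (e.g. finance&(audit|admin)) evaluated server-side, and this is not its actual API; the data and labels below are invented.

```python
# Drastically simplified illustration of cell-level access control: every cell
# carries the set of labels a reader must hold to see it.
cells = {
    ("row1", "cf:amount"): ("42.00", {"finance"}),
    ("row1", "cf:ssn"):    ("***-**-1234", {"finance", "pii"}),
    ("row1", "cf:note"):   ("ok to ship", set()),          # visible to everyone
}

def scan(user_auths):
    """Return only the cells whose required labels the user holds."""
    return {k: v for k, (v, required) in cells.items() if required <= user_auths}

print(scan({"finance"}))          # sees amount and note, but not the PII cell
print(scan({"finance", "pii"}))   # sees everything
```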

Original title and link: Field-Level Encryption for Apache Hadoop From Dataguise (NoSQL database©myNoSQL)

via: http://news.techworld.com/security/3437999/dataguise-introduces-field-level-encryption-for-apache-hadoop-database/


Extra Security Measures for Database Projects

This means caring about your users’ data:

What we intend to do is shut off updates from the master git repo to the anonymous-git mirror, and to github, from Monday afternoon until Thursday morning. Commit-log emails to pgsql-committers will also be held for this period. This will prevent the commits that fix and document the bug from becoming visible to anyone except Postgres committers. Updates will resume as soon as the release announcement is made.

Original title and link: Extra Security Measures for Database Projects (NoSQL database©myNoSQL)

via: http://www.postgresql.org/message-id/14040.1364490185@sss.pgh.pa.us