bigdata: All content about bigdata in NoSQL databases and polyglot persistence
Security is an enterprise feature
At Hadoop Summit, Merv Adrian (VP Gartner) showed data about Hadoop’s adoption in the enterprise space over the last two years, and the numbers weren’t great (actually, they weren’t even good).
Hadoop vendors are becoming more aggressive in adding features that would make Hadoop enterprise ready. In some sectors (e.g. government, financial and health services) data security is regulated and this makes security features a top priority for adopting Hadoop in these spaces.
The state of Hadoop Security
There’s a mix of activity on the open source and vendor proprietary sides for addressing the void. There are some projects at incubation stage within Apache, or awaiting Apache approval, for providing LDAP/Active Directory linked gateways (Knox), data lifecycle policies (Falcon), and APIs for processor-based encryption (Rhino). There’s also an NSA-related project for adding fine-grained data security (Accumulo) based on Google BigTable constructs. And Hive Server 2 will add the LDAP/AD integration that’s currently missing.
What’s interesting to note is that many big vendors have been focusing on adding proprietary security and auditing features to Hadoop.
Cloudera’s post introducing Sentry also provides a short overview of security in Hadoop, by looking at 4 areas:
- Perimeter security: network security, firewall, and Kerberos authentication (a minimal configuration sketch follows this list)
- Data security: encryption and masking, currently available through a combination of recent work in the Hadoop community and vendor solutions.
- Access security: fine grained ACL
- Visibility: monitoring access and auditing
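Following up on the perimeter-security item above, here is a minimal sketch of what switching a Hadoop client to Kerberos authentication looks like. The configuration property names are the standard Hadoop ones; the principal and keytab path are made-up placeholders, and in practice these settings usually live in core-site.xml rather than in client code.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureClientSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // Standard core-site.xml properties: turn on Kerberos authentication
        // and service-level authorization (normally set cluster-wide).
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hadoop.security.authorization", "true");

        // Log in with a keytab; the principal and path are placeholders.
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "analyst@EXAMPLE.COM", "/etc/security/keytabs/analyst.keytab");

        System.out.println("Logged in as: "
                + UserGroupInformation.getLoginUser().getUserName());
    }
}
```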
Sentry: Role-based Access Control for Hadoop
Cloudera has announced Sentry, a fine-grained, role-based access control solution for Hadoop, meant to simplify and augment the coarse-grained HDFS-level authorization currently available in Hadoop.
Sentry comprises a core authorization provider and a binding layer. The core authorization provider contains a policy engine, which evaluates and validates security policies, and a policy provider, which is responsible for parsing the policy. The binding layer provides a pluggable interface that can be leveraged by a binding implementation to talk to the policy engine. (Note that the policy provider and the binding layer both provide pluggable interfaces.)
At this time, we have implemented a file-based provider that can understand a specific policy file format.
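The layering described in the quote, a policy provider that parses policies, a policy engine that evaluates them, and per-component bindings, is easier to picture with a small sketch. The interfaces and names below are hypothetical, written only to illustrate that separation; they are not Sentry’s actual API.

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Hypothetical illustration of the described layering; not Sentry's real API.

/** Policy provider: parses the policy from some backing store (e.g. a policy file). */
interface PolicyProvider {
    /** Returns, for each role, the set of privilege strings granted to it. */
    Map<String, Set<String>> loadRolePrivileges();
}

/** Policy engine: evaluates authorization requests against the parsed policy. */
final class PolicyEngine {
    private final PolicyProvider provider;

    PolicyEngine(PolicyProvider provider) {
        this.provider = provider;
    }

    boolean isAllowed(Set<String> userRoles, String requestedPrivilege) {
        Map<String, Set<String>> rolePrivileges = provider.loadRolePrivileges();
        return userRoles.stream()
                .map(rolePrivileges::get)
                .filter(Objects::nonNull)
                .anyMatch(privileges -> privileges.contains(requestedPrivilege));
    }
}

/** Binding layer: a component such as Hive or Impala translates its own request
 *  (e.g. "SELECT on a table") into a privilege string and asks the engine. */
final class SqlEngineBinding {
    private final PolicyEngine engine;

    SqlEngineBinding(PolicyEngine engine) {
        this.engine = engine;
    }

    boolean canSelect(Set<String> userRoles, String db, String table) {
        return engine.isAllowed(userRoles, "select:" + db + "." + table);
    }
}
```

In this picture, the file-based provider mentioned in the quote would simply parse the policy file into the role-to-privileges map, and a different provider (say, database-backed) could be swapped in without touching the bindings.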
According to the post, right now only Impala and Hive have bindings for Sentry. This makes me wonder how Sentry is deployed in a Hadoop cluster so that other layers can take advantage of its ACLs. I would expect such a security feature to be implemented very close to HDFS, so that it covers all types of access to the stored data.
For more details about Sentry, read the official post With Sentry, Cloudera Fills Hadoop’s Enterprise Security Gap.
There are also numerous rewrites of the announcement:
- Rachel King for ZDNet: Cloudera intros new authorization module for Hadoop | ZDNet
- Virginia Backaitis for CMSWire: Cloudera Delivers Sentry Security For Hadoop: Regulated Enterprises Can Now Ask Big Data Questions
- Justin Lee for TheWhir: Cloudera Introduces New Authorization Module for Hadoop
- Isaac Lopez for Datanami: Cloudera Adds a Sentry to Their Stack - Datanami
- Jordan Novet for GigaOm: Cloudera keeps sensitive data hidden from prying eyes with new authorization settings — Tech News and Analysis
- Doug Henschen for InformationWeek: Cloudera Brings Role-Based Security To Hadoop
- Nick Kolakowski for Slashdot: Cloudera’s Sentry Offers Access Security for Big Data
Tony Baer is a principal analyst covering Big Data at Ovum. ↩
Original title and link: Hadoop Security and Cloudera’s new Role Based Access Control Sentry project ( ©myNoSQL)
Jordan Novet in a GigaOm post about the initial lack of security features in Hadoop:
Hadoop, the much-hyped software for processing large amounts of data on commodity hardware, has its roots in indexing tons of webpages for a search engine, not handling credit card numbers. And it wasn’t developed from the start with security in mind.
- I really don’t know how many popular and widely used storage solutions have their roots in handling credit card numbers.
- I really don’t know how many popular and widely used storage solutions have been developed from the start with security in mind.
- I don’t know of any NoSQL database that failed to be adopted because it was, or is, lacking security features1.
Bottom line: security features are critical for some users, and security should never be taken lightly. And while it’s true that every missing feature limits adoption, projects need to weigh carefully where their focus goes. Basically, if you don’t have a long list of customers handling credit card numbers, don’t make this feature the focus of your new database.
In the early days, most NoSQL databases completely lacked any security features. But if they didn’t get the adoption they dreamed of, that wasn’t the cause. ↩
Original title and link: Built for handling credit card numbers ( ©myNoSQL)
Authored by a mixed team from University of Southern California, University of Texas, and Facebook, a paper about a new family of erasure codes more efficient than Reed-Solomon codes:
Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.
This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance.
We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
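✚ To put the 14% figure in context, here is the back-of-the-envelope storage arithmetic, assuming the (10, 4) Reed-Solomon layout used by HDFS-RAID and an LRC variant that adds two local parity blocks per stripe. Those parameters are my assumption about the setup, used only to show where roughly 14% comes from; the quoted numbers are what the paper actually reports.

```java
public class ErasureOverheadSketch {
    public static void main(String[] args) {
        double dataBlocks = 10;   // data blocks per stripe (assumed)
        double rsParity = 4;      // Reed-Solomon parity blocks per stripe (assumed)
        double localParity = 2;   // extra local parity blocks for LRC (assumed)

        double replication = 3.0;                                         // 3-way replication
        double rs = (dataBlocks + rsParity) / dataBlocks;                 // 1.4x raw storage
        double lrc = (dataBlocks + rsParity + localParity) / dataBlocks;  // 1.6x raw storage

        System.out.printf("3-way replication: %.2fx raw storage%n", replication);
        System.out.printf("Reed-Solomon:      %.2fx raw storage%n", rs);
        System.out.printf("LRC:               %.2fx raw storage (%.0f%% more than RS)%n",
                lrc, (lrc / rs - 1) * 100);
    }
}
```

With these numbers, 1.6/1.4 ≈ 1.14, i.e. the roughly 14% extra storage the paper pays in exchange for much cheaper, more local repairs, while both coded layouts stay far below the 3x overhead of plain replication.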
✚ Robin Harris has a good summary of the paper on StorageMojo:
LRC [Locally Repairable Codes] test results found several key results.
- Disk I/O and network traffic were reduced by half compared to RS codes.
- The LRC required 14% more storage than RS, information theoretically optimal for the obtained locality.
- Repair times were much lower thanks to the local repair codes.
- Much greater reliability thanks to fast repairs.
- Reduced network traffic makes them suitable for geographic distribution.
✚ While erasure codes are meant to reduce the storage requirements, it also seems to me that they introduce a limitation into distributed data processing systems like Hadoop: having multiple copies of data available in the cluster allows for better I/O performance when compared with clusters using erasure codes where there’s only a single copy of the data.
✚ There’s also a study of erasure codes on Facebook’s warehouse cluster, authored by a mixed team from Berkeley and Facebook: A solution to the network challenges of data recovery in erasure-coded distributed storage systems: a study on the Facebook warehouse cluster:
Our study reveals that recovery of RS-coded [Reed-Solomon] data results in a significant increase in network traffic, more than a hundred terabytes per day, in a cluster storing multiple petabytes of RS-coded data.
To address this issue, we present a new storage code using our recently proposed _Piggybacking_ framework, that reduces the network and disk usage during recovery by 30% in theory, while also being storage optimal and supporting arbitrary design parameters.
Original title and link: Papers: Novel Erasure Codes for Big Data from Facebook ( ©myNoSQL)
Jim Walker in The Business Value of Hadoop as seen through the Big Data:
While every organization is different, their big data is often very similar. For the most part, Hadoop is collecting massive amounts of data across six basic types of data: social media activity, clickstream data, server logs, unstructured data (videos, docs, etc.) and machine/sensor data from equipment in the field.
From these categories, I think only machine/sensor data can be considered critical data. Actually, if you think about it, server logs, clickstreams, and even social media activity are also sensor data, originating in servers and humans respectively.
The future of data processing is platforms that can bring together all critical data regardless of where it is primarily stored. Some call this federated databases; some call it logical data warehouses. The specific term doesn’t matter, though. It’s the core principles that will make the difference: integration and integrated processing in close to real time.
Original title and link: Types of data that land in Hadoop ( ©myNoSQL)