By: Shrey Mehrotra
 A form of protection where a separation is created between the assets and the threat.
 Security in the IT realm:
Application security
Computing security
Data security
Information security
Network security
Data : Critical data is stored in HDFS.
Resources : Each node of the Hadoop cluster has resources required for executing
applications.
Applications : Web applications and REST APIs that access cluster details.
Services : HDFS, YARN and other services running on the cluster nodes.
Network Security : Services and applications communicate over the network.
 Configuration
 Data confidentiality
 Service Level Authorization
 Encryption
 Authentication for Hadoop HTTP web-consoles
 Delegation Tokens
 Kerberos
core-site.xml
hadoop.security.authentication = kerberos
  simple : no authentication (default)
  kerberos : enable authentication via Kerberos
hadoop.security.authorization = true
  Enables RPC service-level authorization.
hadoop.rpc.protection = authentication
  authentication : authentication only (default)
  integrity : integrity check in addition to authentication
  privacy : data encryption in addition to integrity
hadoop.proxyuser.superuser.hosts
  Comma-separated list of hosts from which the superuser is allowed to impersonate users; * means any host.
hadoop.proxyuser.superuser.groups
  Comma-separated list of groups whose members the superuser may impersonate; * means any group.
hdfs-site.xml
dfs.block.access.token.enable = true
  Enable HDFS block access tokens for secure operations.
dfs.https.enable = true
  Deprecated; use dfs.http.policy instead.
dfs.namenode.https-address = nn_host_fqdn:50470
dfs.https.port = 50470
dfs.namenode.keytab.file = /etc/security/keytab/nn.service.keytab
  Kerberos keytab file for the NameNode.
dfs.namenode.kerberos.principal = nn/_HOST@REALM.TLD
  Kerberos principal name for the NameNode.
dfs.namenode.kerberos.internal.spnego.principal = HTTP/_HOST@REALM.TLD
  HTTP Kerberos principal name for the NameNode.
 A superuser can submit jobs or access HDFS on behalf of another user in a secure
way.
 The superuser must have Kerberos credentials to be able to impersonate another user.
Example: a superuser "bob" wants to submit a job or access the HDFS cluster as "alice".
// Create a UGI for "alice". The login user is the superuser.
UserGroupInformation ugi =
    UserGroupInformation.createProxyUser("alice", UserGroupInformation.getLoginUser());
ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        // Submit a job ...
        JobClient jc = new JobClient(conf);
        jc.submitJob(conf);
        // ... or access HDFS
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(someFilePath);
        return null;
    }
});
 The superuser must be configured on the NameNode and ResourceManager to be
allowed to impersonate another user. The following configuration is required.
<property>
<name>hadoop.proxyuser.super.groups</name>
<value>group1,group2</value>
<description>Allow the superuser "super" to impersonate members of the groups group1 and group2</description>
</property>
<property>
<name>hadoop.proxyuser.super.hosts</name>
<value>host1,host2</value>
<description>The superuser can connect only from host1 and host2 to impersonate a user</description>
</property>
 Initial authorization mechanism to ensure that clients connecting to a particular Hadoop service have the
necessary, pre-configured permissions and are authorized to access the given service.
 For example, a MapReduce cluster can use this mechanism to allow a configured list of users/groups to
submit jobs.
 By default, service-level authorization is disabled for Hadoop.
 To enable it, set the following configuration property in core-site.xml:
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
 hadoop-policy.xml defines an access control list for each Hadoop service.
 Every ACL has a simple format: a comma-separated list of users, then a comma-separated list of groups, the two lists separated by a space.
Example: user1,user2 group1,group2.
 Blocked Access Control Lists
Each ACL has a blocked counterpart listing users and groups that are denied access, e.g. security.client.protocol.acl and security.client.protocol.acl.blocked.
 Refreshing Service Level Authorization Configuration
hadoop dfsadmin -refreshServiceAcl
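As an illustrative sketch (the user and group names here are placeholders), a blocked ACL entry in hadoop-policy.xml could look like:

```xml
<!-- Users/groups listed here are denied access to the client protocol,
     even if the corresponding ACL would otherwise allow them. -->
<property>
  <name>security.client.protocol.acl.blocked</name>
  <value>eve contractors</value>
</property>
```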
<property>
<name>security.job.submission.protocol.acl</name>
<value>alice,bob mapreduce</value>
</property>
 Allow only users alice, bob and users in the mapreduce group to submit jobs to the MapReduce cluster:
 Allow only DataNodes running as the users who belong to the group datanodes to communicate with the NameNode:
<property>
<name>security.datanode.protocol.acl</name>
<value>datanodes</value>
</property>
 Allow any user to talk to the HDFS cluster as a DFSClient:
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
</property>
 Data Encryption on RPC
• Protects the data transferred between Hadoop services and clients.
• Setting hadoop.rpc.protection to "privacy" in core-site.xml activates data encryption.
 Data Encryption on Block data transfer
• Set dfs.encrypt.data.transfer to "true" in hdfs-site.xml.
• Set dfs.encrypt.data.transfer.algorithm to either "3des" or "rc4" to choose the specific encryption
algorithm.
• By default, 3DES is used.
 Data Encryption on HTTP
• Data transfer between the web consoles and clients is protected by using SSL (HTTPS).
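Putting the settings above together, a minimal sketch of the relevant properties (the algorithm value shown is one example choice):

```xml
<!-- core-site.xml: encrypt RPC traffic between clients and services -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>

<!-- hdfs-site.xml: encrypt block data transfer between clients and DataNodes -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<property>
  <name>dfs.encrypt.data.transfer.algorithm</name>
  <value>rc4</value>
</property>
```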
 It implements a permissions model for files and directories that shares much of the POSIX model.
 User Identity
 simple : In this mode of operation, the identity of a client process is determined by the host operating system.
 kerberos : In Kerberized operation, the identity of a client process is determined by its Kerberos credentials.
 Group Mapping
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping
 HDFS stores the user and group of a file or directory as strings; there is no conversion from user and group identity
numbers as is conventional in Unix.
 Shell Operations
• hadoop fs -chmod [-R] mode file
• hadoop fs -chgrp [-R] group file
• hadoop fs -chown [-R] [owner][:[group]] file
 The Super-User
 The super-user is the user with the same identity as the NameNode process itself.
 Permission checks never fail for the super-user.
 There is no persistent notion of who the super-user was.
 When the NameNode is started, its process identity determines who the super-user is for that run.
WebHDFS
 Uses Kerberos (SPNEGO) and Hadoop delegation tokens for authentication.
An ACL provides a way to set different permissions for specific named users or named groups, not only the file's owner and
the file's group.
 By default, support for ACLs is disabled.
 Enable ACLs by adding the following configuration property to hdfs-site.xml and restarting the NameNode
<property>
<name>dfs.namenode.acls.enabled</name>
<value>true</value>
</property>
ACLs Shell Commands
 hdfs dfs -getfacl [-R] <path>
 hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]
-R : Recursive
-m : Modify ACL.
-b : Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility
with permission bits.
-k : Remove the default ACL.
-x : Remove specified ACL entries.
<acl_spec> : Comma separated list of ACL entries.
--set : Fully replace the ACL, discarding all existing entries.
 hdfs dfs -ls <args>
ls will append a '+' character to the permissions string of any file or directory that has an ACL.
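For illustration, granting a named user access and inspecting the result might look like this (the path and user name are placeholders, assuming ACLs are enabled):

```shell
# Grant the user "alice" read/write/execute on /data (hypothetical path)
hdfs dfs -setfacl -m user:alice:rwx /data
# Show the resulting ACL entries
hdfs dfs -getfacl /data
```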
Source : Apache
 Tokens are generated for applications and containers.
 An HMAC algorithm (HMAC_ALGORITHM) is used to generate the passwords
for the tokens.
 YARN interfaces for secret manager tokens
BaseNMTokenSecretManager
AMRMTokenSecretManager
BaseClientToAMTokenSecretManager
BaseContainerTokenSecretManager
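As a rough sketch of the idea behind these secret managers (this is not YARN's actual implementation; the class name is invented and the HMAC algorithm choice is an assumption), a token "password" can be derived by HMAC-ing the token identifier with a master key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative only: derives a token password from its identifier and a
// master secret key using an HMAC, mirroring the secret-manager pattern.
public class TokenPasswordSketch {
    // HmacSHA1 is assumed here as the HMAC algorithm.
    private static final String HMAC_ALGORITHM = "HmacSHA1";

    public static byte[] createPassword(byte[] identifier, byte[] masterKey) throws Exception {
        Mac mac = Mac.getInstance(HMAC_ALGORITHM);
        mac.init(new SecretKeySpec(masterKey, HMAC_ALGORITHM));
        // Same identifier and key always yield the same password.
        return mac.doFinal(identifier);
    }
}
```

Verifying a token then amounts to recomputing the HMAC over the presented identifier and comparing it with the presented password.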
Source : Hortonworks
 Enable ACL check in YARN
Queues ACL
 QueueACLsManager checks each user's access against the ACL defined on the queue.
 The following would restrict access to the "support" queue to the user "shrey" and the
members of the "sales" group:
 yarn.scheduler.capacity.root.<queue-path>.acl_administer_queue
<property>
<name>yarn.acl.enable</name>
<value>true</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.<queue-path>.acl_submit_applications</name>
<value>shrey sales</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.<queue-path>.acl_administer_queue</name>
<value>sales</value>
</property>
Hadoop security
[Diagram: a client authenticates to services over the network; the password may travel in plain text or encrypted]
 Kerberos is a network authentication protocol.
 It is used to authenticate the identity of services running on different
nodes (machines) communicating over a non-secure network.
 It uses "tickets" as the basic unit of authentication.
 Authentication Server
A service that authenticates or verifies clients, usually by checking for the username of the
requesting client in the system.
 Ticket Granting Server
Generates Ticket Granting Tickets (TGTs) based on the target service name, initial
ticket (if any) and authenticator.
 Principals
A principal is the unique identity to which Kerberos can assign the tickets provided by the Ticket
Granting Server.
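In practice, a client obtains its initial ticket from the KDC before talking to Kerberized services. A sketch (the principal and realm are placeholders):

```shell
# Obtain a ticket-granting ticket for the user principal
kinit alice@EXAMPLE.COM
# List the tickets currently held in the credential cache
klist
```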
To enable Kerberos authentication in Hadoop, configure the following properties
in core-site.xml:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
<!-- Giving value as "simple" disables security.-->
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
 A keytab is a file containing Kerberos principals and encrypted keys. Keytab files are used to
log in to Kerberos directly without being prompted for a password.
 Enabling Kerberos for HDFS services:
A. Generate the keytab
Create the hdfs keytab file that will contain the hdfs principal and HTTP principal. This keytab file is used by the
NameNode and DataNode.
B. Export the keytab and move it into place
kadmin: xst -norandkey -k hdfs.keytab hdfs/fully.qualified.domain.name HTTP/fully.qualified.domain.name
sudo mv hdfs.keytab /etc/hadoop/conf/
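The exported keytab can then be verified with klist and used for a password-less login (a sketch; the path matches the move above):

```shell
# List the principals and key versions stored in the keytab
klist -kt /etc/hadoop/conf/hdfs.keytab
# Log in as the service principal using the keytab instead of a password
kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs/fully.qualified.domain.name
```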
<!-- Namenode security configs -->
<property>
<name>dfs.namenode.keytab.file</name>
<value>/etc/hadoop/hdfs.keytab</value>
<!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@YOUR-REALM.COM</value>
</property>
 Add the following properties to the hdfs-site.xml file
<!-- Datanode security configs -->
<property>
<name>dfs.datanode.keytab.file</name>
<value>/etc/hadoop/hdfs.keytab</value>
<!-- path to the HDFS keytab -->
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/_HOST@YOUR-REALM.COM</value>
</property>