HDFS
Hadoop Distributed File System
Topics Covered
 Design Goals
 Hadoop Blocks
 Rack Awareness, Replica Placement & Selection
 Permissions Model
 Anatomy of a File Write / Read on HDFS
 FileSystem Image and Edit Logs
 HDFS Check Pointing Process
 Directory Structure - NameNode, Secondary NameNode, DataNode
 Safe Mode, Trash, Name and Space Quota
HDFS Design Goals
 Hardware Failure - Detection of faults and quick, automatic recovery
 Streaming Data Access - High throughput of data access (Batch Processing of data)
 Large Data Sets - Gigabytes to terabytes in size.
 Simple Coherency Model – Write once read many access model for files
 Moving computation is cheaper than moving data
Storage & Replication of Blocks in HDFS (diagram): a file is divided into blocks (Block 1 to Block 4); the NameNode tracks the block-to-node mapping, and the blocks are stored and replicated across Datanode_1, Datanode_2 and Datanode_3.
Blocks
 Minimum amount of data that can be read or written - 64 MB by default
 Minimize the cost of seeks
 A file can be larger than any single disk in the network
 Simplifies the storage subsystem – blocks are a fixed size & file metadata is handled separately
 Provides fault tolerance and availability
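A quick way to see the block layout described above is the filesystem checker; the file path below is only a placeholder for illustration:
# List the blocks of a (hypothetical) file, their sizes and the DataNodes holding each replica
hadoop fsck /user/demo/sample.log -files -blocks -locations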
Hadoop Rack Awareness
 Get maximum performance out of the Hadoop cluster
 Provide reliable storage when dealing with DataNode failures
 Resolution of the slave's DNS name (or IP address) to a rack id
 Interface provided in Hadoop: DNSToSwitchMapping; default implementation is ScriptBasedMapping
Rack Topology - /rack1 & /rack2
Sample Script (topology.sh)
#!/bin/bash
HADOOP_CONF=${HADOOP_HOME}/conf
while [ $# -gt 0 ] ; do
  nodeArg=$1
  exec< ${HADOOP_CONF}/topology.data
  result=""
  while read line ; do
    ar=( $line )
    if [ "${ar[0]}" = "$nodeArg" ] ; then
      result="${ar[1]}"
    fi
  done
  shift
  if [ -z "$result" ] ; then
    echo -n "/default/rack "
  else
    echo -n "$result "
  fi
done

Sample Data (topology.data)
192.168.56.101 /dc1/rack1
192.168.56.102 /dc1/rack2
192.168.56.103 /dc1/rack2
Rack Awareness Configuration
File : hdfs-site.xml
Property : topology.node.switch.mapping.impl
Default Value : ScriptBasedMapping class
Property : topology.script.file.name
Value : <Absolute path to script file>
Sample Command : ./topology.sh 192.168.56.101
Output : /dc1/rack1
Ref Hadoop Wiki
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
Replica Placement
 Critical to HDFS reliability and performance
 Improve data reliability, availability, and network bandwidth utilization
Distance b/w Hadoop Client and DataNode
Same Node : d=0
Same Rack : d=2
Same Data Centre, different rack : d=4
Different Data Centre : d=6
Replica Placement cont..
Default Strategy :
a) First replica on the same node as the client.
b) Second replica is placed on a different rack from the first (off-rack) chosen at random
c) Third replica is placed on the same rack as the second, but on a different node chosen at random.
d) Further replicas are placed on random nodes on the cluster
Replica Selection - HDFS tries to satisfy a read request from a replica that is closest to the
reader.
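The placement policy above is applied for whatever replication factor a file carries. A minimal sketch of changing that factor from the shell, using a placeholder path:
# Set the replication factor of an existing file to 3 and wait (-w) until re-replication completes
hadoop fs -setrep -w 3 /user/demo/sample.log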
Permissions Model
 A listing shows, for each entry: directory/file flag, permissions, replication, owner, group, file size, modification date, and full path.
 User Name : ‘whoami’
 Group List : ‘bash -c groups’
 Super-User & Web Server
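For illustration, a directory listing surfaces exactly those fields; the paths, user and group below are invented for the example:
hdfs dfs -ls /user/demo
# drwxr-xr-x   - hduser supergroup          0 2014-03-01 10:15 /user/demo/input
# -rw-r--r--   3 hduser supergroup   67108864 2014-03-01 10:20 /user/demo/data.log
# (flag, permissions, replication, owner, group, size, modification date, path)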
Anatomy of a File Write
 Client creates the file by calling create()
method
 NameNode validates & processes the
create request
 Split file into packets (DataQueue)
 DataStreamer asks NameNode for
block / node mapping & pipelines are
created among nodes.
Anatomy of a File Write (cont..)
 DataStreamer streams the packets to
the first DataNode
 DataNode forwards the copied packet
to the next DataNode in the pipeline
 DFSOutputStream also maintains the
ack queue and removes the packets
after receiving acknowledgement from
the DataNodes
 Client calls close() on the stream
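For reference, the write path above is exactly what a plain copy into HDFS exercises; the paths are placeholders:
# create() on the NameNode, then the DataStreamer pushes packets down the DataNode pipeline
hdfs dfs -put /tmp/sample.log /user/demo/sample.log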
Anatomy of a File Read
 Client calls open() on the FileSystem
object
 DistributedFileSystem calls the
NameNode to determine the locations of
the blocks
 NameNode validates the request & for each block returns the list of DataNodes holding a replica.
 DistributedFileSystem returns an input
stream that supports file seeks to the
client
Anatomy of a File Read (cont..)
 Client calls read() on the stream
 When the end of the block is reached,
DFSInputStream will close the connection
to the DataNode, then find the best
DataNode for the next block.
 Client calls close() on the stream
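Similarly, the read path above is what a simple cat or get drives; the paths are placeholders:
# open() on the NameNode, then DFSInputStream reads each block from the closest DataNode
hdfs dfs -cat /user/demo/sample.log > /tmp/sample-copy.log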
FileSystem Image and Edit Logs
 fsimage file is a persistent checkpoint of the file-system metadata
 When a client performs a write operation, it is first recorded in the edit log.
 The NameNode also has an in-memory representation of the file-system metadata, which it updates after the edit log has been modified
 Secondary NameNode is used to produce checkpoints of the primary's in-memory file-system metadata
Check Pointing Process
 Secondary NameNode asks the primary to roll its edits file,
so new edits go to a new file.
 Secondary NameNode retrieves the fsimage and edits files from the primary (using HTTP GET).
 Secondary NameNode loads fsimage into memory, applies
each operation from edits, then creates a new consolidated
fsimage file.
 Secondary NameNode sends the new fsimage back to the
primary (using HTTP POST).
 Primary replaces the old fsimage with the new one and updates the fstime file to record the checkpoint time.
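Checkpointing normally runs on a schedule, but it can also be triggered by hand for testing; the sketch below assumes a Hadoop 1.x installation where the Secondary NameNode accepts a checkpoint flag:
# Force an immediate checkpoint regardless of the configured checkpoint period/size
hadoop secondarynamenode -checkpoint force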
Directory Structure
NameNode (On NameNode only)
${dfs.name.dir}/current/VERSION
/edits
/fsimage
/fstime
Secondary NameNode (On SecNameNode Only)
${fs.checkpoint.dir}/current/VERSION
/edits
/fsimage
/fstime
/previous.checkpoint/VERSION
/edits
/fsimage
/fstime
DataNode (On all DataNodes)
${dfs.data.dir}/current/VERSION
/blk_<id_1>
/blk_<id_1>.meta
/blk_<id_2>
/blk_<id_2>.meta
/...
/blk_<id_64>
/blk_<id_64>.meta
/subdir0/
/subdir1/
/...
Block count for a directory is controlled by the dfs.datanode.numblocks property (the DataNode creates a new subdirectory once a directory fills up to this limit)
Safe Mode
 On start-up, NameNode loads its image file (fsimage) into memory and applies the edits from the edit
log (edits).
 It performs the checkpointing process itself, without recourse to the Secondary NameNode.
 During this time the NameNode runs in safe mode (it offers only a read-only view of the file system to clients)
 The locations of blocks in the system are not persisted by the NameNode - this information resides with
the DataNodes, in the form of a list of the blocks it is storing.
 Safe mode is needed to give the DataNodes time to check in to the NameNode with their block lists
 Safe mode is exited when the minimal replication condition is reached, plus an extension time of 30
seconds.
HDFS Safe Mode
SafeMode Properties – configure safe mode as per cluster recommendations:
dfs.replication.min (default 1) – minimum number of replicas required for a file write to succeed
dfs.safemode.threshold.pct (default 0.999) – minimum portion of blocks satisfying the minimum replication before the NN leaves safe mode; a value of 0 forces the NN not to start in safe mode, while a value greater than 1 keeps the NN in safe mode permanently
dfs.safemode.extension (default 30,000) – extension time in milliseconds before the NN leaves safe mode
SafeMode Admin Commands – options of the dfsadmin command: hadoop dfsadmin -safemode <option>
get : report the NameNode safe mode status    enter : NameNode enters safe mode
wait : wait for the NameNode to exit safe mode    leave : NameNode leaves safe mode
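A minimal sketch of the admin command in use:
# Check the current safe mode status, then leave safe mode manually
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave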
HDFS Trash – Recycle Bin
When a file is deleted by a user, it is not immediately removed from HDFS; HDFS first moves it into the /trash directory.
A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the
file from the HDFS namespace.
Undelete a file: navigate to the /trash directory and retrieve the file using the mv command.
File : core-site.xml
Property : fs.trash.interval
Description : Number of minutes after which the checkpoint gets deleted.
File : core-site.xml
Property : fs.trash.checkpoint.interval
Description : Number of minutes between trash checkpoints. Should be smaller or equal to fs.trash.interval.
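A sketch of the delete/undelete round trip, assuming trash is enabled (fs.trash.interval > 0); the exact trash path varies by Hadoop version (newer releases keep it under the user's home directory), and the paths below are placeholders:
hdfs dfs -rm /user/demo/sample.log
# restore it before the trash interval expires
hdfs dfs -mv /user/demo/.Trash/Current/user/demo/sample.log /user/demo/sample.log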
HDFS Quotas
Name Quota - a hard limit on the number of file and directory names in the tree rooted at that directory.
Space Quota - a hard limit on the number of bytes used by files in the tree rooted at that directory.
Reporting Quota - count command of the HDFS shell reports quota values and the current count of names and
bytes in use. With the -q option, also report the name quota value set for each directory, the available name quota
remaining, the space quota value set, and the available space quota remaining.
fs -count -q <directory>..
dfsadmin -setQuota <N> <directory>... Set the name quota to be N for each directory.
dfsadmin -clrQuota <directory>... Remove any name quota for each directory.
dfsadmin -setSpaceQuota <N> <directory>... Set the space quota to be N bytes for each directory.
dfsadmin -clrSpaceQuota <directory>... Remove any space quota for each directory.
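A short usage sketch with a placeholder directory:
# Allow at most 10,000 names and 10 GB of space under /user/demo, then report the quota usage
hadoop dfsadmin -setQuota 10000 /user/demo
hadoop dfsadmin -setSpaceQuota 10g /user/demo
hadoop fs -count -q /user/demo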
FS Shell – Some Basic Commands
chgrp - Change group association of files.
Usage: hdfs dfs -chgrp [-R] GROUP URI [URI …]
chmod - Change the permissions of files.
Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI …]
chown - Change the owner of files.
Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ]
du - Displays sizes of files and directories
Usage: hdfs dfs -du [-s] [-h] URI [URI …]
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.
The -h option will format file sizes in a "human-readable" fashion.
dus - Displays a summary of file lengths
Usage: hdfs dfs -dus <args>
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-
common/FileSystemShell.html
http://hadoop.apache.org/docs/r1.2.1/file_system_shell.html
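A short usage sketch combining the commands above; the owner, group and paths are placeholders:
# Hand a project tree to the analytics group, restrict access, then check its size
hdfs dfs -chown -R hduser:analytics /user/demo/project
hdfs dfs -chmod -R 750 /user/demo/project
hdfs dfs -du -s -h /user/demo/project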
DfsAdmin Command
 bin/hadoop dfsadmin [Generic Options] [Command Options]
-safemode enter | leave | get | wait Safe mode maintenance command. Safe mode can also be entered manually, but then it can only be
turned off manually as well.
-report Reports basic filesystem information and statistics.
-refreshNodes Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the
Namenode and those that should be decommissioned or recommissioned.
-metasave filename Save Namenode's primary data structures to filename in the directory specified by hadoop.log.dir
property. filename is overwritten if it exists. filename will contain one line for each of the following
1. Datanodes heart beating with Namenode
2. Blocks waiting to be replicated
3. Blocks currently being replicated
4. Blocks waiting to be deleted
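Typical invocations (the metasave output lands in the directory named by hadoop.log.dir on the NameNode):
# Cluster-wide capacity, DataNode and replication statistics
hadoop dfsadmin -report
# Dump the NameNode's primary data structures to a file for inspection
hadoop dfsadmin -metasave meta.log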
References
 Hadoop Apache Website – http://hadoop.apache.org/
 Hadoop: The Definitive Guide, 3rd Edition, Tom White, O'Reilly