Page1 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Operating and Supporting Apache HBase -
Best Practices and Improvements
Tanvir Kherada (tkherada@hortonworks.com)
Enis Soztutar (enis@hortonworks.com)
Page2 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
About Us
Tanvir Kherada
Primary SME for HBase / Phoenix
Technical team lead @Hortonworks
support
Enis Soztutar
Committer and PMC member in Apache
HBase, Phoenix, and Hadoop
HBase/Phoenix dev @Hortonworks
Page3 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Outline
 Tools to debug: HBase UI and HBCK
 Top 3 categories of issues
 SmartSense
 Improvements for better operability: Metrics and Alerts
Page4 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Tools
Page5 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase UI
 Load Distribution
 Debug Dump
 Runtime Configuration
 RPC Tasks
Page6 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase UI – Load Distribution
 Requests Per Second
 Read Request Count per RegionServer
 Write Request Count per RegionServer
Page7 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase UI – Debug Dump contains Thread Dumps
Page8 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase UI – Runtime Configurations
 Runtime configurations can be reviewed from UI
 Consolidated view of every relevant configuration.
Page9 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase UI – Tasks
 Tasks can be reviewed and monitored
 Like major compactions. RPC calls
Page10 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBCK
 Covered extensively later while we discuss inconsistencies
Page11 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Regionserver Stability Issues
Page12 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region Server Crashes – JVM Pauses
 HBase’s high availability relies on ZooKeeper, which monitors every RS and the HBase Master
 ZooKeeper triggers a shutdown of an RS if the RS does not respond to a heartbeat check within the
session timeout
 Extended JVM pauses on an RS can make it appear unresponsive, causing ZK to trigger a shutdown
[Diagram: ZK sends a heartbeat check and the RS replies “I am OK”; later, with the RS stuck in GC, the heartbeat check gets no response and a shutdown is issued]
Page13 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region Server Crashes - Garbage Collection Pause
 What do we see in RS Logs?
 2016-06-13 18:13:20,533 WARN regionserver/b-bdata-r07f4-
prod.phx2.symcpe.net/100.80.148.53:60020 util.Sleeper: We slept 82136ms instead of
3000ms, this is likely due to a long garbage collecting pause and it's usually bad
 2016-06-13 18:13:20,533 WARN JvmPauseMonitor util.JvmPauseMonitor: Detected pause in
JVM or host machine (eg GC): pause of approximately 79669ms
GC pool 'ParNew' had collection(s): count=2 time=65742ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=14253ms
Page14 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region Server Crashes - Garbage Collection Pause
 GC Tuning Recommendation for CMS and YoungGen.
– hbase-env.sh
-Xmx32g
-Xms32g
-Xmn2500m
-XX:PermSize=128m (eliminated in Java 8)
-XX:MaxPermSize=128m (eliminated in Java 8)
-XX:SurvivorRatio=4
-XX:CMSInitiatingOccupancyFraction=50
-XX:+UseCMSInitiatingOccupancyOnly
 Also test G1 for your use case.
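A minimal hbase-env.sh sketch applying the flags above via HBASE_REGIONSERVER_OPTS and enabling GC logging; the heap sizes and log path are illustrative assumptions, and the CMS flag is included in case it is not already set via HBASE_OPTS:
# hbase-env.sh (sketch) - CMS / YoungGen settings from the slide above
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms32g -Xmx32g -Xmn2500m \
  -XX:SurvivorRatio=4 \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=50 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"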
Page15 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
RS Crashes - Non-GC JVM Pause: Disk I/O
 GC logs show unusual behavior
 What we’ve seen is a large delta between user time and real (wall-clock) time in the GC logs.
2015-07-06T23:55:10.642-0700: 7271.224: [GC2015-07-06T23:55:41.688-
0700: 7302.270: [ParNew: 420401K->1077K(471872K), 0.0347330 secs]
1066189K->646865K(32453440K), 31.0811340 secs] [Times: user=0.77
sys=0.01, real=31.08 secs]
 This is that classic head-scratching moment.
Page16 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
RS Crashes - Non-GC JVM Pause: Disk I/O
 With no further leads in the RS logs and GC logs, we focus on system-level information.
 /var/log/messages provides significant leads
 Right when we see that unusual delta between user and real time in the GC logs, we see the
following in the system logs
kernel: sd 0:0:0:0: attempting task abort! scmd(ffff8809f5b7ddc0)
kernel: sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 17 0b 1c c8 00 00 08 00
kernel: scsi target0:0:0: handle(0x0007), sas_address(0x4433221102000000), phy(2)
kernel: scsi target0:0:0: enclosure_logical_id(0x500605b009941140), slot(0)
kernel: sd 0:0:0:0: task abort: SUCCESS scmd(ffff8809f5b7ddc0)
 Enabling DEBUG logging at the disk driver level clearly showed 30-second pauses during write
operations.
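A shell sketch of how we correlated the two, assuming default GC log and syslog locations (the paths are illustrative):
# GC events where real (wall-clock) time is far larger than user time
grep "Times: user=" /var/log/hbase/gc-regionserver.log | awk -F'real=' '$2+0 > 10'
# Kernel messages around the same timestamps (SCSI task aborts, resets, etc.)
grep -iE "task abort|scsi|sd 0:" /var/log/messages | tail -50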
Page17 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
RS Crashes - Non-GC JVM Pause: CPU Halts
 RS logs show a long JVM pause
 However, the log explicitly states that it is a non-GC pause
2016-02-11 04:59:33,859 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected
pause in JVM or host machine (eg GC): pause of approximately 140009ms
No GCs detected
2016-02-11 04:59:33,861 WARN [regionserver60020.compactionChecker]
util.Sleeper: We slept 140482ms instead of
 We look at other component logs on the same machine.
 DataNode logs show a break in activity around the same time frame.
 We don’t see exceptions in the DN logs, but there is certainly a break in log continuity.
Page18 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
RS Crashes - Non-GC JVM Pause: CPU Halts
 Start looking at system-level information
 The kernel ring buffer (the dmesg command) provides leads on CPU pauses
INFO: task java:100759 blocked for more than 120 seconds.
Not tainted 2.6.32-431.el6.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
java D 000000000000001b 0 100759 100731 0x00000080
 This was identified as a kernel-level Red Hat bug
 Root cause: the hpsa driver can block a CPU’s workqueue for up to a 10-minute timeout while it waits
for the controller’s acknowledgment. When this happens the workqueue stalls, and since the tty work
ended up on the same CPU workqueue, we get the hung task
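A quick shell sketch of the checks that surfaced this; the commands are standard Linux tools, and hpsa is the driver named above:
dmesg | grep -iA3 "blocked for more than"   # kernel hung-task reports
uname -r                                    # kernel version, to compare against the Red Hat advisory
lsmod | grep hpsa                           # confirm the hpsa storage driver is in use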
Page19 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Mitigate JVM Pauses
 Mitigate Crashes from JVM Pauses?
– Extend the ZK tick time in zoo.cfg
– Extend zookeeper.session.timeout in hbase-site.xml to match (ZK caps session timeouts at 20× tick time by default)
How Much?
$ cat hbase-hbase*.log | grep -i pause
97903ms
102732ms
106956ms
112824ms
125318ms
165652ms – Biggest Pause so Far
Consider – 180000ms
Not my favorite workaround.
Cons?
• ZK will now wait an extended time before issuing a shutdown.
• Makes HBase fall short of its high-availability promises.
• Make every effort to debug and resolve the underlying pauses instead.
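A sketch of sizing the timeout from the pauses actually observed (log pattern as on the slide); the corresponding settings live in zoo.cfg (tickTime / maxSessionTimeout) and hbase-site.xml (zookeeper.session.timeout):
# Largest JVM pauses reported by the JvmPauseMonitor so far
grep -ioE "approximately [0-9]+ms" /var/log/hbase/hbase-hbase*.log | grep -oE "[0-9]+" | sort -n | tail -5
# If the worst pause is ~165s, a 180000ms zookeeper.session.timeout gives headroom,
# but ZK must allow it: maxSessionTimeout defaults to 20 x tickTime.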
Page20 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Read Write Performance
Page21 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Write Performance
• Writes to the WAL cap your write performance
• They rely on the throughput of the DataNode pipeline
• Writes to the MemStore are effectively instantaneous
• Writes build up in the RS heap
• And are eventually flushed to disk
Page22 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Write Performance
 How do you debug write-performance issues in very large clusters?
– Thanks to the HBase community, starting with HBase 0.99 the DN pipeline is printed for slow HLog syncs.
– For HLog syncs slower than hbase.regionserver.hlog.slowsync.ms, the DN pipeline is now printed in the RS
logs.
2016-06-23 05:01:06,972 INFO [sync.2] wal.FSHLog: Slow sync cost: 131006 ms, current pipeline:
[DatanodeInfoWithStorage[10.189.115.117:50010,DS-c9d2a4b4-710b-4b3a-bd9d-93e8ba443f60,DISK],
DatanodeInfoWithStorage[10.189.115.121:50010,DS-7b7ba04c-f654-4a50-ad3b-16116a593d37,DISK],
DatanodeInfoWithStorage[10.189.111.128:50010,DS-8abb86da-84ac-413f-80a3-56ea7db1cb59,DISK]]
 Tracking a slow DN prior to HBase 0.99 was a very convoluted process.
– It starts with tracking which RS has its RPC call queue length backing up
– Identify the most recent WAL file associated with that RS
– Run hadoop fsck -files -blocks -locations <WAL file>
– Identify the DNs hosting blocks for that WAL file
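A shell sketch of both approaches; the WAL directory layout shown follows the HDP default root dir and is an assumption for your cluster:
# Post-0.99: grep the RS log for the slow-sync message shown above
grep "Slow sync cost" /var/log/hbase/hbase-hbase*.log

# Pre-0.99: find the newest WAL for the suspect RS and map its blocks to DataNodes
RS_DIR=/apps/hbase/data/WALs/rs-host.example.com,60020,1466000000000   # substitute your RS directory
LATEST_WAL=$(hdfs dfs -ls "$RS_DIR" | sort -k6,7 | tail -1 | awk '{print $NF}')
hdfs fsck "$LATEST_WAL" -files -blocks -locations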
Page23 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Read Performance
 HBase provides block caching, which can speed up subsequent scans
 However, the first read has to follow the full read path: HDFS first, and eventually the disk.
 Read performance therefore ultimately depends on how fast the disks respond.
Best Practices to Improve Read Performance
 Major Compactions - once a day during a low-traffic hour.
 Balanced Cluster – even distribution of regions across all RegionServers
Page24 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Read Performance – Best Practices
 Major Compaction
– Consolidates multiple store files into one
– Drastically improves block locality to avoid remote calls to read data.
– Review Block Locality Metrics in RegionServer UI
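A sketch of kicking off a major compaction from the HBase shell and spot-checking locality afterwards; the table name and the RegionServer info port (60030, as on the HDP releases in these logs) are assumptions:
echo "major_compact 'mytable'" | hbase shell            # runs asynchronously on the RS side
curl -s http://rs-host.example.com:60030/jmx | grep -i percentFilesLocal   # block locality per RS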
Page25 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Read Performance – Best Practices
 Balanced Cluster
– Even distribution of regions across all RegionServers
– The balancer, if turned on, runs every 5 minutes and keeps the cluster balanced
– It prevents any one RegionServer from becoming the most sought-after RegionServer, preventing hot-spotting
 Other Configs
– Enable HDFS short-circuit reads – turned on by default in the HDP distribution.
– Client scanner cache, hbase.client.scanner.caching – set to 100 in HDP by default
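A shell sketch of the related knobs; the commands are standard HBase shell commands and the values are just examples:
echo "balance_switch true" | hbase shell                 # enable the balancer (prints the previous state)
echo "balancer" | hbase shell                            # trigger an immediate balancing run
echo "scan 'mytable', {CACHING => 100, LIMIT => 10}" | hbase shell   # scanner caching can also be set per scan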
Page26 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
Page27 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 HBase stores region state in multiple places: META, ZooKeeper, HDFS, and the Master’s in-memory state
 An unhandled situation in any of META, ZK, HDFS, or the Master throws the entire system out of sync,
causing inconsistencies
 Region splits are an extremely complex, orchestrated workflow. They involve interaction with all of the
above-mentioned components and leave very little room for error.
 We’ve seen the most inconsistencies coming out of region splits.
– Lingering reference files
– Catalog Janitor prematurely deleting the parent store files (HBASE-13331)
Page28 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Symptoms
[Diagram: client request → HBase responds “Region Not Serving” → client retries until it times out]
Page29 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Tools to identify and resolve inconsistencies
HBCK – a great tool to identify inconsistencies
• Can be executed from any HBase client machine
• Confirms whether HBase is healthy or has inconsistencies
• Provides fix options to resolve inconsistencies
HBCK is not a silver bullet
• Deep dive into RS logs
• Review znodes
• Check the HBase Master UI
• Won’t run if the Master has not initialized
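A minimal read-only health-check sketch (both options are standard hbck flags):
hbase hbck               # consistency report; ends with Status: OK or Status: INCONSISTENT
hbase hbck -details      # per-region detail, worth reading before choosing any -fix* option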
Page30 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Some of the inconsistencies we see
– ERROR: Region { meta => xxx,x1A,1440904364342.ffdece0f3fc5323055b56b4d79e99e16., hdfs
=> null, deployed => } found in META, but not in HDFS or deployed on any region server
– This is broken META, even though the message says the files are missing on HDFS.
– hbase hbck -fixMeta
Page31 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Some of the inconsistencies we see
– ERROR: There is a hole in the region chain between X and Y. You need to create a new .regioninfo
and region dir in hdfs to plug the hole.
– This is broken HDFS: the expected region directory is missing
– hbase hbck -fixHdfsOrphans -fixHdfsHoles
Page32 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Some of the inconsistencies we see
– ERROR: Found lingering reference file
hdfs://namenode.example.com:8020/apps/hbase/data/XXX/f1d15a5a44f966f3f6ef1db4bd2b1874/a/
d730de20dcf148939c683bb20ed1acad.5dedd121a18d32879460713467db8736
– Region Splits did not complete successfully leaving lingering reference files
– hbase hbck -fixReferenceFiles
Page33 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Some of the inconsistencies we see
– HBCK reports 0 inconsistencies after running the fixes
– However, the HBase Master UI is still reporting regions in transition (RIT)
– Restart the HBase Master to resolve this
Page34 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
 Not Always Straight Forward
– ERROR: Region { meta => null, hdfs =>
hdfs://xxx/hbase/yyy/00e2eed3bd0c3e8993fb2e130dbaa9b8, deployed => } on HDFS, but not listed
in META or deployed on any region server
– An inconsistency of this nature needs a deeper dive into the other inconsistencies reported
– It also needs an assessment of the logs
Page35 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Inconsistencies
HBase HBCK Best Practices
• Redirect output to a file: hbase hbck >> /tmp/hbck.txt
• On larger clusters, run table-specific hbck fixes
• hbase hbck -fixMeta mytable
• Avoid running hbck with the -repair flag.
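A small triage sketch building on the practices above; the grep patterns assume hbck's usual ERROR-prefixed output:
hbase hbck > /tmp/hbck.txt 2>&1
grep -c "ERROR:" /tmp/hbck.txt                                     # how many inconsistencies?
grep "ERROR:" /tmp/hbck.txt | cut -c1-60 | sort | uniq -c | sort -rn   # rough grouping by error type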
Page36 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
SmartSense
Page37 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
SmartSense
 Great at detecting setup/config issues proactively
– Ulimits
– Dedicated ZK drives
– Transparent Huge Pages
– Swappiness
 This is common knowledge; however, if you don’t have it set up, SmartSense will prompt you to
resolve it
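A shell sketch of checking (and, commented out, fixing) the settings above; the target values are common recommendations rather than SmartSense output:
ulimit -n; ulimit -u                                  # open-file / process limits for the hbase user
cat /sys/kernel/mm/transparent_hugepage/enabled       # want [never] on HBase nodes
cat /proc/sys/vm/swappiness                           # want a low value, e.g. 0 or 1
# Typical fixes (as root; persist via limits.conf, sysctl.conf, rc.local):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# sysctl -w vm.swappiness=1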
Page38 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
SmartSense
Page39 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Improvements in Ops and Stability
Page40 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Metrics
 You MUST have a metric solution to successfully operate HBase cluster(s)
– GC Times, pause times
– Gets / Puts, Scans per second
– Memstore and Block cache (use memory!)
– Queues (RPC, flush, compaction)
– Replication (lag, queue, etc)
– Load Distribution, per-server view
– Look at HDFS and system (CPU, disk) metrics as well
 Use OpenTSDB if nothing else is available
 New versions keep adding more and more metrics
– Pause times, more master metrics, per-table metrics, FS latencies, etc
 How to choose important metrics out of the hundreds available?
 The RegionServer and Master UIs are your friends
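A sketch of pulling a few of these metrics straight from the RegionServer JMX servlet; the info port and bean name follow the HBase versions in these logs and are assumptions for yours:
curl -s "http://rs-host.example.com:60030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server" | \
  grep -E '"(readRequestCount|writeRequestCount|blockCacheHitCount|blockCacheMissCount|memStoreSize|flushQueueLength|compactionQueueLength)"'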
Page41 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Grafana + AMS
<insert grafana>
Page42 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Other Improvements
 Canary Tool
– Monitor per-RegionServer / per-region, do actual reads and writes, create alerts (see the sketch after this list)
 Procedure V2 based assignments
– Robust cluster ops (HBase-2.0)
– Eliminate states in multiple places
– Less manual intervention will be needed
 Bigger Heaps
– Reduce garbage being generated
– More offheap stuff (eliminate buffer copy, ipc buffers, memstore, cells, etc)
 Graceful handling of peak loads
– RPC scheduling
– client backoff
 Rolling Upgradable, no downtime
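A minimal canary sketch; the class-based invocation works on the HBase 1.x line and the table name is a placeholder:
hbase org.apache.hadoop.hbase.tool.Canary mytable            # read a row from every region of 'mytable'
hbase org.apache.hadoop.hbase.tool.Canary -regionserver      # per-RegionServer mode: one region per RS
# add -treatFailureAsError if you want read failures reflected in the exit code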
Page43 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Thanks. Q & A

