ORC DEEP DIVE
Owen O’Malley
omalley@apache.org
January 2020
@owen_omalley
OVERVIEW
REQUIREMENTS
• Files had to be completely self-describing
• Schema
• File version
• Tight compression ⇒ Run Length Encoding (RLE) &
compression
• Column projection ⇒ segregate column data
• Predicate pushdown ⇒ understand & index user’s types
• Files had to be easy & fast to divide
• Compatible with write-once file systems
FILE STRUCTURE
• The file footer contains:
• Metadata – schema, file statistics
• Stripe information – metadata and location of stripes
• Postscript with the compression, buffer size, & file version
• ORC file data is divided into stripes.
• Stripes are self-contained sets of rows organized by columns.
• Stripes are the smallest unit of work for tasks.
• Default is ~64MB, but often configured larger.
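As an illustration, here is a minimal Java sketch that opens a file and prints the footer metadata described above (the path and class name are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.Reader;
  import org.apache.orc.StripeInformation;

  public class FooterInfo {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Opening the reader only fetches the file tail, not the data.
      Reader reader = OrcFile.createReader(new Path("/tmp/example.orc"),
          OrcFile.readerOptions(conf));
      System.out.println("schema: " + reader.getSchema());
      System.out.println("rows: " + reader.getNumberOfRows());
      System.out.println("compression: " + reader.getCompressionKind());
      for (StripeInformation stripe : reader.getStripes()) {
        System.out.println("stripe at " + stripe.getOffset() +
            " with " + stripe.getNumberOfRows() + " rows");
      }
    }
  }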
STRIPE STRUCTURE
• Within a stripe, the metadata is in the stripe footer:
• List of streams
• Column encoding information (e.g. direct or dictionary)
• Columns are written as a set of streams. There are 3 kinds:
• Index streams
• Data streams
• Dictionary streams
FILE STRUCTURE (diagram slide)
READ PATH
• The Reader reads the last 16KB of the file, fetching more as needed
• The RowReader reads
• Stripe footer
• Required streams
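In the Java library these two layers surface as Reader and RecordReader; a minimal read loop might look like this sketch (the path is a placeholder, and the cast assumes the first column is a long):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
  import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
  import org.apache.orc.OrcFile;
  import org.apache.orc.Reader;
  import org.apache.orc.RecordReader;

  public class ReadPath {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Reader reader = OrcFile.createReader(new Path("/tmp/example.orc"),
          OrcFile.readerOptions(conf));
      // rows() creates the RowReader, which reads stripe footers and
      // the required streams as it advances through the stripes.
      RecordReader rows = reader.rows();
      VectorizedRowBatch batch = reader.getSchema().createRowBatch();
      while (rows.nextBatch(batch)) {
        // assumes the first column is a long; adjust for your schema
        LongColumnVector col0 = (LongColumnVector) batch.cols[0];
        for (int r = 0; r < batch.size; ++r) {
          System.out.println(col0.vector[r]);
        }
      }
      rows.close();
    }
  }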
STREAMS
• Each stream is an independent sequence of bytes
• Serialization into streams depends on the column type & encoding
• Optional pipeline stages:
• Run Length Encoding (RLE) – first pass integer
compression
• Generic compression – Zlib, Snappy, LZO, Zstd
• Encryption – AES/CTR
DATA ENCODING
COMPOUND TYPES
• Compound types are serialized as trees of columns.
• struct, list, map, uniontype all have child columns
• Types are numbered in a preorder traversal
• The column reading classes are called TreeReaders
• Example schema: a: int, b: map<string, struct<c: string, d: double>>, e: timestamp
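A small sketch of the preorder numbering, using the Java TypeDescription API on the example schema above:

  import java.util.List;
  import org.apache.orc.TypeDescription;

  public class ColumnIds {
    static void print(TypeDescription type) {
      // getId() returns the preorder column number.
      System.out.println(type.getId() + ": " + type.getCategory());
      List<TypeDescription> children = type.getChildren();
      if (children != null) {
        for (TypeDescription child : children) {
          print(child);
        }
      }
    }

    public static void main(String[] args) {
      // Ids are assigned in preorder: 0 = root struct, 1 = a, 2 = b,
      // 3 = b's key, 4 = b's value struct, 5 = c, 6 = d, 7 = e
      TypeDescription schema = TypeDescription.fromString(
          "struct<a:int,b:map<string,struct<c:string,d:double>>,e:timestamp>");
      print(schema);
    }
  }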
ENCODING COLUMNS
• To interpret a stream, you need three pieces of information:
• Column type
• Column encoding (direct, dictionary)
• Stream kind (present, data, length, etc.)
• Any column that contains nulls will have a present stream
• Serialized using a boolean RLE
• Integer columns are serialized with a data stream using integer RLE
ENCODING COLUMNS
• Binary columns are serialized with:
• Length stream of integer RLE
• Data stream of raw sequence of bytes
• String columns may be direct or dictionary encoded
• Direct looks like a binary column, but dictionary is different (see the sketch below):
• Dictionary_data is the raw sequence of dictionary bytes
• Length is an integer RLE stream of the dictionary entry lengths
• Data is an integer RLE stream of indexes into the dictionary
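A toy illustration of how dictionary encoding splits values across the three streams (plain Java, not the actual writer code; the real writer may order its dictionary differently):

  import java.util.ArrayList;
  import java.util.LinkedHashMap;
  import java.util.List;
  import java.util.Map;

  public class DictionarySketch {
    public static void main(String[] args) {
      String[] values = {"hive", "orc", "hive", "orc", "spark"};
      Map<String, Integer> dictionary = new LinkedHashMap<>();
      List<Integer> data = new ArrayList<>();
      for (String v : values) {
        // assign each distinct value the next dictionary index
        data.add(dictionary.computeIfAbsent(v, k -> dictionary.size()));
      }
      StringBuilder dictionaryData = new StringBuilder();
      List<Integer> lengths = new ArrayList<>();
      for (String v : dictionary.keySet()) {
        dictionaryData.append(v);    // dictionary_data stream
        lengths.add(v.length());     // length stream
      }
      System.out.println("dictionary_data: " + dictionaryData); // hiveorcspark
      System.out.println("length: " + lengths);                 // [4, 3, 5]
      System.out.println("data: " + data);                      // [0, 1, 0, 1, 2]
    }
  }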
ENCODING COLUMNS
• Lists and maps record the number of child elements
• Length is an integer RLE stream
• Structs only have the present stream
• Timestamps need nanosecond resolution (ouch!)
• Data is an integer RLE of seconds from 1 Jan 2015
• Secondary is an integer RLE of nanoseconds with trailing zeros suppressed
RUN LENGTH ENCODING
• Goal is to get some cheap quick compression
• Handles repeating/incrementing values
• Handles integer byte packing
• Two versions (version 1 is sketched below):
• Version 1 – relatively simple repeat/literal encoding
• Version 2 – complex encoding with 4 variants
• A column encoding of *_V2 means use RLE version 2
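A simplified sketch of the version 1 byte layout for unsigned integers (signed types additionally zigzag-encode the values, which this skips):

  import java.io.ByteArrayOutputStream;

  public class RleV1Sketch {
    // Simplified RLE v1 layout:
    //   header 0..127   => a run of (header + 3) values: base + i * delta
    //   header -1..-128 => (-header) literal varint values follow
    static void writeRun(ByteArrayOutputStream out, int length,
                         int delta, long base) {
      out.write(length - 3);        // run header: length is 3..130
      out.write(delta & 0xff);      // per-value delta, -128..127
      writeVarint(out, base);       // base value as a base-128 varint
    }

    static void writeLiterals(ByteArrayOutputStream out, long[] values) {
      out.write(-values.length & 0xff); // negative header: literal count
      for (long v : values) {
        writeVarint(out, v);
      }
    }

    // LEB128-style varint, 7 bits per byte, high bit = continuation.
    static void writeVarint(ByteArrayOutputStream out, long value) {
      while ((value & ~0x7fL) != 0) {
        out.write((int) (value & 0x7f) | 0x80);
        value >>>= 7;
      }
      out.write((int) value);
    }

    public static void main(String[] args) {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      writeRun(out, 100, 0, 7);                  // 100 copies of 7 in 3 bytes
      writeLiterals(out, new long[]{2, 300, 5}); // irregular values as-is
      System.out.println(out.size() + " bytes total");
    }
  }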
COMPRESSION & INDEXES
ROW PRUNING
• Three levels of indexing/row pruning
• File – uses file statistics in file footer
• Stripe – uses stripe statistics before file footer
• Row group (default of 10k rows) – uses the index stream
• The index stream for each column includes, for each row group:
• Column statistics (min, max, count, sum)
• The start positions of each stream
SEARCH ARGUMENTS
• Engines can pass Search Arguments (SArgs) to the
RowReader.
• Limited set of operations (=, <=>, <, <=, in, between, is null)
• Compare one column to literal(s)
• Can only eliminate entire row groups, stripes, or files.
• Engine must still filter the individual rows afterwards
• For Hive, ensure hive.optimize.index.filter is true.
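A minimal sketch of building and passing a SearchArgument through the Java API, reusing the l_orderkey predicate from the row pruning example later in this deck (the path is a placeholder):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
  import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
  import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;
  import org.apache.orc.OrcFile;
  import org.apache.orc.Reader;
  import org.apache.orc.RecordReader;

  public class SargExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Reader reader = OrcFile.createReader(new Path("/tmp/lineitem.orc"),
          OrcFile.readerOptions(conf));
      SearchArgument sarg = SearchArgumentFactory.newBuilder()
          .equals("l_orderkey", PredicateLeaf.Type.LONG, 1212000001L)
          .build();
      // The SArg only prunes row groups, stripes, or files; the caller
      // must still check each row that comes back.
      RecordReader rows = reader.rows(
          reader.options().searchArgument(sarg, new String[]{"l_orderkey"}));
      rows.close();
    }
  }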
COMPRESSION
• All of the generic compression is done in chunks
• Codec is reinitialized at start of chunk
• Each chunk is compressed separately
• Each uncompressed chunk is at most the buffer size
• Each chunk has a 3 byte header giving:
• The compressed size of the chunk
• Whether it holds the original or compressed bytes
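A sketch of decoding that header, following the published format specification (the three bytes hold chunkLength * 2 + isOriginal, little endian):

  public class ChunkHeader {
    public static void main(String[] args) {
      byte[] header = {0x0b, 0x00, 0x00};    // example: 11 = 5 * 2 + 1
      int value = (header[0] & 0xff)
          | (header[1] & 0xff) << 8
          | (header[2] & 0xff) << 16;
      boolean isOriginal = (value & 1) != 0; // low bit: stored uncompressed
      int chunkLength = value >>> 1;         // remaining 23 bits: length
      // prints original=true length=5
      System.out.println("original=" + isOriginal + " length=" + chunkLength);
    }
  }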
INDEXES
• Wanted ability to seek to each row group
• Allows fine grain seeking & row pruning
• Could have flushed the stream compression pipeline
• Would have dramatically lowered compression
• Instead treat compression & RLE as gray boxes
• Use our knowledge of compression & RLE
• Always start fresh at the beginning of a chunk or run
INDEX POSITIONS
• Records the information needed to seek to a given row in all of a column’s streams
• Includes:
• C – compressed bytes
• U – uncompressed bytes
• V – RLE values
• (Diagram: the C, U, & V positions jump to row group 4)
BLOOM FILTERS
• For use cases where you need to find particular values
• Sorting by that column allows min/max filtering
• But you can only sort on one column effectively
• Bloom filters are probabilistic data structures
• Only useful for equality, not less than or greater than
• Need ~10 bits/distinct value ⇒ opt in (see the sketch below)
• ORC uses a bloom_filter_utf8 stream to record a bloom filter per row group
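A sketch of opting in from the Java writer (the column name and false positive rate are illustrative); from SQL the same knobs are the orc.bloom.filter.columns and orc.bloom.filter.fpp table properties:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class BloomFilterWriter {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      TypeDescription schema =
          TypeDescription.fromString("struct<l_orderkey:bigint>");
      Writer writer = OrcFile.createWriter(new Path("/tmp/lineitem.orc"),
          OrcFile.writerOptions(conf)
              .setSchema(schema)
              .bloomFilterColumns("l_orderkey") // comma-separated column list
              .bloomFilterFpp(0.05));           // false positive probability
      // ... add row batches ...
      writer.close();
    }
  }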
ROW PRUNING EXAMPLE
• TPC-H at scale 1000:
  from tpch1000.lineitem where l_orderkey = 1212000001;

  Index     Rows Read       Time
  Nothing   5,999,989,709   74 sec
  Min/Max   540,000         4.5 sec
  Bloom     10,000          1.3 sec
VERSIONING
COMPATIBILITY
• Within a file version, old readers must be able to read all files.
• A few exceptions (e.g. new codecs, types)
• Version 0 (from Hive 0.11)
• Only RLE V1 & string dictionary encoding
• Version 1 (from Hive 0.12 forward)
• Version 2 (under development)
• The library includes the ability to write any file version (sketch below).
• Enables smooth upgrades across clusters
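A sketch of pinning the file version from the Java writer, for example to keep files readable by a cluster still on Hive 0.11:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class OldVersionWriter {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Writer writer = OrcFile.createWriter(new Path("/tmp/compat.orc"),
          OrcFile.writerOptions(conf)
              .setSchema(TypeDescription.fromString("struct<x:int>"))
              // write the Hive 0.11 file version for maximum compatibility
              .version(OrcFile.Version.V_0_11));
      writer.close();
    }
  }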
WRITER VERSION
• When fixes or feature additions are made to the
writer, we bump the writer version.
• Allows the reader to work around bugs, especially in the index
• Does not affect reader compatibility
• We should require that each minor version adds a new one.
• We also record which writer wrote the file:
• Java, C++, Presto, Go
EXAMPLE WORKAROUND FOR HIVE-8746
• Timestamps suck!
• ORC uses an epoch of 01-01-2015 00:00:00.
• Timestamp columns record the seconds offset from the epoch
• Unfortunately, the original code used the local time zone.
• If the reader and writer were in time zones with the same rules, it worked.
• The fix involved writing the writer’s time zone into the file.
• Forwards and backwards compatible
ADDITIONAL FEATURES
SCHEMA EVOLUTION
• User passes desired schema to RecordReader factory.
• The SchemaEvolution class maps between the file & reader schemas.
• The mapping can be positional or name based.
• Conversions based on legacy Hive behavior…
• The RecordReader uses the mapping to translate
• Choosing streams uses the file schema column ids
• Type translation is done by ConvertTreeReaderFactory.
• Adds an additional TreeReader that does conversion.
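A sketch of requesting a different reader schema through the Java API; here the reader asks for x as bigint even if the file wrote an int (names and path are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.Reader;
  import org.apache.orc.RecordReader;
  import org.apache.orc.TypeDescription;

  public class EvolvedRead {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Reader reader = OrcFile.createReader(new Path("/tmp/example.orc"),
          OrcFile.readerOptions(conf));
      // The desired schema; SchemaEvolution maps it onto the file schema
      // and inserts converting TreeReaders where the types differ.
      TypeDescription readerSchema =
          TypeDescription.fromString("struct<x:bigint>");
      RecordReader rows = reader.rows(reader.options().schema(readerSchema));
      rows.close();
    }
  }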
STRIPE CONCATENATION & FLUSH
• ORC has a special operator to concatenate files
• Requires consistent options & schema
• Concatenates stripes without reserialization
• ORC can flush the current contents, including a file footer, while still writing to the file.
• Writes a side file with the current offset of the file tail
• When the file closes, the intermediate file footers are ignored
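A sketch of the flush path, assuming the writeIntermediateFooter method on the Java Writer interface (check your ORC version for availability; it returns the offset of the flushed file tail):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class FlushExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Writer writer = OrcFile.createWriter(new Path("/tmp/stream.orc"),
          OrcFile.writerOptions(conf)
              .setSchema(TypeDescription.fromString("struct<x:int>")));
      // ... add row batches ...
      // Flush a readable file footer mid-stream; readers can use the
      // returned tail offset (also recorded in the side file) to read
      // the rows written so far.
      long tailOffset = writer.writeIntermediateFooter();
      System.out.println("flushed tail at " + tailOffset);
      writer.close(); // the final footer supersedes intermediate ones
    }
  }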
COLUMN ENCRYPTION
• Released in ORC 1.6
• Allows consistent column level access control across engines
• Writes two variants of data
• Encrypted original
• Unencrypted statically masked
• Each variant has its own streams & encodings
• Each column has a unique local key, which is encrypted by the KMS
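A sketch of enabling it from the Java writer, assuming the encrypt and masks writer options from ORC 1.6 (the key name and mask are illustrative, and a KMS must be configured for this to run):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class EncryptedWriter {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      TypeDescription schema =
          TypeDescription.fromString("struct<name:string,ssn:string>");
      Writer writer = OrcFile.createWriter(new Path("/tmp/secure.orc"),
          OrcFile.writerOptions(conf)
              .setSchema(schema)
              .encrypt("pii:ssn")     // encrypt ssn with the pii master key
              .masks("nullify:ssn")); // the unencrypted variant is nullified
      writer.close();
    }
  }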
OTHER DEVELOPER TOOLS
• Benchmarks
• Hive & Spark
• Avro, JSON, ORC, and Parquet
• Three data sets (taxi, sales, github)
• Docker
• Allows automated builds on all supported Linux variants
• The website source code lives alongside the C++ & Java code
USING ORC
WHICH VERSION IS IT?
Engine          Version       ORC Version
Hive            0.11 to 2.2   Hive ORC 0.11 to 2.2
Hive            2.3           ORC 1.3
Hive            3.0           ORC 1.4
Hive            3.1           ORC 1.5
Spark (hive)    *             Hive ORC 1.2
Spark (native)  2.3           ORC 1.4
Spark (native)  2.4 to 3.0    ORC 1.5
FROM SQL
• Hive:
• Add “stored as orc” to table definition
• Table properties override configuration for ORC
• Spark’s “spark.sql.orc.impl” controls the implementation:
• native – use ORC 1.5
• hive – use the ORC from Hive 1.2
FROM JAVA
• Use the ORC project rather than Hive’s ORC.
• Maven group id: org.apache.orc version: 1.6.2
• nohive classifier avoids interfering with Hive’s packages
• Two levels of access
• orc-core – Faster access, but uses Hive’s vectorized API
• orc-mapreduce – Row by row access, simpler OrcStruct API
• MapReduce API implements WritableComparable
• Can be shuffled
• Need to specify type information in the configuration for shuffle or output
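A minimal orc-core write sketch using the vectorized API (the path and schema are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
  import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class CoreWrite {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      TypeDescription schema = TypeDescription.fromString("struct<x:bigint>");
      Writer writer = OrcFile.createWriter(new Path("/tmp/write.orc"),
          OrcFile.writerOptions(conf).setSchema(schema));
      VectorizedRowBatch batch = schema.createRowBatch();
      LongColumnVector x = (LongColumnVector) batch.cols[0];
      for (long r = 0; r < 10_000; ++r) {
        x.vector[batch.size++] = r;
        if (batch.size == batch.getMaxSize()) { // flush a full batch
          writer.addRowBatch(batch);
          batch.reset();
        }
      }
      if (batch.size != 0) {                    // flush the partial batch
        writer.addRowBatch(batch);
      }
      writer.close();
    }
  }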
FROM C++
• Pure C++ client library
• No JNI or JDK so client can estimate and control memory
• Uses pure C++ HDFS client from HDFS-8707
• Reader and writer are stable and in production use.
• Runs on Linux, Mac OS, and Windows.
• Docker scripts for CentOS 6-8, Debian 8-10, Ubuntu 14-18
• CI builds on Mac OS, Ubuntu, and Windows
FROM COMMAND LINE
• Using hive --orcfiledump from Hive
• -j -p – pretty prints the metadata as JSON
• -d – prints data as JSON
• Using java -jar orc-tools-*-uber.jar from ORC
• meta -j -p – print the metadata as JSON
• data – print data as JSON
• convert – convert CSV, JSON, or ORC to ORC
• json-schema – scan a set of JSON documents to find the schema
DEBUGGING
• Things to look for:
• Stripe size
• Rows/Stripe
• File version
• Writer version
• Width of schema
• Sanity of statistics
• Column encoding
• Size of dictionaries
OPTIMIZATION
STRIPE SIZE
• Makes a huge difference in performance
• orc.stripe.size or hive.exec.orc.default.stripe.size
• Controls the amount of buffer in the writer. The default is 64MB
• Trade off:
• Large = more efficient reads
• Small = less memory and more granular processing splits
• Multiple files written at the same time will shrink stripes
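A sketch of setting it from Java; the same value can also come from the orc.stripe.size configuration key:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.orc.OrcConf;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class StripeSize {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Either set it in the configuration...
      OrcConf.STRIPE_SIZE.setLong(conf, 128L * 1024 * 1024);
      // ...or directly on the writer options.
      Writer writer = OrcFile.createWriter(new Path("/tmp/big.orc"),
          OrcFile.writerOptions(conf)
              .setSchema(TypeDescription.fromString("struct<x:int>"))
              .stripeSize(128L * 1024 * 1024)); // 128MB stripes
      writer.close();
    }
  }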
HDFS BLOCK PADDING
• The stripes don’t align exactly with HDFS blocks
• Unless orc.write.variable.length.blocks
• HDFS scatters blocks around cluster
• Often want to pad to block boundaries
• Costs space, but improves performance
• orc.default.block.padding
• orc.block.padding.tolerance
SPLIT CALCULATION
• BI
  • Small, fast queries
  • Splits based on HDFS blocks
• ETL
  • Large queries
  • Read the file footer and apply the SearchArg to stripes
  • Can include the footer in splits (hive.orc.splits.include.file.footer)
• Hybrid
  • If small files or lots of files, use BI
CONCLUSION
FOR MORE INFORMATION
• The orc_proto.proto defines the ORC metadata
• Read the code, especially OrcConf, which has all of the knobs
• Website on https://orc.apache.org/
• /bugs ⇒ jira repository
• /src ⇒ github repository
• /specification ⇒ format specification
• Apache email list dev@orc.apache.org
THANK YOU
Owen O’Malley
omalley@apache.org
@owen_omalley