Redshift Introduction 
Keeyong Han 
keeyonghan@hotmail.com
Table of Contents 
1. What is Redshift? 
2. Redshift In Action 
   1. How to Upload? 
   2. How to Query? 
3. Recommendation 
4. Q&A
WHAT IS REDSHIFT?
Brief Introduction (1) 
• A scalable SQL engine in AWS 
– Available in all regions except N. California and São Paulo as of Sep 2014 
– Up to 1.6PB of data in a cluster of servers 
– Fast, but big joins still take minutes 
– Columnar storage 
• Adding or deleting a column is very fast!! 
• Supports per-column compression (see the sketch after this list) 
– Supports bulk loading 
• Upload gzipped TSV/CSV files to S3 and then run the bulk load command (called “COPY”)
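Compression encodings can be declared per column at table-creation time. A minimal sketch, assuming hypothetical table and column names (the encoding choices are illustrative):

create table pageview ( 
  viewed_at timestamp encode delta,   -- delta suits steadily increasing values 
  url varchar(512) encode lzo,        -- lzo compresses free-form text well 
  status_code smallint encode bytedict -- bytedict suits low-cardinality columns 
);

If no encoding is specified, COPY into an empty table can also sample the incoming data and pick encodings automatically.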
Brief Introduction (2) 
• Based on PostgreSQL 8.x 
– But not all PostgreSQL features are supported 
– Accessible through ODBC/JDBC interfaces 
• You can use any tool/library that supports ODBC/JDBC 
– Table schema still matters! 
• It is still SQL
Brief Introduction (3) 
• Dense Compute vs. Dense Storage 
Node type     vCPU   ECU   Memory (GB)   Storage      Price 
DW1 – Dense Storage 
dw1.xlarge      2    4.4        15       2TB HDD      $0.85/hour 
dw1.8xlarge    16    35        120       16TB HDD     $6.80/hour 
DW2 – Dense Compute 
dw2.xlarge      2    7          15       0.16TB SSD   $0.25/hour 
dw2.8xlarge    32    104       244       2.56TB SSD   $4.80/hour
Brief Introduction (4) 
• Cost Analysis 
– If you need an 8TB Redshift cluster, you will need 4 dw1.xlarge instances 
• 4 × $0.85/hour × 24 hours × 30 days = $2,448 per 30 days, or roughly $30K per year 
– At a minimum you will need to stage the input records in S3 before loading them into Redshift, so there will be S3 costs as well 
• 1TB with “reduced redundancy” storage would cost about $24.50 per month
Brief Introduction (5) 
• Tightly coupled with other AWS services 
– S3, EMR (Elastic MapReduce), Kinesis, DynamoDB, RDS and so on 
– Backups and snapshots go to S3 
• No automatic resizing 
– You have to resize manually, and it takes a while 
• Doubling from 2 nodes to 4 took 8 hours; shrinking back took 18 hours or so (measured in summer 2013, though) 
– But read operations still work during resizing 
• 30-minute maintenance window every week 
– You have to plan around this window
Brief Summary 
• Redshift is a large-scale SQL engine that can be 
used as a Data Warehouse/Analytics solution 
– You don’t stall your production database! 
– Smoother migration for anyone who knows SQL 
– It exposes a SQL interface, but behind the scenes it is a 
distributed columnar engine, not a traditional row-store database 
• Redshift is not a realtime query engine 
– Semi-realtime data ingestion might be doable, but 
querying can take a while
Difference from MySQL (1) 
• No guarantee of primary key uniqueness 
– There can be many duplicates if you are not careful 
• You should delete before inserting, e.g. based on a date/time range (see the sketch after the table definition below) 
– The primary key is just a hint for the query optimizer 
• Need to define a distkey and sortkey per table 
– distkey determines which node stores a record 
– sortkey determines the order in which records are stored on a node 
create table session_attribute ( 
  browser_id decimal(20,0) not null distkey sortkey, 
  session_id int, 
  name varchar(48), 
  value varchar(48), 
  primary key(browser_id, session_id, name) 
);
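Because uniqueness is not enforced, reloading the same input twice silently duplicates rows. A minimal delete-before-insert sketch, assuming a hypothetical loaded_date column (not part of the schema above):

begin; 
-- drop whatever was previously loaded for this window, so the reload cannot duplicate it 
delete from session_attribute where loaded_date = '2014-09-01'; 
-- reload the same window from S3 
copy session_attribute from 's3://your_bucket/2014-09-01/' 
credentials 'aws_access_key_id=<key>;aws_secret_access_key=<secret>' gzip delimiter '\t'; 
commit;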
Difference from MySQL (2) 
• char/varchar lengths are in bytes, not characters (see the sketch at the end of this list) 
• “\r\n” is counted as two characters 
• No TEXT type; the maximum length of a 
char/varchar is 65,535 bytes 
• Adding/deleting a column is very fast 
• Some keywords are reserved (user, tag and so 
on) 
• LIKE is case-sensitive (ILIKE is case-insensitive)
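The byte-versus-character distinction can be checked directly; a sketch (behavior assumes UTF-8 data):

select len('héllo'), octet_length('héllo'); 
-- len counts characters and returns 5 
-- octet_length counts bytes and returns 6, since 'é' takes 2 bytes in UTF-8 
-- so 'héllo' would NOT fit in a varchar(5) column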
Supported Data Type in RedShift 
• SMALLINT (INT2) 
• INTEGER (INT, INT4) 
• BIGINT (INT8) 
• DECIMAL (NUMERIC) 
• REAL (FLOAT4) 
• DOUBLE PRECISION (FLOAT8) 
• BOOLEAN (BOOL) 
• CHAR (CHARACTER) 
• VARCHAR (CHARACTER VARYING) 
• DATE 
• TIMESTAMP
REDSHIFT IN ACTION
What can be stored? 
• Log Files 
– Web access logs 
– But you need to define a schema; better to also build session-level 
tables 
• Relational Database Tables 
– MySQL tables 
– Almost a one-to-one mapping 
• Any structured data 
– Any data you can represent as CSV
A bit more about the Session Table 
• Hadoop can be used to aggregate pageviews 
into sessions: 
– Group by session key 
– Order pageviews within the same session by 
timestamp 
• This aggregated info becomes the session table 
• Example session table columns (see the sketch below): 
– Session ID, Browser ID, User ID, IP, UserAgent, 
Referrer info, Start time, Duration, …
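One way such a table might be declared; a sketch only, with column names and types assumed rather than taken from the deck:

create table session ( 
  session_id bigint not null, 
  browser_id decimal(20,0) not null distkey, 
  user_id bigint, 
  ip varchar(45),               -- wide enough for an IPv6 address in text form 
  useragent varchar(512), 
  referrer varchar(512), 
  start_time timestamp sortkey, -- most session queries filter on time 
  duration int,                 -- seconds 
  primary key(session_id)       -- informational only; Redshift does not enforce it 
);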
How to Upload? 
• Define the schema of your data 
• Create a table (again, it is a SQL engine) 
• Generate TSV or CSV file(s) from your source data 
• Compress the file(s) 
• Upload the file(s) to S3 
– This S3 bucket should be in the same region as the Redshift 
cluster (but it is no longer a hard requirement) 
• Run a bulk insert (called “COPY”); a concrete example follows below 
– copy session_attribute [fields] from ‘s3://your_bucket/…’ 
options 
– Options include AWS keys, whether the files are gzipped, the 
delimiter used, max errors to tolerate, and so on 
• Regular insert/update SQL statements can also be used
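Filled in, the command might look like the following (bucket path and credential values are placeholders; a sketch, not the deck's exact invocation):

copy session_attribute 
from 's3://your_bucket/session_attribute/2014-09-01/' 
credentials 'aws_access_key_id=<key>;aws_secret_access_key=<secret>' 
gzip           -- input files are gzipped 
delimiter '\t' -- tab-separated input 
maxerror 10;   -- tolerate up to 10 bad rows before aborting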
Update Workflow 
[Diagram: a cronjob on the Data Source Server periodically uploads 
input files to S3, and a bulk insert then loads them into Redshift] 
• You can introduce a queue to which the S3 
locations of all incoming input files are 
pushed; a consumer of this queue reads 
from it and bulk-inserts into Redshift 
• You might have to do ETL on your source 
data using Hadoop and so on
Incremental Update from MySQL 
• Change your table schema if possible 
– You need an updatedon field in your table 
– Never delete a record; mark it as inactive instead 
• Monitor your table changes and propagate them 
to Redshift (a merge sketch follows below) 
– e.g. using Databus from LinkedIn
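Changed rows are typically applied through a staging-table merge; a minimal sketch, with the users table and its id column assumed for illustration:

begin; 
-- stage the rows that changed in MySQL since the last sync 
create temp table users_staging (like users); 
copy users_staging from 's3://your_bucket/users_delta/' 
credentials 'aws_access_key_id=<key>;aws_secret_access_key=<secret>' gzip delimiter '\t'; 
-- replace existing versions of those rows, then insert the fresh ones 
delete from users using users_staging where users.id = users_staging.id; 
insert into users select * from users_staging; 
commit;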
HOW TO ACCESS REDSHIFT
Different Ways to Access (1) 
1. JDBC/ODBC desktop tools, such as 
– SQLWorkbench, Navicat and so on 
– Your IP must be whitelisted for outside access 
2. JDBC/ODBC libraries 
– Anything PostgreSQL 8.0.x-compatible should work 
In both cases, you use SQL statements
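For instance, a typical analytic query might look like this (a sketch against the hypothetical session table outlined earlier):

select date_trunc('day', start_time) as day, 
       count(*) as sessions, 
       avg(duration) as avg_duration_sec 
from session 
group by 1 
order by 1;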
Different Ways to Access (2) 
3. Use an analytics tool such as Tableau or Birst 
– But these are feature-heavy 
– You will likely need a dedicated analyst
RECOMMENDATION
Things to Consider 
• How big are your tables? 
• Would dumping your tables cause issues? 
– Site stability and so on 
– Or do you have a backup instance to dump from? 
• Are your tables friendly to incremental 
updates? 
– an “updatedon” field 
– no deletion of records
Steps 
• Start with a daily update 
– A daily full refresh is fine to begin with, to set up the 
end-to-end cycle 
– If the tables are big, dumping them can take a 
while 
• Implement an incremental update mechanism 
– This will require either a table schema change or the 
use of some database change-tracking mechanism 
• Move to shorter update intervals
