Azure Data Factory: Mapping Data Flows
What are mapping data flows?
 Code-free data transformation at scale
 Serverless, scaled-out, ADF-managed
Apache Spark™ engine
 Resilient flows handle structured and
unstructured data
 Operationalized as an ADF pipeline activity
Code-free data transformation at scale
 Intuitive UX lets you focus on building transformation logic
 Data cleansing
 Data validation
 Data aggregation
 No knowledge of Spark, cluster management, Scala, Python, etc. required
Modern Data Warehouse (MDW) pattern: ingest on-premises, cloud, and SaaS data; store; prepare; transform, predict & enrich; serve; visualize, with data pipeline orchestration & monitoring across the whole pipeline
Common data flow scenarios
Slowly changing dimensions
Data deduplication and validation
Fact loading into a data warehouse
Authoring mapping data flows
Dedicated development canvas
Building transformation logic
 Transformations: A ‘step’ in the data flow
 Engine intelligently groups them at runtime
 19 currently available
 Core logic of data flow
 Add/Remove/Alter Columns
 Join or lookup data from datasets
 Change number or order of rows
 Aggregate data
 Hierarchical to relational
Source transformation
 Define the data read by your
data flow
 Import projection vs generic
 Schema drift
 Connector-specific properties and
optimizations
 Min: 1, Max: ∞
 Define in-line or use dataset
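A minimal data flow script (DFS) sketch of a source transformation with an imported projection and schema drift allowed; the column names and stream name are illustrative only:

    source(output(
            movieId as integer,
            title as string,
            genres as string
        ),
        allowSchemaDrift: true,
        validateSchema: false) ~> MoviesSource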
Source: In-line vs dataset
 Define all source properties within a data flow or use a separate
entity to store them
 Dataset:
 Reusable in other ADF activities such as Copy
 Not based in Spark -> some settings overridden
 In-line
 Useful when using flexible schemas, one-off source instances or parameterized sources
 Do not need “dummy” dataset object
 Based in Spark, properties native to data flow
 Most connectors are available in only one of the two
Supported connectors
 File-based data stores (ADLS Gen1/Gen2, Azure Blob Storage)
 Parquet, JSON, DelimitedText, Excel, Avro, XML
 In-line only: Common Data Model, Delta Lake
 SQL tables
 Azure SQL Database
 Azure Synapse Analytics (formerly SQL DW)
 Cosmos DB
 Coming soon: Snowflake
 If a store is not supported, ingest to a staging area via the Copy activity
 90+ connectors supported natively by the Copy activity
Duplicating data streams
 Duplicate data stream from any
stage of your data flow
 Select ‘New branch’
 Operate on same data with
different transformation
requirements
 Self-joins
 Writing to different sinks
 Aggregating in one branch
Joining two data streams together
 Use Join transformation to append columns from incoming stream to
any stream in your data flow
 Join types: full outer, inner, left outer, right outer, cross
 SQL Join equivalent
 Match on computed columns or use non-equality conditions
 Broadcast small data streams to cache data and improve
performance
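A hedged DFS sketch of an inner join between two assumed streams, Orders and Customers, on a shared customerId column:

    Orders, Customers join(Orders@customerId == Customers@customerId,
        joinType: 'inner',
        broadcast: 'none') ~> JoinOrdersToCustomers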
Lookup transformation
 Similar to left outer join, but with more functionality
 All incoming rows are passed through regardless of match
 Matching conditions same as a join
 Multi or single row lookup
 Match on all, first, last, or any row that meets join conditions
 isMatch() function can be used in downstream transformations to
verify output
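A DFS sketch of a single-row lookup that picks the last matching rate per order when sorted by date; the stream and column names (Orders, CurrencyRates, effectiveDate) are assumptions:

    Orders, CurrencyRates lookup(Orders@currencyCode == CurrencyRates@currencyCode,
        multiple: false,
        pickup: 'last',
        asc(effectiveDate, true),
        broadcast: 'none') ~> LookupLatestRate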
Exists transformation
 Check for existence of a value in another stream
 SQL Exists equivalent
 See if any row matches in a subquery, just like SQL
 Filter based on join matching conditions
 Choose Exist or Not Exist for your filter conditions
 Can specify a custom expression
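A DFS sketch of a Not Exist check that keeps only rows absent from a second stream; NewCustomers and ExistingCustomers are assumed stream names:

    NewCustomers, ExistingCustomers exists(NewCustomers@customerId == ExistingCustomers@customerId,
        negate: true,
        broadcast: 'none') ~> CustomersNotInTarget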
Union transformation
 Combine rows from multiple
streams
 Add as many streams as
needed
 Combine data based upon
column name or ordinal
column position
 Use cases:
 Similar data from different connections
that undergo the same transformations
 Writing multiple data streams into the
same sink
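A DFS sketch of a union by column name across two assumed streams:

    WebSales, StoreSales union(byName: true) ~> AllSales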
Conditional split
 Split data into separate streams
based upon conditions
 Use the data flow expression language to
evaluate a boolean condition
 Use cases:
 Sinking subset of data to different
locations
 Perform different calculations on data
depending on a set of values
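A DFS sketch of a conditional split into three output streams; the region values and stream names are assumptions:

    AllSales split(region == 'EMEA',
        region == 'APAC',
        disjoint: false) ~> SplitByRegion@(emeaSales, apacSales, otherSales)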
Derived column
 Transform data at row and column level using expression language
 Generate new or modify existing columns
 Build expressions using the expression builder
 Handle structured or unstructured data
 Use column patterns to match on rules and regular expressions
 Can be used to transform multiple columns in bulk
 Most heavily used transformation
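A DFS sketch of a derived column transformation adding two computed columns; the input columns (firstName, lastName, orderDate) are assumed:

    AllSales derive(fullName = concat(firstName, ' ', lastName),
        orderYear = year(orderDate)) ~> AddDerivedColumns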
Select transformation
 Metadata and column maintenance
 SQL Select statement
 Alias or renames data stream and columns
 Prune unwanted or duplicate columns
 Common after joins and lookups
 Rule-based mapping for flexible schemas, bulk mapping
 Map hierarchical columns to a flat structure
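A DFS sketch of a select transformation that keeps and maps a fixed set of columns; the names are illustrative:

    AddDerivedColumns select(mapColumn(
            customerId,
            fullName,
            orderYear
        ),
        skipDuplicateMapInputs: true,
        skipDuplicateMapOutputs: true) ~> SelectOutputColumns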
Surrogate key transformation
 Generate incrementing key to use as a non-business key in your data
 To seed the starting value of your surrogate key, use derived column
and a lookup from an existing table
 Examples are in documentation
 Useful for generating keys for star schema dimension tables
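A DFS sketch of a surrogate key transformation that appends an incrementing long key starting at 1; the key name and stream names are assumptions:

    DimCustomerStage keyGenerate(output(customerSK as long),
        startAt: 1L) ~> AddSurrogateKey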
Aggregate transformation
 Aggregate data into groups using aggregate function
 Like SQL GROUP BY clause in a Select statement
 Aggregate functions include sum(), max(), avg(), first(), collect()
 Choose columns to group by
 One row for each unique group by column value
 Only columns used in transformation are in output data stream
 Use self-join to append to existing data
 Supports pattern matching
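A DFS sketch of an aggregate grouped on two assumed columns, producing a sum and a row count:

    AllSales aggregate(groupBy(region, orderYear),
        totalSales = sum(salesAmount),
        orderCount = count()) ~> SalesByRegionYear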
Pivot and unpivot transformations
 Pivot row values into new columns and vice-versa
 Both are aggregate transformations that require aggregate functions
 If pivot key values are not specified, all generated columns are treated as drifted
 Use map drifted quick action to add to schema quickly
Window transformation
 Aggregates data across
“windows” of data partitions
 Used to compare a row of data against
others in its ‘group’
 Group determined by group by
columns, sorting conditions
and range bounds
 Used for ranking rows in a
group and getting lead/lag
 Sorting causes reshuffling of
data
 “Expensive” operation
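A DFS sketch of a window transformation that ranks rows within each region by sales amount; the column and stream names are assumptions:

    AllSales window(over(region),
        desc(salesAmount, true),
        salesRank = rank()) ~> RankSalesByRegion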
Filter transformation
 Filter rows based upon an
expression
 Like SQL WHERE clause
 Expressions return true or false
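A DFS sketch of a filter that keeps only the top-ranked rows from the windowed stream sketched above; the salesRank column is assumed:

    RankSalesByRegion filter(salesRank <= 10) ~> TopTenPerRegion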
Alter row transformation
 Mark rows as Insert, Update, Delete, or Upsert
 Like SQL MERGE statement
 Insert by default
 Define policies to update your database
 Works with SQL DB, Synapse, Cosmos DB, and Delta Lake
 Specify allowed update methods in each sink
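A DFS sketch of an alter row transformation mapping an assumed changeType column to row policies:

    ChangedCustomers alterRow(insertIf(changeType == 'I'),
        updateIf(changeType == 'U'),
        deleteIf(changeType == 'D')) ~> SetRowPolicy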
Flatten transformation
 Unroll array values into individual
rows
 One row per value
 Used to convert hierarchies to flat
structures
 Opposite of collect() aggregate
function
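A rough DFS sketch of a flatten (foldDown) that unrolls an assumed items array into one row per element; the mapping shown is a simplification:

    OrdersWithItems foldDown(unroll(items),
        mapColumn(
            orderId,
            item = items
        )) ~> FlattenOrderItems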
Sort transformation
 Sort your data by column values
 SQL Order By equivalent
 Use sparingly: Reshuffles and coalesces data
 Reduces effectiveness of data partitioning
 Does not optimize speed like legacy ETL tools
 Useful for data exploration and validation
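A DFS sketch of a sort on two assumed columns, ascending then descending:

    SalesByRegionYear sort(asc(region, true),
        desc(totalSales, true)) ~> SortForReview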
Sink transformation
 Define the properties for landing your data in your destination
target data store
 Define using dataset or in-line
 Can map columns similar to select transformation
 Import schema definition from destination
 Set actions on destinations
 Truncate table or clear folder, SQL pre/post actions, database update methods
 Choose how the written data is partitioned
 ‘Use current partitioning’ is almost always fastest
 Note: Writing to single file can be very slow with large amounts
of data
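A DFS sketch of an Azure SQL sink that allows updates and upserts keyed on an assumed customerId column; the property names follow the documented sink settings, but the values here are illustrative:

    SetRowPolicy sink(allowSchemaDrift: true,
        validateSchema: false,
        deletable: false,
        insertable: true,
        updateable: true,
        upsertable: true,
        keys: ['customerId'],
        skipDuplicateMapInputs: true,
        skipDuplicateMapOutputs: true) ~> WriteToAzureSqlDb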
Mapping data flow expression language
Visual expression builder
 List of columns being modified
 All available functions, fields, parameters, etc.
 Build expressions with full auto-complete and syntax checking
 View results of your expression in the data preview pane with live, interactive results
Expression language
 Expressions are built using the data flow expression language
 Expressions can reference:
 Built-in expression functions
 Defined input schema columns
 Data flow parameters
 Literals
 Certain transformations have unique functions
 count(), sum() in Aggregate, denseRank() in Window, etc.
 Expressions evaluate to Spark data types
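Two illustrative expressions, assuming columns named orderQty, unitPrice, and region:

    toInteger(trim(orderQty)) * toDecimal(unitPrice)
    iif(isNull(region), 'Unknown', upper(region))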
Debug mode
 Quickly verify logic during development on small interactive cluster
 4 core, 60-minute time to live
 Enables the following:
 Get data preview snapshot at each transformation
 Preview output of expression in expression builder
 Run debug pipeline with no spin up
 Import Spark projection of source schema
 Rule of thumb: If developing Data Flows, turn on right away
 Initial 3-5-minute start up time
Debug mode: data preview
Debug mode: data profiling
Debug mode: expression output
Parameterizing data flows
 Both dataset properties and data-flow expressions can be
parameterized
 Passed in via data flow activity
 Can use data flow or pipeline expression language
 Expressions can reference $parameterName
 Can be literal values or column references
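A hedged sketch of declaring data flow parameters in DFS and referencing them with $parameterName in a filter; the parameter names, column names, and the upstream OrdersSource stream are assumptions:

    parameters{
        windowStart as string,
        minAmount as integer
    }
    OrdersSource filter(amount >= $minAmount && orderDate >= toTimestamp($windowStart)) ~> FilterToWindow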
Referencing data flow parameters
Working with flexible schemas
Schema drift
 In real-world data integration solutions, source/target data stores
change shape
 Source data fields can change names
 Number of columns can change over time
 Traditional ETL processes break when schemas drift
 Mapping data flow has built-in handling for flexible schemas
 Patterns, rule-based mappings, byName(s) function, etc
 Source: Read additional columns on top of what is defined in the source schema
 Sink: Write additional columns on top of what is defined in the sink schema
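A DFS sketch of accessing drifted columns downstream with byName(); the column names are assumptions, and the explicit casts follow the documented pattern for typing drifted columns:

    DriftedSource derive(customerName = toString(byName('customer_name')),
        salesTotal = toDecimal(byName('sales_total'))) ~> MapDriftedColumns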
Column pattern matching
 Match by name, type, stream, position
Rule-based mapping
Operationalizing and monitoring data flows
Data flow activity
 Run as activity in pipeline
 Integrated with existing ADF control flow, scheduling, orchestration, monitoring, CI/CD
 Choose which integration runtime (IR) to run on
 # of cores, compute type, cluster time to live
 Assign parameters
Data flow integration runtime
 Integrated with existing Azure IR
 Choose compute type, # of cores, time to live
 Time to live: time a cluster is alive after last execution concludes
 Minimal start up time for sequential data flows
 Parameterize compute type, # of cores if using Auto Resolve
Monitoring data flows
Data flow security considerations
 All data stays inside the VMs that run the Databricks cluster, which are spun up JIT for each job
 Azure Databricks attaches storage to the VMs for logging and spill-over from in-memory data frames during job operation. These storage accounts are fully encrypted and within the Microsoft tenant.
 Each cluster is single-tenant and specific to your data and job. The cluster is not shared with any other tenant.
 Data flow processes are completely ephemeral. Once a job is completed, all associated resources are destroyed.
 Both cluster and storage account are deleted
 Data transfers in data flows are protected using certificates
 Active telemetry is logged and maintained for 45 days for troubleshooting by the Azure Data Factory team
Data flow best practices and optimizations
Best practices – Lifecycle
1. Test your transformation logic using debug mode and data
preview
 Limit source size or use sample files
2. Test end-to-end pipeline logic using pipeline debug
 Verify data is read/written correctly
 Used as smoke test before merging your changes
3. Publish and trigger your pipelines within a Dev Factory
 Test performance and cluster size
4. Promote pipelines to higher environments such as UAT and PROD
using CI/CD
 Increase size and scope of data as you get to higher environments
Best practices – Debug (Data Preview)
 Data Preview
 Data preview is inside the data flow designer transformation properties
 Uses row limits and sampling techniques to preview a small sample of your data
 Allows you to build and validate units of logic with samples of data in real time
 You have control over the size of the data limits under Debug Settings
 If you wish to test with larger datasets, set a larger compute size in the Azure IR when
switching on “Debug Mode”
 Data Preview is only a snapshot of data in memory from Spark data frames. This feature does
not write any data, so the sink drivers are not utilized and not tested in this mode.
Best practices – Debug (Pipeline Debug)
 Pipeline Debug
 Click debug button to test your data flow inside of a pipeline
 Default debug limits the execution runtime so you will want to limit data sizes
 Sampling can be applied here as well by using the “Enable Sampling” option in each Source
 Use the debug button option of “use activity IR” when you wish to use a job execution
compute environment
 This option is good for debugging with larger datasets. It will not have the same execution timeout limit as the
default debug setting
Optimizing data flows
 Transformation order generally does not matter
 Data flows have a Spark optimizer that reorders logic to perform as well as it can
 Repartitioning and reshuffling data negates optimizer
 Each transformation has ‘Optimize’ tab to control partitioning
strategies
 Generally do not need to alter
 Altering cluster size and type has performance impact
 Four components
1. Cluster startup time
2. Reading from sources
3. Transformation time
4. Writing to sinks
Identifying bottlenecks
1. Cluster startup time: sequential executions can lower cluster startup time by setting a TTL in the Azure IR
2. Sink processing time: the total time to process the stream from source to sink. Clicking on the sink also shows a post-processing time, i.e. how long Spark spent on partition and job clean-up. Writing to a single file and slow database connections will increase this time
3. Source read time: how long it took to read data from the source. Optimize with different source partition strategies
4. Transformation stage time: shows bottlenecks in your transformation logic. With larger general purpose and memory optimized IRs, most of these operations occur in memory in data frames and are usually the fastest operations in your data flow
Best practices - Sources
 When reading from file-based sources, data flow automatically
partitions the data based on size
 ~128 MB per partition, evenly distributed
 ‘Use current partitioning’ will be fastest for file-based sources and Synapse using PolyBase
 Enable staging for Synapse
 For Azure SQL DB, use Source partitioning on column with high
cardinality
 Improves performance, but can saturate your source database
 Reading can be limited by the I/O of your source
Optimizing transformations
 Each transformation has its own optimize tab
 Generally better to not alter -> reshuffling is a relatively slow process
 Reshuffling can be useful if data is very skewed
 One node has a disproportionate amount of data
 For Joins, Exists and Lookups:
 If you have many of these, the memory optimized compute type greatly increases performance
 Can ‘Broadcast’ if the data on one side is small
 Rule of thumb: Less than 50k rows
 Increasing integration runtime can speed up transformations
 Transformations that require reshuffling like Sort negatively impact
performance
Best practices - Sinks
 SQL:
 Disable indexes on target with pre/post SQL scripts
 Increase SQL capacity during pipeline execution
 Enable staging when using Synapse
 File-based sinks:
 ‘Use current partitioning’ lets Spark create the output natively
 Output to a single file is a very slow operation
 Combines data into a single partition
 Often unnecessary for whoever is consuming the data
 Can set naming patterns or use data in column
 Any reshuffling of data is slow
 Cosmos DB
 Set throughput and batch size to meet performance requirements
Azure Integration Runtime
 Data Flows use JIT compute to minimize running expensive clusters
when they are mostly idle
 Generally more economical, but each cluster takes ~4 minutes to spin up
 IR specifies what cluster type and core-count to use
 Memory optimized is best, compute optimized doesn’t generally work for production workloads
 When running Sequential jobs utilize Time to Live to reuse cluster
between executions
 Keeps cluster alive for TTL minutes after execution for new job to use
 Maximum one job per cluster
 Rule of thumb: start small and scale up
Data flow script
Data flow script (DFS)
 DFS defines the logical intent of your data transformations
 Script is bundled and marshalled to Spark cluster as a job for
execution
 DFS can be auto-generated and used for programmatic creation of
data flows
 Access script behind UI via “Script” button
 Click “Copy as Single Line” to save version of script that is ready for
JSON
 https://docs.microsoft.com/en-us/azure/data-factory/data-flow-script
Data flow script (DFS)
 Annotated script example: source projection and properties (1), aggregate transformation (2), unpivot transformation (3), sort (4), sink (5)
 Syntax: input_name transform_type(properties) ~> stream_name
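A minimal end-to-end DFS sketch using that syntax (source, derived column, sink); the names and columns are illustrative:

    source(output(
            id as integer,
            amount as double
        ),
        allowSchemaDrift: true,
        validateSchema: false) ~> RawSource
    RawSource derive(amountWithTax = amount * 1.1) ~> AddTax
    AddTax sink(allowSchemaDrift: true,
        validateSchema: false) ~> WriteOutput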
Data flow script (DFS)
 Annotated script example: source projection and properties (1), select transformation mappings and properties (2), distinct aggregate (3), row count aggregates (4, 5), sink transformation (6)
 ~> name_of_transform
 New branch does not require any script element
ETL Migrations
ETL Tool Migration Overview
 Migrating from an existing large enterprise ETL installation to ADF and data flows requires
adherence to a formal methodology that incorporates classic SDLC, change management,
project management, and a deep understanding of your current data estate and ETL
requirements.
 Successful migration projects require project plans, executive sponsorship, budget, and a
dedicated team to focus on rebuilding the ETL in ADF.
 For existing on-prem ETL estates, it is very important to learn basics of Cloud, Azure, and ADF
generally before taking this Data Flows training.
Sponsorship
Discovery
Training
• On-prem to Cloud, Azure general training, ADF general training, Data Flows training
• A general understanding of the difference between legacy client/server on-prem ETL
architectures and cloud-based Big Data processing is required
• ADF and Data Flows execute on Spark, so learn the fundamentals of the difference between
row-by-row processing on a local server and batch/distributed computing on Spark in the
Cloud
Execution
• Start with the top 10 mission-critical ETL mappings and list out the primary logical goals and
steps achieved in each
• Use sample data and debug each scenario as new pipelines and data flows in ADF
• UAT each of those 10 mappings in ADF using sample data
• Lay out end-to-end project plan for remaining mapping migrations
• Plan the remainder of the project into quarterly calendar milestones
• Expect each phase to take around 3 months
• Majority of large existing ETL infrastructure modernization migrations take 12-18 months to
complete
Roadmap: 2020 H2
 New connectors:
• Snowflake (r/w) for Data Flow (GA)
• Delta lake (r/w) for Data Flow (GA)
• Common Data Model (CDM) format support for Mapping Data Flow (GA)
• Azure Database for PostgreSQL (r/w) for Data Flow (GA)
• Azure Database for MySQL (r/w) for Data Flow (GA)
• Dynamics 365/CDS (r/w) for Data Flow (GA)
• Error Row Handling (GA)
• Wide row completion (GA)
• Updated Expression Builder UX w/Local Vars (GA)
• Wrangling Data Flow (GA)
Additional
resources
Documentation
List of tutorial videos
Expression language reference
Performance guide
ADF twitter
ADF tech community blog