INTRODUCTION TO DATA SCIENCE
NIKO VUOKKO
JYVÄSKYLÄ SUMMER SCHOOL
AUGUST 2013
DATA SCIENCE WITH A BROAD BRUSH
Concepts and methodologies
DATA SCIENCE IS AN UMBRELLA, A FUSION
• Databases and infrastructure
• Pattern mining
• Statistics
• Machine learning
• Numerical optimization
• Stochastic modeling
• Data visualization
… of specialties needed
for data-driven
business optimization
DATA SCIENTIST
• Data scientist is defined as DS : business problem → data solution
• Combination of strong programming, math, computational and business skills
• Recipe for success
1. Convert vague business requirements into measurable technical targets
2. Develop a solution to reach the targets
3. Communicate business results
4. Deploy the solution in production
UNDERSTANDING DATA
Monday 19 August 2013
PATTERN MINING AND DATA ANALYSIS
UNSUPERVISED LEARNING
• Could be called pattern recognition or structure discovery
• What kind of a process could have produced this data?
• Discovery of “interesting” phenomena in a dataset
• Now how do you define interesting?
• Learning algorithms exist for a huge collection of pattern types
• Analogy: You decide if you want to see westerns or comedies,
but the machine picks the movies
• But does “interesting” imply useful and significant?
EXAMPLES OF STRUCTURES IN DATA
• Clustering and mixture models: separation of data into parts
• Dictionary learning: a compact grammar of the dataset
• Single class learning: learn the natural boundaries of data
Example: Early detection of machine failure or network intrusion
• Latent allocation: learn hidden preferences driving purchase decisions
• Source separation: find independent generators of the data
Example: Independent phenomena affecting exchange rates
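As an illustration of the first structure type (clustering), here is a minimal k-means loop on synthetic two-cluster data; the data, cluster count, and initialization are all my own choices, not from the slides:

```python
import numpy as np

# Toy illustration: k-means, the canonical "separation of data into parts".
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # cluster around (0, 0)
                  rng.normal(5.0, 0.3, (50, 2))])  # cluster around (5, 5)

centers = np.array([data[0], data[-1]])            # one seed from each blob
for _ in range(20):
    # assign each point to its nearest center
    dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    # move each center to the mean of its assigned points
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])
```

After convergence the two centers sit near the true blob means, (0, 0) and (5, 5).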
MORE EXAMPLES OF “INTERESTING” PATTERNS
• { charcoal, mustard } ⇒ sausage
• Grocery customer types with differing paths around the shop floor
• Pricing trend change in a web ad exchange
• Communities and topics in a social network
• Distinct features of a person’s face and fingerprints
• Objects emerging in front of a moving car
KNOW YOUR EIGENS AND SINGULARS
• Eigenvalue and singular value decompositions are central data analysis tools
• They describe the energy distribution and static core structures of data
Examples
• Face detection, speaker adaptation
• Google PageRank is basically just the world’s largest EVD
• Zombie outbreak risk is determined by the eigenvalues of the epidemic model
• As a sub-component in every second learning algorithm
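The PageRank remark above can be sketched in a few lines: the rank vector is the leading eigenvector of the (damped) link matrix, found here by power iteration on a hypothetical three-page web of my own invention:

```python
import numpy as np

# Sketch: PageRank as the top eigenvector of a link matrix (toy 3-page web).
links = np.array([[0, 1, 1],      # page 0 links to pages 1 and 2
                  [1, 0, 1],      # page 1 links to pages 0 and 2
                  [0, 1, 0]],     # page 2 links to page 1
                 dtype=float)
transition = links / links.sum(axis=1, keepdims=True)  # row-stochastic
damping = 0.85
google = damping * transition.T + (1 - damping) / 3    # "Google matrix"

rank = np.full(3, 1 / 3)
for _ in range(100):
    rank = google @ rank          # power iteration toward the top eigenvector
```

`rank` stays a probability distribution throughout, and page 1 (the most heavily linked-to page) ends up ranked highest.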
DIMENSION REDUCTION
• Some applications encounter large dimension counts up to millions
• Dimension reduction may either
1. Retain space: preserve the most “descriptive” dimensions
2. Transform space: trade interpretability for a more powerful representation
• Usually transformations are oblivious to the data (they are simple fixed functions)
• Curvilinear transformations try to see how the data is “folded” and build new
dimensions specific to the given dataset
DIMENSION REDUCTION EXAMPLE
• Singular value decomposition is commonly used to remove the “noise
dimensions” with little energy
• Example: gene expression data and movie preferences have lots of these
• After this more complex methods can be used for unfolding the data
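The SVD-based noise removal described above can be sketched directly; the rank-2 signal and noise level below are synthetic stand-ins for data like gene expression matrices:

```python
import numpy as np

# Sketch: keep only the high-energy singular directions, drop the rest.
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))  # rank-2 signal
noisy = low_rank + 0.01 * rng.normal(size=(100, 50))             # plus noise

u, s, vt = np.linalg.svd(noisy, full_matrices=False)
k = 2                                       # keep the two dominant dimensions
denoised = u[:, :k] * s[:k] @ vt[:k]        # truncated reconstruction

error = float(np.linalg.norm(denoised - low_rank) / np.linalg.norm(low_rank))
```

The truncated reconstruction recovers the underlying signal to within a fraction of a percent, while the dropped "noise dimensions" carry almost no energy.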
DIMENSION REDUCTION EXAMPLE
BLIND SOURCE SEPARATION
• Find latent sources that generated the data
• Tries to discover the real truth beneath all noise and convolution
• Examples:
• Air defense missile guidance systems
• Error-correcting codes
• Language modeling
• Brain activity factors
• Industrial process dynamics
• Factors behind climate change
(STATISTICAL) SIGNIFICANCE TESTING
• Example: Rejection rate increase in a manufacturing plant
• “What is the probability of observing this increase if everything was OK?”
• “What is the probability of having a valid alert if there really was something
wrong?”
• Reliability of significance testing results is wholly dependent on correct
modeling of the data source and pattern type
• Statistical significance is different from material significance
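The plant example's first question can be answered by simulation; the numbers below (2% historical rejection rate, 34 rejects out of 1000 parts) are hypothetical, chosen only to make the sketch concrete:

```python
import numpy as np

# Monte Carlo p-value: "probability of observing this increase if
# everything was OK?" under a binomial model of the rejection process.
rng = np.random.default_rng(42)
baseline_rate, n_parts, observed = 0.02, 1000, 34   # hypothetical numbers

simulated = rng.binomial(n_parts, baseline_rate, size=100_000)
p_value = float((simulated >= observed).mean())      # one-sided p-value
```

Here the p-value comes out well under 0.05, but note the slide's caveat: this number is only as reliable as the binomial model of the data source, and a statistically significant increase may still be materially irrelevant.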
CORRELATION IS NOT CAUSALITY
A correlation may hide an almost arbitrary truth
• Cities with more firemen have more fires
• Companies spending more in marketing have higher revenues
• Marsupials exist mainly in Australia
• However, making successful predictions does not require causality
MACHINE LEARNING
Basics
SUPERVISED LEARNING
• Simplistically, the task is to find a function f : f(input) = output
• Examples: spam filtering, speech recognition, steel strength estimation
• Risks for different types of errors can be very skewed
• Complex inputs may confuse or slow down models
• Unsupervised methods often useful in improving results by simplifying the input
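A minimal instance of "find f : f(input) = output" is ordinary least squares on synthetic data; the data and model choice are mine, for illustration only:

```python
import numpy as np

# Supervised learning in miniature: fit a linear f to (input, output) pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)      # outputs with label noise

X1 = np.hstack([X, np.ones((200, 1))])           # add an intercept column
w = np.linalg.lstsq(X1, y, rcond=None)[0]        # training: solve for f

predictions = X1 @ w                             # applying f to inputs
```

With 200 examples and little noise, the learned weights land close to the true ones; real tasks like spam filtering differ mainly in the feature construction and the skewed error costs, not in this basic shape.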
SEMI-SUPERVISED LEARNING
• Only a part of data is labeled
• Needed when labeling data is expensive
• Understanding the structure of unlabeled data enhances learning by bringing
diversity and generalization and by constraining learning
• Relates to multi-source learning, some sources labeled, some not
• Examples:
• Object detection from a video feed
• Web page categorization
• Sentiment analysis
• Transfer learning between domains
TRAINING, TESTING, VALIDATION
• A model is trained using a training dataset
• The quality of the model is measured by using it on a separate testing dataset
• A model often contains hyper-parameters chosen by the user
• A separate validation dataset is split off from the training data
• Validation data is used for comparing hyper-parameter values and choosing good ones
• Cross-validation is common practice and asymptotically unbiased
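The protocol above can be sketched end to end; ridge regression's penalty strength stands in as the hyper-parameter here (my choice, the slides do not prescribe a model):

```python
import numpy as np

# Train / validation / test split with hyper-parameter selection.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=300)

X_train, y_train = X[:180], y[:180]          # fit parameters here
X_val,   y_val   = X[180:240], y[180:240]    # pick hyper-parameters here
X_test,  y_test  = X[240:], y[240:]          # touch only once, at the end

def fit_ridge(A, b, lam):
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

def mse(w, A, b):
    return float(((A @ w - b) ** 2).mean())

lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(lambdas,
               key=lambda lam: mse(fit_ridge(X_train, y_train, lam), X_val, y_val))
test_error = mse(fit_ridge(X_train, y_train, best_lam), X_test, y_test)
```

Only the final `test_error` is an honest estimate of model quality; reusing the validation set for the final number would bias it, which is exactly why the three roles are kept separate.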
BIAS AND VARIANCE
• Squared error of predictions consists of bias and variance (and noise)
• BIAS Model incapability of approximating the underlying truth
• VARIANCE Model reliance on whims of the observed data
• Complex models often have low bias and high variance
• Simple models often have high bias and low variance
• Having more data instances (rows) may reduce variance
• Having more detailed data (variables) may reduce bias
• Testing different types of models can explain how to improve your data
TRAINING AND TESTING, BIAS AND VARIANCE
[Figure: training and testing error versus model complexity, from simple to complex models; training error keeps falling as complexity grows, while testing error is minimal at an intermediate complexity]
MACHINE LEARNING
Learning new tricks
THE KERNEL TRICK
• Many learning methods rely on inner products of data points
• The “kernel trick” maps the data to an implicitly defined, high dimension space
• Kernel is the matrix of the new inner products in this space
• Mapping itself often left unknown
• Example: Gaussian kernel associates local Euclidean neighborhoods to similarity
• Example: String kernels are used for modeling DNA sequence structure
• Kernels can be combined and custom built to match expert knowledge
A kernel is a dataset-specific space transformation,
success depends on good understanding of the dataset
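The Gaussian kernel example above is short enough to write out; the bandwidth value and the toy points are mine:

```python
import numpy as np

# Gaussian (RBF) kernel: local Euclidean neighborhoods become similarities.
def gaussian_kernel(X, bandwidth=1.0):
    # squared Euclidean distances between all pairs of rows
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

X = np.array([[0.0, 0.0],
              [0.1, 0.0],    # close neighbor of the first point
              [5.0, 5.0]])   # far away
K = gaussian_kernel(X)
```

`K` is the matrix of inner products in the implicit feature space: symmetric, ones on the diagonal, near 1 for close points and near 0 for distant ones. The mapping itself never needs to be written down.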
ENSEMBLE LEARNING
• The power of many: combine multiple models into one
• Wide and strong evidence of superior performance
• Extra bonus: often trivially parallelizable
“OUR EXPERIENCE IS THAT MOST EFFORTS SHOULD BE CONCENTRATED IN DERIVING SUBSTANTIALLY DIFFERENT APPROACHES, RATHER THAN REFINING A SINGLE TECHNIQUE.”
Netflix $1M prize winner (ensemble of 107 models)
ENSEMBLE LEARNING IN PRACTICE
• Boosting: weighted (⇒ low bias), error-focused (⇒ low bias) simple models (⇒ low bias)
• Bagging: average (⇒ low variance) results of simple models (⇒ low bias)
• What aspect of the data am I still missing?
• Variable mixing, discretized jumps, independent factors, transformations, etc.
• Questions about practical implementability and ROI
• Failure: Netflix winner solution never taken to production
• Success: Official US hurricane model is an ensemble of 43
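Bagging, the second recipe above, fits in a few lines: fit simple models on bootstrap resamples and average their predictions. The sine data and degree-5 polynomial members are illustrative choices of mine:

```python
import numpy as np

# Bagging sketch: average many simple models fit on bootstrap resamples.
rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.3 * rng.normal(size=200)

def bootstrap_fit():
    idx = rng.integers(0, len(x), len(x))     # bootstrap resample (with replacement)
    return np.polyfit(x[idx], y[idx], 5)      # one simple member model

grid = np.linspace(-3, 3, 100)
members = [np.polyval(bootstrap_fit(), grid) for _ in range(50)]
bagged = np.mean(members, axis=0)             # averaged ensemble prediction

ensemble_err = float(np.abs(bagged - np.sin(grid)).mean())
```

Each member is fit independently, so the 50 fits are trivially parallelizable, as the previous slide notes.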
RANDOMIZED LEARNING
• Motivation: random variation beats expert guidance surprisingly often
• Introducing randomness can improve generalization performance (smaller
variance)
• Randomness allows methods to discover unexpected success
• Examples: genetic models, simulated annealing, parallel tempering
• Increasingly useful to allow scale-out for large datasets
• Many successful methods combine random models as an ensemble
• Example: combining random projections or transformations can often beat optimized
unsupervised models
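The random-projection example can be made concrete: a Johnson–Lindenstrauss-style projection with Gaussian entries roughly preserves pairwise distances without looking at the data at all. Dimensions below are my own illustrative choices:

```python
import numpy as np

# Random projection: reduce 1000 dimensions to 300 with a random matrix.
rng = np.random.default_rng(11)
X = rng.normal(size=(30, 1000))                # 30 points in 1000-D

k = 300
R = rng.normal(size=(1000, k)) / np.sqrt(k)    # no optimization, pure randomness
Z = X @ R                                      # projected data in 300-D

orig = float(np.linalg.norm(X[0] - X[1]))
proj = float(np.linalg.norm(Z[0] - Z[1]))
ratio = proj / orig                            # distance distortion
```

The distance between the first two points is preserved to within a few percent, which is why such unoptimized random maps can compete with carefully tuned unsupervised reductions.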
ONLINE LEARNING
• Instead of ingesting a training dataset, adjust the data model after every
incoming (instance, label) pair
• Allows quick adaptation and “always-on” operation
• Finds good models fast, but may miss the great one
⟹ suitable also as a burn-in for other models
• Useful especially for the present trend towards analyzing data streams
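The classic perceptron is perhaps the smallest example of the pattern above: no training dataset is ingested, the model is adjusted after every incoming (instance, label) pair. The stream and the hidden target concept below are synthetic:

```python
import numpy as np

# Online learning sketch: perceptron updates on a stream of labeled points.
rng = np.random.default_rng(2)
w = np.zeros(3)                                 # weights, incl. a bias term

mistakes = 0
for _ in range(2000):
    x = rng.normal(size=2)                      # next instance arrives
    label = 1 if x[0] + x[1] > 0 else -1        # hidden target concept
    features = np.array([x[0], x[1], 1.0])
    if label * (w @ features) <= 0:             # wrong (or zero) prediction
        w += label * features                   # adjust the model right away
        mistakes += 1
```

Mistakes become rare quickly, matching the slide's point: a good model is found fast, even if a batch learner on the full data might find a slightly better one.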
BAYESIAN BASICS
• Bayesians see data as fixed and parameters as distributions
• Parameters have prior assumptions that can encode expert knowledge
• Data is used as evidence for possible parameter values
• Final output is a set of posterior distributions for the parameters
• Models may employ only the most probable parameter values or their full
probability distribution
• Variational Bayes approximates the posterior with a simpler distribution
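The smallest worked example of this view is the conjugate beta-binomial model: a Beta prior encodes expert belief about a coin's bias, and the observed data turns it into a Beta posterior by simple count addition. The prior and counts below are hypothetical:

```python
# Bayesian update sketch: Beta prior on a coin's bias + binomial evidence.
prior_a, prior_b = 2.0, 2.0      # prior pseudo-counts: mild belief in fairness
heads, tails = 70, 30            # observed data as evidence

post_a = prior_a + heads         # posterior = prior counts + data counts
post_b = prior_b + tails
posterior_mean = post_a / (post_a + post_b)            # full-distribution summary
map_estimate = (post_a - 1) / (post_a + post_b - 2)    # most probable single value
```

A model can then use either the single MAP value or the whole posterior distribution, exactly the choice the slide describes; variational Bayes becomes necessary when, unlike here, the posterior has no closed form.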
MODEL COMPLEXITY
• Limiting model size and complexity can be used to avoid excessive variance (overfitting)
• Minimum description length and Akaike/Bayesian information criteria are the
Occam’s razor of data science
• VC dimension of a model provides a theoretical bound on generalization error
• Regularization can limit instance weights or parameter sizes
• Bayesian models use hyper-parameters to limit parameter overfit
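The regularization bullet can be verified directly: in ridge regression, a larger penalty provably shrinks the learned parameter sizes. The synthetic data and penalty values are my own:

```python
import numpy as np

# Regularization sketch: ridge penalty shrinks the parameter vector.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(size=100)

def ridge_norm(lam):
    # closed-form ridge solution: (X'X + lam I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
    return float(np.linalg.norm(w))

norms = [ridge_norm(lam) for lam in (0.0, 1.0, 100.0)]
```

The norms decrease strictly as the penalty grows, which is the mechanism by which regularization limits parameter sizes and trades a little bias for lower variance.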
THE END
