Machine Learning
in Artificial Intelligence
Aman Patel
Roll no: A211
Machine Learning: Definition
 Machine learning, a branch of artificial intelligence, concerns
the construction and study of systems that can learn from
data.
 Definition: A computer program is said to learn from
experience E with respect to some class of tasks T and
performance measure P, if its performance at tasks in T, as
measured by P, improves with experience E.
 For example, a machine learning system could be trained on
email messages to learn to distinguish between spam and
non-spam messages. After learning, it can then be used to
classify new email messages into spam and non-spam folders.
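To make the spam example concrete, here is a minimal sketch (not from the slides) of training such a classifier with scikit-learn; the messages, labels, and the choice of a naive Bayes model are illustrative assumptions.

```python
# Minimal sketch of the spam / non-spam example (hypothetical toy data).
# Assumes scikit-learn is installed; messages and labels are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",    # spam
    "Lowest price on meds",    # spam
    "Meeting moved to 3pm",    # not spam
    "Lunch tomorrow?",         # not spam
]
labels = [1, 1, 0, 0]  # experience E: labelled examples

# Task T: classify messages; performance P would be accuracy on held-out mail.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free meds, win now"]))       # likely spam (1)
print(model.predict(["See you at the meeting"]))   # likely not spam (0)
```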
Why is Machine Learning Important?
 Some tasks cannot be defined well, except by examples
(e.g., recognizing people).
 Relationships and correlations can be hidden within large
amounts of data. Machine Learning/Data Mining may be
able to find these relationships.
 Human designers often produce machines that do not work
as well as desired in the environments in which they are used.
Why is Machine Learning Important
(Cont’d)?
 The amount of knowledge available about certain tasks might be too large for explicit encoding by humans (e.g., medical diagnosis).
 Environments change over time.
 New knowledge about tasks is constantly being discovered
by humans. It may be difficult to continuously re-design
systems “by hand”.
Areas of Influence for Machine
Learning
 Statistics: How best to use samples drawn from unknown
probability distributions to help decide from which distribution
some new sample is drawn?
 Brain Models: Non-linear elements with weighted inputs
(Artificial Neural Networks) have been suggested as simple
models of biological neurons.
 Adaptive Control Theory: How to deal with controlling a
process having unknown parameters that must be estimated
during operation?
Areas of Influence for Machine
Learning (Cont’d)
 Psychology: How to model human performance on various
learning tasks?
 Artificial Intelligence: How to write algorithms that acquire the knowledge humans can acquire, at least as well as humans do?
 Evolutionary Models: How to model certain aspects of
biological evolution to improve the performance of computer
programs?
Designing a Learning System:
An Example
o Problem Description
o Choosing the Training Experience
o Choosing the Target Function
o Choosing a Representation for the Target Function
o Choosing a Function Approximation Algorithm
o Final Design
Problem Description:
A Checker Learning Problem
 Task T: Playing Checkers
 Performance Measure P: Percent of games won against
opponents
 Training Experience E: To be selected ==> Games Played
against itself
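As one possible illustration of choosing a target function, a representation, and a function approximation algorithm for this checkers problem, the sketch below assumes the classic linear evaluation function V̂(b) = w0 + w1·x1 + … + w6·x6 over hand-picked board features, trained with the LMS weight-update rule; the feature values and the training value are made up.

```python
# Sketch of a linear target function V_hat(b) = w0 + w1*x1 + ... + w6*x6 for
# checkers, trained with the LMS rule. Features and data are illustrative only.

def v_hat(weights, features):
    """Estimated board value: w0 + sum(w_i * x_i)."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def lms_update(weights, features, v_train, lr=0.01):
    """LMS rule: w_i <- w_i + lr * (V_train(b) - V_hat(b)) * x_i."""
    error = v_train - v_hat(weights, features)
    weights[0] += lr * error  # x0 is implicitly 1
    for i, x in enumerate(features, start=1):
        weights[i] += lr * error * x
    return weights

# Hypothetical training example: 6 board features (e.g. piece counts) and a
# training value derived from the outcome of a self-play game.
weights = [0.0] * 7
board_features = [12, 12, 0, 0, 8, 8]
training_value = +100.0  # e.g. the final position of a won game

for _ in range(50):
    weights = lms_update(weights, board_features, training_value)
print(weights)
```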
Issues in Machine Learning
 What algorithms are available for learning a concept? How
well do they perform?
 How much training data is sufficient to learn a concept with
high confidence?
 When is it useful to use prior knowledge?
 Are some training examples more useful than others?
 What are the best tasks for a system to learn?
 What is the best way for a system to represent its knowledge?
Machine Learning Algorithm Types
 Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm or the type of input available during training.
 Supervised learning algorithms are trained on labelled examples, i.e., input where the desired output is known. The supervised learning algorithm attempts to generalise a function or mapping from inputs to outputs, which can then be used to predict outputs for previously unseen inputs (a short sketch contrasting this with unsupervised learning follows this slide).
 Unsupervised learning algorithms operate on unlabelled examples, i.e.,
input where the desired output is unknown. Here the objective is to
discover structure in the data (e.g. through a cluster analysis), not to
generalise a mapping from inputs to outputs.
 Semi-supervised learning combines both labelled and unlabelled
examples to generate an appropriate function or classifier.
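A short sketch contrasting the supervised and unsupervised cases on a toy dataset; the data, and the choice of logistic regression and k-means as representatives, are illustrative assumptions rather than anything specified in the slides.

```python
# Supervised vs. unsupervised learning on a toy 2-D dataset (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of points: class 0 around (0, 0), class 1 around (5, 5).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: labels y are given, learn a mapping X -> y.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.5, 5.2]]))        # predicted label for an unseen input

# Unsupervised: labels withheld, discover structure (here, two clusters).
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5], km.labels_[-5:])  # cluster assignments
```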
Machine Learning Algorithm Types
(Cont’d)
 Reinforcement learning is concerned with how intelligent
agents ought to act in an environment to maximise some notion of
reward. The agent executes actions which cause the observable
state of the environment to change. Through a sequence of
actions, the agent attempts to gather knowledge about how the
environment responds to its actions, and attempts to synthesise a sequence of actions that maximises the cumulative reward (a minimal sketch follows this slide).
 Developmental learning, elaborated for robot learning, generates its own sequences (also called a curriculum) of learning situations to
cumulatively acquire repertoires of novel skills through autonomous
self-exploration and social interaction with human teachers, and
using guidance mechanisms such as active learning, maturation,
motor synergies, and imitation.
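As an illustration of the reinforcement-learning loop described above (act, observe the state change, accumulate reward), here is a minimal tabular Q-learning sketch on a made-up five-state corridor; Q-learning and all hyperparameters are assumptions, not something the slide specifies.

```python
# Tabular Q-learning on a toy 5-state corridor: start in state 0, reward +1
# for reaching state 4. Environment and hyperparameters are illustrative.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should move right (+1) in every non-terminal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```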
AdaBoost Algorithm
 AdaBoost, short for Adaptive Boosting, is a machine
learning algorithm, formulated by Yoav Freund and Robert
Schapire.
 It is a meta-algorithm, and can be used in conjunction with many other learning algorithms to improve their performance (a usage sketch follows this slide).
 AdaBoost is adaptive in the sense that subsequent classifiers built
are tweaked in favour of those instances misclassified by previous
classifiers.
 AdaBoost is sensitive to noisy data and outliers.
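Because AdaBoost is a meta-algorithm, it is typically used through a library. The sketch below shows one possible usage with scikit-learn's AdaBoostClassifier (whose default base learner is a decision stump) on synthetic data; the dataset and parameter values are illustrative.

```python
# Using AdaBoost as a meta-algorithm over a simple base learner (decision
# stumps are scikit-learn's default). Toy data generated for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
clf.fit(X_tr, y_tr)

print("test accuracy:", clf.score(X_te, y_te))
# Because each round re-weights misclassified samples, AdaBoost is sensitive
# to label noise and outliers, as the slide notes.
```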
AdaBoost - Adaptive Boosting
 Instead of resampling, uses training set re-weighting
 Each training sample uses a weight to determine the
probability of being selected for a training set.
 AdaBoost is an algorithm for constructing a “strong” classifier as a linear combination of “simple” weak classifiers
 Final classification is based on a weighted vote of the weak classifiers
AdaBoost Terminology
 h_t(x) … “weak” or basis classifier
(Classifier = Learner = Hypothesis)
 H(x) = sign( Σ_t α_t h_t(x) ) … “strong” or final classifier
 Weak Classifier: < 50% error over any distribution
 Strong Classifier: thresholded linear combination of the weak classifiers’ outputs
AdaBoost: The Algorithm
 The framework
 The learner receives examples (x_i, y_i), i = 1, …, N, chosen randomly according to some fixed but unknown distribution P on X × Y
 The learner finds a hypothesis h which is consistent with most of the samples, i.e. h(x_i) = f(x_i) = y_i for most 1 ≤ i ≤ N
 The algorithm
 Input variables
P: the distribution from which the training examples are sampled
D: the distribution (weights) over all the training samples
WeakLearn: a weak learning algorithm to be boosted
T: the specified number of iterations
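As one standard formulation of the boosting loop built from these inputs, the sketch below starts with a uniform distribution D, calls a decision stump as WeakLearn on the weighted samples, and in each of the T rounds increases the weights of misclassified samples using α_t = ½ ln((1 − ε_t)/ε_t). It is an illustration of the re-weighting idea, not a transcription of the original slides; the data at the bottom are synthetic.

```python
# A compact AdaBoost loop: D is the weight distribution over samples,
# WeakLearn is a depth-1 decision tree, T is the number of boosting rounds.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=20):
    """y must be in {-1, +1}. Returns the weak learners and their weights."""
    n = len(y)
    D = np.full(n, 1.0 / n)          # initial distribution over samples
    learners, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        eps = D[pred != y].sum()                  # weighted error
        eps = min(max(eps, 1e-10), 1 - 1e-10)     # guard against 0 or 1
        alpha = 0.5 * np.log((1 - eps) / eps)     # classifier weight
        D *= np.exp(-alpha * y * pred)            # up-weight mistakes
        D /= D.sum()                              # renormalise
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def strong_classify(learners, alphas, X):
    """H(x) = sign(sum_t alpha_t * h_t(x)) -- the weighted vote."""
    votes = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(votes)

# Tiny illustrative usage with random data (labels in {-1, +1}).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
learners, alphas = adaboost(X, y, T=20)
print("train accuracy:", (strong_classify(learners, alphas, X) == y).mean())
```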
Advantages of AdaBoost
 Very simple to implement
 Performs feature selection on very large sets of features (a brief sketch follows)
 AdaBoost adapts to the error rates of the weak hypotheses returned by WeakLearn
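As a brief illustration of the feature-selection point above: a fitted scikit-learn AdaBoostClassifier exposes feature_importances_, which can be used to rank a large feature set. The synthetic data below are purely illustrative.

```python
# Ranking features with a fitted AdaBoost model (synthetic data, illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

# Indices of the ten most important features according to the boosted stumps.
top10 = np.argsort(clf.feature_importances_)[::-1][:10]
print(top10)
```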