Facial Emotion Recognition with
Facial Gestures
By Ashwin Kedari Rachha
3263
Why Emotion Detection?
● The motivation for choosing this topic lies in the large investments corporations make in
feedback and surveys, often without getting an equitable return on those investments.
● Emotion detection through facial gestures is a technology that aims to improve product and
service performance by monitoring how customers react to particular products or to service
staff.
Some companies that make use of emotion detection...
● While Disney uses emotion-detection tech to gauge audience opinion of a completed
project, other brands have used it to directly inform advertising and digital
marketing.
● Kellogg’s is just one high-profile example, having used Affectiva’s software to test
audience reaction to ads for its cereal.
● Unilever does this, using HireVue’s AI-powered technology to screen prospective
candidates based on factors like body language and mood. In doing so, the
company is able to find the person whose personality and characteristics are best
suited to the job.
Emotion Expression Recognition Using SVM
What are Support Vector Machines?
SVMs plot the training vectors in a high-dimensional
feature space and label each vector with its class. A
hyperplane is drawn between the training vectors
that maximizes the distance between the different
classes. The hyperplane is determined through a
kernel function (radial basis, linear, polynomial or
sigmoid), which is given as input to the classification
software [Joachims, 1998b].
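As an illustration of this baseline, here is a minimal scikit-learn sketch (not from the original slides) that trains an SVM with an RBF kernel on flattened face images; the placeholder data and variable names are assumptions for demonstration only.

```python
# Minimal SVM baseline sketch (illustrative, not the authors' exact pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Assume X holds flattened 48x48 grayscale faces and y holds emotion labels (0-6).
rng = np.random.default_rng(0)
X = rng.random((200, 48 * 48))          # placeholder data for the sketch
y = rng.integers(0, 7, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # radial-basis kernel, as mentioned above
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```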
Facial Expression Detection using Fuzzy Classifier
● The algorithm is composed of three main
stages: an image processing stage, a facial
feature extraction stage, and an emotion
detection stage.
● In the image processing stage, the face region
and facial components are extracted using a
fuzzy color filter, a virtual face model, and
histogram analysis.
● In the facial feature extraction stage, the
features used for emotion detection are
extracted from these facial components.
FACIAL EXPRESSION RECOGNITION: A DEEP LEARNING APPROACH
Facial Emotion Recognition by CNN
➽ Steps:
1. Data Preprocessing
2. Image Augmentation
3. Feature Extraction
4. Training
5. Validation
Dataset Description
The data consists of 48x48 pixel grayscale images of faces. Each face has been categorized
into one of seven expression categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad,
5=Surprise, 6=Neutral).
The training set consists of 28,709 examples. The public test set used for the
leaderboard consists of 3,589 examples. The final test set, which was used to determine
the winner of the competition, consists of another 3,589 examples. This dataset was
prepared by Pierre-Luc Carrier and Aaron Courville as part of an ongoing research
project.
Data Preprocessing
● The fer2013.csv file consists of three columns: emotion, pixels, and usage (indicating each sample's purpose).
● The pixels column is first parsed into a list of values.
● Since the computational cost of working with raw pixel values in the range 0-255 is high,
the pixel data is normalized to the range [0, 1].
● The stored face arrays are reshaped and resized to 48 x 48.
● The emotion labels and their corresponding pixel arrays are stored in separate objects.
● We use scikit-learn's train_test_split() function to split the dataset into training
and validation data, with test_size=0.2, meaning 20% of the data is held out for validation
while the remaining 80% is used for training. A sketch of this step is shown below.
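A hedged sketch of the preprocessing described above; the exact column and variable names are assumptions, not necessarily the original code.

```python
# Sketch of the FER2013 preprocessing steps (column/variable names assumed).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("fer2013.csv")   # columns: emotion, pixels, usage

# Parse the space-separated pixel strings, normalize to [0, 1], reshape to 48x48x1.
faces = np.array([np.asarray(p.split(), dtype=np.float32) for p in data["pixels"]])
faces = (faces / 255.0).reshape(-1, 48, 48, 1)

# One-hot encode the seven emotion labels.
emotions = pd.get_dummies(data["emotion"]).to_numpy()

# 80/20 train/validation split, as described above.
X_train, X_val, y_train, y_val = train_test_split(
    faces, emotions, test_size=0.2, random_state=42)
```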
Data Augmentation
More data is generated from the training set by applying transformations. This is needed
when the training set is not large enough for the model to learn a good representation.
The image data is generated by transforming the actual training images with rotations,
crops, shifts, shears, zooms, flips, reflections, normalization, etc., as sketched below.
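A minimal sketch of such augmentation using Keras' ImageDataGenerator; the specific parameter values are assumptions, not the settings used in the original slides.

```python
# Illustrative augmentation setup with Keras (parameter values are assumed).
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,        # random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    shear_range=0.1,          # shear transforms
    zoom_range=0.1,           # random zoom
    horizontal_flip=True,     # mirror images
)

# X_train, y_train come from the preprocessing step above.
train_generator = datagen.flow(X_train, y_train, batch_size=64)
```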
Mini-Xception Model for Emotion Recognition
The architecture of the mini-Xception
model consists of:
● Two convolutional layers, each followed
by batch normalization.
● Then, in each residual module, one branch
is passed through a convolutional layer while
the other branch is passed through separable
convolutional layers.
● These layers are followed by max
pooling and global average pooling,
finalized by a softmax layer, as sketched below.
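A simplified Keras sketch of one residual module in this style; the filter counts and kernel sizes are assumptions, and this is not the authors' exact model definition.

```python
# Simplified sketch of a mini-Xception-style residual module (assumed hyperparameters).
from keras.layers import (Input, Conv2D, SeparableConv2D, BatchNormalization,
                          Activation, MaxPooling2D, GlobalAveragePooling2D, add)
from keras.models import Model

inputs = Input(shape=(48, 48, 1))

# Entry: two convolutions, each followed by batch normalization.
x = Conv2D(8, (3, 3), padding="same")(inputs)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(8, (3, 3), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)

# Residual module: one branch uses a plain convolution, the other separable convolutions.
residual = Conv2D(16, (1, 1), strides=(2, 2), padding="same")(x)
residual = BatchNormalization()(residual)

branch = SeparableConv2D(16, (3, 3), padding="same")(x)
branch = BatchNormalization()(branch)
branch = Activation("relu")(branch)
branch = SeparableConv2D(16, (3, 3), padding="same")(branch)
branch = BatchNormalization()(branch)
branch = MaxPooling2D((3, 3), strides=(2, 2), padding="same")(branch)

x = add([residual, branch])

# Head: global average pooling and a softmax over the seven emotions.
x = Conv2D(7, (3, 3), padding="same")(x)
x = GlobalAveragePooling2D()(x)
outputs = Activation("softmax")(x)

model = Model(inputs, outputs)
```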
Convolutional 2D
The 2D convolution is a fairly simple operation at heart: you start with a kernel, which
is simply a small matrix of weights. This kernel “slides” over the 2D input data,
performing an elementwise multiplication with the part of the input it is currently on,
and then summing up the results into a single output pixel.
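To make this concrete, here is a tiny NumPy sketch (illustrative only) that slides a 3x3 kernel over a small input and sums the elementwise products into each output pixel.

```python
# Naive 2D convolution (cross-correlation, as used in CNNs) for illustration.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise multiply the kernel with the patch it covers, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0           # simple averaging kernel
print(conv2d(image, kernel))             # 3x3 output
```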
Batch Normalization and Max Pooling 2D
Batch normalization reduces the amount by which the hidden unit values shift around
(internal covariate shift).
In max pooling, a window of size n x n is moved across the matrix, and for each
position the maximum value is taken and placed in the corresponding position of the output
matrix.
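A small NumPy sketch of 2x2 max pooling with stride 2 (illustrative only).

```python
# 2x2 max pooling with stride 2, for illustration.
import numpy as np

def max_pool2d(x, size=2, stride=2):
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Take the maximum value inside each window.
            out[i, j] = x[i * stride:i * stride + size, j * stride:j * stride + size].max()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 0],
              [1, 4, 3, 8]], dtype=float)
print(max_pool2d(x))   # [[6. 4.], [7. 9.]]
```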
Global Average Pooling
Global average pooling (GAP) layers help minimize overfitting by reducing the total number
of parameters in the model. Similar to max pooling layers, GAP layers reduce the spatial
dimensions of a three-dimensional tensor; each feature map is collapsed to a single value by averaging.
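A short Keras sketch showing the shape reduction; the tensor shape here is assumed for illustration.

```python
# GAP collapses each feature map to its mean: (batch, h, w, channels) -> (batch, channels).
import numpy as np
from keras.layers import Input, GlobalAveragePooling2D
from keras.models import Model

inp = Input(shape=(6, 6, 16))
out = GlobalAveragePooling2D()(inp)
model = Model(inp, out)

x = np.random.rand(1, 6, 6, 16).astype("float32")
print(model.predict(x).shape)   # (1, 16)
```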
Optimizer, Loss Function and Metrics
● The loss function used is categorical cross-entropy.
● A loss function measures how far our predictions are from the actual values; training
minimizes this quantity.
● Cross-entropy loss, or log loss, measures the performance of a classification model
whose output is a probability value between 0 and 1. Cross-entropy loss increases
as the predicted probability diverges from the actual label. So predicting a
probability of .012 when the actual observation label is 1 would be bad and result
in a high loss value, as the example below shows.
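A small numerical sketch of categorical cross-entropy; the probability values are chosen purely for illustration.

```python
# Categorical cross-entropy: -sum(y_true * log(y_pred)), illustrated with toy values.
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0, 0, 0, 1, 0, 0, 0])             # true class: Happy (index 3)

confident = np.array([0.01, 0.01, 0.01, 0.93, 0.02, 0.01, 0.01])
poor      = np.array([0.30, 0.10, 0.20, 0.012, 0.25, 0.08, 0.058])

print(categorical_crossentropy(y_true, confident))   # ~0.07 (low loss)
print(categorical_crossentropy(y_true, poor))         # ~4.42 (high loss)
```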
● The optimizer used is the Adam() optimizer.
● Adam stands for Adaptive Moment Estimation, a method that
computes adaptive learning rates for each parameter. In
addition to storing an exponentially decaying average of
past squared gradients, as AdaDelta does, Adam also keeps
an exponentially decaying average of past gradients M(t),
similar to momentum:
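The formula shown on the original slide is not reproduced in the transcript; for reference, these are the standard Adam moment estimates and update rule (Kingma and Ba), where g_t is the gradient at step t, beta_1 and beta_2 are the decay rates, eta is the learning rate, and epsilon is a small constant:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\hat{m}_t &= \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t} \\
\theta_{t+1} &= \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t
\end{aligned}
$$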
Validation
● In the validation (inference) phase, various OpenCV and Keras functions are used.
● First, frames are captured from a video stream object.
● An LBP cascade classifier is used to detect the facial region of interest.
● Each frame is converted to grayscale, then resized and reshaped with the
help of NumPy.
● The resized face is fed to the predictor, which is loaded with the
keras.models.load_model() function.
● The class with the maximum predicted probability (the argmax) is output.
● A rectangle is drawn around each facial region and the predicted emotion is written above
the box. A sketch of this loop is shown below.
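A hedged sketch of this inference loop with OpenCV and Keras; the cascade file name, model file name, and label ordering are assumptions for illustration, not the authors' exact code.

```python
# Illustrative inference loop (file names and label order are assumed).
import cv2
import numpy as np
from keras.models import load_model

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

face_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")  # LBP cascade
model = load_model("emotion_model.h5")                              # trained CNN (assumed file name)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Resize the detected face to 48x48, normalize, and add batch/channel dims.
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        roi = roi.reshape(1, 48, 48, 1)
        pred = model.predict(roi)
        label = EMOTIONS[int(np.argmax(pred))]            # class with maximum probability
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Facial Emotion Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```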
Performance Evaluation
The model reaches 65-66% accuracy on the validation set during training. The CNN
learns representative features of the emotions from the training images. Below
are a few epochs of the training process with a batch size of 64.
Output and statistics