Simple Introduction to AutoEncoder

Lang Jun
Deep Learning Study Group, HLT, I2R
17 August 2012
Outline

1. What is AutoEncoder?
   Input = decoder(encoder(input))

2. How to train AutoEncoder?
   Pre-training

3. What can it be used for?
   Reduce dimensionality
1. What is AutoEncoder?
➢   Multilayer neural net simple review
[Slides 3–23: step-by-step multilayer neural net diagrams; figures not preserved]
1. What is AutoEncoder?
➢   A multilayer neural net trained with target output = input
➢   Reconstruction = decoder(encoder(input))
➢   Trained by minimizing the reconstruction error
➢   Probable inputs have small reconstruction error
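The definition above fits in a few lines of code. What follows is a toy sketch, not from the slides: an 8-3-8 autoencoder (the classic one-hot compression exercise) with sigmoid units and tied weights, trained by plain gradient descent on the squared reconstruction error. The layer sizes, tied weights, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 8-3-8 autoencoder with tied weights: encoder and decoder share W.
n_in, n_hid = 8, 3
W = rng.normal(0.0, 0.1, size=(n_in, n_hid))
b_enc = np.zeros(n_hid)
b_dec = np.zeros(n_in)

def encoder(x):
    return sigmoid(x @ W + b_enc)

def decoder(h):
    return sigmoid(h @ W.T + b_dec)

# Task: reconstruct one-hot vectors through the 3-unit bottleneck.
X = np.eye(n_in)

def mse():
    return float(np.mean((decoder(encoder(X)) - X) ** 2))

err_before = mse()
lr = 0.5
for _ in range(10000):
    h = encoder(X)
    r = decoder(h)                        # reconstruction = decoder(encoder(input))
    d_dec = (r - X) * r * (1 - r)         # backprop through decoder sigmoid
    d_enc = (d_dec @ W) * h * (1 - h)     # backprop through encoder sigmoid
    W -= lr * (d_dec.T @ h + X.T @ d_enc) # tied weights: sum both gradients
    b_dec -= lr * d_dec.sum(axis=0)
    b_enc -= lr * d_enc.sum(axis=0)
err_after = mse()
```

After training, `err_after` is far below `err_before`: minimizing reconstruction error forces the 3 hidden units to encode which of the 8 inputs was presented.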
2. How to train AutoEncoder?
Hinton (2006) Science paper
➢   Restricted Boltzmann Machine (RBM)
[Slides 25–26: RBM and layer-wise pre-training diagrams; figures not preserved]
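Hinton's 2006 recipe pre-trains each layer as a restricted Boltzmann machine using contrastive divergence, then stacks the layers so each RBM learns the features of the one below. Below is a minimal sketch of CD-1 and greedy layer-wise stacking on toy binary data; the layer sizes, learning rate, and data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def cd1_step(self, v0, lr=0.1):
        # One step of contrastive divergence (CD-1):
        # up, sample, down (reconstruct), up again.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_vis += lr * (v0 - v1).mean(axis=0)
        self.b_hid += lr * (ph0 - ph1).mean(axis=0)

# Greedy layer-wise pre-training: each RBM is trained on the
# hidden activations of the previous one.
X = (rng.random((64, 16)) < 0.3).astype(float)   # toy binary data
layers, data = [], X
for n_hid in (8, 4):
    rbm = RBM(data.shape[1], n_hid)
    for _ in range(200):
        rbm.cd1_step(data)
    layers.append(rbm)
    data = rbm.hidden_probs(data)   # features feed the next layer
```

In the full recipe the stacked RBM weights initialize an "unrolled" encoder-decoder network, which is then fine-tuned with backpropagation on reconstruction error.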
Effective deep learning became possible through unsupervised pre-training
[Figure: 0–9 handwritten digit recognition error rate (MNIST data), purely supervised neural net vs. unsupervised pre-training with RBMs and denoising autoencoders; figure not preserved]
Why does unsupervised pre-training work so well?

• Regularization hypothesis: representations good for P(x) are good for P(y|x).
• Optimization hypothesis: unsupervised initialization starts supervised training near a better local minimum of the training error; such minima are not otherwise reachable from random initialization.

Erhan, Courville, Manzagol, Vincent, Bengio (JMLR, 2010)
3. What can it be used for?
➢   Illustration for images
[Figure: image examples; not preserved]
3. What can it be used for?
➢   Document retrieval

Architecture, input to output: 2000 word counts (input vector) → 500 neurons → 250 neurons → 10 (central bottleneck) → 250 neurons → 500 neurons → 2000 reconstructed counts (output vector)

• We train the neural network to reproduce its input vector as its output.
• This forces it to compress as much information as possible into the 10 numbers in the central bottleneck.
• These 10 numbers are then a good way to compare documents.
  – See Ruslan Salakhutdinov's talk.
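The encoder half of that 2000-500-250-10 network can be sketched as below. The weights here are random placeholders (in practice they come from pre-training and fine-tuning on real word-count data), so the point is the shape of the pipeline: each document's 2000-dimensional count vector is squeezed to a 10-number code, and documents are compared by those codes rather than by the full vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Encoder half of the 2000-500-250-10 autoencoder from the slide.
# Random weights stand in for the trained ones.
sizes = [2000, 500, 250, 10]
weights = [rng.normal(0.0, 0.05, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def encode(word_counts):
    h = word_counts
    for W in weights:
        h = sigmoid(h @ W)
    return h            # the 10-number code from the central bottleneck

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare two synthetic "documents" by their 10-D codes instead of
# their full 2000-D word-count vectors.
doc_a = rng.poisson(0.1, 2000).astype(float)
doc_b = rng.poisson(0.1, 2000).astype(float)
sim = cosine(encode(doc_a), encode(doc_b))
```

For retrieval, one would encode every document once, then rank by similarity between codes: ten numbers per document instead of two thousand.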
3. What can it be used for?
➢   Visualize documents

Architecture, input to output: 2000 word counts (input vector) → 500 neurons → 250 neurons → 2 (central bottleneck) → 250 neurons → 500 neurons → 2000 reconstructed counts (output vector)

• Instead of using codes to retrieve documents, we can use 2-D codes to visualize sets of documents.
  – This works much better than 2-D PCA.
First compress all documents to 2 numbers using a type of PCA, then use different colors for different document categories.
[Figure: 2-D PCA codes colored by category; not preserved]
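The PCA baseline used for this comparison can be sketched with a plain SVD: project each document onto the two directions of maximum variance. The document matrix below is synthetic, for shape only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "documents": 100 word-count vectors over a 50-term vocabulary.
X = rng.poisson(1.0, size=(100, 50)).astype(float)

# PCA via SVD of the centered data: the top-2 right singular vectors
# are the directions of maximum variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
codes_2d = Xc @ Vt[:2].T      # each document compressed to 2 numbers

# codes_2d can now be scatter-plotted, colored by document category.
```

PCA is restricted to a linear projection, which is why the nonlinear autoencoder codes on the next slide separate the categories so much better.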
First compress all documents to 2 numbers with an autoencoder, then use different colors for different document categories.
[Figure: 2-D autoencoder codes colored by category; not preserved]
3. What can it be used for?
➢   Transliteration
[Figure: transliteration example; not preserved]
Thanks for your attendance

Looking forward to presenting: Recursive AutoEncoder
