Deep Learning for Natural Language Processing
Jonathan Mugan, PhD
NLP Community Day
June 4, 2015
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
The importance of finding dumb mistakes
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
Deep learning enables sub-symbolic processing
Symbolic systems can be brittle.
I bought a car . → <i> <bought> <a> <car> <.>
You have to remember to represent “purchased” and “automobile.”
What about “truck”?
How do you encode the meaning of the entire sentence?
Deep learning begins with a little function
It all starts with a humble linear function called a perceptron.
sum = weight1 ✖ input1 ✚ weight2 ✖ input2 ✚ weight3 ✖ input3
Perceptron: if sum > threshold: output 1; else: output 0.
Example: The inputs can be your data. Question: Should I buy this car?
sum = 0.2 ✖ gas mileage ✚ 0.3 ✖ horsepower ✚ 0.5 ✖ num cup holders
Perceptron: if sum > threshold: buy; else: walk.
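A minimal Python sketch of this perceptron. The weights are the ones from the slide; the feature values and threshold are made up for illustration.

```python
# A minimal perceptron: a weighted sum compared against a threshold.
def perceptron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Should I buy this car? Feature values and threshold are illustrative.
features = [30.0, 150.0, 4.0]   # gas mileage, horsepower, num cup holders
weights = [0.2, 0.3, 0.5]       # the weights from the slide
decision = perceptron(features, weights, threshold=10.0)
print("buy" if decision else "walk")
```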
These little functions are chained together
Deep learning comes from chaining a bunch of these little functions
together. Chained together, they are called neurons.
To create a neuron, we add a nonlinearity to the perceptron to get
extra representational power when we chain them together:
output = σ(w · x ✚ 𝑏), where σ(z) = 1 / (1 + e^(−z)).
Our nonlinear perceptron σ is sometimes called a sigmoid. The value 𝑏
just offsets the sigmoid so the center is at 0.
(Plot of a sigmoid.)
Single artificial neuron
Output, or input to next neuron:
σ(weight1 ✖ input1 ✚ weight2 ✖ input2 ✚ weight3 ✖ input3 ✚ 𝑏)
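A minimal sketch of a single sigmoid neuron, assuming the formulation above; the inputs, weights, and bias are illustrative values.

```python
import math

# A single sigmoid neuron: a weighted sum plus bias, squashed to (0, 1).
def neuron(inputs, weights, b):
    z = sum(w * x for w, x in zip(weights, inputs)) + b
    return 1.0 / (1.0 + math.exp(-z))   # the sigmoid nonlinearity

print(neuron([30.0, 150.0, 4.0], [0.2, 0.3, 0.5], b=-50.0))
```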
Three-layered neural network
A bunch of neurons chained together is called a neural network.
Layer 1: input data. Can be pixel values or the number of cup holders.
Layer 2: hidden layer. Called this because it is neither input nor output.
Layer 3: output. E.g., cat or not a cat; buy the car or walk.
This network has three layers. (Some edges lighter to avoid clutter.)
(Example input vector: [16.2, 17.3, −52.3, 11.1])
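A sketch of a forward pass through such a three-layer network using NumPy; the layer sizes and random weights are arbitrary stand-ins for trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: 4 inputs -> 3 hidden units -> 1 output (sizes arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # input -> hidden
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # hidden -> output

x = np.array([16.2, 17.3, -52.3, 11.1])         # the example input vector
hidden = sigmoid(W1 @ x + b1)                   # layer 2: hidden activations
output = sigmoid(W2 @ hidden + b2)              # layer 3: e.g., P(cat)
print(output)
```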
Training with supervised learning
Supervised Learning: You show the network a bunch of things with labels
saying what they are, and you want the network to learn to classify
future things without labels.
Example: here are some pictures of cats. Tell me which of these other
pictures are of cats.
To train the network, we want to find the weights that correctly
classify all of the training examples. You hope it will work on the
testing examples.
Done with an algorithm called Backpropagation [Rumelhart et al., 1986].
(Example input vector: [16.2, 17.3, −52.3, 11.1])
Training with supervised learning
Supervised Learning: You show the network a bunch of things with labels
saying what they are, and you want the network to learn to classify
future things without labels.
(Figure: parameters 𝑤, 𝑊 and variables 𝑦, 𝑥; example input [16.2, 17.3, −52.3, 11.1].)
Learning is learning the parameter values. The parameters and data are
tensors, which is why Google’s deep learning toolbox is called TensorFlow.
Deep learning is adding more layers
There is no exact definition of what constitutes “deep learning.”
The number of weights (parameters) is generally large. Some networks
have millions of parameters that are learned.
(Some edges omitted to avoid clutter.)
(Example input vector: [16.2, 17.3, −52.3, 11.1])
Recall our standard architecture
Layer 1: input data. Can be pixel values or the number of cup holders.
Layer 2: hidden layer. Called this because it is neither input nor output.
Layer 3: output. E.g., cat or not a cat; buy the car or walk.
Is this a cat?
(Example input vector: [16.2, 17.3, −52.3, 11.1])
Neural nets with multiple outputs
Okay, but what kind of cat is it?
Introduce a new node called a softmax: just normalize each output over
the sum of the other outputs (using the exponential). This gives a
probability.
Outputs: probability a house cat, probability a lion, probability a
panther, probability a bobcat.
(Example input vector: [16.2, 17.3, −52.3, 11.1])
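A minimal softmax sketch; the four scores are hypothetical network outputs for the four cat types.

```python
import numpy as np

# Softmax: exponentiate each score, then normalize by the sum, turning
# raw network outputs into a probability distribution.
def softmax(scores):
    exps = np.exp(scores - np.max(scores))   # subtract max for stability
    return exps / exps.sum()

scores = np.array([2.0, 0.5, 0.3, -1.0])     # house cat, lion, panther, bobcat
print(softmax(scores))                       # probabilities; sums to 1.0
```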
Learning word vectors
[13.2, 5.4, −3.1] [−12.1, 13.1, 0.1] [7.2, 3.2, −1.9]
the man ran
From the sentence, “The man ran fast.”
Outputs: probability of “fast,” probability of “slow,” probability of
“taco,” probability of “bobcat.”
Learns a vector for each word based on the “meaning” in the sentence by
trying to predict the next word [Bengio et al., 2003]. These numbers
are updated along with the weights and become the vector
representations of the words.
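A toy sketch of the idea, assuming a three-word context as in [Bengio et al., 2003]: look up a vector for each context word, concatenate them, and score the vocabulary. The vocabulary, dimensions, and weights here are illustrative; real training would update both W and the embeddings by backpropagation.

```python
import numpy as np

# Toy vocabulary and randomly initialized word vectors; during training,
# these vectors are updated along with the network weights.
vocab = ["the", "man", "ran", "fast", "slow", "taco", "bobcat"]
word_to_id = {w: i for i, w in enumerate(vocab)}
dim = 3
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))   # one vector per word
W = rng.normal(size=(len(vocab), 3 * dim))        # context -> word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Predict the next word from a three-word context.
context = ["the", "man", "ran"]
x = np.concatenate([embeddings[word_to_id[w]] for w in context])
probs = softmax(W @ x)
print(vocab[int(np.argmax(probs))])   # untrained, so the guess is random
```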
Comparing vector and symbolic representations
Vector representation: taco = [17.32, 82.9, −4.6, 7.2]
Symbolic representation: taco = 𝑡𝑎𝑐𝑜
• Vectors have a similarity score. A taco is not a burrito, but it is similar.
• Symbols can only be the same or not. A taco is just as different from
a burrito as it is from a Toyota.
• Vectors have internal structure [Mikolov et al., 2013]:
Italy − Rome = France − Paris; King − Queen = Man − Woman.
• Symbols have no structure; they are arbitrarily assigned, with
meaning relative only to other symbols.
• Vectors are grounded in experience, with meaning relative to
predictions. The ability to learn representations makes agents less
brittle.
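A sketch of a vector similarity score. The taco vector is the one from the slide; the burrito and Toyota vectors are hypothetical, chosen only to illustrate that similar concepts get nearby vectors.

```python
import numpy as np

# Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

taco    = np.array([17.32, 82.9, -4.6, 7.2])   # vector from the slide
burrito = np.array([16.1, 80.2, -3.9, 6.8])    # hypothetical: similar food
toyota  = np.array([-40.0, 2.3, 60.1, -9.5])   # hypothetical: unrelated

print(cosine_similarity(taco, burrito))   # high: similar concepts
print(cosine_similarity(taco, toyota))    # low: different concepts
```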
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
Encoding sentence meaning into a vector
Like a hidden Markov model, but it doesn’t make the Markov assumption
and benefits from a vector representation.
h0 → The → h1 → patient → h2 → fell → h3 → .
“The patient fell.”
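A sketch of one vanilla recurrent step under these assumptions: the new hidden state mixes the previous hidden state with the current word vector. The word vectors here are random stand-ins, and the sizes are arbitrary.

```python
import numpy as np

dim_h, dim_x = 5, 3
rng = np.random.default_rng(0)
W_h = rng.normal(size=(dim_h, dim_h))   # hidden -> hidden
W_x = rng.normal(size=(dim_h, dim_x))   # word vector -> hidden

def step(h_prev, x):
    return np.tanh(W_h @ h_prev + W_x @ x)

# Encode "The patient fell ." one (stand-in) word vector at a time.
h = np.zeros(dim_h)                     # h0
for x in rng.normal(size=(4, dim_x)):   # stand-ins for the word vectors
    h = step(h, x)
print(h)                                # the final h is the sentence vector
```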
Decoding sentence meaning
Machine translation, or structure learning more generally.
h3 → El → h4 → paciente → h5 → cayó → h6 → .
[Cho et al., 2014]
It keeps generating until it generates a stop symbol.
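A greedy decoding loop sketch under the same assumptions: start from the encoder's final state, emit one word at a time, feed each emitted word back in, and stop at a stop symbol. With untrained random weights it emits arbitrary words, but the feed-back-and-stop structure is the point.

```python
import numpy as np

vocab = ["El", "paciente", "cayó", ".", "<STOP>"]
dim_h, dim_x = 5, 3
rng = np.random.default_rng(1)
W_h = rng.normal(size=(dim_h, dim_h))         # hidden -> hidden
W_x = rng.normal(size=(dim_h, dim_x))         # word embedding -> hidden
W_out = rng.normal(size=(len(vocab), dim_h))  # hidden -> word scores
E = rng.normal(size=(len(vocab), dim_x))      # output word embeddings

h = rng.normal(size=dim_h)   # stand-in for the encoder's final state
word = "El"                  # hypothetical first emitted word
output = []
while word != "<STOP>" and len(output) < 10:
    output.append(word)
    h = np.tanh(W_h @ h + W_x @ E[vocab.index(word)])
    word = vocab[int(np.argmax(W_out @ h))]   # greedy: top-scoring word
print(" ".join(output))
```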
Generating image captions
Convolutional neural network → h0 → An → h1 → angry → h2 → sister → h3 → .
[Karpathy and Fei-Fei, 2015] [Vinyals et al., 2015]
Image caption examples
See: [Karpathy and Fei-Fei, 2015] http://cs.stanford.edu/people/karpathy/deepimagesent/
Attention [Bahdanau et al., 2014]
Decoder: h3 → El → h4 → paciente → h5 → cayó → h6 → .
Encoder: h0 → The → h1 → patient → h2 → fell → h3 → .
At each decoding step, the decoder attends back to the encoder’s hidden
states rather than relying on the final state alone.
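A dot-product sketch of the idea. [Bahdanau et al., 2014] scores encoder states with a small network; a plain dot product is the simplest variant, and all values here are random stand-ins.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# At each decoding step: score every encoder state against the decoder
# state, softmax the scores into weights, take a weighted average.
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(4, 5))   # h0..h3 for "The patient fell ."
decoder_state = rng.normal(size=5)         # current decoder hidden state

scores = encoder_states @ decoder_state    # one score per source position
weights = softmax(scores)                  # how much to attend to each word
context = weights @ encoder_states         # context vector fed to the decoder
print(weights)
```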
RNNs and Structure Learning
• These are sometimes called seq2seq models.
• In addition to machine translation and generating captions for
images, they can be used to learn just about any kind of structure
you’d want, as long as you have lots of training data.
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
Deep learning and question answering
RNNs answer questions: What is the translation of this phrase to
French? What is the next word?
Attention is useful for question answering: it can be generalized to
which facts the learner should pay attention to when answering
questions.
Deep learning and question answering
Bob went home.
Tim went to the junkyard.
Bob picked up the jar.
Bob went to town.
Where is the jar? A: town
• Memory Networks [Weston et al., 2014]: updates memory vectors based
on a question and finds the best one to give the output.

The office is north of the yard.
The bath is north of the office.
The yard is west of the kitchen.
How do you go from the office to the kitchen? A: south, east
• Neural Reasoner [Peng et al., 2015]: encodes the question and facts
in many layers, and the final layer is put through a function that
gives the answer.
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
Limitations of deep learning
The encoded meaning is grounded with respect to other words.
There is no linkage to the physical world.
"ICubLugan01 Reaching". Licensed under CC BY-SA 3.0 via Wikipedia - https://en.wikipedia.org/wiki/File:ICubLugan01_Reaching.png#/media/File:ICubLugan01_Reaching.png
The iCub http://www.icub.org/
Limitations of deep learning
Bob went home.
Tim went to the junkyard.
Bob picked up the jar.
Bob went to town.
Where is the jar? A: town
Deep learning has no understanding of what it means for the jar to be
in town. For example, that it can’t also be at the junkyard, or that it
may be in Bob’s car, or still in his hands.
The encoded meaning is grounded with respect to other words.
There is no linkage to the physical world.
Limitations of deep learning
Imagine a dude standing on a table. How would a computer know that if
you move the table you also move the dude? Likewise, how could a
computer know that it only rains outside? Or, as Marvin Minsky asks,
how could a computer learn that you can pull a box with a string but
not push it?
Limitations of deep learning
Imagine a dude standing
on a table. How would a
computer know that if
you move the table you
also move the dude?
Likewise, how could a
computer know that it
only rains outside?
Or, as Marvin Minsky asks, how could a computer learn
that you can pull a box with a string but not push it?
No one knows how to explain all of these situations to a computer.
There are just too many variations. A robot can learn through
experience, but it must be able to efficiently generalize that
experience.
Overview
• About me and DeepGrammar (4 minutes)
• Introduction to Deep Learning for NLP
• Recurrent Neural Networks
• Deep Learning and Question Answering
• Limitations of Deep Learning for NLP
• How You Can Get Started
Best learning resources
Stanford class on deep learning for NLP:
http://cs224d.stanford.edu/syllabus.html
Hinton’s Coursera course. Get it right from the horse’s mouth. He
explains things well.
https://www.coursera.org/course/neuralnets
Online textbook in preparation on deep learning from Yoshua Bengio and
friends. Clear and understandable.
http://www.iro.umontreal.ca/~bengioy/dlbook/
TensorFlow tutorials:
https://www.tensorflow.org/versions/r0.8/tutorials/index.html
TensorFlow has a seq2seq abstraction
data_utils handles the vocabulary.
seq2seq_model puts buckets around the seq2seq function.
translate trains the model.
Check out spaCy for simple text processing
See: https://nicschrading.com/project/Intro-to-NLP-with-spaCy/
It also does word vectors.
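A minimal spaCy sketch, assuming the en_core_web_md model has been downloaded (e.g., via `python -m spacy download en_core_web_md`):

```python
import spacy

nlp = spacy.load("en_core_web_md")
doc = nlp("The patient fell.")
for token in doc:
    print(token.text, token.pos_, token.dep_)   # tokens, POS tags, parse

# It also does word vectors: each token has one, and docs can be compared.
print(doc[1].vector[:5])                        # first 5 dims of "patient"
print(nlp("taco").similarity(nlp("burrito")))   # vector-based similarity
```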
Thanks for listening
Jonathan Mugan
@jmugan
www.deepgrammar.com