A presentation on
End to End Memory Networks (MemN2N)
Slides: 26
Time: 15 minutes
IE 594 Data Science 2
University of Illinois at Chicago, February 2017
Under the guidance of,
Prof. Dr. Ashkan Sharabiani
By,
Ashish Menkudale
The kitchen is north of the hallway.
The bathroom is west of the bedroom.
The den is east of the hallway.
The office is south of the bedroom.
How do you go from the den to the kitchen?
2
The kitchen is north of the hallway.
The bathroom is west of the bedroom.
The den is east of the hallway.
The office is south of the bedroom.
How do you go from the den to the kitchen?
[Map: Kitchen is north of Hallway; Den is east of Hallway; Bathroom is west of Bedroom; Office is south of Bedroom.]
Answer: West, North.
3
Brian is a frog.
Lily is grey.
Brian is yellow.
Julius is green.
What color is Greg?
Greg is a frog.
4
Brian is a frog.
Lily is grey.
Brian is yellow.
Julius is green.
What color is Greg?
Greg is a frog.
Yellow.
5
External Global Memory
[Diagram: a controller module reads from and writes to a separate memory module; input flows into the controller and output flows out.]
• Dedicated, separate memory module.
• Memory can be a stack or a list/set of vectors.
• Controller module accesses memory (read, write).
• Advantage: stable, scalable.
Charles Babbage (1791-1871): invented the analytical engine concept.
Konrad Zuse (1910-1995): invented the stored-program concept.
6
Warren Sturgis McCulloch (1898-1969): computational model for neural networks.
Memory Networks
• Memory network: a network coupled with a large external memory. Such large memory is not required for low-level tasks like object recognition, but it is for reasoning.
• Writes everything to the memory, but reads only relevant information.
• Attempts to add a long-term memory component to make the model more like artificial intelligence.
• Two types:
• Strongly supervised memory network: hard addressing.
• Weakly supervised memory network: soft addressing.
• Hard addressing: max of the inner product between the internal state and the memory contents.
Mary is in garden.
John is in office. Q: Where is John?
Bob is in kitchen.
Walter Pitts (1923-1969): computational model for neural networks.
7
Memory Vectors
Example: Constructing memory vectors with bag of words (BoW)
Embed each word
Sum embedding vectors
“Sam drops apple”: V(Sam) + V(drops) + V(apple) = m  (embedding vectors → memory vector)
Example: Temporal structure – add special words for time and include them in the bag of words.
1. Sam moved to garden.
2. Sam moved to kitchen.
3. Sam drops apple.
V(Sam) + V(drops) + V(apple) + V(time) = m  (time stamp → time embedding)
8
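A minimal numpy sketch of this construction, assuming a toy vocabulary and a randomly initialized embedding matrix V (in a real model V is learned); the "<t3>" time token is a hypothetical name for the special time word.

```python
import numpy as np

vocab = {w: i for i, w in enumerate(["Sam", "drops", "apple", "<t1>", "<t2>", "<t3>"])}
d = 20                                # embedding dimension (arbitrary here)
rng = np.random.default_rng(0)
V = rng.normal(size=(len(vocab), d))  # one d-dimensional embedding per word

def memory_vector(sentence, time_token=None):
    """Sum the embeddings of each word (BoW); optionally add a time embedding."""
    words = sentence.split()
    if time_token is not None:
        words.append(time_token)
    return sum(V[vocab[w]] for w in words)

m = memory_vector("Sam drops apple", time_token="<t3>")  # V_Sam + V_drops + V_apple + V_t3
print(m.shape)  # (20,)
```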
Memory Networks
[Diagram: story sentences “Bob is in kitchen.”, “Mary is in garden.”, “John is in office.” and the question “Where is John?” are each embedded; the memory controller takes inner products with the internal state vector and a max (hard addressing) selects the supporting memory “John is in office”, which is embedded, added to the state, and decoded to the output “Office”.]
9
Issues with Memory Network
• Requires explicit supervision of attention during training:
we need to say which memory the model should use.
• We need a model that requires supervision only at the output,
with no supervision of attention.
• Only feasible for simple tasks,
which severely limits the model's applicability.
10
End-To-End Memory Networks
• MemN2N: a soft-attention version of the memory network.
• Flexible read-only memory.
• Multiple memory lookups (hops):
• can consider multiple memories before deciding on the output;
• more reasoning power.
• End-to-end training:
• only needs the final output for training;
• simple back-propagation.
11
Authors: Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus.
MemN2N architecture
[Diagram: a controller module (e.g., an RNN) holds a state; the memory module scores the memory content against the state with a dot product and softmax, takes a weighted sum of the memory content, passes it through a linear layer and a Tanh/ReLU, and sums it into the state; the controller's output is compared to the target by the loss function.]
12
MemN2N in action: single memory lookup
[Diagram: sentences {xi} are embedded twice, with embedding A (input memories) and embedding C (output representations); the question q is embedded with B into the internal state u. The inner product of u with each input memory, followed by a softmax, gives a probability over memories; the weighted sum Σ of the output representations, passed through the output weight matrix W and a softmax, gives the predicted answer. Example: story “Mary is in garden. John is in office. Bob is in kitchen.”, question “Where is John?”, answer “Office”.]
Training: estimate the embedding matrices A, B, and C and the output matrix W.
13
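A hedged numpy sketch of one lookup following the equations implied by the diagram (mi = A xi, ci = C xi, u = Bq, p = softmax(uᵀm), o = Σ pi ci, â = softmax(W(o + u))); the matrices and BoW vectors here are random stand-ins for what training would estimate.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, V_size, n_sent = 20, 50, 3            # embedding dim, vocabulary size, story length
rng = np.random.default_rng(1)
A = rng.normal(size=(d, V_size))         # input memory embedding
B = rng.normal(size=(d, V_size))         # question embedding
C = rng.normal(size=(d, V_size))         # output memory embedding
W = rng.normal(size=(V_size, d))         # final output matrix

x = rng.integers(0, 2, size=(n_sent, V_size)).astype(float)  # BoW story sentences
q = rng.integers(0, 2, size=V_size).astype(float)            # BoW question

m = x @ A.T                  # memory vectors m_i = A x_i
c = x @ C.T                  # output vectors c_i = C x_i
u = B @ q                    # internal state u = B q
p = softmax(m @ u)           # attention p_i = softmax(u . m_i)
o = p @ c                    # response vector o = sum_i p_i c_i
a_hat = softmax(W @ (o + u)) # predicted answer distribution over the vocabulary
```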
Multiple Memory Lookups: Multiple Hops
[Diagram: three stacked memory layers. The sentences {xi} are embedded with A1/C1, A2/C2, and A3/C3 into input/output memory pairs; the question q initializes the state u1; each hop k computes an output ok from the input memories and the state, and the sum uk+1 = uk + ok feeds the next hop; after the third hop, W(u3 + o3) and a softmax give the predicted answer.]
14
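A hedged sketch of the K-hop loop, reusing the names from the single-hop sketch above; for brevity it uses layer-wise weight tying (the same A and C at every hop), whereas the diagram shows per-hop matrices A1..A3, C1..C3.

```python
def memn2n_hops(x, q, A, B, C, W, K=3):
    u = B @ q                  # initial state u1 = B q
    for _ in range(K):
        m = x @ A.T            # input memories (same A each hop: layer-wise tying)
        c = x @ C.T            # output representations
        p = softmax(m @ u)     # soft attention over memories
        o = p @ c              # hop output o_k
        u = u + o              # state update between hops: u_{k+1} = u_k + o_k
    return softmax(W @ u)      # predicted answer distribution

a_hat = memn2n_hops(x, q, A, B, C, W, K=3)
```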
Components
15
I (Input): no conversion; keep the original text X.
G (Generalization): stores I(X) in the next available memory slot.
O (Output): loops over all memories:
find the best match of mi with X;
find the best match of mj with (mi, X).
Can be extended to more hops.
R (Response): ranks all words in the dictionary given o and returns the best single word.
In fact, an RNN can be used here for better sentence construction.
Weight Tying
16
Weight tying: specifies how the embedding matrices are shared between the input and output
components across layers. (A minimal sketch of both schemes follows below.)
Two methods:
Adjacent:
Similar to stacking layers.
The output embedding of one layer is the input embedding of the next layer (Ck = Ak+1).
Layer-wise:
The input embedding remains the same for every layer in the architecture (A1 = A2 = … and C1 = C2 = …).
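A minimal sketch contrasting the two tying schemes, continuing the toy sketch above (rng, d, V_size, A, and C as defined earlier):

```python
K = 3
# Layer-wise: one A and one C shared by all hops.
A_layerwise = [A] * K
C_layerwise = [C] * K
# Adjacent: the output embedding of hop k is the input embedding of hop k+1 (C^k = A^{k+1}).
C_adjacent = [rng.normal(size=(d, V_size)) for _ in range(K)]
A_adjacent = [rng.normal(size=(d, V_size))] + C_adjacent[:-1]
```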
Scoring function
17
Questions and answers are mapped to the story using word embeddings.
Word embedding: maps words into a low-dimensional vector space, with the advantage that
distances between word vectors can be calculated.
Allows us to compute a similarity score between sentences to capture the
correlation between them.
Match (‘Where is football?’, ‘John picked up the football’).
qᵀUᵀUd: the default scoring model used in memory networks, where
q – question,
U – the matrix by which word embeddings are obtained,
d – answer.
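A hedged sketch of this scoring function, reusing rng, d, V_size, q, and x from the sketches above: embed both bag-of-words vectors with U, then take the inner product of the embedded representations.

```python
def score(q_bow, d_bow, U):
    # q^T U^T U d: inner product in the embedded space.
    return (U @ q_bow) @ (U @ d_bow)

U = rng.normal(size=(d, V_size))      # random stand-in for a learned embedding matrix
s = score(q, x[0], U)                 # e.g. Match('Where is football?', 'John picked up the football')
```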
Model Selection
18
Model selection: determines how to model the story, question, and answer vectors for
word embedding.
Two possible approaches:
Bag-of-words model:
Considers each word in a sentence.
Embeds each word and sums the resulting vectors.
Does not take the context of each word into account.
Position encoding:
Considers the position/context of sentences/words.
Takes the preceding and following words into account.
Maps them to a low-dimensional vector space (see the sketch below).
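A sketch of the position-encoding weights from the End-to-End Memory Networks paper (reference 8): word j of J in a sentence is weighted elementwise by l[j, k] = (1 - j/J) - (k/d)(1 - 2j/J) before summing, so word order affects the memory vector, unlike plain BoW. It reuses the toy V, vocab, and d from the first sketch.

```python
def position_encoding(J, d):
    """l[j, k] = (1 - j/J) - (k/d) * (1 - 2j/J), with 1-indexed j, k as in the paper."""
    l = np.zeros((J, d))
    for j in range(1, J + 1):          # word position
        for k in range(1, d + 1):      # embedding dimension
            l[j - 1, k - 1] = (1 - j / J) - (k / d) * (1 - 2 * j / J)
    return l

words = [V[vocab[w]] for w in ["Sam", "drops", "apple"]]   # toy embeddings from above
L_w = position_encoding(len(words), d)
m_pe = sum(L_w[j] * words[j] for j in range(len(words)))   # m = sum_j l_j * (A x_ij)
```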
Model refining:
Addition of noise.
Increasing the training dataset.
Decisions for Configuration
19
• Number of hops
• Number of epochs
• Embedding size
• Training dataset
• Validation dataset
• Model selection
• Weight tying
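A hedged sketch collecting these configuration decisions as hyperparameters; the values are illustrative stand-ins, not the settings used in the presentation.

```python
config = {
    "hops": 3,                        # number of memory lookups
    "epochs": 100,                    # illustrative value
    "embedding_size": 20,             # illustrative value
    "train_questions_per_task": 1000, # bAbI training set size per task
    "validation_split": 0.1,          # illustrative value
    "model": "position_encoding",     # or "bag_of_words"
    "weight_tying": "adjacent",       # or "layer_wise"
}
```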
RNN viewpoint of MemN2N
Plain RNN: inputs are fed to the RNN one by one, in order; the RNN has only one
chance to look at a given input symbol.
Memory network: all inputs are placed in the memory (RNN + memory); an addressing signal
lets the model decide which part it reads next.
20
Advantages of MemN2N over RNN
• More generic input format:
o any set of vectors can be input;
o each vector can be a BoW of symbols (including location), or an image feature + feature position;
o location can be 1D, 2D, …;
o variable size.
• Out-of-order access to input data.
• Less distracted by unimportant inputs.
• Longer-term memorization.
• No vanishing or exploding gradient problems.
21
bAbI Project: Task Categories
Training dataset: 1,000 questions for each task. Testing dataset: 1,000 questions for each task.
23
Demo for bAbI tasks
24
bAbI Project: Benchmark Results

References
1. GitHub project archive: https://github.com/vinhkhuc/MemN2N-babi-python
2. https://www.msri.org/workshops/796/schedules/20462/documents/2704/assets/24734
3. Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes: https://arxiv.org/pdf/1607.00036.pdf
4. bAbI answers: https://arxiv.org/pdf/1502.05698.pdf
5. Memory Networks, Microsoft Research: https://www.youtube.com/watch?v=ZwvWY9Yy76Q&t=1s
6. Memory Networks (Jenil Shah): https://www.youtube.com/watch?v=BN7Kp0JD04o
7. N-grams vs. other classifiers in text categorization: http://stackoverflow.com/questions/20315897/n-grams-vs-other-classifiers-in-text-categorization
8. Paper on results for bAbI tasks by the Facebook AI team: https://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf
9. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks: https://arxiv.org/pdf/1502.05698.pdf
25

Questions
26