NEURAL MODELS FOR
INFORMATION RETRIEVAL
BHASKAR MITRA
Principal Applied Scientist
Microsoft AI and Research
Research Student
Dept. of Computer Science
University College London
November, 2017
This talk is based on work done in collaboration with
Nick Craswell, Fernando Diaz, Emine Yilmaz, Rich Caruana, Eric Nalisnick, Hamed Zamani,
Christophe Van Gysel, Nicola Cancedda, Matteo Venanzi, Saurabh Tiwary, Xia Song, Laura Dietz,
Federico Nanni, Matt Magnusson, Roy Rosemarin, Grzegorz Kukla, Piotr Grudzien, and many others
For a general overview of neural IR refer to the manuscript under review for
Foundations and Trends® in Information Retrieval
Pre-print is available for free download
http://bit.ly/neuralir-intro
Final manuscript may contain additional content and changes
Or check out the presentations from these recent tutorials,
WSDM 2017 tutorial: http://bit.ly/NeuIRTutorial-WSDM2017
SIGIR 2017 tutorial: http://nn4ir.com/
NEURAL
NETWORKS
Amazingly successful in many difficult application areas
Dominating multiple fields: 2011: speech → 2013: vision → 2015: NLP → 2017: IR?
Each application is different and motivates new innovations in machine learning
Our research:
Novel representation learning methods and neural architectures
motivated by specific needs and challenges of IR tasks
Source: (Mitra and Craswell, 2018)
https://www.microsoft.com/en-us/research/...
IR TASK MAY REFER TO DOCUMENT RANKING
Information need → query → retrieval system (indexes a document corpus) → results ranking (document list)
Relevance: the documents satisfy the information need (e.g., are useful)
BUT IT MAY ALSO REFER TO…
QUERY AUTO-COMPLETION
cheap flights from london t|
cheap flights from london to frankfurt
cheap flights from london to new york
cheap flights from london to miami
cheap flights from london to sydney

NEXT QUERY SUGGESTION
cheap flights from london to miami
Related searches:
package deals to miami
ba flights to miami
things to do in miami
miami tourist attractions
ANATOMY OF AN IR MODEL
IR in three simple steps:
1. Generate input (query or prefix)
representation
2. Generate candidate (document
or suffix or query) representation
3. Estimate relevance based on
input and candidate
representations
Neural networks can be useful for
one or more of these steps
[Diagram: input text → generate input representation → input vector (point of input representation); candidate text → generate candidate representation → candidate vector (point of candidate representation); estimate relevance (point of match)]
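A schematic of these three steps as code. This is purely illustrative; each argument is a placeholder for whichever neural or hand-crafted component fills that role.

```python
# Minimal skeleton of the three-step anatomy; repr_input, repr_candidate,
# and relevance are placeholders, not any specific model from this talk.
def rank(input_text, candidates, repr_input, repr_candidate, relevance):
    q = repr_input(input_text)                      # step 1: input representation
    scored = [(relevance(q, repr_candidate(c)), c)  # steps 2 and 3
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda s: -s[0])]
```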
NEURAL NETWORKS CAN HELP WITH…
Learning a matching function on top of traditional feature-based representations of query and document
But they can also help with learning good representations of text to deal with vocabulary mismatch
In this part of the talk, we focus on learning good vector representations of text for retrieval
[Diagram: input text and candidate text → manually designed features → neural network model for matching]
A QUICK REFRESHER ON
VECTOR SPACE
REPRESENTATIONS
Under a local representation the terms banana, mango, and dog are distinct items
But a distributed representation (e.g., projecting items to a feature space) may recognize that banana and mango are both fruits, while dog is different
Important note: the choice of features defines which items are similar
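A toy contrast between the two, with invented features purely for illustration:

```python
import numpy as np

local = {            # one-hot: every item equally dissimilar to every other
    "banana": np.array([1.0, 0.0, 0.0]),
    "mango":  np.array([0.0, 1.0, 0.0]),
    "dog":    np.array([0.0, 0.0, 1.0]),
}
distributed = {      # illustrative features: [is_fruit, is_yellow, is_animal]
    "banana": np.array([1.0, 1.0, 0.0]),
    "mango":  np.array([1.0, 0.5, 0.0]),
    "dog":    np.array([0.0, 0.0, 1.0]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(local["banana"], local["mango"]))              # 0.0 – no similarity
print(cos(distributed["banana"], distributed["mango"]))  # > 0 – both fruits
```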
A QUICK REFRESHER ON
VECTOR SPACE
REPRESENTATIONS
An embedding is a new (latent) space such that the properties
of, and the relationships between, the items are preserved
Compared to the original feature space, an embedding space may have one or more of the following:
• Fewer dimensions
• Less sparsity
• Disentangled principal components
NOTIONS OF
SIMILARITY
Is “Seattle” more similar to…
“Sydney” (similar type)
Or
“Seahawks” (similar topic)
Depends on what feature space you choose
NOTIONS OF
SIMILARITY
Consider the following toy corpus…
Now consider the different vector
representations of terms you can derive
from this corpus and how the items that
are similar differ in these vector spaces
NOTIONS OF SIMILARITY
[Figure-only slides: different vector representations of terms derived from the toy corpus above – the set of similar items changes with the choice of vector space]
RETRIEVAL USING
EMBEDDINGS
Given an input the retrieval model
predicts a point in the embedding space
Items close to this point in the embedding
space are retrieved as results
Relevant items should be similar to / near each other in the embedding space
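A minimal sketch of this retrieval step, assuming you already have unit-normalized embeddings from some trained model:

```python
import numpy as np

def retrieve(input_vec, candidate_matrix, k=10):
    """candidate_matrix: (n_items, dim) of unit-normalized item embeddings."""
    sims = candidate_matrix @ (input_vec / np.linalg.norm(input_vec))
    return np.argsort(-sims)[:k]  # indices of the k nearest candidates
```

In practice an approximate nearest-neighbor index would replace the brute-force matrix product.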
SO THE DESIRED NOTION OF SIMILARITY
SHOULD DEPEND ON THE TARGET TASK
DOCUMENT RANKING (query: “cheap flights to london”)
✓ cheap flights to london
✗ cheap flights to sydney
✗ hotels in london

QUERY AUTO-COMPLETION (prefix: “cheap flights to |”)
✓ cheap flights to london
✓ cheap flights to sydney
✗ cheap flights to big ben

NEXT QUERY SUGGESTION (previous query: “cheap flights to london”)
✓ budget flights to london
✓ hotels in london
✗ cheap flights to sydney
Next, we take a sample model and show how the same model captures
different notions of similarity based on the data it is trained on
DEEP SEMANTIC SIMILARITY MODEL
(DSSM)
Siamese network with two deep sub-models
Projects input and candidate texts into
embedding space
Trained by maximizing cosine similarity between
correct input-output pairs
(Huang et al., 2013)
https://dl.acm.org/citation...
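A minimal PyTorch sketch of this setup (the original implementations predate PyTorch; the tri-letter vocabulary size, layer widths, and scaling factor gamma are illustrative assumptions, not the paper's exact values):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """One arm of the Siamese network: bag of letter-trigrams -> embedding."""
    def __init__(self, n_trigrams=50_000, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_trigrams, 300), nn.Tanh(),
                                 nn.Linear(300, dim), nn.Tanh())
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit vectors: dot = cosine

query_tower, cand_tower = Tower(), Tower()

def dssm_loss(q_feats, pos_feats, neg_feats, gamma=10.0):
    """Softmax over cosines of the correct pair vs. sampled negatives."""
    q = query_tower(q_feats)                                       # (B, dim)
    cands = torch.cat([pos_feats.unsqueeze(1), neg_feats], dim=1)  # (B, 1+N, V)
    c = cand_tower(cands.flatten(0, 1)).view(q.size(0), -1, q.size(1))
    sims = gamma * torch.einsum('bd,bnd->bn', q, c)                # scaled cosines
    return F.cross_entropy(sims, torch.zeros(q.size(0), dtype=torch.long))
```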
DSSM TRAINED ON DIFFERENT TYPES OF DATA
Trained on pairs of… | Sample training data | Useful for | Paper
Query and document titles | <“things to do in seattle”, “seattle tourist attractions”> | Document ranking | (Shen et al., 2014) https://dl.acm.org/citation...
Query prefix and suffix | <“things to do in”, “seattle”> | Query auto-completion | (Mitra and Craswell, 2015) https://dl.acm.org/citation...
Consecutive queries in user sessions | <“things to do in seattle”, “space needle”> | Next query suggestion | (Mitra, 2015) https://dl.acm.org/citation...
Each model captures a different notion of similarity / regularity in the learnt embedding space
DIFFERENT REGULARITIES IN DIFFERENT EMBEDDING SPACES
Nearest neighbors for “seattle” and “taylor swift” based on two DSSM models – one trained on query-document pairs and the other trained on query prefix-suffix pairs
Groups of similar search intent
transitions from a query log
The DSSM trained on session query pairs
can capture regularities in the query space
(similar to word2vec for terms)
DSSM TRAINED ON SESSION QUERY PAIRS
ALLOWS FOR ANALOGIES OVER SHORT TEXT!
WHEN IS IT PARTICULARLY IMPORTANT TO
THINK ABOUT NOTIONS OF SIMILARITY?
If you are using pre-trained embeddings, instead of learning the text
representations in an end-to-end model for the target task
USING PRE-TRAINED WORD EMBEDDINGS
FOR DOCUMENT RANKING
Use word2vec embeddings to compare every query term against every document term
Non-matching terms “population” and “area” indicate the first passage is more relevant to the query “Albuquerque”
[Figure: two sample passages – one about Albuquerque, one not about Albuquerque]
DUAL EMBEDDING SPACE MODEL (DESM)
…but what if I told you that everyone using
word2vec is throwing half the model away?
IN-OUT similarity captures a more topical notion of term-term relationship compared to IN-IN and OUT-OUT
Better to represent query terms using IN embeddings and document terms using OUT embeddings
(Mitra et al., 2016)
https://arxiv.org/abs/1602.01137
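A sketch of DESM scoring under these choices: query terms looked up in the IN matrix, the document represented by the centroid of its OUT vectors. The embedding dicts are assumed to hold unit-normalized vectors.

```python
import numpy as np

def desm_score(query_terms, doc_terms, in_emb, out_emb):
    centroid = np.mean([out_emb[t] for t in doc_terms], axis=0)
    centroid /= np.linalg.norm(centroid)
    # average cosine between each query term (IN) and the document centroid (OUT)
    return float(np.mean([in_emb[t] @ centroid for t in query_terms]))
```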
GET THE DATA
IN+OUT Embeddings for 2.7M words trained on 600M+ Bing queries
https://www.microsoft.com/en-us/download/details.aspx?id=52597
Download
BUT TERMS ARE INHERENTLY AMBIGUOUS
TOPIC-SPECIFIC TERM
REPRESENTATIONS
Terms can take different meanings in different contexts – global representations are likely to be coarse and inappropriate under certain topics
A global model is likely to focus more on learning accurate representations of popular terms
Often impractical to train on the full corpus – without topic-specific sampling, important training instances may be ignored
TOPIC-SPECIFIC TERM
EMBEDDINGS FOR
QUERY EXPANSION
[Diagram: query “cut gasoline tax” → first-round results from the corpus → topic-specific term embeddings → expanded query “cut gasoline tax deficit budget” → final results]
(Diaz et al., 2016)
http://anthology.aclweb.org/...
Use documents from the first round of retrieval to learn a query-specific embedding space
Use the learnt embeddings to find related terms for query expansion in a second round of retrieval
[Figure: global vs. local embedding spaces]
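A rough sketch of that two-round workflow using gensim's word2vec; `search` and the `.tokens` attribute are hypothetical stand-ins for your retrieval stack, and the hyperparameters are illustrative:

```python
from gensim.models import Word2Vec

def locally_expanded_search(query, search, k_docs=1000, k_terms=5):
    first_round = search(query, k=k_docs)                  # round one
    local = Word2Vec([doc.tokens for doc in first_round],  # topic-specific training
                     vector_size=100, window=5, min_count=2)
    expansion = []
    for term in query.split():
        if term in local.wv:
            expansion += [w for w, _ in local.wv.most_similar(term, topn=k_terms)]
    return search(query + " " + " ".join(expansion), k=k_docs)  # round two
```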
Now, let’s talk about deep neural network
models for document ranking…
CHALLENGES IN SHORT VS. LONG
TEXT RETRIEVAL
Short-text
Vocabulary mismatch is a more serious problem
Long-text
Documents contain a mixture of many topics
Matches in different parts of a long document contribute unequally
Term proximity is an important consideration
A TALE OF TWO QUERIES
“PEKAROVIC LAND COMPANY”
Hard to learn good representation for
rare term pekarovic
But easy to estimate relevance based
on patterns of exact matches
Proposal: Learn a neural model to
estimate relevance from patterns of
exact matches
“WHAT CHANNEL SEAHAWKS ON TODAY”
Target document likely contains “ESPN” or “Sky Sports” instead of “channel”
An embedding model can associate “ESPN” in the document with “channel” in the query
Proposal: Learn embeddings of text
and match query with document in
the embedding space
The Duet Architecture
Use neural networks to model both functions and learn their parameters jointly
THE DUET
ARCHITECTURE
Linear combination of two models trained jointly on labelled query-document pairs
Local model operates on lexical
interaction matrix, and Distributed
model projects text into an
embedding space for matching
(Mitra et al., 2017)
https://dl.acm.org/citation...
(Nanni et al., 2017)
https://dl.acm.org/citation...
LOCAL
SUB-MODEL
Focuses on patterns of
exact matches of query
terms in document
INTERACTION MATRIX OF QUERY AND
DOCUMENT TERMS
$$X_{i,j} = \begin{cases} 1, & \text{if } q_i = d_j \\ 0, & \text{otherwise} \end{cases}$$
In relevant documents,
→Many matches, typically in clusters
→Matches localized early in
document
→Matches for all query terms
→In-order (phrasal) matches
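A direct transcription of the $X_{i,j}$ definition above; the query and document length caps are illustrative assumptions:

```python
import numpy as np

def interaction_matrix(query_terms, doc_terms, max_q=10, max_d=1000):
    X = np.zeros((max_q, max_d), dtype=np.float32)
    for i, q in enumerate(query_terms[:max_q]):
        for j, d in enumerate(doc_terms[:max_d]):
            if q == d:
                X[i, j] = 1.0
    return X
```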
ESTIMATING RELEVANCE FROM INTERACTION
MATRIX
Convolve using a window of size $n_d \times 1$ (where $n_d$ is the document length)
Each window instance compares a query term with the whole document
Fully connected layers aggregate evidence across query terms – can model phrasal matches
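A PyTorch sketch of that aggregation, with each window spanning the full document axis per query term; channel and hidden sizes are illustrative, not the paper's exact values:

```python
import torch
import torch.nn as nn

class LocalSubModel(nn.Module):
    def __init__(self, max_q=10, max_d=1000, channels=300):
        super().__init__()
        # treating the document axis as input channels means each window
        # instance sees one query term against the whole document
        self.conv = nn.Conv1d(max_d, channels, kernel_size=1)
        self.mlp = nn.Sequential(nn.Flatten(),
                                 nn.Linear(channels * max_q, 300), nn.ReLU(),
                                 nn.Linear(300, 1))
    def forward(self, X):                             # X: (batch, max_q, max_d)
        h = torch.tanh(self.conv(X.transpose(1, 2)))  # (batch, channels, max_q)
        return self.mlp(h)                            # scalar evidence of relevance
```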
THE DUET
ARCHITECTURE
Linear combination of two models trained jointly on labelled query-document pairs
Local model operates on lexical
interaction matrix
Distributed model projects n-graph
vectors of text into an embedding
space and then estimates match
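For intuition, a rough sketch of a character n-graph featurizer; the hashing trick here is a simplification (bucket count illustrative), and Python's built-in hash is process-salted, so a real system would use a stable hash:

```python
def ngraph_vector(term, n=3, buckets=50_000):
    padded = f"#{term}#"                  # mark word boundaries
    vec = [0] * buckets
    for i in range(len(padded) - n + 1):
        vec[hash(padded[i:i + n]) % buckets] += 1
    return vec
```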
DISTRIBUTED
SUB-MODEL
Learns representation of
text and matches query
with document in the
embedding space
ESTIMATING RELEVANCE FROM TEXT EMBEDDINGS
Convolve over query and document terms
Match query with moving windows over document
Learn text embeddings specifically for the task
Matching happens in embedding space
[Diagram: query and document inputs → convolution → pooling → query embedding; Hadamard products with document windows → fully connected layers]
* Network architecture slightly simplified for visualization – refer to the paper for exact details
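A much-simplified PyTorch sketch of this matching step (sizes illustrative; the real architecture differs in detail, as the footnote above warns):

```python
import torch
import torch.nn as nn

class DistributedSubModel(nn.Module):
    def __init__(self, n_graphs=2000, dim=300):
        super().__init__()
        self.q_conv = nn.Conv1d(n_graphs, dim, kernel_size=3)
        self.d_conv = nn.Conv1d(n_graphs, dim, kernel_size=3)
        self.mlp = nn.Sequential(nn.Linear(dim, 300), nn.ReLU(), nn.Linear(300, 1))
    def forward(self, q, d):       # (B, n_graphs, len_q), (B, n_graphs, len_d)
        q_emb = self.q_conv(q).max(dim=-1).values  # pooled query embedding (B, dim)
        d_win = self.d_conv(d)                     # document windows (B, dim, W)
        match = q_emb.unsqueeze(-1) * d_win        # Hadamard product per window
        return self.mlp(match.mean(dim=-1))        # aggregate and score
```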
THE DUET
MODEL
Training sample: $(Q, D^+, D_1^-, D_2^-, D_3^-, D_4^-)$
$D^+$ = document rated Excellent or Good
$D^-$ = document rated two levels worse than $D^+$
Optimize cross-entropy loss
Implemented using CNTK (GitHub link)
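A sketch of that objective for one training sample; `duet` is any callable mapping a (query, document) pair to a scalar score:

```python
import torch
import torch.nn.functional as F

def duet_loss(duet, query, pos_doc, neg_docs):
    scores = torch.stack([duet(query, pos_doc)] +
                         [duet(query, d) for d in neg_docs])  # (1 + N,)
    target = torch.zeros(1, dtype=torch.long)                 # positive at index 0
    return F.cross_entropy(scores.unsqueeze(0), target)
```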
EFFECT OF TRAINING DATA VOLUME
Key finding: a large quantity of training data is necessary for learning good representations, but is less impactful for training the local model
If we classify models by query-level performance, there is a clear clustering of lexical (local) and semantic (distributed) models
GET THE CODE
Implemented using CNTK python API
https://github.com/bmitra-msft/NDRM/blob/master/notebooks/Duet.ipynb
Download
BUT WEB DOCUMENTS ARE MORE
THAN JUST BODY TEXT…
URL
incoming
anchor text
title
body
clicked query
EXTENDING NEURAL RANKING MODELS TO
MULTIPLE DOCUMENT FIELDS
BM25 → BM25F
Neural ranking model → ?
RANKING DOCUMENTS
WITH MULTIPLE FIELDS
Document consists of multiple text fields (e.g., title,
URL, body, incoming anchors, and clicked queries)
Fields, such as incoming anchors and clicked queries, contain a variable number of short text instances
The body field has long text, whereas clicked queries are typically only a few words in length
The URL field contains non-natural language text
(Zamani et al., 2018)
https://arxiv.org/abs/1711.09174
Learn a different embedding space
for each document field
For multiple-instance fields, average pool the instance-level embeddings
Mask empty text instances, and average only among non-empty instances to avoid preferring documents with more instances
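A sketch of that masked average pooling:

```python
import torch

def masked_mean(instance_embs, mask):
    """instance_embs: (B, n_instances, dim); mask: (B, n_instances), 1 = non-empty."""
    m = mask.float().unsqueeze(-1)
    count = m.sum(dim=1).clamp(min=1.0)  # guard against all-empty fields
    return (instance_embs * m).sum(dim=1) / count
```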
Learn different query
embeddings for matching
against different fields
Different fields may match
different aspects of the query
The ideal query representation for matching against the URL is likely to be different from the one for matching against the title
Represent per field match by a
vector, not a score
Allows the model to validate
that across the different fields
all aspects of the query intent
have been covered
(Similar intuition as BM25F)
Aggregate evidence of relevance
across all document fields
High-precision fields, such as clicked queries, can negatively impact the modeling of the other fields
Field level dropout during
training can regularize against
over-dependency on any
individual field
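A sketch of field-level dropout over per-field match vectors; the drop probability is an illustrative assumption:

```python
import torch

def field_dropout(field_match_vectors, p=0.25, training=True):
    """field_match_vectors: list of (B, dim) tensors, one per document field."""
    if not training:
        return field_match_vectors
    return [v * (torch.rand(v.size(0), 1) >= p).float()  # zero out whole fields
            for v in field_match_vectors]
```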
The design of both the Duet and NRM-F models was motivated by long-standing IR insights
Deep neural networks have not reduced the importance of core IR research, nor of studying and understanding IR tasks
In fact, the same intuitions behind the design of classical IR models were important for feature design in early learning-to-rank models, and now in the architecture design of deep neural networks
Looking forward: proactive retrieval
REPLY WITH: PROACTIVE
RECOMMENDATION OF
EMAIL ATTACHMENTS
Given the context of the current conversation, the neural model formulates a short text query to proactively retrieve relevant attachments that the user may want to include in her response
(Van Gysel et al., 2017)
https://arxiv.org/abs/1710.06061
SUMMARY OF PAPERS DISCUSSED
AN INTRODUCTION TO NEURAL INFORMATION RETRIEVAL
Bhaskar Mitra and Nick Craswell, in Foundations and Trends® in Information Retrieval, Now Publishers, 2018 (upcoming).
https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/fntir-neuralir-mitra.pdf
NEURAL RANKING MODELS WITH MULTIPLE DOCUMENT FIELDS
Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary, in Proc. WSDM, 2018 (upcoming).
https://arxiv.org/abs/1711.09174
REPLY WITH: PROACTIVE RECOMMENDATION OF EMAIL ATTACHMENTS
Christophe Van Gysel, Bhaskar Mitra, Matteo Venanzi, and others, in Proc. CIKM, 2017 (upcoming).
https://arxiv.org/abs/1710.06061
LEARNING TO MATCH USING LOCAL AND DISTRIBUTED REPRESENTATIONS OF TEXT FOR WEB SEARCH
Bhaskar Mitra, Fernando Diaz, and Nick Craswell, in Proc. WWW, 2017.
https://dl.acm.org/citation.cfm?id=3052579
BENCHMARK FOR COMPLEX ANSWER RETRIEVAL
Federico Nanni, Bhaskar Mitra, Matt Magnusson, and Laura Dietz, in Proc. ICTIR, 2017.
https://dl.acm.org/citation.cfm?id=3121099
QUERY EXPANSION WITH LOCALLY-TRAINED WORD EMBEDDINGS
Fernando Diaz, Bhaskar Mitra, and Nick Craswell, in Proc. ACL, 2016.
http://anthology.aclweb.org/P/P16/P16-1035.pdf
A DUAL EMBEDDING SPACE MODEL FOR DOCUMENT RANKING
Bhaskar Mitra, Eric Nalisnick, Nick Craswell, and Rich Caruana, arXiv preprint, 2016.
https://arxiv.org/abs/1602.01137
IMPROVING DOCUMENT RANKING WITH DUAL WORD EMBEDDINGS
Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana, in Proc. WWW, 2016.
https://dl.acm.org/citation.cfm?id=2889361
QUERY AUTO-COMPLETION FOR RARE PREFIXES
Bhaskar Mitra and Nick Craswell, in Proc. CIKM, 2015.
https://dl.acm.org/citation.cfm?id=2806599
EXPLORING SESSION CONTEXT USING DISTRIBUTED REPRESENTATIONS OF QUERIES AND REFORMULATIONS
Bhaskar Mitra, in Proc. SIGIR, 2015.
https://dl.acm.org/citation.cfm?id=2766462.2767702
AN INTRODUCTION TO NEURAL
INFORMATION RETRIEVAL
Manuscript under review for
Foundations and Trends® in Information Retrieval
Pre-print is available for free download
http://bit.ly/neuralir-intro
(Final manuscript may contain additional content and changes)
THANK YOU