CS246: Mining Massive Datasets
Jure Leskovec, Stanford University
Note to other teachers and users of these slides: We would be delighted if
you found our material useful for giving your own lectures. Feel free to use
these slides verbatim, or to modify them to fit your own needs. If you make
use of a significant portion of these slides
in your own lecture, please include this message, or a link to our web site:
http://www.mmds.org
[Course overview map]
◾ High-dim. data: Locality-sensitive hashing, Clustering, Dimensionality reduction
◾ Graph data: PageRank, SimRank, Community Detection, Spam Detection
◾ Infinite data: Filtering data streams, Web advertising, Queries on streams
◾ Machine learning: SVM, Decision Trees, Perceptron, kNN
◾ Apps: Recommender systems, Association Rules, Duplicate document detection
◾ Customer X
 Buys Metallica CD
 Buys Megadeth CD
◾ Customer Y
 Clicks on Metallica album
 Recommender system suggests Megadeth from data collected about customer X
◾ Examples of items: products, web sites, blogs, news items, …
[Diagram: two ways of finding items — Retrieval (search) vs. Recommendations]
◾ Shelf space is a scarce commodity
for traditional retailers
 Also: TV networks, movie theaters,…
◾ Web enables near-zero-cost dissemination
of information about products
 From scarcity to abundance
◾ More choice necessitates better filters:
 Recommendation engines
 Association rules: How Into Thin Air made Touching
the Void a bestseller:
http://www.wired.com/wired/archive/12.10/tail.html
[Figure: the long tail. Source: Chris Anderson (2004)]
◾ Non-personalized recommendations:
 Editorial and hand curated
 List of favorites
 List of “essential” items
 Simple aggregates
 Top 10, Most Popular, Recent Uploads
◾ Personalized recommendations:
 Tailored to individual users
 Examples: Amazon, Netflix, Youtube,…
Today’s class
◾ X = set of Customers
◾ S = set of Items
◾ Utility function u: X × S → R
 R = set of ratings
 R is a totally ordered set
 e.g., 1-5 stars, real number in [0,1]
Example utility matrix (ratings in [0,1]):
         Avatar   LOTR   Matrix   Pirates
Alice      1               0.2
Bob                0.5              0.3
Carol     0.2               1
David                               0.4
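As a concrete sketch, here is the utility matrix above in NumPy, with np.nan marking unknown ratings; u(x, s) plays the role of the utility function from the formal model:

```python
# Minimal sketch of the utility matrix above; np.nan = unknown rating.
import numpy as np

items = ["Avatar", "LOTR", "Matrix", "Pirates"]
users = ["Alice", "Bob", "Carol", "David"]
nan = np.nan
U = np.array([
    [1.0, nan, 0.2, nan],   # Alice
    [nan, 0.5, nan, 0.3],   # Bob
    [0.2, nan, 1.0, nan],   # Carol
    [nan, nan, nan, 0.4],   # David
])

def u(x, s):
    """Utility function u: X x S -> R; nan if the rating is unknown."""
    return U[users.index(x), items.index(s)]

print(u("Carol", "Matrix"))   # 1.0
```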
◾ (1) Gathering “known” ratings for matrix
 How to collect the data in the utility matrix
◾ (2) Extrapolating unknown ratings from
the known ones
 Mainly interested in high unknown ratings
 We are not interested in knowing what you don’t like
but what you like
◾ (3) Evaluating extrapolation methods
 How to measure success/performance of
recommendation methods
◾ Explicit
 Ask people to rate items
 Doesn’t work well in practice – people
don’t like being bothered
 Crowdsourcing: Pay people to label items
◾ Implicit
 Learn ratings from user actions
 E.g., purchase implies high rating
 What about low ratings?
◾ Key problem: Utility matrix U is sparse
 Most people have not rated most items
 Cold start:
 New items have no ratings
 New users have no history
◾ Three approaches to recommender systems:
 1) Content-based
 2) Collaborative filtering ← Today!
 3) Latent factor based
◾ Main idea:
 Items have profiles:
 Video -> [genre, director, actors, plot, release year]
 News -> [set of keywords]
 Recommend items to customer x similar to
previous items rated highly by x
Example:
◾ Movie recommendations
 Recommend movies with same actor(s),
director, genre, …
◾ Websites, blogs, news
 Recommend other sites with “similar” content
[Diagram: content-based model — from the items the user likes, build item profiles (e.g., red circles, triangles), aggregate them into a user profile, then match the user profile against item profiles to recommend new items]
◾ For each item, create an item profile
◾ Profile is a set (vector) of features
 Movies: author, title, actor, director,…
 Text: Set of “important” words in
document
◾ How to pick important features?
 Usual heuristic from text mining is TF-IDF
 (term frequency × inverse document frequency)
fij = frequency of term (feature) i in doc (item) j
TFij = fij / maxk fkj
(Note: we normalize TF to discount for “longer” documents)
ni = number of docs that mention term i
N = total number of docs
IDFi = log(N / ni)
TF-IDF score: wij = TFij × IDFi
Doc profile = set of words with highest TF-IDF scores, together with their scores
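A minimal sketch of these definitions on toy documents (the documents and the log base are illustrative choices; the slides leave them unspecified):

```python
# TF-IDF sketch: TF_ij = f_ij / max_k f_kj, IDF_i = log(N / n_i),
# w_ij = TF_ij * IDF_i. Toy documents are made up for illustration.
import math
from collections import Counter

docs = {
    "d1": "the matrix is a science fiction movie".split(),
    "d2": "the lord of the rings is a fantasy movie".split(),
    "d3": "pirates of the caribbean is an adventure movie".split(),
}

N = len(docs)
tf = {}          # tf[(term, doc)] = normalized term frequency
df = Counter()   # df[term] = number of docs containing the term
for d, words in docs.items():
    counts = Counter(words)
    max_f = max(counts.values())           # discount for longer documents
    for term, f in counts.items():
        tf[(term, d)] = f / max_f
        df[term] += 1

def tfidf(term, d):
    idf = math.log(N / df[term])           # terms in every doc get weight 0
    return tf.get((term, d), 0.0) * idf

# Doc profile: the words with the highest TF-IDF scores
profile = sorted(set(docs["d1"]), key=lambda w: tfidf(w, "d1"), reverse=True)[:3]
print([(w, round(tfidf(w, "d1"), 3)) for w in profile])
```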
◾ User profile possibilities:
 Weighted average of rated item profiles
 Variation: weight by difference from average rating for item
◾ Prediction heuristic: Cosine similarity of user and item profiles
 Given user profile x and item profile i, estimate
 u(x, i) = cos(x, i) = (x · i) / (‖x‖ · ‖i‖)
◾ How do you quickly find items closest to x?
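A minimal sketch of this prediction heuristic; the feature names and profile values are made up for illustration:

```python
# Content-based scoring: cosine similarity between a user profile and
# item profiles. Features and values here are hypothetical.
import numpy as np

# Item profiles over features [action, romance, sci-fi]
items = {
    "Matrix":  np.array([0.9, 0.0, 1.0]),
    "Titanic": np.array([0.1, 1.0, 0.0]),
}
# User profile: e.g., weighted average of profiles of highly rated items
x = np.array([0.7, 0.2, 0.8])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Recommend the item whose profile is closest to the user profile
best = max(items, key=lambda i: cos(x, items[i]))
print(best, {i: round(cos(x, items[i]), 3) for i in items})
```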
◾ +: No need for data on other users
 No item cold-start problem, no sparsity problem
◾ +: Able to recommend to users
with unique tastes
◾ +: Able to recommend new &
unpopular items
 No first-rater problem
◾ +: Able to provide explanations
 Can provide explanations of recommended items by
listing content-features that caused an item to be
recommended
◾ –: Finding the appropriate features is hard
 E.g., images, movies, music
◾ –: Recommendations for new users
 How to build a user profile?
◾ –: Overspecialization
 Never recommends items outside user’s
content profile
 People might have multiple interests
 Unable to exploit quality judgments of other users
Harnessing quality judgments of other users
◾ Does not build item profile or user profile
◾ In place of item-profile (user-profile) we
use its row (column) in the utility matrix.
◾ Comes in two flavors:
 User-user collaborative filtering
 Item-Item collaborative filtering
◾ Consider user x
◾ Find set N of other users whose ratings are “similar” to x’s ratings
◾ Estimate x’s ratings based on ratings of users in N
◾ Let rx be the vector of user x’s ratings
◾ Jaccard similarity measure
 Problem: Ignores the value of the rating
◾ Cosine similarity measure
 sim(x, y) = cos(rx, ry) = (rx · ry) / (‖rx‖ · ‖ry‖)
 Problem: Treats some missing ratings as “negative”
◾ Pearson correlation coefficient
 Sxy = items rated by both users x and y
rx = [1, _, _, 1, 3]
ry = [1, _, 2, 2, _]
rx, ry as sets (indices of rated items):
rx = {1, 4, 5}
ry = {1, 3, 4}
rx, ry as points (missing = 0):
rx = (1, 0, 0, 1, 3)
ry = (1, 0, 2, 2, 0)
Pearson correlation:
sim(x, y) = Σs∈Sxy (rxs − r̄x)(rys − r̄y) / ( √(Σs∈Sxy (rxs − r̄x)²) · √(Σs∈Sxy (rys − r̄y)²) )
r̄x, r̄y … avg. rating of x, y
◾ Intuitively we want: sim(A, B) > sim(A,
C)
◾ Jaccard similarity: 1/5 < 2/4
◾ Cosine similarity: 0.380 > 0.322
 Considers missing ratings as “negative”
 Solution: subtract the (row) mean
 sim(A,B) vs. sim(A,C): 0.092 > −0.559
Notice: cosine similarity on mean-centered ratings is the Pearson correlation.
Cosine sim: sim(x, y) = Σi rxi · ryi / ( √(Σi rxi²) · √(Σi ryi²) )
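A sketch computing all three measures on the rx, ry example above (missing ratings dropped for Jaccard, treated as 0 for cosine, mean-centered for the Pearson/centered-cosine variant):

```python
# Compare Jaccard, cosine, and centered cosine on
# rx = [1, _, _, 1, 3] and ry = [1, _, 2, 2, _].
import numpy as np

rx = np.array([1, np.nan, np.nan, 1, 3])
ry = np.array([1, np.nan, 2, 2, np.nan])

def jaccard(a, b):
    A = set(np.flatnonzero(~np.isnan(a)))   # indices of rated items
    B = set(np.flatnonzero(~np.isnan(b)))
    return len(A & B) / len(A | B)

def cosine(a, b):
    a, b = np.nan_to_num(a), np.nan_to_num(b)   # missing -> 0 ("negative")
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def centered_cosine(a, b):
    a = np.where(np.isnan(a), 0, a - np.nanmean(a))  # subtract row mean
    b = np.where(np.isnan(b), 0, b - np.nanmean(b))  # over known ratings
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(jaccard(rx, ry), cosine(rx, ry), centered_cosine(rx, ry))
# 0.5, ~0.30, ~0.17
```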
From similarity metric to recommendations:
◾ Let rx be the vector of user x’s ratings
◾ Let N be the set of k users most similar to x
who have rated item i
◾ Prediction for item i of user x:
 rxi = (1/k) Σy∈N ryi
 Or even better: rxi = Σy∈N sxy · ryi / Σy∈N sxy
◾ Many other tricks possible…
Shorthand: sxy = sim(x, y)
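A sketch of the weighted-average prediction rule, assuming a similarity function such as centered_cosine from the sketch above:

```python
# User-user CF: r_xi = sum_y s_xy * r_yi / sum_y s_xy, where y ranges
# over the k users most similar to x who rated item i.
import numpy as np

def predict(R, sim, x, i, k=2):
    """R: users x items matrix with np.nan for unknown ratings."""
    raters = [y for y in range(R.shape[0])
              if y != x and not np.isnan(R[y, i])]
    # k most similar users to x among those who rated item i
    top = sorted(raters, key=lambda y: sim(R[x], R[y]), reverse=True)[:k]
    s = np.array([sim(R[x], R[y]) for y in top])
    r = np.array([R[y, i] for y in top])
    return (s @ r) / s.sum()
```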
◾ So far: User-user collaborative filtering
◾ Another view: Item-item
 For item i, find other similar items
 Estimate rating for item i based
on ratings for similar items
 Can use same similarity metrics and
prediction functions as in user-user model
rxi = Σj∈N(i;x) sij · rxj / Σj∈N(i;x) sij
sij … similarity of items i and j
rxj … rating of user x on item j
N(i;x) … set of items similar to i that were rated by x
          1   2   3   4   5   6   7   8   9  10  11  12
movie 1   1       3           5           5       4
movie 2           5   4           4           2   1   3
movie 3   2   4       1   2       3       4   3   5
movie 4       2   4       5           4           2
movie 5           4   3   4   2                   2   5
movie 6   1       3       3           2           4
(rows: movies, columns: users; blank = unknown rating; known ratings are between 1 and 5)
◾ Goal: estimate the rating of movie 1 by user 5 (the unknown “?” cell in the table above)
◾ Neighbor selection: identify movies similar to movie 1 that were rated by user 5
Here we use Pearson correlation as similarity:
1) Subtract mean rating mi from each movie i
   m1 = (1+3+5+5+4)/5 = 3.6
   centered row 1: [-2.6, 0, -0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0]
2) Compute dot products between the centered rows
s1,m for m = 1…6: 1.00, -0.18, 0.41, -0.10, -0.31, 0.59
◾ Compute similarity weights for the chosen neighbors: s1,3 = 0.41, s1,6 = 0.59
◾ Predict by taking a weighted average:
r1,5 = (0.41·2 + 0.59·3) / (0.41 + 0.59) = 2.6
In general: rix = Σj∈N(i;x) sij · rjx / Σj∈N(i;x) sij
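A sketch reproducing this worked example end to end, assuming the reconstructed matrix above; it recovers s1,3 ≈ 0.41, s1,6 ≈ 0.59 and the prediction ≈ 2.6:

```python
# Item-item CF with centered cosine (Pearson) similarity, predicting
# user 5's rating of movie 1 on the matrix from the example.
import numpy as np

nan = np.nan
R = np.array([  # rows: movies 1-6, columns: users 1-12
    [1,   nan, 3,   nan, nan, 5,   nan, nan, 5,   nan, 4,   nan],
    [nan, nan, 5,   4,   nan, nan, 4,   nan, nan, 2,   1,   3  ],
    [2,   4,   nan, 1,   2,   nan, 3,   nan, 4,   3,   5,   nan],
    [nan, 2,   4,   nan, 5,   nan, nan, 4,   nan, nan, 2,   nan],
    [nan, nan, 4,   3,   4,   2,   nan, nan, nan, nan, 2,   5  ],
    [1,   nan, 3,   nan, 3,   nan, nan, 2,   nan, nan, 4,   nan],
])

def centered(row):
    c = row - np.nanmean(row)      # subtract the movie's mean rating
    return np.nan_to_num(c)        # unknown ratings contribute 0

def sim(i, j):
    a, b = centered(R[i]), centered(R[j])
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

i, x, k = 0, 4, 2                  # movie 1, user 5 (0-indexed), k = 2
rated = [j for j in range(R.shape[0]) if j != i and not np.isnan(R[j, x])]
N = sorted(rated, key=lambda j: sim(i, j), reverse=True)[:k]
s = np.array([sim(i, j) for j in N])
r = np.array([R[j, x] for j in N])
print(N, np.round(s, 2))           # movies 6 and 3 (0-indexed): [5, 2] [0.59 0.41]
print((s @ r) / s.sum())           # ~2.6
```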
◾ Define similarity sij of items i and j
◾ Select k nearest neighbors N(i; x)
 Items most similar to i that were rated by x
◾ Estimate rating rxi as the weighted average:
Before: rxi = Σj∈N(i;x) sij · rxj / Σj∈N(i;x) sij
Now: rxi = bxi + Σj∈N(i;x) sij · (rxj − bxj) / Σj∈N(i;x) sij
bxi = μ + bx + bi … baseline estimate for rxi
◾ μ = overall mean movie rating
◾ bx = rating deviation of user x = (avg. rating of user x) – μ
◾ bi = rating deviation of movie i = (avg. rating of movie i) – μ
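A sketch of the bias-corrected variant, reusing R and sim() from the previous sketch; the user and movie deviations are computed from the matrix means as defined above:

```python
# Baseline estimate b_xi = mu + b_x + b_i, then predict deviations
# from the baselines instead of raw ratings.
mu = np.nanmean(R)                       # overall mean rating
b_user = np.nanmean(R, axis=0) - mu      # b_x: per-user deviation
b_movie = np.nanmean(R, axis=1) - mu     # b_i: per-movie deviation

def baseline(i, x):
    return mu + b_user[x] + b_movie[i]   # b_xi

def predict_debiased(i, x, k=2):
    rated = [j for j in range(R.shape[0]) if j != i and not np.isnan(R[j, x])]
    N = sorted(rated, key=lambda j: sim(i, j), reverse=True)[:k]
    s = np.array([sim(i, j) for j in N])
    dev = np.array([R[j, x] - baseline(j, x) for j in N])
    return baseline(i, x) + (s @ dev) / s.sum()
```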
[Example utility matrix: items Avatar, LOTR, Matrix, Pirates; users Alice, Bob, Carol, David]
◾ In practice, it has been observed that item-item
often works better than user-user
◾ Why? Items are “simpler”, users have multiple
tastes
◾ + Works for any kind of item
 No feature selection needed
◾ - Cold Start:
 Need enough users in the system to find a match
◾ - Sparsity:
 The user/ratings matrix is
sparse
 Hard to find users that have rated the same
items
◾ - First rater:
 Cannot recommend an item that has not
been
previously rated
 New items, Esoteric items
◾ - Popularity bias:
 Cannot recommend items to someone with
unique taste
 Tends to recommend popular items
◾ Implement two or more different
recommenders and combine predictions
 Perhaps using a linear model (see the sketch below)
◾ Add content-based methods
to collaborative filtering
 Item profiles for new item
problem
 Demographics to deal with new
user problem
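A minimal sketch of the linear blend mentioned above; the weights are hypothetical and would in practice be fit on held-out ratings:

```python
# Hybrid recommender: linearly combine the predicted ratings of two
# recommenders (e.g., content-based and collaborative filtering).
def hybrid_predict(r_content, r_cf, w=(0.3, 0.7)):
    # w is an assumed weighting; fit it on validation data in practice
    return w[0] * r_content + w[1] * r_cf

print(hybrid_predict(3.4, 2.6))   # 2.84
```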
- Evaluation
- Error metrics
- Complexity / Speed
[Utility matrix: rows = movies, columns = users; entries are known ratings 1–5]
[The same utility matrix with some known ratings withheld (“?”) as the test data set]
◾ Compare predictions with known ratings
 Root-mean-square error (RMSE):
 RMSE = √( (1/N) Σxi (rxi − r*xi)² ), where rxi is the predicted and r*xi the true rating of x on i, and N is the number of rating predictions being compared (see the sketch after this list)
 Precision at top 10:
 % of relevant items in top 10
◾ Another approach: 0/1 model
 Coverage:
 Number of items/users for which the system can make predictions
 Precision:
 Accuracy of predictions
 Receiver operating characteristic (ROC)
 Tradeoff curve between false positives and false negatives
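A minimal RMSE sketch matching the formula above, with hypothetical predicted/true rating pairs:

```python
# RMSE = sqrt( (1/N) * sum over test points of (predicted - true)^2 )
import numpy as np

def rmse(predicted, true):
    predicted, true = np.asarray(predicted), np.asarray(true)
    return np.sqrt(np.mean((predicted - true) ** 2))

# Hypothetical test points: (predicted, true) rating pairs
print(rmse([2.6, 4.1, 3.0], [3, 4, 2]))   # ~0.62
```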
◾ Narrow focus on accuracy sometimes
misses the point
 Prediction Diversity
 Prediction Context
 Order of predictions
◾ In practice, we care only to predict
high ratings:
 RMSE might penalize a method that does well
for high ratings and badly for others
◾ Expensive step is finding the k most similar customers: O(|X|)
◾ Too expensive to do at runtime
 Could pre-compute
◾ Naïve pre-computation takes time O(k·|X|)
 X … set of customers
◾ We already know how to do this!
 Near-neighbor search in high dimensions (LSH)
 Clustering
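A sketch of the pre-computation step; scikit-learn's NearestNeighbors stands in here for the LSH/clustering approaches (it is exact, not approximate), reusing the ratings matrix R from the worked example above:

```python
# Pre-compute each user's nearest neighbors once, instead of scanning
# all |X| customers at runtime.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.nan_to_num(R.T)                 # users as rows; missing -> 0
nn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(X)
dist, idx = nn.kneighbors(X)           # precomputed neighbor lists
print(idx[4])                          # user 5's neighbors (includes itself first)
```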
◾ Leverage all the data
 Don’t try to reduce data size in an
effort to make fancy algorithms work
 Simple methods on large data do best
◾ Add more data
 e.g., add IMDB data on genres
◾ More data beats better algorithms
http://anand.typepad.com/datawocky/2008/03/more-data-usual.html
◾ Training data
 100 million ratings, 480,000 users, 17,770 movies
 6 years of data: 2000-2005
◾ Test data
 Last few ratings of each user (2.8 million)
 Evaluation criterion: root mean squared error
(RMSE)
 Netflix Cinematch RMSE: 0.9514
◾ Competition
 2,700+ teams
 $1 million prize for 10% improvement on
Cinematch
◾ Next topic: Recommendations via Latent Factor models
[Figure: “Overview of Coffee Varieties” — products plotted by Complexity of Flavor vs. Exoticness/Price, grouped into Exotic, Flavored, and Popular Roasts and Blends. The bubbles represent products sized by sales volume; products close to each other are recommended to each other.]
[Figure: movies embedded in a 2-D latent factor space, one axis running from “geared towards females” to “geared towards males”, the other from “serious” to “escapist”. Example movies: The Princess Diaries, Sense and Sensibility, The Color Purple, Amadeus, The Lion King, Ocean’s 11, Braveheart, Lethal Weapon, Independence Day, Dumb and Dumber; example users Gus and Dave are embedded in the same space. [Bellkor Team]]
Koren, Bell, Volinsky, IEEE Computer, 2009