A Bayesian Model for Recommendation in Social Rating Networks with Trust Relationships
Gianni Costa, Giuseppe Manco, Riccardo Ortale
Motivating example
• Joe is looking for a restaurant
  – Likes fish
  – Enjoys rock music
  – Non-smoker
• Chez Marcel: existing reviews
  – Reviewer 1, rating 2: "Came there with some friends. Too loud, and the choice was very limited. I had one steak which wasn't great" (doesn't like fish, doesn't like rock music)
  – Reviewer 2, rating 2: "Too noisy. But good assortment of cigars" (doesn't like rock music, smoker)
  – Reviewer 3, rating 5: "Gotta try the seabass. Wonderful!" (member of "Slow Food")
  – Reviewer 4, rating 4: "Jam night every Wednesday. Good local groups. A must-see place." (writes on "Rolling Stone")
• Overall rating for Joe: ?
Motivating example (continued)
• Joe's profile doesn't match reviewer 1 and only partially matches reviewer 2
• Reviewers 3 and 4 are authoritative in their fields
Motivating example (continued)
• Only the compatible and authoritative reviewers (3 and 4) should be retained: their ratings (5 and 4) drive the overall rating suggested to Joe
Recommendation with trust (and distrust)
• We need to consider only compatible profiles
• Authoritativeness and susceptibility play a role
• Recommendation is twofold
  – Who should we trust?
  – What should we get suggested according to our trustees' preferences?
Formal Framework
• Input: users, items
• Basic assumption: an underlying social network of trust relationships exists among users
Formal Framework
• Output: a (signed) network of trust relationships (positive and negative links) together with item adoptions
Related works
• Rating prediction for item recommendation in social networks with
  – unilateral relationships
    • e.g., trust networks
  – cooperative and mutual relationships
    • e.g., friends, relatives, classmates and so forth
• Link prediction
  – temporal vs structural approaches
    • Assume graphs with evolving (resp. fixed) sets of nodes
  – unsupervised vs supervised approaches
    • Compute scores for node pairs based on the topology of the network graph alone
    • Cast link prediction as a binary classification task
Basic Idea: Latent Factor Modeling
• Three factor matrices: P, Q, F
  – P_{u,k} represents the susceptibility of user u to factor k
  – F_{u,k} represents the expertise of user u in factor k
  – Q_{i,k} represents the characterization of item i within factor k
Modeling item adoptions

  R_{u,i} | P, Q, F, α ~ N((P_u + F_u)' Q_i, α^{-1})

• User u: likes fish, enjoys rock music, non-smoker
• Item i: seafood, live music, smoking areas
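As a purely illustrative sketch of this likelihood, the snippet below draws one rating for a user/item pair from N((P_u + F_u)' Q_i, α^{-1}) with NumPy; the factor dimensionality, the precision alpha and the toy profile vectors are assumed values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 4.0                        # rating precision (assumed value)

P_u = np.array([0.9, 0.8, 0.1])    # susceptibility of user u to the 3 factors
F_u = np.array([0.2, 0.1, 0.0])    # expertise of user u in the 3 factors
Q_i = np.array([1.0, 0.7, 0.3])    # characterization of item i over the factors

# R_{u,i} | P, Q, F, alpha ~ N((P_u + F_u)' Q_i, alpha^{-1})
mean_rating = (P_u + F_u) @ Q_i
sampled_rating = rng.normal(mean_rating, 1.0 / np.sqrt(alpha))
print(mean_rating, sampled_rating)
```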
Modeling trust relationships

  A_{u,v} | P, F, ε ~ N(P_u' F_v, ε^{-1})

• User u: likes fish, enjoys rock music, non-smoker
• User v: member of "Slow Food"
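Analogously, a minimal sketch of the trust likelihood: the expected strength of u's trust in v is the inner product between u's susceptibilities and v's expertise. The toy vectors and the precision value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 10.0                          # trust precision (assumed value)

P_u = np.array([0.9, 0.8, 0.1])     # susceptibility of the trusting user u
F_v = np.array([1.0, 0.1, 0.0])     # expertise of the candidate trustee v

# A_{u,v} | P, F, eps ~ N(P_u' F_v, eps^{-1})
trust_mean = P_u @ F_v
sampled_trust = rng.normal(trust_mean, 1.0 / np.sqrt(eps))
print(trust_mean, sampled_trust)
```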
The Bayesian Generative Model

Fig. 1. Graphical representation of the proposed Bayesian hierarchical model.

Generative process:
1. Sample Θ_P ~ NW(Θ_0), Θ_Q ~ NW(Θ_0), Θ_F ~ NW(Θ_0)
2. For each item i ∈ I, sample Q_i ~ N(μ_Q, Λ_Q^{-1})
3. For each user u ∈ N, sample P_u ~ N(μ_P, Λ_P^{-1}) and F_u ~ N(μ_F, Λ_F^{-1})
4. For each pair ⟨u, v⟩ ∈ N × N, sample A_{u,v} ~ N(P_u' F_v, ε^{-1})
5. For each pair ⟨u, i⟩ ∈ N × I, sample R_{u,i} ~ N((P_u + F_u)' Q_i, α^{-1})
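To make the five steps concrete, here is a minimal end-to-end simulation of the generative process with NumPy/SciPy. It is a sketch under assumed toy sizes and hyperparameters (the Normal-Wishart scaling constant is called beta0 here); it is not the authors' code.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(2)
N, M, K = 5, 4, 3                    # users, items, latent factors (toy sizes)
alpha, eps = 2.0, 2.0                # observation precisions (assumed)
mu0, beta0 = np.zeros(K), 2.0        # Normal-Wishart hyperprior (assumed)
W0, nu0 = np.eye(K), K

def sample_normal_wishart():
    # Theta = (mu, Lambda): Lambda ~ Wishart(W0, nu0), mu ~ N(mu0, (beta0 * Lambda)^-1)
    Lam = wishart.rvs(df=nu0, scale=W0, random_state=rng)
    mu = rng.multivariate_normal(mu0, np.linalg.inv(beta0 * Lam))
    return mu, Lam

# 1. Sample the factor hyperparameters Theta_P, Theta_F, Theta_Q
(mu_P, Lam_P), (mu_F, Lam_F), (mu_Q, Lam_Q) = [sample_normal_wishart() for _ in range(3)]

# 2.-3. Sample item and user factor vectors
Q = rng.multivariate_normal(mu_Q, np.linalg.inv(Lam_Q), size=M)
P = rng.multivariate_normal(mu_P, np.linalg.inv(Lam_P), size=N)
F = rng.multivariate_normal(mu_F, np.linalg.inv(Lam_F), size=N)

# 4. Trust relationships: A_{u,v} ~ N(P_u' F_v, eps^-1)
A = rng.normal(P @ F.T, 1.0 / np.sqrt(eps))

# 5. Item adoptions: R_{u,i} ~ N((P_u + F_u)' Q_i, alpha^-1)
R = rng.normal((P + F) @ Q.T, 1.0 / np.sqrt(alpha))
print(A.shape, R.shape)              # (5, 5) and (5, 4)
```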
Inference and Prediction
• Model likelihoods (recall):
  R_{u,i} | P, Q, F, α ~ N((P_u + F_u)' Q_i, α^{-1})
  A_{u,v} | P, F, ε ~ N(P_u' F_v, ε^{-1})
• Given observed trust relationships (A) and item adoptions (R), we want to infer Pr(R_{u,i} | A, R) and Pr(A_{u,v} | A, R)
• Problem: trust bias
  – Observed relationships in a social network are rarely negative: people only make positive connections explicit
Inference and Prediction
• Solution: latent variable modeling
• Y_{u,v} represents a (Bernoulli) latent variable stating whether a negative trust relationship exists between user u and user v
Inference, model learning
• Inference by averaging on the latent variables:

  Pr(R_{u,i} | A, R) = ∫ Σ_Y Pr(R_{u,i} | P, Q, F) Pr(Y, P, Q, F | A, R) dP dF dQ

  Pr(A_{u,v} | A, R) = ∫ Σ_Y Pr(A_{u,v} | P, Q, F) Pr(Y, P, Q, F | A, R) dP dF dQ

• Posteriors are sampled through Gibbs sampling
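In practice the integrals above are approximated by averaging point predictions over the H Gibbs draws. A hedged sketch of that Monte Carlo average follows; P_samples, F_samples and Q_samples stand for lists of factor matrices produced by the sampler (hypothetical names, not the paper's API).

```python
import numpy as np

def predict_rating(u, i, P_samples, F_samples, Q_samples):
    """Monte Carlo estimate of E[R_{u,i} | A, R] over Gibbs draws."""
    draws = [(P[u] + F[u]) @ Q[i]
             for P, F, Q in zip(P_samples, F_samples, Q_samples)]
    return float(np.mean(draws))

def predict_trust(u, v, P_samples, F_samples):
    """Monte Carlo estimate of E[A_{u,v} | A, R] over Gibbs draws."""
    draws = [P[u] @ F[v] for P, F in zip(P_samples, F_samples)]
    return float(np.mean(draws))
```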
Evaluation
• Two datasets (Ciao, Epinions)
  – Product evaluations, trust relationships
  – 5-star rating system

Fig. 4. The scheme of the Gibbs sampling algorithm in pseudo code:
  Initialize P(0), F(0), Q(0), Y(0)
  for h = 1 to H do
    Sample Θ_P^(h) ~ NW(Θ_n), where Θ_n is computed by updating Θ_0 with P, S_P
    Sample Θ_F^(h) ~ NW(Θ_n), where Θ_n is computed by updating Θ_0 with F, S_F
    Sample Θ_Q^(h) ~ NW(Θ_n), where Θ_n is computed by updating Θ_0 with Q, S_Q
    for each (u, v) ∈ U: sample ε_{u,v}^(h) according to Eq. 4.4
    for each (u, v) ∈ U: sample Y_{u,v}^(h) according to Eq. 4.3
    for each u ∈ N: sample P_u ~ N(μ*_P(u), [Λ*_P(u)]^{-1}) and F_u ~ N(μ*_F(u), [Λ*_F(u)]^{-1})
    for each i ∈ I: sample Q_i ~ N(μ*_Q(i), [Λ*_Q(i)]^{-1})
  end for
                                          Ciao              Epinions
  Users                                   7,375             49,289
  Trust relationships                     111,781           487,181
  Items                                   106,797           139,738
  Ratings                                 282,618           664,823
  InDegree (Avg/Median/Min/Max)           15.16/6/1/100     9.8/2/1/2589
  OutDegree (Avg/Median/Min/Max)          16.46/4/1/804     14.35/3/1/1760
  Ratings on items (Avg/Median/Min/Max)   2.68/1/1/915      4.75/1/1/2026
  Ratings by users (Avg/Median/Min/Max)   38.32/18/4/1543   16.55/6/1/1023
Table 1. Summary of the chosen social rating networks.
Fig. 5. Distributions of trust relationships and ratings in Epinions and Ciao (frequency plots of InDegree, OutDegree, ItemRatings and UserRatings).
Competitor details (from the paper): for AUC-MF, we adapted the framework described in [20]; for each user, we considered the ratings as user features and trained the factorization model which minimizes the AUC loss, using the implementation made available by the authors (http://cseweb.ucsd.edu/ akmenon/code). In addition, we considered a further comparison, in terms of both RMSE and AUC, against a basic matrix factorization approach based on SVD, named Joint SVD (JSVD) [11]: we computed a low-rank factorization of the joint adjacency/feature matrix X = [A R] as X ≈ U · diag(σ_1, ..., σ_K) · V^T, where K is the rank of the decomposition and σ_1, ..., σ_K are the square roots of the K greatest eigenvalues of X^T X. The matrices U and V resemble the roles of the latent factor matrices.
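A minimal sketch of the JSVD baseline as described above: a rank-K truncated SVD of the joint adjacency/rating matrix X = [A R]. The toy data and the rank are assumptions; the singular values of X are exactly the square roots of the eigenvalues of X^T X, matching the formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 50, 80, 4                                  # users, items, rank (toy values)
A = (rng.random((N, N)) < 0.05).astype(float)        # toy trust adjacency matrix
R = rng.integers(0, 6, size=(N, M)).astype(float)    # toy ratings (0 = unobserved)

X = np.hstack([A, R])                                # joint matrix X = [A R]
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]          # rank-K reconstruction

A_hat, R_hat = X_k[:, :N], X_k[:, N:]                # reconstructed trust scores and ratings
print(A_hat.shape, R_hat.shape)
```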
Evaluation
• RMSE on rating prediction
• AUC on link prediction
• Competitors
  – RMSE: SocialMF, JSVD (SVD on the combined matrices)
  – AUC: matrix factorization tuned on the AUC loss (AUC-MF), JSVD
• Experiments
  – 5-fold Monte Carlo cross validation (70/30 split on each trial, for the matrix to predict)
The proposed HBPMF model achieves the minimum RMSE on both datasets. There is a tendency of the RMSE to progressively decrease as the number of factors grows; however, this tendency is more evident on SocialMF, while the other two methods exhibit negligible differences.

Fig. 6. Prediction results: RMSE vs. number of factors (4-128) on Epinions and Ciao, and AUC vs. number of factors, for HBPMF, JSVD, SocialMF and AUC-MF.
The opposite trend is observed in trust prediction. prefer a low number of factors, as the best results are
There is a tendency of the RMSE to pro-gressively 
tendency is more evident on SocialMF, while 
The opposite trend is observed in trust prediction. Here, all prefer a low number of factors, as the best results are achieved devised HBPMF model AUC 
achieves the maximum AUC on the and results comparable to JSVD on Ciao. The detailed results Fig. 7, where the ROC curves are reported. In general, the predictive of the Bayesian hierarchical model is stable with regards to the This is a direct result of the Bayesian modeling, which makes to the growth of the model complexity. Fig. 8 also shows varies according to the distributions which characterize the data. a correlation between accuracy and node degrees, as well as the provided by a user or received by an item. 
negligible di↵erences. 
4 8 16 32 64 128 
0.0 0.2 0.4 0.6 0.8 1.0 1.2 
Epinions 
N. of factors 
AUC 
HBPMF 
JSVD 
AUC−MF 
4 8 16 32 64 128 
0.0 0.2 0.4 0.6 0.8 1.0 
Ciao 
N. of factors 
AUC 
HBPMF 
JSVD 
AUC−MF 
Prediction results. 
Epinions 
trust prediction. Here, all methods tend to 
best results are achieved with K = 4. The 
maximum AUC on the Epinions dataset, 
False positive rate 
on Ciao. The detailed results are shown in 
True positive rate 
0.0 0.2 0.4 0.6 0.8 1.0 
0.0 0.2 0.4 0.6 0.8 1.0 
HBPMF 
JSVD 
AUC−MF 
Ciao 
False positive rate 
True positive rate 
0.0 0.2 0.4 0.6 0.8 1.0 
0.0 0.2 0.4 0.6 0.8 1.0 
HBPMF 
JSVD 
AUC−MF 
4 factors 
Fig. 7. ROC curves on trust prediction for K =
Cold/Warm start effects

Fig. 8. Data distribution vs. AUC and rating prediction: ROC curves and RMSE on Epinions and Ciao, broken down by InDegree, OutDegree, UserRatings (source/destination) and ItemRatings buckets (< 10, 10-100, > 100).
Joint modeling
• The effect of the joint modeling is significant on RMSE
• Partial model: a simple BPMF (as described in [25])

Fig. 9. (a) Effects of the joint modeling, Full Model vs. Partial Model on RMSE and AUC (1 denotes Epinions, 2 denotes Ciao). (b) Average running time per iteration.
Computational cost

Average running time per iteration (secs.) vs. number of factors (4-128) for HBPMF, JSVD, SocialMF and AUC-MF, on Epinions and Ciao.
Conclusions
• Unified approach to item recommendation and trust relationships
  – Mitigates the effect of non-matching profiles
  – Simple, intuitive, robust mathematical formulation
  – Good predictive performance
• Issues
  – Inferring the number of factors
    • An Indian Buffet Process is easy to plug in
  – Modeling alternatives
    • Logistic, probit
  – Computational cost
    • Parallelization
    • Reformulation as tensor decomposition?
Thank you
Questions?
manco@icar.cnr.it
@beman

More Related Content

Similar to A Bayesian Model for Recommendation in Social Rating Networks with Trust Relationships (20):
• Rating System Algorithms Document (Scandala Tamang)
• RS NAIVE BAYES ASSOCIATION RULE MINING AND BLACK BOX (khsbharadwaj123)
• PPT by Jannach_organized.pdf presentation on the recommendation (sai419417)
• Lightweight Distributed Trust Propagation (Daniele Quercia)
• Validity and Reliability of Cranfield-like Evaluation in Information Retrieval (Julián Urbano)
• Recommender Systems with Implicit Feedback Challenges, Techniques, and Applic... (NAVER Engineering)
• uai2004_V1.doc.doc.doc (butest)
• HOP-Rec_RecSys18 (Matt Yang)
• Comparing State-of-the-Art Collaborative Filtering Systems (nextlib)
• Sociocast CF Benchmark (Albert Azout)
• Introduction to behavior based recommendation system (Kimikazu Kato)
• Rating Prediction for Restaurant (Yaqing Wang)
• Sociocast NODE vs. Collaborative Filtering Benchmark (Albert Azout)
• Recommender Systems (Carlos Castillo (ChaTo))
• Social Networks (Svitlana volkova)
• Using Trust in Recommender Systems: an experimental analysis (Paolo Massa)
• Predictability of conversation partners (Naoki Masuda)
• Incremental Item-based Collaborative Filtering (jnvms)
• Recommendation System --Theory and Practice (Kimikazu Kato)
• RELIN: Relatedness and Informativeness-based Centrality for Entity Summarization (Gong Cheng)
