Introduction to Privacy Protection Mathematical Models
Private Information Retrieval
IR with Homomorphic Encryption
k-anonymity
l-diversity
Defamation caused by k-Anonymity
Hiroshi Nakagawa
The University of Tokyo
Overview of Privacy Protection Technologies
Whose privacy is protected, and which data is perturbed?
• Questioner's privacy – transform the query (Private IR):
– Add dummy queries
– Semantics-preserving query transformation
– Decompose the query
– Secure computation with homomorphic encryption: encrypt the query and the DB with the questioner's keys, then search without decryption
• Data subject's privacy – the personal data stored in the DB:
– Perturb the DB (deterministic vs. probabilistic transforms)
  Deterministic: make many records share the same quasi-identifier (QI): k-anonymity, l-diversity, t-closeness, anatomy
  Probabilistic: pseudonymize, i.e. randomize the personal ID with a hash function; 1/k-anonymity, obscurity
– Perturb the response: add noise (differential privacy = mathematical models of the added noise)
– Decide whether to respond or not: query audit
Private Information Retrieval (PIR)
Why should user privacy be protected in IR?
 While we have mainly focused on protecting personal data stored in a DB, the privacy of the user's query in IR should be protected as well.
 Knowledge-based scheme to create privacy-preserving but semantically-related queries for web search engines
– David Sanchez, Jordi Castella-Roca, Alexandre Viejo
– Information Sciences, http://dx.doi.org/10.1016/j.ins.2012.06.025
Why should user privacy be protected in IR?
• IT companies in the US transfer, or even sell, user profiles to government authorities. For example:
– AOL responds to more than 1,000 requests a month,
– Facebook responds to 10 to 20 requests a day,
– US Yahoo sells its members' account and e-mail data for $30–$40 per account.
• These transfers generate profit for the IT companies, but return nothing to the data subjects.
– Even worse, bad guys may steal the data.
• Therefore, internet search engine users should employ technologies that protect their identity from the search engine.
What information does a user want to keep secret from the search engine?
Anonymity: the user does not want the S.E. to infer who sent a query.
 Tor (onion routing)
 A proxy user mixes up several users' queries and sends them to the S.E.
Obfuscation: the S.E. does not learn the exact query, even if it knows who sent it.
What should be kept secret is:
– the set of words that makes up the query,
– more generally, what the user wants to search for.
– The majority of research targets Web search.
Keep secret the location from which a user sends a query
• A user wants to use a location-based service, such as showing nearby good restaurants, but does not want to reveal his/her location to the service provider.
• Use a trusted third party (TTP), if one exists.
(Figure: the user sends [user ID, location] to the TTP; the TTP alters the user ID and the location if necessary before forwarding the request to the location-based service provider, and relays the response back to the user.)
Mixing up several users' locations
• When no TTP is available, several users who trust each other form a group and use the location-based service together.
• L(n) is the location of the user whose ID = n.
• Starting from ID = 1, each user appends his/her location, and finally the k-th user sends the mixed-up list of locations and requests the service.
• Each user only remembers the previous user's ID; when a user receives the response, he/she returns it to the previous user, as shown in the figure below.
• Because the locations are shuffled, no user can tell which response belongs to whose request.
• Similar to k-anonymization.
(Figure: user 1 sends [1, L(1)] to user 2; user 2 appends its ID and location and sends [L(1), 2, L(2)] to user 3, and so on, until user 4 sends [L(1), L(2), L(3), L(4), 4] to the location-based service provider as the request. The service result list [Res(1), Res(2), Res(3), Res(4)] is then passed back along the chain, each user returning it to the previous user.)
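A minimal Python sketch of this chaining idea, under simplifying assumptions (a toy in-memory service, no networking or encryption); the class and function names are illustrative only.

```python
import random

class LocationService:
    """Toy stand-in for the location-based service provider."""
    def query(self, locations):
        # One response per submitted location, in the submitted order.
        return [f"restaurants near {loc}" for loc in locations]

def forward_pass(users, service):
    """users: list of (user_id, location) along the chain, e.g. [(1, L1), (2, L2), ...].
    In the real protocol the bundle grows hop by hop; the last user shuffles the
    locations and queries the service, so the provider cannot link them to users."""
    locations = [loc for _, loc in users]
    random.shuffle(locations)
    responses = service.query(locations)
    return locations, responses          # this pair travels back along the chain

def pick_own_response(my_location, locations, responses):
    """On the backward pass every user sees the whole list but keeps only the
    entry that matches his/her own location."""
    return responses[locations.index(my_location)]

# usage: user 3 extracts its own result from the bundle travelling back
users = [(1, "35.71,139.76"), (2, "35.70,139.77"),
         (3, "35.72,139.75"), (4, "35.69,139.74")]
locations, responses = forward_pass(users, LocationService())
print(pick_own_response("35.72,139.75", locations, responses))
```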
Private Information Retrieval
 Researchers in industry send queries to a S.E. to search a DB. Their queries reveal information about their company's R&D.
 They want to keep the queries secret from the S.E. of the DB.
 Ex.: a query including both chemical compound A and chemical compound B, a combination that is crucial for R&D.
(Figure: the DB side tries to protect the whole contents of the DB, while the questioner tries to keep the query secret, since the queries are the company's R&D secrets.)
What should be kept secret?
• Information which can identify a searcher of the DB or a user of the services:
– internet ID, name,
– the location from which the searcher sends the query,
– the time the query is sent.
• Query contents (see the next slide).
• The existence of the query itself.
Query length and structure should be kept secret
• If a query consists of words:
– add dummy words, or replace a word with word(s) of the same meaning.
• If it is a set of words with some structure, such as sentences or ordered sequences of words:
– more complex paraphrasing is needed.
• For location info., numerical info., etc.:
– add noise, reduce the precision, or the like.
How to make it difficult to infer the real query?  Obfuscation
• The query is divided into words, and each word is used as a distinct query.
• Add noise, i.e. confusing dummy words, to the query.
• Replace a query word with semantically similar word(s).
 When we get the response (a list of documents, etc.), we have to pick out the originally intended answers from it.
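A minimal sketch of these three obfuscation tactics; the dummy vocabulary and synonym table are hypothetical placeholders for the knowledge base a real system would use.

```python
import random

# Hypothetical resources; a real system would draw on a thesaurus or knowledge base.
DUMMY_VOCAB = ["weather", "football", "recipe", "guitar", "travel"]
SYNONYMS = {"compound": ["substance"], "toxicity": ["harmfulness"]}

def split_query(query):
    """Tactic 1: send each word of the query as a separate query."""
    return query.split()

def substitute(words):
    """Tactic 3: replace words with semantically similar ones when available."""
    return [random.choice(SYNONYMS.get(w, [w])) for w in words]

def add_dummies(words, n_dummies=3):
    """Tactic 2: mix in confusing dummy words and shuffle the result."""
    obfuscated = words + random.sample(DUMMY_VOCAB, n_dummies)
    random.shuffle(obfuscated)
    return obfuscated

# usage: the client remembers the real words so it can filter the responses later
real_words = split_query("compound toxicity")
sent_to_se = add_dummies(substitute(real_words))
print("sent to S.E.:", sent_to_se)
```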
Outlook of PIR with obfuscation
(Figure: the questioner A has a profile X, a multinomial distribution in which p_i is the probability of the i-th topic. A dummy generation system (DGS) uses semantic classification to generate dummy queries D from the real queries R, and A sends the mixed stream Q = {R, D} over the Internet; D and R are indistinguishable to the S.E. The search engine S.E. (possibly an adversary) runs a dummy filter Z, learned from the profile and the dummies, which throws away a query Q if it is regarded as a dummy, and a profile refiner which revises the inferred profile Y using the queries regarded as true; Y is the inferred value of X.)
Supplemental explanation
 A questioner A makes dummy queries D with a DGS (dummy generator system) based on the real queries R, and sends both R and D to the search engine S.E., which might be an adversary.
 The S.E. receives Q, which actually consists of R and D. The S.E. then learns the questioner's profile Z and tries to classify Q into real queries and dummy queries.
 In this setting, the questioner wants Q not to be separable into R and D. In addition, he/she does not want his/her profile to be inferred by the S.E. That is why dummies D are added, or the true R is replaced with other words.
IR with Secure Computation
(Figure: the questioner holds both a public key PKq and a secret key SKq. The original DB is encrypted with PKq; a big DB requires a large amount of time to encrypt. The query is also encrypted with PKq, the search is performed without decryption, and the questioner decrypts the encrypted response with SKq.)
Addition (and multiplication) can be performed on encrypted data without decryption if homomorphic public-key encryption is employed.
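As a toy illustration of this additive homomorphic property, here is a minimal Paillier-style sketch (a standard textbook additively homomorphic scheme, not necessarily the one used in the systems described here); the primes are tiny and hard-coded purely for demonstration and offer no security.

```python
import math, random

# --- toy Paillier key generation (tiny primes, illustration only, NOT secure) ---
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # Carmichael's lambda for n = p*q
g = n + 1                           # standard simplified generator
mu = pow(lam, -1, n)                # modular inverse of lambda mod n

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2, with a random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(u) = (u - 1) // n."""
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % n2

# usage: the server adds 42 and 17 without ever decrypting them
c = add_encrypted(encrypt(42), encrypt(17))
print(decrypt(c))                   # -> 59
```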
Chemical Compound IR based on Secure Computation (developed by AIST, Japan)
(Figure: the DB stores fingerprint expressions of chemical compounds, i.e. binary vectors that are much smaller than the original chemical structure formulas. A researcher in the chemical industry encrypts his/her compound X with additive homomorphic encryption as Enc(X) and sends Enc(X) together with the public key PKq. The server encrypts the fingerprint DB with the received PKq and calculates the similarity, based on Tversky values, between Enc(X) and each encrypted compound. The encrypted Tversky values Tv(X) are returned, and the researcher decrypts them with SKq to learn which compounds are similar to X.)
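For reference, a plaintext sketch of the Tversky similarity on binary fingerprints that the service ranks compounds by (in the actual system the counts are computed under encryption); the α, β weights and the tiny fingerprint DB are illustrative.

```python
def tversky(x_bits, y_bits, alpha=0.5, beta=0.5):
    """Tversky similarity between two binary fingerprints (lists of 0/1).
    alpha = beta = 0.5 gives the Dice coefficient; alpha = beta = 1 gives Tanimoto."""
    common = sum(1 for a, b in zip(x_bits, y_bits) if a == 1 and b == 1)  # |X ∩ Y|
    only_x = sum(1 for a, b in zip(x_bits, y_bits) if a == 1 and b == 0)  # |X \ Y|
    only_y = sum(1 for a, b in zip(x_bits, y_bits) if a == 0 and b == 1)  # |Y \ X|
    return common / (common + alpha * only_x + beta * only_y)

# usage: rank a tiny fingerprint DB by similarity to the query fingerprint X
X = [0, 1, 1, 0, 1]
db = {"compound-A": [0, 1, 1, 1, 1], "compound-B": [1, 0, 0, 0, 1]}
ranked = sorted(db, key=lambda name: tversky(X, db[name]), reverse=True)
print(ranked)   # most similar compound first
```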
k-anonymity, l-diversity
Motivation
Can we anonymize personal data only by removing individual IDs such as the name and the exact address?
No.
Private information can be inferred by combining publicly open data: the link attack.
Un-connectable anonymity in Japanese medicine, used mainly for research purposes: pseudonymize and delete the linking data between the pseudonym and the personal ID.
 If the linking data is not deleted, we call it "connectable anonymity."
Un-connectable anonymity is thought to protect patients' personal medical data because such data are confined to the medical organization.
This assumption breaks down, however, if the patients' data are used in nursing-care organizations or in medicine-related companies such as pharmaceutical companies.
Classic Example of a Link Attack
• Sweeney [S01a] reported that the medical record of William Weld, the governor of Massachusetts, was identified by linking his medical data, from which his name had been deleted, with the voter list, as shown in the figure.
• Combining both databases:
– 6 people have the same birth date as the governor;
– among these 6 people, three are male;
– among these three, only one has the same ZIP code!
• According to the US 1990 census data,
– 87% of people are uniquely identified by zipcode, sex, and birth date.
 k-anonymization was proposed to remedy this situation.
(Figure: the Medical Data contains ethnicity, diagnosis, medication and total charge; the Voter List contains name, address, date registered and party affiliation; the two overlap on ZIP, birth date and sex.)
k-anonymity
• Two methods protect personal data stored in a database from link attacks when the database is transferred or sold to a third party.
– Method 1: transfer only randomly sampled personal data, because it is then unknown whether a specific person is stored in the sample DB or not.
– Method 2: transform the quasi-IDs (address, birth date, sex) into less accurate ones so that at least k people share the same less accurate quasi-ID: k-anonymization.
– In the right-hand DB of the figure below, 3 people share the same (less accurate) quasi-ID, say an old lady, a young girl and a young boy: 3-anonymity.
(Figure: a 3-anonymity DB is obtained by transforming the quasi-IDs into less accurate ones.)
Example of making a quasi-ID less accurate
• Attributes of a record:
– The personal ID (explicit identifier) is deleted: anonymization.
– The quasi-ID can be used to identify individuals.
– Attribute values, especially sensitive attribute values, should be protected.

Personal ID | Quasi ID | Sensitive info.
name | Birth date, gender, Zipcode | Disease name
John | 21/1/79, M, 53715 | flu
Alice | 10/1/81, F, 55410 | pneumonia
Beatrice | 1/10/44, F, 90210 | bronchitis
Jack | 21/2/84, M, 02174 | sprain
Joan | 19/4/72, F, 02237 | AIDS
(The name column, the personal ID, is deleted.)
The objective: keep each individual from being identified via the quasi-ID.
Terminology: identify, specify
• Just a summary of basic terminology used in Japan.
 Specify: a data record becomes known to correspond to a uniquely specified natural person in the real world, by linking an anonymized personal DB with another, non-anonymized personal DB.
 Identify: data records of several DBs become known to be the same unique person's records, by linking the quasi-IDs of these DBs.
 Without identification, specification is generally hard.
 Neither identified nor specified: non-identify & non-specify.
 Identified but not specified: identify & non-specify.
k-anonymization
• Sweeney and Samarati [S01, S02a, S02b]
• k-anonymization: transform the quasi-IDs into less accurate ones so that at least k people share the same quasi-ID values.
– By k-anonymization, the probability of being identified by a link attack becomes at most 1/k.
• Method
– Generalize quasi-ID values, or suppress records having certain quasi-ID values.
• No noise is added to attribute values.
• Note the trade-off between privacy protection and data-value degradation (especially for data mining)!
– Don't transform more than necessary for k-anonymity!
Example of k-anonymity
Original DB:
Birth day gender Zipcode
21/1/79 M 53715
10/1/79 F 55410
1/10/44 F 90210
21/2/83 M 02274
19/4/82 M 02237

2-anonymized DB:
Birth day gender Zipcode
group 1: */1/79 human 5****
group 1: */1/79 human 5****
suppressed: 1/10/44 F 90210
group 2: */*/8* M 022**
group 2: */*/8* M 022**
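A minimal sketch of generalization plus suppression in the spirit of this example; the generalization levels are hand-written for these three quasi-IDs and are illustrative, not the published algorithms.

```python
from collections import defaultdict

def generalize(record, level):
    """Coarsen a (birth, gender, zipcode) quasi-ID.
    level 0: exact values; level 1: hide the day, mask 3 zip digits;
    level 2: keep only the birth decade, drop gender, keep 1 zip digit."""
    birth, gender, zipcode = record
    day, month, year = birth.split("/")
    if level == 0:
        return (birth, gender, zipcode)
    if level == 1:
        return (f"*/{month}/{year}", gender, zipcode[:2] + "***")
    return (f"*/*/{year[0]}*", "human", zipcode[:1] + "****")

def k_anonymize(records, k, level):
    """Group records by their generalized quasi-ID and suppress groups
    that still contain fewer than k records."""
    groups = defaultdict(list)
    for rec in records:
        groups[generalize(rec, level)].append(rec)
    released, suppressed = [], []
    for qi, members in groups.items():
        (released if len(members) >= k else suppressed).extend([qi] * len(members))
    return released, suppressed

records = [("21/1/79", "M", "53715"), ("10/1/79", "F", "55410"),
           ("1/10/44", "F", "90210"), ("21/2/83", "M", "02274"),
           ("19/4/82", "M", "02237")]
released, suppressed = k_anonymize(records, k=2, level=2)
print(released)     # four records share coarse quasi-IDs
print(suppressed)   # the 1944 record has no partner and is suppressed
```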
Generalizations (1)
• Every node at the same level of the classification tree is generalized, as shown below.
• Global generalization  accuracy is downgraded a lot.
– If a lawyer and an engineer are generalized to specialist, then a musician and a painter are generalized to artist, too.
  specialist: {lawyer, engineer}  artist: {musician, painter}
• Generalizing only the nodes in one subtree:
– Even if a lawyer and an engineer are generalized to specialist, a musician and a painter are not generalized. This avoids unnecessary generalization.
  specialist: {lawyer, engineer}  artist: {musician, painter}
Generalizations (2)
• Only one of the children in a subtree is generalized.
  specialist: {lawyer, engineer}  artist: {musician, painter}
• Local generalization:
– not all records, but individual records, are generalized.
– The good point is a smaller loss of accuracy.
• i.e. John (lawyer)  John (specialist), but Alex (lawyer) still remains a lawyer.
Evaluation function in k-anonymization
• A k-anonymization algorithm uses an evaluation function such as the following to decide whether generalization continues or stops.
• Minimal distortion metric (MD):
– the number of precise data values lost by generalization.
– For example, if 10 engineers are generalized to specialist, MD = 10.
• $\mathrm{ILoss}(v_g) = \dfrac{|v_g| - 1}{|D_A|}$ : the loss incurred when data more precise than $v_g$ is generalized to $v_g$,
where $|v_g|$ is the number of distinct values among $v_g$'s children and $|D_A|$ is the number of distinct values of $v_g$'s attribute $A$.
Example: attribute $A$ has four leaf values {Mathematics, Statistics, Chemistry, Biology}, grouped into Math science = {Mathematics, Statistics} and Bio science = {Chemistry, Biology}. Generalizing to $v_g$ = Math science gives $|v_g| = 2$ and $|D_A| = 4$, so
$\mathrm{ILoss}(v_g) = \dfrac{|v_g| - 1}{|D_A|} = \dfrac{2 - 1}{4} = \dfrac{1}{4}.$
• Trade-off between information accuracy and privacy:
$\mathrm{IGPL}(s) = \dfrac{IG(s)}{PL(s) + 1}$
– $s$ denotes a generalization step applied to the data,
– $IG(s)$ is the loss of information gain (or MD) caused by applying $s$,
– $PL(s)$ is the degree of anonymization achieved by applying $s$ (for k-anonymization, the degree is $k$).
Lattice for generalization
• Each quasi-ID (zipcode, birth date, sex) has its own generalization hierarchy, and a lattice is built over the generalizations of all quasi-IDs.
• Objective: minimum generalization subject to k-anonymity.
– Zipcode: Z0 = {53715, 53710, 53706, 53703}, Z1 = {5371*, 5370*}, Z2 = {537**}
– Birth date: B0 = {26/3/1979, 11/3/1980, 16/5/1978}, B1 = {*}
– Sex: S0 = {Male, Female}, S1 = {Person}
(Figure: the lattice over <S, Z> runs from the least general node <S0, Z0> = [0, 0] through <S1, Z0> = [1, 0] and <S0, Z1> = [0, 1], then <S1, Z1> = [1, 1] and <S0, Z2> = [0, 2], up to the most general node <S1, Z2> = [1, 2]; generality increases upward.)
Use the lattice for efficient generalization: Incognito [LDR05]
Using monotonicity (illustrated on the <S, Z> lattice of the previous slide):
(I) Generalization property (~rollup): if k-anonymity holds at a node, then all nodes above it also satisfy k-anonymity.
e.g., <S1, Z0> satisfies k-anonymity  <S1, Z1> and <S1, Z2> satisfy k-anonymity.
(II) Subset property (~apriori): if a set of quasi-IDs does not satisfy k-anonymity at a node, then no superset of that set of quasi-IDs satisfies k-anonymity.
e.g., <S0, Z0> is not k-anonymous  <S0, Z0, B0> and <S0, Z0, B1> are not k-anonymous.
(To keep it simple, the lattice figure shows only <S, Z>.)
Example of Incognito
(Figure: 2 quasi-IDs (zipcode, sex) and 7 data points. One partitioning of the plane yields group 1 with 2 tuples, group 2 with 3 tuples, and group 3 with 2 tuples, which satisfies 2-anonymity; a different division does not anonymize, i.e. is not 2-anonymous.)
Examples [LDR05, LDR06]
• Incognito [LDR05]: each dimension is generalized sequentially.
• Mondrian [LDR06]: each dimension is generalized independently.
• Topdown [XWP+06]: all dimensions are generalized at the same time.
(The figure orders the three by strength of generalization.)
(Figure: a Mondrian [LDR06] multidimensional partitioning that achieves 2-anonymity.)
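A compact sketch in the spirit of Mondrian: recursively median-split the records on the quasi-identifier with the widest range, as long as both halves keep at least k records, then publish each partition as per-attribute ranges. Simplified to numeric quasi-IDs; not the full algorithm of [LDR06].

```python
def mondrian(records, k):
    """records: list of tuples of numeric quasi-identifiers, e.g. (zipcode, age).
    Returns a list of partitions, each containing at least k records."""
    dims = range(len(records[0]))
    # pick the dimension with the widest range of values
    dim = max(dims, key=lambda d: max(r[d] for r in records) - min(r[d] for r in records))
    values = sorted(r[dim] for r in records)
    median = values[len(values) // 2]
    left = [r for r in records if r[dim] < median]
    right = [r for r in records if r[dim] >= median]
    if len(left) >= k and len(right) >= k:       # allowable cut: recurse on both halves
        return mondrian(left, k) + mondrian(right, k)
    return [records]                             # no allowable cut: this is one group

def summarize(partition):
    """Report a partition as per-dimension [min, max] ranges (the generalized QI)."""
    return [(min(r[d] for r in partition), max(r[d] for r in partition))
            for d in range(len(partition[0]))]

data = [(13053, 28), (13068, 29), (13068, 21), (13053, 23),
        (14853, 50), (14853, 55), (14850, 47), (14850, 49)]
for part in mondrian(data, k=2):
    print(summarize(part), "covers", len(part), "records")
```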
Grouping by boundary length [XWP+06]:
• Bad generalization: a long, thin rectangle  low data-mining accuracy.
• Good generalization: a rectangle close to a square  high data-mining accuracy.
Topdown [XWP+06] split algorithm
• Start with the two most distant data points as seeds.
• Heuristic: grow 2 groups outward from the seeds.
• The next point is combined with the group for which the boundary length of the combined group is the smallest among all choices.
• (Figure: the red and the green group grow by adding points ①, ② and ③ in turn.)
The problem of k-anonymity
• A 4-anonymity example (both tables are shown below).
• Homogeneity attack: the third group consists only of cancer patients. By combining with another DB, the four people in the third group are therefore known to be cancer patients.
• Background knowledge attack: if it is known that the first group contains one Japanese person, and that Japanese people rarely have cardiac disease, the Japanese person's illness is inferred to be an infectious disease.

Anonymized DB:
id Zipcode age nationality disease
1 13053 28 Russia Cardiac disease
2 13068 29 US Cardiac disease
3 13068 21 Japan Infectious dis.
4 13053 23 US Infectious dis.
5 14853 50 India Cancer
6 14853 55 Russia Cardiac disease
7 14850 47 US Infectious dis.
8 14850 49 US Infectious dis.
9 13053 31 US Cancer
10 13053 37 India Cancer
11 13068 36 Japan Cancer
12 13068 35 US Cancer

4-anonymity DB:
id Zipcode age nationality disease
1 130** <30 ∗ Cardiac disease
2 130** <30 ∗ Cardiac disease
3 130** <30 ∗ Infectious dis.
4 130** <30 ∗ Infectious dis.
5 1485* ≥40 ∗ Cancer
6 1485* ≥40 ∗ Cardiac disease
7 1485* ≥40 ∗ Infectious dis.
8 1485* ≥40 ∗ Infectious dis.
9 130** 3∗ ∗ Cancer
10 130** 3∗ ∗ Cancer
11 130** 3∗ ∗ Cancer
12 130** 3∗ ∗ Cancer
l-diversity [MGK+06]
• The purpose is to ensure that the sensitive values in each group are not skewed:
– prevent the homogeneity attack,
– prevent the background knowledge attack.
l-diversity (intuitive definition): a group is l-diverse if it contains at least l distinct sensitive values.
l-diversity algorithm, part 1
• The DB is divided into sub-databases according to each value of the sensitive attribute (the disease name).

name age sex disease
John 65 M flu
Jack 30 M gastritis
Alice 43 F pneumonia
Bill 50 M flu
Pat 70 F pneumonia
Peter 32 M flu
Joan 60 F flu
Ivan 55 M pneumonia
Chris 40 F rhinitis

Divided into disease-based sub-databases:
John flu
Peter flu
Joan flu
Bill flu
Alice pneumonia
Pat pneumonia
Ivan pneumonia
Jack gastritis
Chris rhinitis
l-diversity algorithm, part 2
• Select records from each sub-database on the left-hand side and add them, in turn, to each of the groups on the right-hand side.

Group 1: John flu, Joan flu, Alice pneumonia, Ivan pneumonia, Chris rhinitis
Group 2: Peter flu, Bill flu, Pat pneumonia, Jack gastritis

• Each of these two groups contains at least 3 distinct diseases: 3-diversity.
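A sketch of this two-part bucketization (part 1 groups records by sensitive value, part 2 deals them out round-robin); it follows the intuition on these slides rather than the full algorithm of [MGK+06].

```python
from collections import defaultdict

def l_diversify(records, l, n_groups=2):
    """records: list of (name, sensitive_value) pairs. Returns n_groups groups,
    each intended to contain at least l distinct sensitive values."""
    # Part 1: divide the DB according to each sensitive value.
    buckets = defaultdict(list)
    for name, disease in records:
        buckets[disease].append((name, disease))
    # Part 2: take the buckets (largest first) and deal their records out round-robin.
    groups = [[] for _ in range(n_groups)]
    i = 0
    for disease in sorted(buckets, key=lambda d: -len(buckets[d])):
        for rec in buckets[disease]:
            groups[i % n_groups].append(rec)
            i += 1
    for g in groups:                      # sanity check on the resulting diversity
        assert len({d for _, d in g}) >= l, "grouping failed to reach l-diversity"
    return groups

records = [("John", "flu"), ("Jack", "gastritis"), ("Alice", "pneumonia"),
           ("Bill", "flu"), ("Pat", "pneumonia"), ("Peter", "flu"),
           ("Joan", "flu"), ("Ivan", "pneumonia"), ("Chris", "rhinitis")]
for group in l_diversify(records, l=3):
    print(group)
```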
Anatomy [Xiaokui06]
• Divide the original table (the one used in l-diversity algorithm part 1) into two tables. The left and right tables are linked only by a group ID, here 1 and 2.
• 3-diversity

Sensitive table:
Group ID disease frequency
1 flu 2
1 pneumonia 2
1 rhinitis 1
2 flu 2
2 pneumonia 1
2 gastritis 1

Quasi-ID table:
name age sex Group ID
John 65 M 1
Jack 30 M 1
Alice 43 F 1
Bill 50 M 1
Pat 70 F 1
Peter 32 M 2
Joan 60 F 2
Ivan 55 M 2
Chris 40 F 2

Data mining is done on these two tables. Since the attribute values are not generalized, the expected accuracy is high.
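A small sketch of this Anatomy-style split: the quasi-ID table keeps exact values plus a group ID, while the sensitive table publishes only per-group value frequencies. The helper name and the grouping are illustrative, following [Xiaokui06] only in outline.

```python
from collections import Counter

def anatomize(groups):
    """groups: list of groups, each a list of (name, age, sex, disease) records.
    Returns (qi_table, sensitive_table), linked only by the group ID."""
    qi_table = []          # (name, age, sex, group_id) -- exact quasi-IDs, no disease
    sensitive_table = []   # (group_id, disease, frequency)
    for gid, group in enumerate(groups, start=1):
        for name, age, sex, _disease in group:
            qi_table.append((name, age, sex, gid))
        for disease, freq in Counter(d for _, _, _, d in group).items():
            sensitive_table.append((gid, disease, freq))
    return qi_table, sensitive_table

groups = [
    [("John", 65, "M", "flu"), ("Jack", 30, "M", "gastritis"),
     ("Alice", 43, "F", "pneumonia"), ("Bill", 50, "M", "flu"),
     ("Pat", 70, "F", "pneumonia")],
    [("Peter", 32, "M", "flu"), ("Joan", 60, "F", "flu"),
     ("Ivan", 55, "M", "pneumonia"), ("Chris", 40, "F", "rhinitis")],
]
qi_table, sensitive_table = anatomize(groups)
print(qi_table)
print(sensitive_table)
```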
Side effects of k-anonymity
Defamation
Original DB:
name age sex address Location at 2016/6/6 12:00
John 35 M Bunkyo Hongo 11 K consumer finance shop
Dan 30 M Bunkyo Yusima 22 T University
Jack 33 M Bunkyo Yayoi 33 T University
Bill 39 M Bunkyo Nezu 44 Y hospital

4-anonymized DB:
name age sex address Location at 2016/6/6 12:00
John 30's M Bunkyo K consumer finance shop
Dan 30's M Bunkyo T University
Jack 30's M Bunkyo T University
Bill 30's M Bunkyo Y hospital

Because Dan, Jack and Bill can no longer be distinguished from John after 4-anonymization, all four persons are suspected of having been at the K consumer finance shop  k-anonymization provokes defamation of Dan, Jack and Bill.
l-diversity makes the situation worse
Original DB (these values show that all four were at the K consumer finance shop):
name age sex address Location at 2016/6/6 12:00
John 35 M Bunkyo Hongo 11 K consumer finance shop
Dan 30 M Bunkyo Yusima 22 K consumer finance shop
Jack 33 M Bunkyo Yayoi 33 K consumer finance shop
Bill 39 M Bunkyo Nezu 44 K consumer finance shop

Exchange one person (Bill is replaced by Alex, who was at T University) to make the DB 2-diverse:
name age sex address Location at 2016/6/6 12:00
John 30's M Bunkyo K consumer finance shop
Dan 30's M Bunkyo K consumer finance shop
Jack 30's M Bunkyo K consumer finance shop
Alex 30's M Bunkyo T University

By 2-diversifying, Alex becomes strongly suspected of having been at the K consumer finance shop  l-diversity provokes defamation.
k-anonymity provokes defamation in sub-area aggregation
(Figure: a k-anonymized area, i.e. an area containing at least k people, which includes a consumer finance shop.)
A university student in this area who is trying to find a job is suspected of visiting the consumer finance shop, and this situation is not good for his job-seeking process: defamation.
Defamation
Why defamation happens?
• Case study
– A job candidate who is a good university student.
– He is in k people group that includes at least one
person who went to a consumer finance shop.
– A company he tries to take entrance examination
does not want hire a person who goes to a
consumer finance shop.
– He is suspected to go to a consumer finance
shop. defamation!
– At the same time he is a good university student.
Background of the defamation
• Case study, cont.
– If the company deletes him from the candidate list, it must spend additional time and money, say X, to check another candidate.
– If the company hires a bad guy, it will suffer a certain amount of damage, say Y, from his bad behavior.
– So if the expected value of the damage Y exceeds X, the company becomes very negative about him; otherwise it does not.
– This is defamation from an economic point of view.
Background of the defamation
• Case study, cont.
– Another factor is the probability that he actually went to a consumer finance shop.
– This probability is proportional to the number of consumer finance shop visitors, say s, among the k people of the k-anonymity group.
– The relation is sketched in the figure on the next slide.
(Figure: the horizontal axis is s/k, the subjective probability with which the company suspects him. One line shows the expected damage if the company hires him, which grows with s/k; a horizontal line shows the money X the company has to spend to check another candidate. To the left of their crossing point C, it does not pay for the company to suspect him; to the right of C, the company should suspect him to avoid the expected damage. C is the border line between defamation and no defamation.)
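Reading the figure as a simple expected-cost comparison (using only the quantities X, Y, s and k introduced on the previous slides, and the slide's linear suspicion model), the border line can be written out as:

```latex
% Expected damage if the company hires him, when it suspects him with probability s/k:
\mathbb{E}[\text{damage}] \;=\; \frac{s}{k}\,Y .
% The company prefers rejecting him (defamation) exactly when this expected damage
% exceeds the cost X of checking another candidate:
\frac{s}{k}\,Y \;>\; X
\quad\Longleftrightarrow\quad
\frac{s}{k} \;>\; \frac{X}{Y} \;=:\; C .
% Keeping s small (at most one bad guy per k-anonymity group, i.e. s/k <= 1/k),
% as the revised algorithm below aims to do, makes s/k as small as possible
% relative to the border C.
```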
Solution
• The solution is then simple:
– make the border-line quantity as small as possible.
– But how?
• We can revise the k-anonymization algorithm so as to minimize the number of bad-behavior guys in each k-anonymity group.
– This revision, however, reduces the accuracy of the data.
– The problem therefore becomes an optimization problem:
maximize the accuracy of the data
subject to: number of bad guys ≤ 1 in each k-anonymity group.
(Figure: the k-anonymity area containing a consumer finance shop is divided into 4 sub-areas so that the number of people who visited the shop in each sub-area is at most one.)
Outline of the algorithm
1. Do k-anonymization.
2. If one group includes more than one bad guy:
① combine this group with its two nearest groups;
② do k-anonymization on the combined group so as to make two groups that each include at most one bad guy;
③ if step ② fails,
④ then go back to step 1, i.e. try to find another generalization in the k-anonymization.
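A schematic Python rendering of this outline; k_anonymize, is_bad and nearest_groups are hypothetical helper functions supplied by the caller, and the retry logic mirrors the steps above rather than any published algorithm.

```python
def defamation_aware_anonymize(records, k, k_anonymize, is_bad, nearest_groups,
                               max_retries=10):
    """k_anonymize(records, k, ...) -> list of groups, is_bad(record) -> bool and
    nearest_groups(group, groups, m) -> m nearest groups are hypothetical helpers."""
    def bad_count(group):
        return sum(1 for rec in group if is_bad(rec))

    for attempt in range(max_retries):
        # Step 1: k-anonymize (a different 'seed' stands in for another generalization).
        groups = k_anonymize(records, k, seed=attempt)
        while True:
            offending = next((g for g in groups if bad_count(g) > 1), None)
            if offending is None:
                return groups                       # every group has at most one bad guy
            # Step 2-1: combine the offending group with its two nearest groups.
            neighbours = nearest_groups(offending, groups, 2)
            merged = offending + [rec for g in neighbours for rec in g]
            # Step 2-2: re-anonymize the merged set into groups with <= 1 bad guy each.
            regrouped = k_anonymize(merged, k, split=2)
            if all(bad_count(g) <= 1 for g in regrouped):
                groups = [g for g in groups
                          if g is not offending and all(g is not n for n in neighbours)]
                groups.extend(regrouped)
            else:
                break                               # Steps 2-3/2-4: retry with another generalization
    raise RuntimeError("no generalization found with at most one bad guy per group")
```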
References
• [LDR05] LeFevre, K., DeWitt, D.J., Ramakrishnan, R. Incognito: Efficient Full-Domain k-Anonymity. SIGMOD, 2005.
• [LDR06] LeFevre, K., DeWitt, D.J., Ramakrishnan, R. Mondrian Multidimensional k-Anonymity. ICDE, 2006.
• [XWP+06] Xu, J., Wang, W., Pei, J., Wang, X., Shi, B., Fu, A. Utility-Based Anonymization Using Local Recoding. SIGKDD, 2006.
• [MGK2007] Machanavajjhala, A., Kifer, D., Gehrke, J., Venkitasubramaniam, M. l-Diversity: Privacy Beyond k-Anonymity. ACM Transactions on Knowledge Discovery from Data, Vol. 1, No. 1, Article 3, 2007.
• [S01] Samarati, P. Protecting Respondents' Identities in Microdata Release. IEEE TKDE, 13(6):1010-1027, 2001.
• [S02a] Sweeney, L. k-Anonymity: A Model for Protecting Privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, 2002.
• [S02b] Sweeney, L. Achieving k-Anonymity Privacy Protection Using Generalization and Suppression. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, 2002.
• Li, N., Li, T., Venkatasubramanian, S. t-Closeness: Privacy Beyond k-Anonymity and l-Diversity. ICDE 2007, pp. 106-115, 2007.
• [SMP] Sacharidis, D., Mouratidis, K., Papadias, D. k-Anonymity in the Presence of External Databases (to appear).
• [Xiaokui06] Xiao, X., Tao, Y. Anatomy: Simple and Effective Privacy Preservation. VLDB, 139-150, 2006.