TECS 2007 R. Ramakrishnan, Yahoo! Research
Data Mining
(with many slides due to Gehrke, Garofalakis, Rastogi)
Raghu Ramakrishnan
Yahoo! Research
University of Wisconsin–Madison (on leave)
Introduction
2
Definition
Data mining is the exploration and analysis of large quantities of data in order to
discover valid, novel, potentially useful, and ultimately understandable
patterns in data.
Valid: The patterns hold in general.
Novel: We did not know the pattern beforehand.
Useful: We can devise actions from the patterns.
Understandable: We can interpret and comprehend the
patterns.
3
Case Study: Bank
• Business goal: Sell more home equity loans
• Current models:
• Customers with college-age children use home equity loans to pay for tuition
• Customers with variable income use home equity loans to even out stream of income
• Data:
• Large data warehouse
• Consolidates data from 42 operational data sources
4
Case Study: Bank (Contd.)
1. Select subset of customer records who have received
home equity loan offer
• Customers who declined
• Customers who signed up
5
Income     Number of Children   Average Checking Account Balance   …   Response
$40,000    2                    $1500                               …   Yes
$75,000    0                    $5000                               …   No
$50,000    1                    $3000                               …   No
…          …                    …                                   …   …
Case Study: Bank (Contd.)
2. Find rules to predict whether a customer would respond to
home equity loan offer
IF (Salary < 40k) and
(numChildren > 0) and
(ageChild1 > 18 and ageChild1 < 22)
THEN YES
…
6
Case Study: Bank (Contd.)
3. Group customers into clusters and investigate clusters
7
Group 2
Group 3
Group 4
Group 1
Case Study: Bank (Contd.)
4. Evaluate results:
• Many “uninteresting” clusters
• One interesting cluster! Customers with both
business and personal accounts; unusually high
percentage of likely respondents
8
Example: Bank
(Contd.)
Action:
• New marketing campaign
Result:
• Acceptance rate for home equity offers more than
doubled
9
Example Application: Fraud Detection
• Industries: Health care, retail, credit card services,
telecom, B2B relationships
• Approach:
• Use historical data to build models of fraudulent
behavior
• Deploy models to identify fraudulent instances
10
Fraud Detection (Contd.)
• Examples:
• Auto insurance: Detect groups of people who stage accidents to collect insurance
• Medical insurance: Fraudulent claims
• Money laundering: Detect suspicious money transactions (US Treasury's Financial Crimes
Enforcement Network)
• Telecom industry: Find calling patterns that deviate from a norm (origin and destination of
the call, duration, time of day, day of week).
11
Other Example Applications
• CPG: Promotion analysis
• Retail: Category management
• Telecom: Call usage analysis, churn
• Healthcare: Claims analysis, fraud detection
• Transportation/Distribution: Logistics management
• Financial Services: Credit analysis, fraud detection
• Data service providers: Value-added data analysis
12
What is a Data Mining Model?
A data mining model is a description of a certain aspect of a
dataset. It produces output values for an assigned set of
inputs.
Examples:
• Clustering
• Linear regression model
• Classification model
• Frequent itemsets and association rules
• Support Vector Machines
13
Data Mining Methods
14
Overview
• Several well-studied tasks
• Classification
• Clustering
• Frequent Patterns
• Many methods proposed for each
• Focus in database and data mining community:
• Scalability
• Managing the process
• Exploratory analysis
15
Classification
Goal:
Learn a function that assigns a record to one of several
predefined classes.
Requirements on the model:
• High accuracy
• Understandable by humans, interpretable
• Fast construction for very large training databases
Classification
Example application: telemarketing
17
Classification (Contd.)
• Decision trees are one approach to classification.
• Other approaches include:
• Linear Discriminant Analysis
• k-nearest neighbor methods
• Logistic regression
• Neural networks
• Support Vector Machines
Classification Example
• Training database:
• Two predictor attributes:
Age and Car-type (Sport, Minivan
and Truck)
• Age is ordered, Car-type is
categorical attribute
• Class label indicates
whether person bought
product
• Dependent attribute is categorical
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
Classification Problem
• Setup: each record has predictor attributes X1, …, Xk and an output attribute Y; a model d maps
predictor values to a predicted output, and P is the underlying probability distribution of records.
• If Y is categorical, the problem is a classification problem,
and we use C instead of Y. |dom(C)| = J, the number of
classes.
• C is the class label, and d is called a classifier.
• Let r be a record randomly drawn from P.
Define the misclassification rate of d:
RT(d,P) = P(d(r.X1, …, r.Xk) != r.C)
• Problem definition: Given dataset D that is a random sample
from probability distribution P, find classifier d such that
RT(d,P) is minimized.
22
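To make the definition concrete, here is a small illustrative sketch (not from the slides) that estimates RT(d, P) empirically on a sample: d is a hypothetical hand-written classifier over the Age/Car records used later in the tutorial, and the estimate is just the fraction of misclassified records.

from __future__ import annotations

# Illustrative sketch: estimate the misclassification rate RT(d, P) of a
# classifier d on a sample drawn from the (unknown) distribution P.
records = [  # (Age, Car, Class) -- the example training records
    (20, "M", "Yes"), (30, "M", "Yes"), (25, "T", "No"), (30, "S", "Yes"),
    (40, "S", "Yes"), (20, "T", "No"), (30, "M", "Yes"), (25, "M", "Yes"),
    (40, "M", "Yes"), (20, "S", "No"),
]

def d(age: int, car: str) -> str:
    """A hand-written toy classifier (hypothetical, for illustration only)."""
    if age >= 30:
        return "Yes"
    return "Yes" if car == "M" else "No"

# Empirical misclassification rate: fraction of records where d(r.X) != r.C.
errors = sum(1 for age, car, cls in records if d(age, car) != cls)
print("estimated RT(d) =", errors / len(records))   # 0.0 on this toy sample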
Regression Problem
• If Y is numerical, the problem is a regression problem.
• Y is called the dependent variable, d is called a regression function.
• Let r be a record randomly drawn from P.
Define mean squared error rate of d:
RT(d,P) = E(r.Y - d(r.X1, …, r.Xk))2
• Problem definition: Given dataset D that is a random sample from probability
distribution P, find regression function d such that RT(d,P) is minimized.
23
Regression Example
• Example training database
• Two predictor attributes:
Age and Car-type (Sport, Minivan
and Truck)
• Spent indicates how much person
spent during a recent visit to the
web site
• Dependent attribute is numerical
Age Car Spent
20 M $200
30 M $150
25 T $300
30 S $220
40 S $400
20 T $80
30 M $100
25 M $125
40 M $500
20 S $420
Decision Trees
25
What are Decision Trees?
[Figure: a decision tree. The root splits on Age (<30 vs. >=30); the <30 branch splits on Car Type (Minivan: YES; Sports, Truck: NO); the >=30 branch predicts YES. A companion plot shows the corresponding partitioning of the Age (0-60) x Car Type space into YES and NO regions.]
Decision Trees
• A decision tree T encodes d (a classifier or regression function) in form
of a tree.
• A node t in T without children is called a leaf node. Otherwise t is
called an internal node.
27
Internal Nodes
• Each internal node has an associated splitting predicate. Most
common are binary predicates.
Example predicates:
• Age <= 20
• Profession in {student, teacher}
• 5000*Age + 3*Salary – 10000 > 0
28
Leaf Nodes
Consider leaf node t:
• Classification problem: Node t is labeled with one
class label c in dom(C)
• Regression problem: Two choices
• Piecewise constant model:
t is labeled with a constant y in dom(Y).
• Piecewise linear model:
t is labeled with a linear model
Y = yt + Σ aiXi
30
Example
Encoded classifier:
If (age<30 and
carType=Minivan)
Then YES
If (age <30 and
(carType=Sports or
carType=Truck))
Then NO
If (age >= 30)
Then YES
31
[Figure: the decision tree that encodes this classifier (root split on Age at 30; Car Type split on the <30 branch).]
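As a sanity check, the encoded classifier above can be written directly as nested conditionals. This is a minimal sketch; the attribute and value names follow the example tree.

def classify(age: int, car_type: str) -> str:
    """Direct encoding of the example decision tree."""
    if age < 30:
        if car_type == "Minivan":
            return "YES"
        if car_type in ("Sports", "Truck"):
            return "NO"
    return "YES"   # age >= 30

print(classify(25, "Minivan"))  # YES
print(classify(25, "Truck"))    # NO
print(classify(45, "Truck"))    # YES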
Issues in Tree Construction
• Three algorithmic components:
• Split Selection Method
• Pruning Method
• Data Access Method
Top-Down Tree Construction
BuildTree(Node n, Training database D,
Split Selection Method S)
[ (1) Apply S to D to find splitting criterion ]
(1a) for each predictor attribute X
(1b) Call S.findSplit(AVC-set of X)
(1c) endfor
(1d) S.chooseBest();
(2) if (n is not a leaf node) ...
S: C4.5, CART, CHAID, FACT, ID3, GID3, QUEST, etc.
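A compact Python sketch of the same top-down recursion, with the split selection method S passed in as a callable. It is illustrative only: it works directly on raw rows rather than AVC-sets, and find_best_split is a placeholder for whatever C4.5, CART, etc. would supply.

def build_tree(data, find_best_split, min_size=2):
    """Top-down tree construction sketch; each row is (x1, ..., xk, class)."""
    labels = [row[-1] for row in data]
    if len(set(labels)) == 1 or len(data) < min_size:
        return {"leaf": max(set(labels), key=labels.count)}   # majority class

    split = find_best_split(data)        # e.g., entropy/gini-based search
    if split is None:
        return {"leaf": max(set(labels), key=labels.count)}

    predicate, _quality = split          # predicate: row -> bool
    left = [row for row in data if predicate(row)]
    right = [row for row in data if not predicate(row)]
    if not left or not right:
        return {"leaf": max(set(labels), key=labels.count)}

    return {"split": predicate,
            "left": build_tree(left, find_best_split, min_size),
            "right": build_tree(right, find_best_split, min_size)}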
Split Selection Method
• Numerical Attribute: Find a split point that separates
the (two) classes
[Figure: Yes/No class labels plotted along the Age axis, with a candidate split point between 30 and 35.]
Split Selection Method (Contd.)
• Categorical Attributes: How to group?
Sport: Truck: Minivan:
(Sport, Truck) -- (Minivan)
(Sport) --- (Truck, Minivan)
(Sport, Minivan) --- (Truck)
Impurity-based Split Selection
Methods
• Split selection method has two parts:
• Search space of possible splitting criteria. Example: All
splits of the form “age <= c”.
• Quality assessment of a splitting criterion
• Need to quantify the quality of a split: Impurity
function
• Example impurity functions: Entropy, gini-index, chi-
square index
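For concreteness, a minimal sketch (not from the slides) of one impurity function, the gini index, and of scoring a candidate split of the form "age <= c" by the weighted impurity of the two resulting partitions; lower is better.

def gini(labels):
    """Gini impurity of a multiset of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_impurity(rows, c):
    """Weighted impurity of the split 'age <= c'; rows are (age, class)."""
    left = [cls for age, cls in rows if age <= c]
    right = [cls for age, cls in rows if age > c]
    n = len(rows)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

rows = [(20, "No"), (25, "Yes"), (30, "Yes"), (40, "Yes"), (20, "No")]
best_c = min(range(20, 40, 5), key=lambda c: split_impurity(rows, c))
print(best_c)   # 20 on this toy data: the two No records are exactly age <= 20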
Data Access Method
• Goal: Scalable decision tree construction, using the
complete training database
AVC-Sets
Training Database AVC-Sets
Age Yes No
20 1 2
25 1 1
30 3 0
40 2 0
Car Yes No
Sport 2 1
Truck 0 2
Minivan 5 0
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
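An AVC-set (Attribute-Value, Class label) is just a per-attribute table of class counts, so all of them can be built in one scan. A minimal sketch, assuming the records are (Age, Car, Class) tuples as in the table above.

from collections import defaultdict

records = [
    (20, "M", "Yes"), (30, "M", "Yes"), (25, "T", "No"), (30, "S", "Yes"),
    (40, "S", "Yes"), (20, "T", "No"), (30, "M", "Yes"), (25, "M", "Yes"),
    (40, "M", "Yes"), (20, "S", "No"),
]

# One scan over the data builds the AVC-set of every predictor attribute:
# avc[attribute][(value, class)] -> count
avc = {"Age": defaultdict(int), "Car": defaultdict(int)}
for age, car, cls in records:
    avc["Age"][(age, cls)] += 1
    avc["Car"][(car, cls)] += 1

print(dict(avc["Age"]))   # e.g., (20, 'No'): 2, (30, 'Yes'): 3, ...
print(dict(avc["Car"]))   # e.g., ('M', 'Yes'): 5, ('T', 'No'): 2, ...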
Motivation for Data Access Methods
Training Database
Age
<30 >=30
Left Partition Right Partition
In principle, one pass over training database for each node.
Can we improve?
RainForest Algorithms: RF-Hybrid
First scan:
Main Memory
Database
AVC-Sets
Build AVC-sets for root
RainForest Algorithms: RF-Hybrid
Second Scan:
Main Memory
Database
AVC-Sets
Age<30
Build AVC sets for children of the root
RainForest Algorithms: RF-Hybrid
Third Scan:
Main Memory
Database
Age<30
Sal<20k Car==S
Partition 1 Partition 2 Partition 3 Partition 4
As we expand the tree, we run out of memory and have to
“spill” partitions to disk, then recursively read and
process them later.
RainForest Algorithms: RF-Hybrid
Further optimization: While writing partitions, concurrently build AVC-groups of as
many nodes as possible in-memory. This should remind you of Hybrid Hash-Join!
Main Memory
Database
Age<30
Sal<20k Car==S
Partition 1 Partition 2 Partition 3 Partition 4
CLUSTERING
44
Problem
• Given points in a multidimensional space, group them into a small
number of clusters, using some measure of “nearness”
• E.g., Cluster documents by topic
• E.g., Cluster users by similar interests
45
Clustering
• Output: (k) groups of records called clusters, such that the
records within a group are more similar to each other than to
records in other groups
• Representative points for each cluster
• Labeling of each record with its cluster number
• Other description of each cluster
• This is unsupervised learning: No record labels are given to
learn from
• Usage:
• Exploratory data mining
• Preprocessing step (e.g., outlier detection)
46
Clustering (Contd.)
• Requirements: Need to define “similarity”
between records
• Important: Use the “right” similarity (distance)
function
• Scale or normalize all attributes. Example: seconds,
hours, days
• Assign different weights to reflect importance of the
attribute
• Choose appropriate measure (e.g., L1, L2)
49
Approaches
• Centroid-based: Assume we have k clusters, guess
at the centers, assign points to nearest center,
e.g., K-means; over time, centroids shift
• Hierarchical: Assume there is one cluster per
point, and repeatedly merge nearby clusters using
some distance threshold
51
Scalability: Do this with the fewest passes over the data,
ideally sequentially
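A minimal centroid-based (k-means style) sketch for 1-D points, just to make the assign-to-nearest-center / recompute-center loop concrete; the points and initial centers are made up for illustration.

# Minimal k-means sketch (1-D points, k = 2).
points = [1.0, 1.2, 0.8, 8.0, 8.5, 7.9]
centers = [0.0, 10.0]                      # initial guesses

for _ in range(10):                        # a few refinement passes
    clusters = [[], []]
    for p in points:                       # assignment step: nearest center
        nearest = min(range(2), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    centers = [sum(c) / len(c) if c else centers[i]   # update step
               for i, c in enumerate(clusters)]

print(centers)   # roughly [1.0, 8.13]: the centroids have shifted to the data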
Scalable Clustering Algorithms for Numeric Attributes
CLARANS
DBSCAN
BIRCH
CLIQUE
CURE
• The above algorithms can be used to cluster documents after
reducing their dimensionality using SVD
54
Birch [ZRL96]
Pre-cluster data points using “CF-tree” data structure
Clustering Feature (CF)
Allows incremental merging of clusters!
Points to Note
• Basic algorithm works in a single pass to condense metric data using
spherical summaries
• Can be incremental
• Additional passes cluster CFs to detect non-spherical clusters
• Approximates density function
• Extensions to non-metric data
60
Market Basket Analysis:
Frequent Itemsets
63
Market Basket Analysis
• Consider shopping cart filled with several items
• Market basket analysis tries to answer the following questions:
• Who makes purchases?
• What do customers buy?
64
Market Basket Analysis
• Given:
• A database of customer
transactions
• Each transaction is a set of
items
• Goal:
• Extract rules
65
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
Market Basket Analysis (Contd.)
• Co-occurrences
• 80% of all customers purchase items X, Y and Z together.
• Association rules
• 60% of all customers who purchase X and Y also buy Z.
• Sequential patterns
• 60% of customers who first buy X also purchase Y within
three weeks.
66
Confidence and Support
We prune the set of all possible association rules using
two interestingness measures:
• Confidence of a rule:
• X => Y has confidence c if P(Y|X) = c
• Support of a rule:
• X => Y has support s if P(XY) = s
We can also define
• Support of a co-occurrence XY:
• XY has support s if P(XY) = s
67
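A small sketch computing support and confidence from the example transactions shown on the next slide, grouping the table's rows by TID into item sets; the helper names are chosen for illustration.

# Transactions from the example table, grouped by TID into item sets.
transactions = {
    111: {"Pen", "Ink", "Milk", "Juice"},
    112: {"Pen", "Ink", "Milk"},
    113: {"Pen", "Milk"},
    114: {"Pen", "Ink", "Juice"},
}

def support(itemset):
    """P(itemset): fraction of transactions containing every item in it."""
    hits = sum(1 for items in transactions.values() if itemset <= items)
    return hits / len(transactions)

def confidence(lhs, rhs):
    """P(rhs | lhs) = support(lhs union rhs) / support(lhs)."""
    return support(lhs | rhs) / support(lhs)

print(support({"Pen", "Milk"}))          # 0.75
print(confidence({"Pen"}, {"Milk"}))     # 0.75
print(confidence({"Ink"}, {"Pen"}))      # 1.0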
Example
• Example rule:
{Pen} => {Milk}
Support: 75%
Confidence: 75%
• Another example:
{Ink} => {Pen}
Support: 75%
Confidence: 100%
68
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
Exercise
• Can you find all itemsets with
support >= 75%?
69
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
Exercise
• Can you find all association rules with
support >= 50%?
70
TID CID Date Item Qty
111 201 5/1/99 Pen 2
111 201 5/1/99 Ink 1
111 201 5/1/99 Milk 3
111 201 5/1/99 Juice 6
112 105 6/3/99 Pen 1
112 105 6/3/99 Ink 1
112 105 6/3/99 Milk 1
113 106 6/5/99 Pen 1
113 106 6/5/99 Milk 1
114 201 7/1/99 Pen 2
114 201 7/1/99 Ink 2
114 201 7/1/99 Juice 4
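One way to work these two exercises is a level-wise (Apriori-style) enumeration: count 1-itemsets, keep only the frequent ones, and extend only those to larger candidates. A minimal sketch, assuming the transactions dictionary from the earlier support/confidence example.

from itertools import combinations

transactions = {
    111: {"Pen", "Ink", "Milk", "Juice"},
    112: {"Pen", "Ink", "Milk"},
    113: {"Pen", "Milk"},
    114: {"Pen", "Ink", "Juice"},
}

def support(itemset):
    return sum(1 for t in transactions.values() if itemset <= t) / len(transactions)

def frequent_itemsets(min_support):
    """Level-wise search: only frequent k-itemsets are extended to (k+1)-itemsets."""
    items = set().union(*transactions.values())
    current = [frozenset([i]) for i in sorted(items)]
    frequent = []
    while current:
        current = [s for s in current if support(s) >= min_support]
        frequent.extend(current)
        # Candidate (k+1)-itemsets: pairwise unions of the surviving k-itemsets.
        current = list({a | b for a, b in combinations(current, 2)
                        if len(a | b) == len(a) + 1})
    return frequent

for s in frequent_itemsets(0.75):
    print(sorted(s))   # {Ink}, {Milk}, {Pen}, {Ink, Pen}, {Milk, Pen}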
Extensions
• Imposing constraints
• Only find rules involving the dairy department
• Only find rules involving expensive products
• Only find rules with “whiskey” on the right hand side
• Only find rules with “milk” on the left hand side
• Hierarchies on the items
• Calendars (every Sunday, every 1st of the month)
71
Market Basket Analysis: Applications
• Sample Applications
• Direct marketing
• Fraud detection for medical insurance
• Floor/shelf planning
• Web site layout
• Cross-selling
72
DBMS Support for DM
73
Why Integrate DM into a DBMS?
74
[Figure: the usual practice: data is extracted/copied out of the DBMS and mined separately to build models, raising consistency questions between the copy and the live data.]
Integration Objectives
• Avoid isolation of querying
from mining
• Difficult to do “ad-hoc”
mining
• Provide simple
programming approach to
creating and using DM
models
• Make it possible to add
new models
• Make it possible to add
new, scalable algorithms
75
Analysts (users) DM Vendors
SQL/MM: Data Mining
• A collection of classes that provide a standard interface for invoking
DM algorithms from SQL systems.
• Four data models are supported:
• Frequent itemsets, association rules
• Clusters
• Regression trees
• Classification trees
76
DATA MINING SUPPORT IN MICROSOFT SQL SERVER *
77
* Thanks to Surajit Chaudhuri for permission to use/adapt his slides
Key Design Decisions
• Adopt relational data representation
• A Data Mining Model (DMM) as a “tabular” object (externally; can
be represented differently internally)
• Language-based interface
• Extension of SQL
• Standard syntax
78
DM Concepts to Support
• Representation of input (cases)
• Representation of models
• Specification of training step
• Specification of prediction step
79
Should be independent of specific algorithms
What are “Cases”?
• DM algorithms analyze “cases”
• The “case” is the entity being categorized and classified
• Examples
• Customer credit risk analysis: Case = Customer
• Product profitability analysis: Case = Product
• Promotion success analysis: Case = Promotion
• Each case encapsulates all we know about the entity
80
Cases as Records: Examples
Age Car Class
20 M Yes
30 M Yes
25 T No
30 S Yes
40 S Yes
20 T No
30 M Yes
25 M Yes
40 M Yes
20 S No
81
Cust ID Age
Marital
Status
Wealth
1 35 M 380,000
2 20 S 50,000
3 57 M 470,000
Types of Columns
• Keys: Columns that uniquely identify a case
• Attributes: Columns that describe a case
• Value: A state associated with the attribute in a specific case
• Attribute Property: Columns that describe an attribute
• Unique for a specific attribute value (TV is always an appliance)
• Attribute Modifier: Columns that represent additional “meta” information for an
attribute
• Weight of a case, Certainty of prediction
82
Cust ID Age
Marital
Status
Wealth
Product Purchases
Product Quantity Type
1 35 M 380,000 TV 1 Appliance
Coke 6 Drink
Ham 3 Food
More on Columns
• Properties describe attributes
• Can represent generalization hierarchy
• Distribution information associated with attributes
• Discrete/Continuous
• Nature of Continuous distributions
• Normal, Log_Normal
• Other Properties (e.g., ordered, not null)
83
Representing a DMM
• Specifying a Model
• Columns to predict
• Algorithm to use
• Special parameters
• Model is represented as a (nested) table
• Specification = Create table
• Training = Inserting data into the table
• Predicting = Querying the table
84
[Figure: the example decision tree (Age, Car Type) that the mining model represents.]
CREATE MINING MODEL
CREATE MINING MODEL [Age Prediction]
(
[Gender] TEXT DISCRETE ATTRIBUTE,
[Hair Color] TEXT DISCRETE ATTRIBUTE,
[Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT
)
USING [Microsoft Decision Tree]
85
Name of model
Name of algorithm
CREATE MINING MODEL
CREATE MINING MODEL [Age Prediction]
(
[Customer ID] LONG KEY,
[Gender] TEXT DISCRETE ATTRIBUTE,
[Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT,
[ProductPurchases] TABLE (
[ProductName] TEXT KEY,
[Quantity] DOUBLE NORMAL CONTINUOUS,
[ProductType] TEXT DISCRETE RELATED TO [ProductName]
)
)
USING [Microsoft Decision Tree]
86
Note that the ProductPurchases column is a nested table.
SQL Server computes this field when data is “inserted”.
Training a DMM
• Training a DMM requires passing it “known” cases
• Use an INSERT INTO in order to “insert” the data to the DMM
• The DMM will usually not retain the inserted data
• Instead it will analyze the given cases and build the DMM content
(decision tree, segmentation model)
• INSERT [INTO] <mining model name>
[(columns list)]
<source data query>
87
INSERT INTO
88
INSERT INTO [Age Prediction]
(
[Gender],[Hair Color], [Age]
)
OPENQUERY([Provider=MSOLESQL…,
‘SELECT
[Gender], [Hair Color], [Age]
FROM [Customers]’
)
Executing Insert Into
• The DMM is trained
• The model can be retrained or incrementally refined
• Content (rules, trees, formulas) can be explored
• Prediction queries can be executed
89
What are Predictions?
• Predictions apply the trained model to estimate missing
attributes in a data set
• Predictions = Queries
• Specification:
• Input data set
• A trained DMM (think of it as a truth table, with one row per
combination of predictor-attribute values; this is only conceptual)
• Binding (mapping) information between the input data and the
DMM
90
Prediction Join
SELECT [Customers].[ID],
MyDMM.[Age],
PredictProbability(MyDMM.[Age])
FROM
MyDMM PREDICTION JOIN [Customers]
ON MyDMM.[Gender] = [Customers].[Gender] AND
MyDMM.[Hair Color] =
[Customers].[Hair Color]
91
Exploratory Mining:
Combining OLAP and DM
92
Databases and Data Mining
• What can database systems offer in the grand challenge of
understanding and learning from the flood of data we’ve unleashed?
• The plumbing
• Scalability
93
Databases and Data Mining
• What can database systems offer in the grand challenge of
understanding and learning from the flood of data we’ve unleashed?
• The plumbing
• Scalability
• Ideas!
• Declarativeness
• Compositionality
• Ways to conceptualize your data
94
Multidimensional Data Model
• One fact table D=(X,M)
• X=X1, X2, ... Dimension attributes
• M=M1, M2,… Measure attributes
• Domain hierarchy for each dimension attribute:
• Collection of domains Hier(Xi) = (Di^(1), ..., Di^(k))
• The extended domain: EXi = ∪_{1 ≤ k ≤ t} DXi^(k)
• Value mapping function: γ_{D1→D2}(x)
• e.g., γ_{month→year}(12/2005) = 2005
• Form the value hierarchy graph
• Stored as dimension table attribute (e.g., week for a time value) or
conversion functions (e.g., month, quarter)
95
Multidimensional Data

FactID   Auto     Loc   Repair
p1       F150     NY    100
p2       Sierra   NY    500
p3       F150     MA    100
p4       Sierra   MA    200

[Figure: the facts p1-p4 plotted over the two dimension attributes. Location hierarchy: State (MA, NY, TX, CA) → Region (East, West) → ALL. Automobile hierarchy: Model (Civic, Camry, Sierra, F150) → Category (Sedan, Truck) → ALL.]
Cube Space
• Cube space: C = EX1 × EX2 × … × EXd
• Region: Hyper-rectangle in cube space
• c = (v1, v2, …, vd), vi ∈ EXi
• Region granularity:
• gran(c) = (d1, d2, ..., dd), di = Domain(c.vi)
• Region coverage:
• coverage(c) = all facts in c
• Region set: All regions with the same granularity
97
OLAP Over Imprecise Data
with Doug Burdick, Prasad Deshpande, T.S. Jayram, and Shiv
Vaithyanathan
In VLDB 05, 06 joint work with IBM Almaden
98
Imprecise Data

FactID   Auto     Loc   Repair
p1       F150     NY    100
p2       Sierra   NY    500
p3       F150     MA    100
p4       Sierra   MA    200
p5       Truck    MA    100

[Figure: the same cube with p5 added. p5's Auto value is the non-leaf node Truck, so p5 is imprecise: it maps to the region (Truck, MA) rather than to a single cell.]
Querying Imprecise Facts
100
FactID   Auto     Loc   Repair
p1       F150     NY    100
p2       Sierra   NY    500
p3       F150     MA    100
p4       Sierra   MA    200
p5       Truck    MA    100

Query: Auto = F150, Loc = MA
SUM(Repair) = ???   How do we treat p5?

[Figure: p5 straddles the cells (F150, MA) and (Sierra, MA) within the East region.]
Allocation (1)
101
[Figure: the imprecise fact p5 = (Truck, MA, 100) must be allocated between the cells (F150, MA) and (Sierra, MA).]
Allocation (2)
102

ID   FactID   Auto     Loc   Repair   Weight
1    p1       F150     NY    100      1.0
2    p2       Sierra   NY    500      1.0
3    p3       F150     MA    100      1.0
4    p4       Sierra   MA    200      1.0
5    p5       F150     MA    100      0.5
6    p5       Sierra   MA    100      0.5

[Figure: p5 replaced by two weighted copies, one in (F150, MA) and one in (Sierra, MA).]

(Huh? Why 0.5 / 0.5? Hold on to that thought.)
Allocation (3)
103

ID   FactID   Auto     Loc   Repair   Weight
1    p1       F150     NY    100      1.0
2    p2       Sierra   NY    500      1.0
3    p3       F150     MA    100      1.0
4    p4       Sierra   MA    200      1.0
5    p5       F150     MA    100      0.5
6    p5       Sierra   MA    100      0.5

Query: Auto = F150, Loc = MA
SUM(Repair) = 150. Query the extended data model!
Allocation Policies
• The procedure for assigning allocation weights is referred to as an
allocation policy:
• Each allocation policy uses different information to assign allocation weights
• Reflects assumption about the correlation structure in the data
• Leads to EM-style iterative algorithms for allocating imprecise facts, maximizing
likelihood of observed data
104
Allocation Policy: Count
p_{c1,p5} = Count(c1) / (Count(c1) + Count(c2)) = 2 / (2 + 1)
p_{c2,p5} = Count(c2) / (Count(c1) + Count(c2)) = 1 / (2 + 1)

105

[Figure: the region of p5 spans cells c1 and c2; c1 holds two precise facts and c2 holds one, so p5 is allocated 2/3 to c1 and 1/3 to c2.]
Allocation Policy: Measure
p_{c1,p5} = Sales(c1) / (Sales(c1) + Sales(c2)) = 700 / (700 + 200)
p_{c2,p5} = Sales(c2) / (Sales(c1) + Sales(c2)) = 200 / (700 + 200)

ID   Sales
p1   100
p2   150
p3   300
p4   200
p5   250
p6   400

106

[Figure: the same cells c1 and c2; the allocation weights are now proportional to the Sales measure of the precise facts in each cell.]
Allocation Policy Template
p_{c1,p5} = Q(c1) / (Q(c1) + Q(c2)),   p_{c2,p5} = Q(c2) / (Q(c1) + Q(c2))

In general, for an imprecise fact r and a cell c in its region:

p_{c,r} = Q(c) / Σ_{c' ∈ region(r)} Q(c') = Q(c) / Qsum(r)

where Q is the chosen aggregate (e.g., Count or Sales).

[Figure: the template instantiated on the example region spanning cells c1 and c2.]
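A minimal sketch of this template in code (not from the slides): each candidate cell's allocation weight is its share of the aggregate Q over the imprecise fact's region. Here Q is the Count policy, and the fact/cell values are the running example's.

# Allocation template sketch: weight(c) = Q(c) / sum of Q over region(r).
precise_facts = [("F150", "MA"), ("Sierra", "MA"), ("F150", "NY")]

def allocate(region_cells, Q):
    """Return {cell: allocation weight} for one imprecise fact."""
    total = sum(Q(c) for c in region_cells)
    if total == 0:                       # degenerate case: spread uniformly
        return {c: 1.0 / len(region_cells) for c in region_cells}
    return {c: Q(c) / total for c in region_cells}

def count_Q(cell):
    """Count policy: Q(c) = number of precise facts already in cell c."""
    return sum(1 for f in precise_facts if f == cell)

# Imprecise fact p5 = (Truck, MA) completes to (F150, MA) or (Sierra, MA).
region = [("F150", "MA"), ("Sierra", "MA")]
print(allocate(region, count_Q))   # {('F150','MA'): 0.5, ('Sierra','MA'): 0.5}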
What is a Good Allocation Policy?
108
[Figure: the example cube with facts p1-p5; p5 is imprecise over the (Truck, MA) region.]
We propose desiderata that enable
appropriate definition of query
semantics for imprecise data
Query: COUNT
Desideratum I: Consistency
• Consistency specifies
the relationship
between answers to
related queries on a
fixed data set
109
[Figure: the example cube with facts p1-p5, used to illustrate consistency across related queries (e.g., a query on East and the queries on its component cells).]
Desideratum II: Faithfulness
• Faithfulness specifies the relationship between answers to a
fixed query on related data sets
[Figure: three related data sets (Data Set 1, Data Set 2, Data Set 3) over the same facts p1-p5, used to illustrate faithfulness.]
Results on Query Semantics
• Evaluating queries over extended data model yields
expected value of the aggregation operator over all possible
worlds
• Efficient query evaluation algorithms available for SUM,
COUNT; more expensive dynamic programming algorithm
for AVERAGE
• Consistency and faithfulness for SUM, COUNT are satisfied under
appropriate conditions
• (Bound-)Consistency does not hold for AVERAGE, but holds for
E(SUM)/E(COUNT)
• Weak form of faithfulness holds
• Opinion pooling with LinOP: Similar to AVERAGE
111
113

[Figure: several possible worlds (w1-w4) for the example cube, each obtained by completing the imprecise facts to specific cells.]

Imprecise facts lead to many possible worlds [Kripke63, …]
Query Semantics
• Given all possible worlds together with their probabilities,
queries are easily answered using expected values
• But number of possible worlds is exponential!
• Allocation gives facts weighted assignments to possible
completions, leading to an extended version of the data
• Size increase is linear in number of (completions of) imprecise
facts
• Queries operate over this extended version
114
Exploratory Mining:
Prediction Cubes
with Beechun Chen, Lei Chen, and Yi Lin
In VLDB 05; EDAM Project
115
The Idea
• Build OLAP data cubes in which cell values represent
decision/prediction behavior
• In effect, build a tree for each cell/region in the cube—observe
that this is not the same as a collection of trees used in an
ensemble method!
• The idea is simple, but it leads to promising data mining tools
• Ultimate objective: Exploratory analysis of the entire space of
“data mining choices”
• Choice of algorithms, data conditioning parameters …
116
Example (1/7): Regular OLAP
117
Location   Time      # of App.
…          …         …
AL, USA    Dec, 04   2
…          …         …
WY, USA    Dec, 04   3

Goal: Look for patterns of unusually high numbers of applications.
Z: Dimensions   Y: Measure

[Figure: dimension hierarchies. Time: Month (Jan 86, …, Dec 86) → Year (85, 86, …, 04) → All. Location: State (AL, …, WY) → Country (Japan, USA, Norway) → All.]
Example (2/7): Regular OLAP
118
Location   Time      # of App.
…          …         …
AL, USA    Dec, 04   2
…          …         …
WY, USA    Dec, 04   3

Goal: Look for patterns of unusually high numbers of applications.
Z: Dimensions   Y: Measure
Cell value: Number of loan applications

[Figure: the resulting OLAP cube at the [State, Month] level; rolling up aggregates to coarser regions (e.g., [Country, Year]), drilling down expands to finer regions.]
Example (3/7): Decision Analysis
119
Model h(X, Z(D))
E.g., decision tree
No
…
F
Black
Dec, 04
WY, USA
…
…
…
…
…
…
Yes
…
M
White
Dec, 04
AL, USA
Approval
…
Sex
Race
Time
Location
Goal: Analyze a bank’s loan decision process w.r.t.
two dimensions: Location and Time
All
85 86 04
Jan., 86 Dec., 86
All
Year
Month
Location Time
All
Japan USA Norway
AL WY
All
Country
State
Z: Dimensions X: Predictors Y: Class
Fact table D
Cube subset
Example (3/7): Decision Analysis
• Are there branches (and time windows) where approvals
were closely tied to sensitive attributes (e.g., race)?
• Suppose you partitioned the training data by location and time,
chose the partition for a given branch and time window, and built
a classifier. You could then ask, “Are the predictions of this
classifier closely correlated with race?”
• Are there branches and times with decision making
reminiscent of 1950s Alabama?
• Requires comparison of classifiers trained using different subsets
of data.
120
Example (4/7): Prediction Cubes
121
Model h(X, [USA, Dec 04](D))
E.g., decision tree
2004 2003 …
Jan … Dec Jan … Dec …
CA 0.4 0.8 0.9 0.6 0.8 … …
USA 0.2 0.3 0.5 … … …
… … … … … … … …
1. Build a model using data
from USA in Dec., 1985
2. Evaluate that model
Measure in a cell:
• Accuracy of the model
• Predictiveness of Race
measured based on that
model
• Similarity between that
model and a given model
N
…
F
Black
Dec, 04
WY, USA
…
…
…
…
…
…
Y
…
M
White
Dec, 04
AL ,USA
Approval
…
Sex
Race
Time
Location
Data [USA, Dec 04](D)
Example (5/7): Model-Similarity
122
Given:
- Data table D
- Target model h0(X)
- Test set D w/o labels

Data table D:
Location   Time      Race    Sex   …   Approval
AL, USA    Dec, 04   White   M     …   Yes
…          …         …       …     …   …
WY, USA    Dec, 04   Black   F     …   No

Test set D:
Race    Sex   …
White   F     …
…       …     …
Black   M     …

For each cell: build a model on that cell’s data, apply it and h0(X) to the test set, and record their similarity.

Level: [Country, Month]
[Table: similarity scores per (Country, Month) cell.]

The loan decision process in USA during Dec 04
was similar to a discriminatory decision model
h0(X)
Example (6/7): Predictiveness
123
Data table D:
Location   Time      Race    Sex   …   Approval
AL, USA    Dec, 04   White   M     …   Yes
…          …         …       …     …   …
WY, USA    Dec, 04   Black   F     …   No

Given:
- Data table D
- Attributes V
- Test set D w/o labels:
  Race    Sex   …
  White   F     …
  …       …     …
  Black   M     …

For each cell: build two models, h(X) and h(X − V), and measure how much their predictions on the test set differ (the predictiveness of V).

Level: [Country, Month]
Cell value: Predictiveness of V
        2004              2003            …
        Jan   …    Dec    Jan   …   Dec   …
CA      0.4   0.2  0.3    0.6   0.5  …    …
USA     0.2   0.3  0.9    …     …    …    …
…       …     …    …      …     …    …    …

Race was an important predictor of loan approval
decision in USA during Dec 04
Model Accuracy
• A probabilistic view of classifiers: A dataset is a random
sample from an underlying pdf p*(X, Y), and a classifier
h(x; D) = argmax_y p*(Y = y | X = x, D)
• i.e., A classifier approximates the pdf by predicting the “most
likely” y value
• Model Accuracy:
• E_{x,y}[ I( h(x; D) = y ) ], where (x, y) is drawn from p*(X, Y | D), and
I(s) = 1 if the statement s is true; I(s) = 0 otherwise
• In practice, since p* is an unknown distribution, we use a set-
aside test set or cross-validation to estimate model accuracy.
124
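In code, the set-aside-test-set estimate is simply the average of the indicator I(h(x) = y) over held-out pairs. A minimal sketch with a hypothetical model h and made-up test records.

# Sketch: estimate model accuracy E[ I(h(x) = y) ] on a set-aside test set.
def h(x):
    """Hypothetical trained classifier: approve iff income >= 40."""
    return "Yes" if x["income"] >= 40 else "No"

test_set = [
    ({"income": 55}, "Yes"),
    ({"income": 30}, "No"),
    ({"income": 45}, "No"),    # a case the model gets wrong
    ({"income": 20}, "No"),
]

accuracy = sum(1 for x, y in test_set if h(x) == y) / len(test_set)
print(accuracy)   # 0.75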
Model Similarity
• The prediction similarity between two models, h1(X) and
h2(X), on test set D is

(1 / |D|) Σ_{x ∈ D} I( h1(x) = h2(x) )

• The KL-distance between two models, h1(X) and h2(X),
on test set D is

(1 / |D|) Σ_{x ∈ D} Σ_y p_{h1}(y | x) log [ p_{h1}(y | x) / p_{h2}(y | x) ]
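A sketch of both measures over a small unlabeled test set, where each model returns a class-probability distribution; the two models and the test records are hypothetical stand-ins.

import math

# Two hypothetical models, each returning class probabilities P(y | x).
def h1(x):
    return {"Yes": 0.8, "No": 0.2} if x["race"] == "White" else {"Yes": 0.3, "No": 0.7}

def h2(x):
    return {"Yes": 0.6, "No": 0.4}

test_set = [{"race": "White"}, {"race": "Black"}, {"race": "White"}]

def predict(h, x):
    probs = h(x)
    return max(probs, key=probs.get)

# Prediction similarity: fraction of test points on which the models agree.
similarity = sum(1 for x in test_set
                 if predict(h1, x) == predict(h2, x)) / len(test_set)

# KL-distance: average over the test set of KL( h1(.|x) || h2(.|x) ).
kl = sum(sum(p * math.log(p / h2(x)[y]) for y, p in h1(x).items())
         for x in test_set) / len(test_set)

print(similarity, kl)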
Attribute Predictiveness
• Intuition: V ⊆ X is not predictive if and only if V is
independent of Y given the other attributes X – V; i.e.,
p*(Y | X – V, D) = p*(Y | X, D)
• In practice, we can use the distance between h(X; D) and
h(X – V; D)
• Alternative approach: Test if h(X; D) is more accurate than
h(X – V; D) (e.g., by using cross-validation to estimate the
two model accuracies involved)
126
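A sketch of the "distance between h(X; D) and h(X − V; D)" approach: train one model with the attribute and one without it, and measure how often their predictions on the test set differ. The two models below are hypothetical stand-ins for the trained classifiers.

# Sketch: predictiveness of attribute V as the prediction distance between
# a model using all attributes X and a model using X - V (V removed).
def h_full(x):          # hypothetical model trained on all attributes
    return "Yes" if x["race"] == "White" and x["income"] >= 30 else "No"

def h_without_race(x):  # hypothetical model trained with Race removed
    return "Yes" if x["income"] >= 30 else "No"

test_set = [
    {"race": "White", "income": 50},
    {"race": "Black", "income": 50},
    {"race": "Black", "income": 20},
    {"race": "White", "income": 40},
]

disagreement = sum(1 for x in test_set
                   if h_full(x) != h_without_race(x)) / len(test_set)
print("predictiveness of Race (disagreement rate):", disagreement)   # 0.25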
Example (7/7): Prediction Cube
127
Cell value: Predictiveness of Race

        2004              2003            …
        Jan   …    Dec    Jan   …   Dec   …
CA      0.4   0.1  0.3    0.6   0.8  …    …
USA     0.7   0.4  0.3    0.3   …    …    …
…       …     …    …      …     …    …    …

[Figure: rolling up gives the same measure at coarser levels (e.g., [Country, Year]); drilling down gives it at finer levels (e.g., [State, Month]).]
Efficient Computation
• Reduce prediction cube computation to data cube
computation
• Represent a data-mining model as a distributive or
algebraic (bottom-up computable) aggregate function, so
that data-cube techniques can be directly applied
128
Bottom-Up Data Cube
Computation
129
          1985   1986   1987   1988    All
Norway      10     30     20     24     84
…           23     45     14     32    114
USA         14     32     42     11     99
All         47    107     76     67    297

Cell Values: Numbers of loan applications
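A sketch of the bottom-up computation for the table above: row totals, column totals, and the grand total are all derived from the finest-level cells, never from the raw records again.

# Bottom-up data cube sketch for the (Country x Year) counts shown above.
base = {                       # finest-level cells: (country, year) -> count
    ("Norway", 1985): 10, ("Norway", 1986): 30, ("Norway", 1987): 20, ("Norway", 1988): 24,
    ("...",    1985): 23, ("...",    1986): 45, ("...",    1987): 14, ("...",    1988): 32,
    ("USA",    1985): 14, ("USA",    1986): 32, ("USA",    1987): 42, ("USA",    1988): 11,
}

by_year = {}                   # roll up Country -> All
by_country = {}                # roll up Year -> All
for (country, year), v in base.items():
    by_year[year] = by_year.get(year, 0) + v
    by_country[country] = by_country.get(country, 0) + v
grand_total = sum(by_year.values())   # the (All, All) cell

print(by_year)        # {1985: 47, 1986: 107, 1987: 76, 1988: 67}
print(by_country)     # {'Norway': 84, '...': 114, 'USA': 99}
print(grand_total)    # 297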
Scoring Function
• Represent a model as a function of sets
• Conceptually, a machine-learning model h(X; Z(D)) is a
scoring function Score(y, x; Z(D)) that gives each class y a
score on test example x
• h(x; Z(D)) = argmax y Score(y, x; Z(D))
• Score(y, x; Z(D))  p(y | x, Z(D))
• Z(D): The set of training examples (a cube subset of D)
130
Machine-Learning Models
• Naïve Bayes:
• Scoring function: algebraic
• Kernel-density-based classifier:
• Scoring function: distributive
• Decision tree, random forest:
• Neither distributive, nor algebraic
• PBE: Probability-based ensemble (new)
• To make any machine-learning model distributive
• Approximation
131
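A minimal sketch (not from the slides) of why Naïve Bayes fits this framework: its scoring function is determined by class counts and per-attribute (value, class) counts, and those sufficient statistics are distributive in the cube sense: the counts for a coarser region are just the sums of the counts of its sub-regions.

from collections import Counter

def stats(records):
    """Naive Bayes sufficient statistics for one partition of the data."""
    class_counts, value_counts = Counter(), Counter()
    for x, y in records:
        class_counts[y] += 1
        for attr, val in x.items():
            value_counts[(attr, val, y)] += 1
    return class_counts, value_counts

def merge(s1, s2):
    """Merging two partitions' statistics = adding their counters."""
    return s1[0] + s2[0], s1[1] + s2[1]

part1 = [({"car": "M"}, "Yes"), ({"car": "T"}, "No")]
part2 = [({"car": "M"}, "Yes"), ({"car": "S"}, "Yes")]

merged = merge(stats(part1), stats(part2))
print(merged[0])   # Counter({'Yes': 3, 'No': 1})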
Efficiency Comparison
132
[Plot: execution time (sec, 0-2500) vs. number of records (40K-200K), comparing the exhaustive method (RFex, KDCex, NBex, J48ex) with bottom-up score computation (NB, KDC, RF-PBE, J48-PBE).]
Bellwether Analysis:
Global Aggregates from Local Regions
with Beechun Chen, Jude Shavlik, and Pradeep Tamma
In VLDB 06
133
Motivating Example
• A company wants to predict the first year worldwide profit of a
new item (e.g., a new movie)
• By looking at the features and profits of previous (similar) movies, we
predict the expected total profit (1-year US sales) for the new movie
• Wait a year and write a query! If you can’t wait, stay awake …
• The most predictive “features” may be based on sales data gathered by
releasing the new movie in many “regions” (different locations over
different time periods).
• Example “region-based” features: 1st week sales in Peoria, week-to-week
sales growth in Wisconsin, etc.
• Gathering this data has a cost (e.g., marketing expenses, waiting time)
• Problem statement: Find the most predictive region features
that can be obtained within a given “cost budget”
134
Key Ideas
• Large datasets are rarely labeled with the targets that we wish
to learn to predict
• But for the tasks we address, we can readily use OLAP
queries to generate features (e.g., 1st week sales in Peoria)
and even targets (e.g., profit) for mining
• We use data-mining models as building blocks in the
mining process, rather than thinking of them as the
end result
• The central problem is to find data subsets (“bellwether
regions”) that lead to predictive features which can be
gathered at low cost for a new case
135
Motivating Example
• A company wants to predict the first year’s worldwide profit for a new
item, by using its historical database
• Database Schema:
136
Profit Table
Time
Location
CustID
ItemID
Profit
Item Table
ItemID
Category
R&D Expense
Ad Table
Time
Location
ItemID
AdExpense
AdSize
• The combination of the underlined attributes forms a key
A Straightforward Approach
• Build a regression model to predict item profit
• There is much room for accuracy improvement!
137
Profit Table
Time
Location
CustID
ItemID
Profit
Item Table
ItemID
Category
R&D Expense
Ad Table
Time
Location
ItemID
AdExpense
AdSize
ItemID Category R&D Expense Profit
1 Laptop 500K 12,000K
2 Desktop 100K 8,000K
… … … …
By joining and aggregating tables in
the historical database
we can create a training set:
Item-table features Target
An example regression model:
Profit = β0 + β1·Laptop + β2·Desktop +
β3·RdExpense
Using Regional Features
• Example region: [1st week, HK]
• Regional features:
• Regional Profit: The 1st week profit in HK
• Regional Ad Expense: The 1st week ad expense in HK
• A possibly more accurate model:
Profit[1yr, All] = β0 + β1·Laptop + β2·Desktop + β3·RdExpense +
β4·Profit[1wk, HK] + β5·AdExpense[1wk, HK]
• Problem: Which region should we use?
• The smallest region that improves the accuracy the most
• We give each candidate region a cost
• The most “cost-effective” region is the bellwether region
138
Basic Bellwether Problem
• Historical database: DB
• Training item set: I
• Candidate region set: R
• E.g., { [1-n week, Location] }
• Target generation query: τ_i(DB) returns the target value of item i ∈ I
• E.g., sum(Profit) over the facts for item i in region [1-52, All] of ProfitTable
• Feature generation query: χ_{i,r}(DB), i ∈ I_r and r ∈ R
• I_r: The set of items in region r
• E.g., [ Category_i, RdExpense_i, Profit_{i,[1-n, Loc]}, AdExpense_{i,[1-n, Loc]} ]
• Cost query: κ_r(DB), r ∈ R, the cost of collecting data from r
• Predictive model: h_r(x), r ∈ R, trained on {(χ_{i,r}(DB), τ_i(DB)) : i ∈ I_r}
• E.g., linear regression model
139
[Figure: Location domain hierarchy: State (AL, WI, …) → Country (CA, US, KR) → All.]
Basic Bellwether Problem
140
[Figure: computing region features for region r = [1-2, USA] by aggregating over the data records in r.]

Features χ_{i,r}(DB):
ItemID   Category   …   Profit[1-2, USA]   …
…        …          …   …                  …
i        Desktop    …   45K                …
…        …          …   …                  …

Target τ_i(DB), the total profit in [1-52, All]:
ItemID   Total Profit
…        …
i        2,000K
…        …

For each region r, build a predictive model h_r(x); then choose the bellwether region r such that:
• Coverage(r), the fraction of all items in the region, is at least the minimum coverage support
• Cost(r, DB) is within the cost threshold
• Error(h_r) is minimized
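A sketch of that selection loop: among candidate regions whose cost fits the budget and whose coverage meets the support threshold, train a model per region and keep the one with the lowest error. All helper callables (coverage, cost, train, error) are placeholders for the OLAP queries and regression model described above.

def pick_bellwether(regions, budget, min_coverage,
                    coverage, cost, train, error):
    """Return the most cost-effective region and its error (a sketch)."""
    best, best_err = None, float("inf")
    for r in regions:
        if cost(r) > budget or coverage(r) < min_coverage:
            continue                  # violates the cost/coverage constraints
        model = train(r)              # fit h_r on {(features(i, r), target(i))}
        err = error(model, r)         # e.g., cross-validated RMSE
        if err < best_err:
            best, best_err = r, err
    return best, best_err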
Experiment on a Mail Order Dataset
141
[Plot: Error-vs-Budget (RMSE: Root Mean Square Error). RMSE vs. budget (5 to 85) for Bel Err, Avg Err, and Smp Err; the bellwether region found is [1-8 month, MD].]

• Bel Err: The error of the
bellwether region found using a
given budget
• Avg Err: The average error of all
the cube regions with costs
under a given budget
• Smp Err: The error of a set of
randomly sampled (non-cube)
regions with costs under a given
budget
Experiment on a Mail Order Dataset
[Plot: Uniqueness plot. Fraction of indistinguishable regions vs. budget (5 to 85); the bellwether region is [1-8 month, MD].]
142

• Y-axis: Fraction of regions
that are as good as the
bellwether region
– The fraction of regions that
satisfy the constraints and
have errors within the 99%
confidence interval of the
error of the bellwether region
• We have 99% confidence that [1-8 month, MD] is a quite
unusual bellwether region
Subset-Based Bellwether Prediction
• Motivation: Different subsets of items may have different bellwether
regions
• E.g., The bellwether region for laptops may be different from the bellwether
region for clothes
• Two approaches:
143
[Figure: Bellwether Tree: a decision tree that splits on item attributes (R&D Expense vs. 50K, then Category: Desktop/Laptop) and labels each leaf with a bellwether region ([1-2, WI], [1-3, MD], [1-1, NY]).]

Bellwether Cube: bellwether regions indexed by item subsets, e.g., by Category and R&D Expenses:
                          R&D Expenses
Category                  Low         Medium      High
Software     OS           [1-3, CA]   [1-1, NY]   [1-2, CA]
             …            …           …           …
Hardware     Laptop       [1-4, MD]   [1-1, NY]   [1-3, WI]
             …            …           …           …
TECS 2007 R. Ramakrishnan, Yahoo! Research
Conclusions
Related Work: Building models
on OLAP Results
• Multi-dimensional regression [Chen, VLDB 02]
• Goal: Detect changes of trends
• Build linear regression models for cube cells
• Step-by-step regression in stream cubes [Liu, PAKDD 03]
• Loglinear-based quasi cubes [Barbara, J. IIS 01]
• Use loglinear model to approximately compress dense regions of a data
cube
• NetCube [Margaritis, VLDB 01]
• Build a Bayes Net on the entire dataset to approximately answer count
queries
145
Related Work (Contd.)
• Cubegrades [Imielinski, J. DMKD 02]
• Extend cubes with ideas from association rules
• How does the measure change when we rollup or drill down?
• Constrained gradients [Dong, VLDB 01]
• Find pairs of similar cell characteristics associated with big changes in
measure
• User-cognizant multidimensional analysis [Sarawagi, VLDBJ 01]
• Help users find the most informative unvisited regions in a data cube
using max entropy principle
• Multi-Structural DBs [Fagin et al., PODS 05, VLDB 05]
146
Take-Home Messages
• Promising exploratory data analysis paradigm:
• Can use models to identify interesting subsets
• Concentrate only on subsets in cube space
• Those are meaningful subsets, tractable
• Precompute results and provide the users with an interactive tool
• A simple way to plug “something” into cube-style analysis:
• Try to describe/approximate “something” by a distributive or
algebraic function
147
Big Picture
• Why stop with decision behavior? Can apply to other kinds of
analyses too
• Why stop at browsing? Can mine prediction cubes in their own
right
• Exploratory analysis of mining space:
• Dimension attributes can be parameters related to algorithm, data
conditioning, etc.
• Tractable evaluation is a challenge:
• Large number of “dimensions”, real-valued dimension attributes,
difficulties in compositional evaluation
• Active learning for experiment design, extending compositional
methods
148

More Related Content

PPT
Data science: DATA MINING AND DATA WHEREHOUSE.ppt
shubhanshussm10
 
PPT
Data Mining.ppt
Rvishnupriya2
 
PPTX
Data Science and Machine Learning with Tensorflow
Shubham Sharma
 
PPT
Machine learning and deep learning algorithms
KannanA29
 
PDF
DWDM-AG-day-1-2023-SEC A plus Half B--.pdf
ChristinaGayenMondal
 
PPT
3. mining frequent patterns
Azad public school
 
PPTX
Dwd mdatamining intro-iep
Ashish Kumar Thakur
 
Data science: DATA MINING AND DATA WHEREHOUSE.ppt
shubhanshussm10
 
Data Mining.ppt
Rvishnupriya2
 
Data Science and Machine Learning with Tensorflow
Shubham Sharma
 
Machine learning and deep learning algorithms
KannanA29
 
DWDM-AG-day-1-2023-SEC A plus Half B--.pdf
ChristinaGayenMondal
 
3. mining frequent patterns
Azad public school
 
Dwd mdatamining intro-iep
Ashish Kumar Thakur
 

Similar to DataMining dgfg dfg fg dsfg dfg- Copy.ppt (20)

PPT
Datamining intro-iep
aaryarun1999
 
PPT
Dwdm ppt for the btech student contain basis
nivatripathy93
 
PPTX
Hadoop PDF
1904saikrishna
 
PPTX
Big data
Harshit Namdev
 
PPTX
Introduction to machine learning and model building using linear regression
Girish Gore
 
PPT
MSC-DFIS-17 supervised-learning Algorithms.ppt
apsapssingh9
 
PPTX
Introduction to Datamining Concept and Techniques
Sơn Còm Nhom
 
PDF
Characterization
Aiswaryadevi Jaganmohan
 
PPT
Information Retrieval 08
Jeet Das
 
PDF
Default Credit Card Prediction
Alexandre Pinto
 
PPTX
IBANK - Big data www.ibank.uk.com 07474222079
ibankuk
 
PPTX
Unsupervised Learning: Clustering
Experfy
 
PPTX
Big data
Zeeshan Khan
 
PPT
Unit-1.ppt
ASrinivasReddy3
 
PPT
Cssu dw dm
sumit621
 
PPTX
Skillwise Big data
Skillwise Group
 
PDF
Matrix Factorization In Recommender Systems
YONG ZHENG
 
PPT
Data Mining: Concepts and Techniques_ Chapter 6: Mining Frequent Patterns, ...
Salah Amean
 
Datamining intro-iep
aaryarun1999
 
Dwdm ppt for the btech student contain basis
nivatripathy93
 
Hadoop PDF
1904saikrishna
 
Big data
Harshit Namdev
 
Introduction to machine learning and model building using linear regression
Girish Gore
 
MSC-DFIS-17 supervised-learning Algorithms.ppt
apsapssingh9
 
Introduction to Datamining Concept and Techniques
Sơn Còm Nhom
 
Characterization
Aiswaryadevi Jaganmohan
 
Information Retrieval 08
Jeet Das
 
Default Credit Card Prediction
Alexandre Pinto
 
IBANK - Big data www.ibank.uk.com 07474222079
ibankuk
 
Unsupervised Learning: Clustering
Experfy
 
Big data
Zeeshan Khan
 
Unit-1.ppt
ASrinivasReddy3
 
Cssu dw dm
sumit621
 
Skillwise Big data
Skillwise Group
 
Matrix Factorization In Recommender Systems
YONG ZHENG
 
Data Mining: Concepts and Techniques_ Chapter 6: Mining Frequent Patterns, ...
Salah Amean
 
Ad

More from JITENDER773791 (20)

PPTX
jkthsjlfd lectsdfdsfdsfdsfsdfdssfsure.pptx
JITENDER773791
 
PPTX
Lectureerdjkldfgjkkjkjkjdfgjlmfdgdfgker.pptx
JITENDER773791
 
PPTX
Lecturekjkljkljlkjknklnjkghvblkbbkbkjb.pptx
JITENDER773791
 
PPTX
Lecturedsfndskfjdsklfjldsdsfdsgmjdflgmdflmg.pptx
JITENDER773791
 
PPTX
Lecture (Additional)sdfjksjfkldsfsdf.pptx
JITENDER773791
 
PPTX
Lecture (Additional)fghgfhdfghgfhgfhgfh.pptx
JITENDER773791
 
PPTX
Analysdsdsdfgdfgdfgdfsgdfis of Data_2.pptx
JITENDER773791
 
PPTX
Analysis of hgfhgfhgfjgfjmghjghjghData_1.pptx
JITENDER773791
 
PPT
VR_Unit-1_Lec(9)_B_3D_sdfdsfsdfScanner.ppt
JITENDER773791
 
PPT
nkllml;m;llkmlmljkjiuhihkjnklnjkhjgjk.ppt
JITENDER773791
 
PDF
Unit-4.-Chi-squjkljl;jj;ljl;jlm;lml;mare.pdf
JITENDER773791
 
PPT
15hjkljklj'jklj'kljkjkljkljkljkl95867.ppt
JITENDER773791
 
PPTX
Lecture dsfgidsjfhjknflkdnkldnklnfklfndls.pptx
JITENDER773791
 
PPT
Chghjgkgyhbygukbhyvuhbbubnubuyuvyyvivlh06.ppt
JITENDER773791
 
PPT
1328cvkdlgkdgjfdkjgjdfgdfkgdflgkgdfglkjgld8679 - Copy.ppt
JITENDER773791
 
PPT
lghjghgggkgjhgjghhjgjhgkhjghjghjghjghect1.ppt
JITENDER773791
 
PPT
inmlk;lklkjlk;lklkjlklkojhhkljkbjlkjhbtroDM.ppt
JITENDER773791
 
PPT
sequf;lds,g;'dsg;dlld'g;;gldgence - Copy.ppt
JITENDER773791
 
PPTX
howweveautosdfdgdsfmateddatamininig-140715072229-phpapp01.pptx
JITENDER773791
 
PPT
Cfbcgdhfghdfhghggfhghghgfhgfhgfhhapter11.PPT
JITENDER773791
 
jkthsjlfd lectsdfdsfdsfdsfsdfdssfsure.pptx
JITENDER773791
 
Lectureerdjkldfgjkkjkjkjdfgjlmfdgdfgker.pptx
JITENDER773791
 
Lecturekjkljkljlkjknklnjkghvblkbbkbkjb.pptx
JITENDER773791
 
Lecturedsfndskfjdsklfjldsdsfdsgmjdflgmdflmg.pptx
JITENDER773791
 
Lecture (Additional)sdfjksjfkldsfsdf.pptx
JITENDER773791
 
Lecture (Additional)fghgfhdfghgfhgfhgfh.pptx
JITENDER773791
 
Analysdsdsdfgdfgdfgdfsgdfis of Data_2.pptx
JITENDER773791
 
Analysis of hgfhgfhgfjgfjmghjghjghData_1.pptx
JITENDER773791
 
VR_Unit-1_Lec(9)_B_3D_sdfdsfsdfScanner.ppt
JITENDER773791
 
nkllml;m;llkmlmljkjiuhihkjnklnjkhjgjk.ppt
JITENDER773791
 
Unit-4.-Chi-squjkljl;jj;ljl;jlm;lml;mare.pdf
JITENDER773791
 
15hjkljklj'jklj'kljkjkljkljkljkl95867.ppt
JITENDER773791
 
Lecture dsfgidsjfhjknflkdnkldnklnfklfndls.pptx
JITENDER773791
 
Chghjgkgyhbygukbhyvuhbbubnubuyuvyyvivlh06.ppt
JITENDER773791
 
1328cvkdlgkdgjfdkjgjdfgdfkgdflgkgdfglkjgld8679 - Copy.ppt
JITENDER773791
 
lghjghgggkgjhgjghhjgjhgkhjghjghjghjghect1.ppt
JITENDER773791
 
inmlk;lklkjlk;lklkjlklkojhhkljkbjlkjhbtroDM.ppt
JITENDER773791
 
sequf;lds,g;'dsg;dlld'g;;gldgence - Copy.ppt
JITENDER773791
 
howweveautosdfdgdsfmateddatamininig-140715072229-phpapp01.pptx
JITENDER773791
 
Cfbcgdhfghdfhghggfhghghgfhgfhgfhhapter11.PPT
JITENDER773791
 
Ad

Recently uploaded (20)

PDF
Virat Kohli- the Pride of Indian cricket
kushpar147
 
PPTX
PROTIEN ENERGY MALNUTRITION: NURSING MANAGEMENT.pptx
PRADEEP ABOTHU
 
PPTX
Kanban Cards _ Mass Action in Odoo 18.2 - Odoo Slides
Celine George
 
PDF
The Minister of Tourism, Culture and Creative Arts, Abla Dzifa Gomashie has e...
nservice241
 
PDF
Antianginal agents, Definition, Classification, MOA.pdf
Prerana Jadhav
 
PPTX
Command Palatte in Odoo 18.1 Spreadsheet - Odoo Slides
Celine George
 
DOCX
Unit 5: Speech-language and swallowing disorders
JELLA VISHNU DURGA PRASAD
 
DOCX
Modul Ajar Deep Learning Bahasa Inggris Kelas 11 Terbaru 2025
wahyurestu63
 
PDF
The-Invisible-Living-World-Beyond-Our-Naked-Eye chapter 2.pdf/8th science cur...
Sandeep Swamy
 
PPTX
INTESTINALPARASITES OR WORM INFESTATIONS.pptx
PRADEEP ABOTHU
 
PDF
Review of Related Literature & Studies.pdf
Thelma Villaflores
 
PPTX
Basics and rules of probability with real-life uses
ravatkaran694
 
PPTX
A Smarter Way to Think About Choosing a College
Cyndy McDonald
 
PPTX
Introduction to pediatric nursing in 5th Sem..pptx
AneetaSharma15
 
PPTX
CONCEPT OF CHILD CARE. pptx
AneetaSharma15
 
DOCX
SAROCES Action-Plan FOR ARAL PROGRAM IN DEPED
Levenmartlacuna1
 
PDF
Biological Classification Class 11th NCERT CBSE NEET.pdf
NehaRohtagi1
 
PPTX
Measures_of_location_-_Averages_and__percentiles_by_DR SURYA K.pptx
Surya Ganesh
 
PPTX
Applications of matrices In Real Life_20250724_091307_0000.pptx
gehlotkrish03
 
PPTX
How to Track Skills & Contracts Using Odoo 18 Employee
Celine George
 
Virat Kohli- the Pride of Indian cricket
kushpar147
 
PROTIEN ENERGY MALNUTRITION: NURSING MANAGEMENT.pptx
PRADEEP ABOTHU
 
Kanban Cards _ Mass Action in Odoo 18.2 - Odoo Slides
Celine George
 
The Minister of Tourism, Culture and Creative Arts, Abla Dzifa Gomashie has e...
nservice241
 
Antianginal agents, Definition, Classification, MOA.pdf
Prerana Jadhav
 
Command Palatte in Odoo 18.1 Spreadsheet - Odoo Slides
Celine George
 
Unit 5: Speech-language and swallowing disorders
JELLA VISHNU DURGA PRASAD
 
Modul Ajar Deep Learning Bahasa Inggris Kelas 11 Terbaru 2025
wahyurestu63
 
The-Invisible-Living-World-Beyond-Our-Naked-Eye chapter 2.pdf/8th science cur...
Sandeep Swamy
 
INTESTINALPARASITES OR WORM INFESTATIONS.pptx
PRADEEP ABOTHU
 
Review of Related Literature & Studies.pdf
Thelma Villaflores
 
Basics and rules of probability with real-life uses
ravatkaran694
 
A Smarter Way to Think About Choosing a College
Cyndy McDonald
 
Introduction to pediatric nursing in 5th Sem..pptx
AneetaSharma15
 
CONCEPT OF CHILD CARE. pptx
AneetaSharma15
 
SAROCES Action-Plan FOR ARAL PROGRAM IN DEPED
Levenmartlacuna1
 
Biological Classification Class 11th NCERT CBSE NEET.pdf
NehaRohtagi1
 
Measures_of_location_-_Averages_and__percentiles_by_DR SURYA K.pptx
Surya Ganesh
 
Applications of matrices In Real Life_20250724_091307_0000.pptx
gehlotkrish03
 
How to Track Skills & Contracts Using Odoo 18 Employee
Celine George
 

DataMining dgfg dfg fg dsfg dfg- Copy.ppt

  • 1. TECS 2007 R. Ramakrishnan, Yahoo! Research Data Mining (with many slides due to Gehrke, Garofalakis, Rastogi) Raghu Ramakrishnan Yahoo! Research University of Wisconsin–Madison (on leave)
  • 3. Definition Data mining is the exploration and analysis of large quantities of data in order to discover valid, novel, potentially useful, and ultimately understandable patterns in data. Valid: The patterns hold in general. Novel: We did not know the pattern beforehand. Useful: We can devise actions from the patterns. Understandable: We can interpret and comprehend the patterns. 3
  • 4. Case Study: Bank • Business goal: Sell more home equity loans • Current models: • Customers with college-age children use home equity loans to pay for tuition • Customers with variable income use home equity loans to even out stream of income • Data: • Large data warehouse • Consolidates data from 42 operational data sources 4
  • 5. Case Study: Bank (Contd.) 1. Select subset of customer records who have received home equity loan offer • Customers who declined • Customers who signed up 5 Income Number of Children Average Checking Account Balance … Reponse $40,000 2 $1500 Yes $75,000 0 $5000 No $50,000 1 $3000 No … … … … …
  • 6. Case Study: Bank (Contd.) 2. Find rules to predict whether a customer would respond to home equity loan offer IF (Salary < 40k) and (numChildren > 0) and (ageChild1 > 18 and ageChild1 < 22) THEN YES … 6
  • 7. Case Study: Bank (Contd.) 3. Group customers into clusters and investigate clusters 7 Group 2 Group 3 Group 4 Group 1
  • 8. Case Study: Bank (Contd.) 4. Evaluate results: • Many “uninteresting” clusters • One interesting cluster! Customers with both business and personal accounts; unusually high percentage of likely respondents 8
  • 9. Example: Bank (Contd.) Action: • New marketing campaign Result: • Acceptance rate for home equity offers more than doubled 9
  • 10. Example Application: Fraud Detection • Industries: Health care, retail, credit card services, telecom, B2B relationships • Approach: • Use historical data to build models of fraudulent behavior • Deploy models to identify fraudulent instances 10
  • 11. Fraud Detection (Contd.) • Examples: • Auto insurance: Detect groups of people who stage accidents to collect insurance • Medical insurance: Fraudulent claims • Money laundering: Detect suspicious money transactions (US Treasury's Financial Crimes Enforcement Network) • Telecom industry: Find calling patterns that deviate from a norm (origin and destination of the call, duration, time of day, day of week). 11
  • 12. Other Example Applications • CPG: Promotion analysis • Retail: Category management • Telecom: Call usage analysis, churn • Healthcare: Claims analysis, fraud detection • Transportation/Distribution: Logistics management • Financial Services: Credit analysis, fraud detection • Data service providers: Value-added data analysis 12
  • 13. What is a Data Mining Model? A data mining model is a description of a certain aspect of a dataset. It produces output values for an assigned set of inputs. Examples: • Clustering • Linear regression model • Classification model • Frequent itemsets and association rules • Support Vector Machines 13
  • 15. Overview • Several well-studied tasks • Classification • Clustering • Frequent Patterns • Many methods proposed for each • Focus in database and data mining community: • Scalability • Managing the process • Exploratory analysis 15
  • 16. Classification Goal: Learn a function that assigns a record to one of several predefined classes. Requirements on the model: • High accuracy • Understandable by humans, interpretable • Fast construction for very large training databases
  • 18. Classification (Contd.) • Decision trees are one approach to classification. • Other approaches include: • Linear Discriminant Analysis • k-nearest neighbor methods • Logistic regression • Neural networks • Support Vector Machines
  • 19. Classification Example • Training database: • Two predictor attributes: Age and Car-type (Sport, Minivan and Truck) • Age is ordered, Car-type is categorical attribute • Class label indicates whether person bought product • Dependent attribute is categorical Age Car Class 20 M Yes 30 M Yes 25 T No 30 S Yes 40 S Yes 20 T No 30 M Yes 25 M Yes 40 M Yes 20 S No
  • 20. Classification Problem • If Y is categorical, the problem is a classification problem, and we use C instead of Y. |dom(C)| = J, the number of classes. • C is the class label, d is called a classifier. • Let r be a record randomly drawn from P. Define the misclassification rate of d: RT(d,P) = P(d(r.X1, …, r.Xk) != r.C) • Problem definition: Given dataset D that is a random sample from probability distribution P, find classifier d such that RT(d,P) is minimized. 22
  • 21. Regression Problem • If Y is numerical, the problem is a regression problem. • Y is called the dependent variable, d is called a regression function. • Let r be a record randomly drawn from P. Define mean squared error rate of d: RT(d,P) = E(r.Y - d(r.X1, …, r.Xk))2 • Problem definition: Given dataset D that is a random sample from probability distribution P, find regression function d such that RT(d,P) is minimized. 23
  • 22. Regression Example • Example training database • Two predictor attributes: Age and Car-type (Sport, Minivan and Truck) • Spent indicates how much person spent during a recent visit to the web site • Dependent attribute is numerical Age Car Spent 20 M $200 30 M $150 25 T $300 30 S $220 40 S $400 20 T $80 30 M $100 25 M $125 40 M $500 20 S $420
  • 24. What are Decision Trees? Minivan Age Car Type YES NO YES <30 >=30 Sports, Truck 0 30 60 Age YES YES NO Minivan Sports, Truck
  • 25. Decision Trees • A decision tree T encodes d (a classifier or regression function) in form of a tree. • A node t in T without children is called a leaf node. Otherwise t is called an internal node. 27
  • 26. Internal Nodes • Each internal node has an associated splitting predicate. Most common are binary predicates. Example predicates: • Age <= 20 • Profession in {student, teacher} • 5000*Age + 3*Salary – 10000 > 0 28
  • 27. Leaf Nodes Consider leaf node t: • Classification problem: Node t is labeled with one class label c in dom(C) • Regression problem: Two choices • Piecewise constant model: t is labeled with a constant y in dom(Y). • Piecewise linear model: t is labeled with a linear model Y = yt + Σ aiXi 30
  • 28. Example Encoded classifier: If (age<30 and carType=Minivan) Then YES If (age <30 and (carType=Sports or carType=Truck)) Then NO If (age >= 30) Then YES 31 Minivan Age Car Type YES NO YES <30 >=30 Sports, Truck
  • 29. Issues in Tree Construction • Three algorithmic components: • Split Selection Method • Pruning Method • Data Access Method
  • 30. Top-Down Tree Construction BuildTree(Node n, Training database D, Split Selection Method S) [ (1) Apply S to D to find splitting criterion ] (1a) for each predictor attribute X (1b) Call S.findSplit(AVC-set of X) (1c) endfor (1d) S.chooseBest(); (2) if (n is not a leaf node) ... S: C4.5, CART, CHAID, FACT, ID3, GID3, QUEST, etc.
  • 31. Split Selection Method • Numerical Attribute: Find a split point that separates the (two) classes (Yes: No: ) 30 35 Age
  • 32. Split Selection Method (Contd.) • Categorical Attributes: How to group? Sport: Truck: Minivan: (Sport, Truck) -- (Minivan) (Sport) --- (Truck, Minivan) (Sport, Minivan) --- (Truck)
  • 33. Impurity-based Split Selection Methods • Split selection method has two parts: • Search space of possible splitting criteria. Example: All splits of the form “age <= c”. • Quality assessment of a splitting criterion • Need to quantify the quality of a split: Impurity function • Example impurity functions: Entropy, gini-index, chi- square index
  • 34. Data Access Method • Goal: Scalable decision tree construction, using the complete training database
  • 35. AVC-Sets Training Database AVC-Sets Age Yes No 20 1 2 25 1 1 30 3 0 40 2 0 Car Yes No Sport 2 1 Truck 0 2 Minivan 5 0 Age Car Class 20 M Yes 30 M Yes 25 T No 30 S Yes 40 S Yes 20 T No 30 M Yes 25 M Yes 40 M Yes 20 S No
  • 36. Motivation for Data Access Methods Training Database Age <30 >=30 Left Partition Right Partition In principle, one pass over training database for each node. Can we improve?
  • 37. RainForest Algorithms: RF-Hybrid First scan: Main Memory Database AVC-Sets Build AVC-sets for root
  • 38. RainForest Algorithms: RF-Hybrid Second Scan: Main Memory Database AVC-Sets Age<30 Build AVC sets for children of the root
  • 39. RainForest Algorithms: RF-Hybrid Third Scan: Main Memory Database Age<30 Sal<20k Car==S Partition 1 Partition 2 Partition 3 Partition 4 As we expand the tree, we run out Of memory, and have to “spill” partitions to disk, and recursively read and process them later.
  • 40. RainForest Algorithms: RF-Hybrid Further optimization: While writing partitions, concurrently build AVC-groups of as many nodes as possible in-memory. This should remind you of Hybrid Hash-Join! Main Memory Database Age<30 Sal<20k Car==S Partition 1 Partition 2 Partition 3 Partition 4
  • 42. Problem • Given points in a multidimensional space, group them into a small number of clusters, using some measure of “nearness” • E.g., Cluster documents by topic • E.g., Cluster users by similar interests 45
  • 43. Clustering • Output: (k) groups of records called clusters, such that the records within a group are more similar to records in other groups • Representative points for each cluster • Labeling of each record with each cluster number • Other description of each cluster • This is unsupervised learning: No record labels are given to learn from • Usage: • Exploratory data mining • Preprocessing step (e.g., outlier detection) 46
  • 44. Clustering (Contd.) • Requirements: Need to define “similarity” between records • Important: Use the “right” similarity (distance) function • Scale or normalize all attributes. Example: seconds, hours, days • Assign different weights to reflect importance of the attribute • Choose appropriate measure (e.g., L1, L2) 49
  • 45. Approaches • Centroid-based: Assume we have k clusters, guess at the centers, assign points to the nearest center (e.g., K-means); over time, centroids shift • Hierarchical: Start with one cluster per point, and repeatedly merge nearby clusters using some distance threshold 51 Scalability: Do this with the fewest passes over the data, ideally sequentially
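  A minimal centroid-based sketch (plain K-means on one-dimensional points) showing the assign/update loop in which the centroids shift; the data and parameter values are invented for illustration.

      import random

      def kmeans(points, k, iters=20):
          centers = random.sample(points, k)
          for _ in range(iters):
              # Assignment step: each point goes to its nearest center.
              clusters = [[] for _ in range(k)]
              for p in points:
                  nearest = min(range(k), key=lambda i: abs(p - centers[i]))
                  clusters[nearest].append(p)
              # Update step: centroids shift to the mean of their assigned points.
              centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
          return centers, clusters

      points = [1.0, 1.2, 0.8, 5.0, 5.5, 4.9, 9.7, 10.1]
      centers, clusters = kmeans(points, k=3)
      print(centers)

  Note that this basic loop makes several passes over the data, which is exactly the scalability concern the slide raises.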
  • 46. Scalable Clustering Algorithms for Numeric Attributes CLARANS, DBSCAN, BIRCH, CLIQUE, CURE, … • The above algorithms can also be used to cluster documents after reducing their dimensionality using SVD 54
  • 47. Birch [ZRL96] Pre-cluster data points using “CF-tree” data structure
  • 48. Clustering Feature (CF) Allows incremental merging of clusters!
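  The CF figure did not survive extraction; the sketch below illustrates the standard BIRCH clustering feature, a triple (N, LS, SS) of point count, per-dimension linear sum, and per-dimension squared sum, and shows why merging clusters is just component-wise addition of their CFs. The concrete points are illustrative.

      def merge_cf(cf1, cf2):
          # A clustering feature summarizes a set of d-dimensional points as
          # CF = (N, LS, SS): point count, per-dimension linear sum, per-dimension squared sum.
          n1, ls1, ss1 = cf1
          n2, ls2, ss2 = cf2
          # Merging two clusters is component-wise addition of their CFs.
          return (n1 + n2,
                  [a + b for a, b in zip(ls1, ls2)],
                  [a + b for a, b in zip(ss1, ss2)])

      def centroid(cf):
          n, ls, _ = cf
          return [s / n for s in ls]

      cf_a = (2, [3.0, 4.0], [5.0, 8.0])    # summary of two points, e.g., (1,2) and (2,2)
      cf_b = (1, [4.0, 1.0], [16.0, 1.0])   # summary of one point, (4,1)
      print(centroid(merge_cf(cf_a, cf_b)))

  Because the summaries are additive, clusters can be merged incrementally without revisiting the original points.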
  • 49. Points to Note • Basic algorithm works in a single pass to condense metric data using spherical summaries • Can be incremental • Additional passes cluster CFs to detect non-spherical clusters • Approximates density function • Extensions to non-metric data 60
  • 51. Market Basket Analysis • Consider a shopping cart filled with several items • Market basket analysis tries to answer the following questions: • Who makes purchases? • What do customers buy? 64
  • 52. Market Basket Analysis • Given: • A database of customer transactions • Each transaction is a set of items • Goal: • Extract rules 65
  TID  CID  Date    Item   Qty
  111  201  5/1/99  Pen    2
  111  201  5/1/99  Ink    1
  111  201  5/1/99  Milk   3
  111  201  5/1/99  Juice  6
  112  105  6/3/99  Pen    1
  112  105  6/3/99  Ink    1
  112  105  6/3/99  Milk   1
  113  106  6/5/99  Pen    1
  113  106  6/5/99  Milk   1
  114  201  7/1/99  Pen    2
  114  201  7/1/99  Ink    2
  114  201  7/1/99  Juice  4
  • 53. Market Basket Analysis (Contd.) • Co-occurrences • 80% of all customers purchase items X, Y and Z together. • Association rules • 60% of all customers who purchase X and Y also buy Z. • Sequential patterns • 60% of customers who first buy X also purchase Y within three weeks. 66
  • 54. Confidence and Support We prune the set of all possible association rules using two interestingness measures: • Confidence of a rule: • X => Y has confidence c if P(Y|X) = c • Support of a rule: • X => Y has support s if P(XY) = s We can also define • Support of a co-occurrence XY: • XY has support s if P(XY) = s 67
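  Using the transactions table above (items only; quantities ignored), support and confidence are straightforward frequency ratios. A small sketch; everything other than the item names from the example is illustrative.

      transactions = [
          {"Pen", "Ink", "Milk", "Juice"},   # TID 111
          {"Pen", "Ink", "Milk"},            # TID 112
          {"Pen", "Milk"},                   # TID 113
          {"Pen", "Ink", "Juice"},           # TID 114
      ]

      def support(itemset):
          # Fraction of transactions that contain every item in the itemset.
          return sum(itemset <= t for t in transactions) / len(transactions)

      def confidence(lhs, rhs):
          # P(rhs | lhs) = support(lhs and rhs) / support(lhs)
          return support(lhs | rhs) / support(lhs)

      print(support({"Pen", "Milk"}))        # 0.75
      print(confidence({"Ink"}, {"Pen"}))    # 1.0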
  • 55. Example (using the transactions table above) • Example rule: {Pen} => {Milk} Support: 75% Confidence: 75% • Another example: {Ink} => {Pen} Support: 75% Confidence: 100% 68
  • 56. Exercise • Can you find all itemsets with support >= 75% in the transactions table above? 69
  • 57. Exercise • Can you find all association rules with support >= 50% in the transactions table above? 70
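  One way to work these exercises is a level-wise search: count candidate itemsets of size 1, then size 2, and so on, stopping once a level yields no frequent itemsets (supersets of infrequent itemsets cannot be frequent). A brute-force sketch over the same toy transactions; it is illustrative and omits the candidate pruning of the full Apriori algorithm.

      from itertools import combinations

      transactions = [
          {"Pen", "Ink", "Milk", "Juice"}, {"Pen", "Ink", "Milk"},
          {"Pen", "Milk"}, {"Pen", "Ink", "Juice"},
      ]

      def frequent_itemsets(min_support):
          items = sorted({i for t in transactions for i in t})
          result = []
          for size in range(1, len(items) + 1):
              level = []
              for combo in combinations(items, size):
                  s = sum(set(combo) <= t for t in transactions) / len(transactions)
                  if s >= min_support:
                      level.append((combo, s))
              if not level:    # no frequent itemsets of this size; larger ones cannot exist
                  break
              result.extend(level)
          return result

      for itemset, s in frequent_itemsets(0.75):
          print(itemset, s)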
  • 58. Extensions • Imposing constraints • Only find rules involving the dairy department • Only find rules involving expensive products • Only find rules with “whiskey” on the right hand side • Only find rules with “milk” on the left hand side • Hierarchies on the items • Calendars (every Sunday, every 1st of the month) 71
  • 59. Market Basket Analysis: Applications • Sample Applications • Direct marketing • Fraud detection for medical insurance • Floor/shelf planning • Web site layout • Cross-selling 72
  • 61. Why Integrate DM into a DBMS? 74 [figure: today, data is extracted and copied out of the DBMS, models are mined over the copy, raising consistency questions]
  • 62. Integration Objectives • Avoid isolation of querying from mining • Difficult to do “ad-hoc” mining • Provide a simple programming approach to creating and using DM models • Make it possible to add new models • Make it possible to add new, scalable algorithms 75 [the slide groups these objectives by stakeholder: analysts (users) and DM vendors]
  • 63. SQL/MM: Data Mining • A collection of classes that provide a standard interface for invoking DM algorithms from SQL systems. • Four data models are supported: • Frequent itemsets, association rules • Clusters • Regression trees • Classification trees 76
  • 64. DATA MINING SUPPORT IN MICROSOFT SQL SERVER * 77 * Thanks to Surajit Chaudhuri for permission to use/adapt his slides
  • 65. Key Design Decisions • Adopt relational data representation • A Data Mining Model (DMM) as a “tabular” object (externally; can be represented differently internally) • Language-based interface • Extension of SQL • Standard syntax 78
  • 66. DM Concepts to Support • Representation of input (cases) • Representation of models • Specification of training step • Specification of prediction step 79 Should be independent of specific algorithms
  • 67. What are “Cases”? • DM algorithms analyze “cases” • The “case” is the entity being categorized and classified • Examples • Customer credit risk analysis: Case = Customer • Product profitability analysis: Case = Product • Promotion success analysis: Case = Promotion • Each case encapsulates all we know about the entity 80
  • 68. Cases as Records: Examples 81
  Example 1 (Age, Car, Class): (20, M, Yes), (30, M, Yes), (25, T, No), (30, S, Yes), (40, S, Yes), (20, T, No), (30, M, Yes), (25, M, Yes), (40, M, Yes), (20, S, No)
  Example 2 (Cust ID, Age, Marital Status, Wealth): (1, 35, M, 380,000), (2, 20, S, 50,000), (3, 57, M, 470,000)
  • 69. Types of Columns • Keys: Columns that uniquely identify a case • Attributes: Columns that describe a case • Value: A state associated with the attribute in a specific case • Attribute Property: Columns that describe an attribute • Unique for a specific attribute value (TV is always an appliance) • Attribute Modifier: Columns that represent additional “meta” information for an attribute • Weight of a case, certainty of prediction 82
  Example case (Cust ID 1, Age 35, Marital Status M, Wealth 380,000) with a nested Product Purchases table (Product, Quantity, Type): (TV, 1, Appliance), (Coke, 6, Drink), (Ham, 3, Food)
  • 70. More on Columns • Properties describe attributes • Can represent generalization hierarchy • Distribution information associated with attributes • Discrete/Continuous • Nature of Continuous distributions • Normal, Log_Normal • Other Properties (e.g., ordered, not null) 83
  • 71. Representing a DMM • Specifying a Model • Columns to predict • Algorithm to use • Special parameters • Model is represented as a (nested) table • Specification = Create table • Training = Inserting data into the table • Predicting = Querying the table 84 [figure: the Age/Car Type decision tree from earlier]
  • 72. CREATE MINING MODEL
  CREATE MINING MODEL [Age Prediction]      -- name of model
  (
    [Gender] TEXT DISCRETE ATTRIBUTE,
    [Hair Color] TEXT DISCRETE ATTRIBUTE,
    [Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT
  )
  USING [Microsoft Decision Tree]           -- name of algorithm
  85
  • 73. CREATE MINING MODEL
  CREATE MINING MODEL [Age Prediction]
  (
    [Customer ID] LONG KEY,
    [Gender] TEXT DISCRETE ATTRIBUTE,
    [Age] DOUBLE CONTINUOUS ATTRIBUTE PREDICT,
    [ProductPurchases] TABLE
    (
      [ProductName] TEXT KEY,
      [Quantity] DOUBLE NORMAL CONTINUOUS,
      [ProductType] TEXT DISCRETE RELATED TO [ProductName]
    )
  )
  USING [Microsoft Decision Tree]
  86
  Note that the ProductPurchases column is a nested table. SQL Server computes this field when data is “inserted”.
  • 74. Training a DMM • Training a DMM requires passing it “known” cases • Use an INSERT INTO in order to “insert” the data into the DMM • The DMM will usually not retain the inserted data • Instead it will analyze the given cases and build the DMM content (decision tree, segmentation model) • INSERT [INTO] &lt;mining model name&gt; [(columns list)] &lt;source data query&gt; 87
  • 75. INSERT INTO 88
  INSERT INTO [Age Prediction]
  (
    [Gender], [Hair Color], [Age]
  )
  OPENQUERY([Provider=MSOLESQL…,
    ‘SELECT [Gender], [Hair Color], [Age]
     FROM [Customers]’
  )
  • 76. Executing Insert Into • The DMM is trained • The model can be retrained or incrementally refined • Content (rules, trees, formulas) can be explored • Prediction queries can be executed 89
  • 77. What are Predictions? • Predictions apply the trained model to estimate missing attributes in a data set • Predictions = Queries • Specification: • Input data set • A trained DMM (think of it as a truth table, with one row per combination of predictor-attribute values; this is only conceptual) • Binding (mapping) information between the input data and the DMM 90
  • 78. Prediction Join
  SELECT [Customers].[ID],
         MyDMM.[Age],
         PredictProbability(MyDMM.[Age])
  FROM MyDMM PREDICTION JOIN [Customers]
  ON MyDMM.[Gender] = [Customers].[Gender]
  AND MyDMM.[Hair Color] = [Customers].[Hair Color]
  91
  • 80. Databases and Data Mining • What can database systems offer in the grand challenge of understanding and learning from the flood of data we’ve unleashed? • The plumbing • Scalability 93
  • 81. Databases and Data Mining • What can database systems offer in the grand challenge of understanding and learning from the flood of data we’ve unleashed? • The plumbing • Scalability • Ideas! • Declarativeness • Compositionality • Ways to conceptualize your data 94
  • 82. Multidimensional Data Model • One fact table D = (X, M) • X = X1, X2, ... Dimension attributes • M = M1, M2, … Measure attributes • Domain hierarchy for each dimension attribute: • Collection of domains Hier(Xi) = (D_i^(1), ..., D_i^(k)) • The extended domain: EX_i = ∪_{1&lt;=k&lt;=t} D_i^(k) • Value mapping function: γ_{D1,D2}(x) • e.g., γ_{month,year}(12/2005) = 2005 • Form the value hierarchy graph • Stored as dimension table attribute (e.g., week for a time value) or conversion functions (e.g., month, quarter) 95
  • 83. Multidimensional Data
  Fact table:
  FactID  Auto    Loc  Repair
  p1      F150    NY   100
  p2      Sierra  NY   500
  p3      F150    MA   100
  p4      Sierra  MA   200
  Dimension attributes with domain hierarchies: Automobile (Model: Civic, Sierra, F150, Camry → Category: Truck, Sedan → ALL) and Location (State: MA, NY, TX, CA → Region: East, West → ALL)
  • 84. Cube Space • Cube space: C = EX1 × EX2 × … × EXd • Region: Hyper-rectangle in cube space • c = (v1, v2, …, vd), vi ∈ EXi • Region granularity: • gran(c) = (d1, d2, ..., dd), di = Domain(c.vi) • Region coverage: • coverage(c) = all facts in c • Region set: All regions with the same granularity 97
  • 85. OLAP Over Imprecise Data with Doug Burdick, Prasad Deshpande, T.S. Jayram, and Shiv Vaithyanathan In VLDB 05, 06 joint work with IBM Almaden 98
  • 86. Imprecise Data
  Fact table:
  FactID  Auto    Loc  Repair
  p1      F150    NY   100
  p2      Sierra  NY   500
  p3      F150    MA   100
  p4      Sierra  MA   200
  p5      Truck   MA   100
  Note: p5's Auto value, Truck, is a coarser (Category-level) value that spans F150 and Sierra.
  • 87. Querying Imprecise Facts 100
  Query: Auto = F150, Loc = MA, SUM(Repair) = ???
  How do we treat p5 (Truck, MA, 100), which may or may not be an F150?
  (fact table as on the previous slide)
  • 88. Allocation (1) 101 [figure: the imprecise fact p5 (Truck, MA) overlaps the two finer cells (F150, MA) and (Sierra, MA)]
  • 89. Allocation (2) 102
  Extended data model:
  ID  FactID  Auto    Loc  Repair  Weight
  1   p1      F150    NY   100     1.0
  2   p2      Sierra  NY   500     1.0
  3   p3      F150    MA   100     1.0
  4   p4      Sierra  MA   200     1.0
  5   p5      F150    MA   100     0.5
  6   p5      Sierra  MA   100     0.5
  (Huh? Why 0.5 / 0.5? Hold on to that thought)
  • 90. Allocation (3) 103
  Query the extended data model! Auto = F150, Loc = MA: SUM(Repair) = 100 + 0.5 × 100 = 150
  (extended data model as on the previous slide)
  • 91. Allocation Policies • The procedure for assigning allocation weights is referred to as an allocation policy: • Each allocation policy uses different information to assign allocation weights • Reflects assumption about the correlation structure in the data • Leads to EM-style iterative algorithms for allocating imprecise facts, maximizing likelihood of observed data 104
  • 92. Allocation Policy: Count 105
  p(c1, p5) = Count(c1) / (Count(c1) + Count(c2)) = 2 / (2 + 1)
  p(c2, p5) = Count(c2) / (Count(c1) + Count(c2)) = 1 / (2 + 1)
  [figure: p5 overlaps cells c1 and c2; c1 already contains two precise facts, c2 contains one]
  • 93. Allocation Policy: Measure 106
  p(c1, p5) = Sales(c1) / (Sales(c1) + Sales(c2)) = 700 / (700 + 200)
  p(c2, p5) = Sales(c2) / (Sales(c1) + Sales(c2)) = 200 / (700 + 200)
  Per-fact sales: p1: 100, p2: 150, p3: 300, p4: 200, p5: 250, p6: 400
  [figure: p5 overlaps cells c1 and c2]
  • 94. Allocation Policy Template
  In general, for an imprecise fact r that overlaps the cells of region(r), and any aggregate Q (e.g., Count, Sales):
  p(c, r) = Q(c) / Σ_{c' ∈ region(r)} Q(c') = Q(c) / Qsum(r)
  The Count and Measure policies above are instances of this template.
  [figure: imprecise fact r overlapping cells c1 and c2]
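  A small sketch of the count-based instance of this template: an imprecise fact is spread over the leaf cells it could belong to, with weights proportional to the number of precise facts already in each cell, and queries then run over the extended (weighted) data. The data and names below are illustrative.

      from collections import Counter

      # Precise facts: (auto, loc, repair)
      precise = [("F150", "NY", 100), ("Sierra", "NY", 500),
                 ("F150", "MA", 100), ("Sierra", "MA", 200)]
      # Imprecise fact p5: Auto known only as "Truck", Loc = MA.
      imprecise = {"candidates": [("F150", "MA"), ("Sierra", "MA")], "repair": 100}

      cell_counts = Counter((auto, loc) for auto, loc, _ in precise)

      total = sum(cell_counts[c] for c in imprecise["candidates"])
      extended = [(auto, loc, repair, 1.0) for auto, loc, repair in precise]
      for cell in imprecise["candidates"]:
          weight = cell_counts[cell] / total       # count-based allocation policy
          extended.append((*cell, imprecise["repair"], weight))

      # Query over the extended data model: SUM(Repair) for Auto = F150, Loc = MA
      answer = sum(r * w for auto, loc, r, w in extended if auto == "F150" and loc == "MA")
      print(answer)   # 100 + 0.5 * 100 = 150, matching the earlier allocation example

  Swapping the Counter of cells for a sum of a measure gives the measure-based policy; the template only changes which aggregate Q supplies the weights.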
  • 95. What is a Good Allocation Policy? 108 We propose desiderata that enable appropriate definition of query semantics for imprecise data (running example query: COUNT) [figure: facts p1–p5 over the Sierra/F150/Truck × MA/NY/East grid]
  • 96. Desideratum I: Consistency • Consistency specifies the relationship between answers to related queries on a fixed data set 109 [figure: facts p1–p5 over the same grid]
  • 97. Desideratum II: Faithfulness • Faithfulness specifies the relationship between answers to a fixed query on related data sets [figure: Data Set 1, Data Set 2, Data Set 3, the same facts p1–p5 arranged differently over the Sierra/F150 × MA/NY grid]
  • 98. Results on Query Semantics • Evaluating queries over extended data model yields expected value of the aggregation operator over all possible worlds • Efficient query evaluation algorithms available for SUM, COUNT; more expensive dynamic programming algorithm for AVERAGE • Consistency and faithfulness for SUM, COUNT are satisfied under appropriate conditions • (Bound-)Consistency does not hold for AVERAGE, but holds for E(SUM)/E(COUNT) • Weak form of faithfulness holds • Opinion pooling with LinOP: Similar to AVERAGE 111
  • 100. Query Semantics • Given all possible worlds together with their probabilities, queries are easily answered using expected values • But number of possible worlds is exponential! • Allocation gives facts weighted assignments to possible completions, leading to an extended version of the data • Size increase is linear in number of (completions of) imprecise facts • Queries operate over this extended version 114
  • 101. Exploratory Mining: Prediction Cubes with Beechun Chen, Lei Chen, and Yi Lin In VLDB 05; EDAM Project 115
  • 102. The Idea • Build OLAP data cubes in which cell values represent decision/prediction behavior • In effect, build a tree for each cell/region in the cube—observe that this is not the same as a collection of trees used in an ensemble method! • The idea is simple, but it leads to promising data mining tools • Ultimate objective: Exploratory analysis of the entire space of “data mining choices” • Choice of algorithms, data conditioning parameters … 116
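  To make the per-cell idea concrete, here is a toy sketch: every cell at a chosen granularity gets its own "model" trained on just that cell's facts, and a scalar measure of its behavior is stored in the cube. The data, the placeholder model (an approval-rate gap rather than a real classifier), and all names are illustrative.

      from collections import defaultdict

      # Fact table rows: (state, year, race, approval)
      facts = [("AL", 2004, "White", 1), ("AL", 2004, "Black", 0),
               ("AL", 2004, "Black", 0), ("WY", 2004, "White", 1),
               ("WY", 2004, "Black", 1), ("WY", 2004, "White", 0)]

      def cell_measure(rows):
          # Placeholder "model": approval-rate gap between groups within the cell.
          rate = lambda grp: (sum(a for _, _, r, a in rows if r == grp) /
                              max(1, sum(1 for _, _, r, _ in rows if r == grp)))
          return abs(rate("White") - rate("Black"))

      # Build the prediction cube at granularity [State, Year].
      cells = defaultdict(list)
      for row in facts:
          cells[(row[0], row[1])].append(row)
      cube = {cell: cell_measure(rows) for cell, rows in cells.items()}
      print(cube)   # e.g., {('AL', 2004): 1.0, ('WY', 2004): 0.5}

  In the real setting the per-cell computation trains an actual classifier and the cell value is its accuracy, predictiveness of an attribute, or similarity to a reference model, as the following slides spell out.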
  • 103. Example (1/7): Regular OLAP 117
  Goal: Look for patterns of unusually high numbers of applications.
  Fact table (Z: dimensions Location, Time; Y: measure # of applications):
  Location  Time     # of App.
  AL, USA   Dec, 04  2
  …         …        …
  WY, USA   Dec, 04  3
  Dimension hierarchies: Time (ALL → Year: 85, 86, …, 04 → Month: Jan. 86, …, Dec. 86) and Location (ALL → Country: Japan, USA, Norway → State: AL, …, WY)
  • 104. Example (2/7): Regular OLAP 118
  Goal: Look for patterns of unusually high numbers of applications.
  Cell value: number of loan applications.
  [figure: a cube at granularity (State, Month); rolling up gives coarser regions, e.g., (Country, Year); drilling down gives finer regions]
  • 105. Example (3/7): Decision Analysis 119
  Goal: Analyze a bank's loan decision process w.r.t. two dimensions: Location and Time.
  Fact table D (Z: dimensions Location, Time; X: predictors Race, Sex, …; Y: class Approval):
  Location  Time     Race   Sex  …  Approval
  AL, USA   Dec, 04  White  M    …  Yes
  …         …        …      …    …  …
  WY, USA   Dec, 04  Black  F    …  No
  Model h(X, Z(D)), e.g., a decision tree, is built on a cube subset Z(D); the Time and Location hierarchies are as before.
  • 106. Example (3/7): Decision Analysis • Are there branches (and time windows) where approvals were closely tied to sensitive attributes (e.g., race)? • Suppose you partitioned the training data by location and time, chose the partition for a given branch and time window, and built a classifier. You could then ask, “Are the predictions of this classifier closely correlated with race?” • Are there branches and times with decision making reminiscent of 1950s Alabama? • Requires comparison of classifiers trained using different subsets of data. 120
  • 107. Example (4/7): Prediction Cubes 121
  Model h(X, [USA, Dec 04](D)), e.g., a decision tree:
  1. Build a model using the data from USA in Dec., 2004 (the cube subset [USA, Dec 04](D))
  2. Evaluate that model
  Measure in a cell: • Accuracy of the model • Predictiveness of Race measured based on that model • Similarity between that model and a given model
  [figure: a prediction cube at granularity (Country, Month), with one such cell value per (location, time) pair]
  • 108. Example (5/7): Model-Similarity 122
  Given: a data table D (Location, Time, Race, Sex, …, Approval), a target model h0(X), and a test set Δ without labels (Race, Sex, …).
  For each cell at level [Country, Month]: build a model on that cell's data and measure its similarity to h0 on Δ.
  Interpretation of a high cell value, e.g., for (USA, Dec 04): the loan decision process in the USA during Dec 04 was similar to a discriminatory decision model h0(X).
  • 109. Example (6/7): Predictiveness 123
  Given: a data table D, attributes of interest V (e.g., Race), and a test set Δ without labels.
  For each cell at level [Country, Month]: build models h(X) and h(X − V) on that cell's data and measure the predictiveness of V, e.g., by how much their predictions differ on Δ.
  Interpretation of a high cell value, e.g., for (USA, Dec 04): Race was an important predictor of the loan approval decision in the USA during Dec 04.
  • 110. Model Accuracy • A probabilistic view of classifiers: A dataset is a random sample from an underlying pdf p*(X, Y), and a classifier h(X; D) = argmax_y p*(Y = y | X = x, D) • i.e., a classifier approximates the pdf by predicting the “most likely” y value • Model Accuracy: • E_{x,y}[ I( h(x; D) = y ) ], where (x, y) is drawn from p*(X, Y | D), and I(θ) = 1 if the statement θ is true; I(θ) = 0, otherwise • In practice, since p* is an unknown distribution, we use a set-aside test set or cross-validation to estimate model accuracy. 124
  • 111. Model Similarity • The prediction similarity between two models, h1(X) and h2(X), on test set Δ is (1/|Δ|) Σ_{x ∈ Δ} I( h1(x) = h2(x) ) • The KL-distance between two models, h1(X) and h2(X), on test set Δ is (1/|Δ|) Σ_{x ∈ Δ} Σ_y p_{h1}(y | x) log( p_{h1}(y | x) / p_{h2}(y | x) )
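  A direct transcription of the two measures into Python; the two "models" below are stand-in probability tables keyed by a single numeric feature, invented purely to make the formulas runnable.

      import math

      def prediction_similarity(h1, h2, test_set):
          # Fraction of test examples on which the two models predict the same class.
          return sum(h1(x) == h2(x) for x in test_set) / len(test_set)

      def kl_distance(p1, p2, test_set, classes):
          # Average KL divergence between the two models' class distributions.
          total = 0.0
          for x in test_set:
              total += sum(p1(y, x) * math.log(p1(y, x) / p2(y, x)) for y in classes)
          return total / len(test_set)

      # Illustrative stand-ins: models that score class "yes" by a threshold on x.
      p_a = lambda y, x: (0.8 if x > 5 else 0.3) if y == "yes" else (0.2 if x > 5 else 0.7)
      p_b = lambda y, x: (0.6 if x > 4 else 0.4) if y == "yes" else (0.4 if x > 4 else 0.6)
      h_a = lambda x: max(["yes", "no"], key=lambda y: p_a(y, x))
      h_b = lambda x: max(["yes", "no"], key=lambda y: p_b(y, x))

      test = [1, 3, 4.5, 6, 9]
      print(prediction_similarity(h_a, h_b, test))
      print(kl_distance(p_a, p_b, test, ["yes", "no"]))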
  • 112. Attribute Predictiveness • Intuition: V ⊆ X is not predictive if and only if V is independent of Y given the other attributes X − V; i.e., p*(Y | X − V, D) = p*(Y | X, D) • In practice, we can use the distance between h(X; D) and h(X − V; D) • Alternative approach: Test if h(X; D) is more accurate than h(X − V; D) (e.g., by using cross-validation to estimate the two model accuracies involved) 126
  • 113. Example (7/7): Prediction Cube 127
  Cell value: predictiveness of Race.
  [figure: the prediction cube at level (Country, Month); rolling up aggregates to (Country, Year), drilling down expands to (State, Month)]
  • 114. Efficient Computation • Reduce prediction cube computation to data cube computation • Represent a data-mining model as a distributive or algebraic (bottom-up computable) aggregate function, so that data-cube techniques can be directly applied 128
  • 115. Bottom-Up Data Cube Computation 129
  Cell values: numbers of loan applications.
  Base table (Country × Year):
            1985  1986  1987  1988
  Norway    10    30    20    24
  …         23    45    14    32
  USA       14    32    42    11
  Roll-ups computed bottom-up from the base cells:
  By year (All countries): 1985: 47, 1986: 107, 1987: 76, 1988: 67
  By country (All years): Norway: 84, …: 114, USA: 99
  Grand total (All, All): 297
  • 116. Scoring Function • Represent a model as a function of sets • Conceptually, a machine-learning model h(X; Z(D)) is a scoring function Score(y, x; Z(D)) that gives each class y a score on test example x • h(x; Z(D)) = argmax_y Score(y, x; Z(D)) • Score(y, x; Z(D)) ≈ p(y | x, Z(D)) • Z(D): The set of training examples (a cube subset of D) 130
  • 117. Machine-Learning Models • Naïve Bayes: • Scoring function: algebraic • Kernel-density-based classifier: • Scoring function: distributive • Decision tree, random forest: • Neither distributive, nor algebraic • PBE: Probability-based ensemble (new) • To make any machine-learning model distributive • Approximation 131
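  A minimal sketch of why naive-Bayes-style scoring fits the data-cube framework: its sufficient statistics are just class counts and (attribute value, class) counts, and the counts of a coarser cell are the sums of its children's counts, exactly like a distributive data-cube aggregate. The toy cells and names are illustrative, not the paper's PBE construction.

      from collections import Counter

      def nb_stats(records):
          # Sufficient statistics for naive Bayes: class counts and (attribute, value, class) counts.
          class_counts, av_counts = Counter(), Counter()
          for attrs, y in records:
              class_counts[y] += 1
              for a, v in attrs.items():
                  av_counts[(a, v, y)] += 1
          return class_counts, av_counts

      def merge(stats1, stats2):
          # Bottom-up roll-up: the coarser cell's statistics are the sums of its children's.
          return stats1[0] + stats2[0], stats1[1] + stats2[1]

      cell_jan = nb_stats([({"Race": "White"}, "Yes"), ({"Race": "Black"}, "No")])
      cell_feb = nb_stats([({"Race": "White"}, "Yes"), ({"Race": "White"}, "No")])
      quarter = merge(cell_jan, cell_feb)    # statistics for the rolled-up quarter cell
      print(quarter[0])                      # Counter({'Yes': 2, 'No': 2})

  Models whose scores cannot be rebuilt from such mergeable statistics (e.g., decision trees) are the ones that need the PBE approximation mentioned above.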
  • 118. Efficiency Comparison 132 [chart: execution time (sec) vs. number of records (40K–200K); exhaustive methods (RFex, KDCex, NBex, J48ex) vs. bottom-up score computation (NB, KDC, RF-PBE, J48-PBE)]
  • 119. Bellwether Analysis: Global Aggregates from Local Regions with Beechun Chen, Jude Shavlik, and Pradeep Tamma In VLDB 06 133
  • 120. Motivating Example • A company wants to predict the first-year worldwide profit of a new item (e.g., a new movie) • By looking at the features and profits of previous (similar) movies, we predict the expected total profit (1-year US sales) for the new movie • Wait a year and write a query! If you can’t wait, stay awake … • The most predictive “features” may be based on sales data gathered by releasing the new movie in many “regions” (different locations over different time periods). • Example “region-based” features: 1st-week sales in Peoria, week-to-week sales growth in Wisconsin, etc. • Gathering this data has a cost (e.g., marketing expenses, waiting time) • Problem statement: Find the most predictive region features that can be obtained within a given “cost budget” 134
  • 121. Key Ideas • Large datasets are rarely labeled with the targets that we wish to learn to predict • But for the tasks we address, we can readily use OLAP queries to generate features (e.g., 1st week sales in Peoria) and even targets (e.g., profit) for mining • We use data-mining models as building blocks in the mining process, rather than thinking of them as the end result • The central problem is to find data subsets (“bellwether regions”) that lead to predictive features which can be gathered at low cost for a new case 135
  • 122. Motivating Example • A company wants to predict the first year’s worldwide profit for a new item, by using its historical database • Database Schema: 136
  Profit Table(Time, Location, CustID, ItemID, Profit)
  Item Table(ItemID, Category, R&amp;D Expense)
  Ad Table(Time, Location, ItemID, AdExpense, AdSize)
  • The combination of the underlined attributes forms a key
  • 123. A Straightforward Approach • Build a regression model to predict item profit • There is much room for accuracy improvement! 137
  By joining and aggregating tables in the historical database we can create a training set (item-table features plus the target):
  ItemID  Category  R&amp;D Expense  Profit
  1       Laptop    500K         12,000K
  2       Desktop   100K         8,000K
  …       …         …            …
  An example regression model: Profit = β0 + β1·Laptop + β2·Desktop + β3·RdExpense
  • 124. Using Regional Features • Example region: [1st week, HK] • Regional features: • Regional Profit: The 1st-week profit in HK • Regional Ad Expense: The 1st-week ad expense in HK • A possibly more accurate model: Profit[1yr, All] = β0 + β1·Laptop + β2·Desktop + β3·RdExpense + β4·Profit[1wk, HK] + β5·AdExpense[1wk, HK] • Problem: Which region should we use? • The smallest region that improves the accuracy the most • We give each candidate region a cost • The most “cost-effective” region is the bellwether region 138
  • 125. Basic Bellwether Problem • Historical database: DB • Training item set: I • Candidate region set: R • E.g., { [1-n week, Location] } • Target generation query: returns the target value of item i ∈ I • E.g., sum(Profit) over ProfitTable restricted to item i and region [1-52, All] • Feature generation query: returns the feature vector of item i in region r, for i ∈ Ir and r ∈ R • Ir: The set of items in region r • E.g., [ Category_i, RdExpense_i, Profit_{i, [1-n, Loc]}, AdExpense_{i, [1-n, Loc]} ] • Cost query: the cost of collecting data from region r ∈ R • Predictive model: hr(x), r ∈ R, trained on the (feature, target) pairs of the items in Ir • E.g., linear regression model 139 [figure: Location domain hierarchy, ALL → Country (CA, US, KR) → State (AL, …, WI)]
  • 126. Basic Bellwether Problem 140
  For each item i and region r (e.g., r = [1-2, USA]): aggregate over the data records in r to obtain the features of i (e.g., ItemID i, Category Desktop, …, Profit[1-2, USA] 45K, …), and compute the target of i (e.g., total Profit 2,000K over [1-52, All]).
  For each region r, build a predictive model hr(x); then choose as the bellwether region the r that minimizes Error(hr), subject to: • Coverage(r), the fraction of all items in the region, ≥ a minimum coverage support • Cost(r, DB) ≤ a cost threshold
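  Putting the pieces together, the basic search enumerates candidate regions, drops those violating the coverage or cost constraints, and keeps the one whose model has the lowest error. A schematic sketch; the regions, costs, coverages, and errors below are made-up numbers standing in for the cost query, coverage computation, and per-region model training.

      # Toy candidate regions with pre-computed cost, coverage, and model error.
      regions = {
          ("1-2 wk", "WI"): {"cost": 10, "coverage": 0.6, "error": 4200.0},
          ("1-4 wk", "MD"): {"cost": 25, "coverage": 0.8, "error": 2500.0},
          ("1-8 wk", "MD"): {"cost": 60, "coverage": 0.9, "error": 1800.0},
      }

      def bellwether(regions, budget, min_coverage):
          feasible = {r: v for r, v in regions.items()
                      if v["cost"] <= budget and v["coverage"] >= min_coverage}
          # Among regions satisfying the constraints, pick the one with minimum error.
          return min(feasible, key=lambda r: feasible[r]["error"]) if feasible else None

      print(bellwether(regions, budget=30, min_coverage=0.7))   # ('1-4 wk', 'MD')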
  • 127. Experiment on a Mail Order Dataset 141
  Error-vs-Budget Plot (RMSE: Root Mean Square Error) [chart: RMSE vs. budget (5–85); the bellwether region found is [1-8 month, MD]]
  • Bel Err: The error of the bellwether region found using a given budget
  • Avg Err: The average error of all the cube regions with costs under a given budget
  • Smp Err: The error of a set of randomly sampled (non-cube) regions with costs under a given budget
  • 128. Experiment on a Mail Order Dataset 142
  Uniqueness Plot [chart: fraction of indistinguishable regions vs. budget (5–85)]
  • Y-axis: Fraction of regions that are as good as the bellwether region, i.e., the fraction of regions that satisfy the constraints and have errors within the 99% confidence interval of the error of the bellwether region
  • We have 99% confidence that [1-8 month, MD] is a quite unusual bellwether region
  • 129. Subset-Based Bellwether Prediction • Motivation: Different subsets of items may have different bellwether regions • E.g., The bellwether region for laptops may be different from the bellwether region for clothes • Two approaches: 143
  Bellwether Tree: internal nodes split on item features (e.g., R&amp;D Expense ≤ 50K?, Category = Desktop/Laptop), and each leaf stores the bellwether region for that subset of items (e.g., [1-2, WI], [1-3, MD], [1-1, NY])
  Bellwether Cube: a cube over item features (e.g., Category × R&amp;D Expense level Low/Medium/High), where each cell stores the bellwether region for that subset (e.g., (Software/OS, Low) → [1-3, CA]; (Hardware/Laptop, Low) → [1-4, MD]; …)
  • 130. TECS 2007 R. Ramakrishnan, Yahoo! Research Conclusions
  • 131. Related Work: Building Models on OLAP Results • Multi-dimensional regression [Chen, VLDB 02] • Goal: Detect changes of trends • Build linear regression models for cube cells • Step-by-step regression in stream cubes [Liu, PAKDD 03] • Loglinear-based quasi cubes [Barbara, J. IIS 01] • Use loglinear model to approximately compress dense regions of a data cube • NetCube [Margaritis, VLDB 01] • Build a Bayes Net on the entire dataset to approximately answer count queries 145
  • 132. Related Work (Contd.) • Cubegrades [Imielinski, J. DMKD 02] • Extend cubes with ideas from association rules • How does the measure change when we rollup or drill down? • Constrained gradients [Dong, VLDB 01] • Find pairs of similar cell characteristics associated with big changes in measure • User-cognizant multidimensional analysis [Sarawagi, VLDBJ 01] • Help users find the most informative unvisited regions in a data cube using max entropy principle • Multi-Structural DBs [Fagin et al., PODS 05, VLDB 05] 146
  • 133. Take-Home Messages • Promising exploratory data analysis paradigm: • Can use models to identify interesting subsets • Concentrate only on subsets in cube space • Those are meaningful subsets, tractable • Precompute results and provide the users with an interactive tool • A simple way to plug “something” into cube-style analysis: • Try to describe/approximate “something” by a distributive or algebraic function 147
  • 134. Big Picture • Why stop with decision behavior? Can apply to other kinds of analyses too • Why stop at browsing? Can mine prediction cubes in their own right • Exploratory analysis of mining space: • Dimension attributes can be parameters related to algorithm, data conditioning, etc. • Tractable evaluation is a challenge: • Large number of “dimensions”, real-valued dimension attributes, difficulties in compositional evaluation • Active learning for experiment design, extending compositional methods 148