An Introduction to Data Mining
ASHISH KUMAR THAKUR
16951A0522
Why Data Mining
 Credit ratings/targeted marketing:
Given a database of 100,000 names, which persons are the
least likely to default on their credit cards?
Identify likely responders to sales promotions
 Fraud detection
Which types of transactions are likely to be fraudulent, given the
demographics and transactional history of a particular
customer?
 Customer relationship management:
Which of my customers are likely to be the most loyal, and
which are most likely to leave for a competitor?
Data Mining helps extract such
information
Data mining
Process of semi-automatically analyzing
large databases to find patterns that are:
valid: hold on new data with some certainty
novel: non-obvious to the system
useful: should be possible to act on the item
understandable: humans should be able to
interpret the pattern
Also known as Knowledge Discovery in
Databases (KDD)
Applications
Banking: loan/credit card approval
predict good customers based on old customers
Customer relationship management:
identify those who are likely to leave for a competitor.
Targeted marketing:
identify likely responders to promotions
Fraud detection: telecommunications, financial
transactions
from an online stream of events, identify fraudulent events
Manufacturing and production:
automatically adjust knobs when process parameters change
Applications (continued)
Medicine: disease outcome, effectiveness of
treatments
analyze patient disease history: find relationship
between diseases
Molecular/Pharmaceutical: identify new drugs
Scientific data analysis:
identify new galaxies by searching for sub clusters
Web site/store design and promotion:
find affinity of visitor to pages and modify layout
The KDD process
 Problem formulation
 Data collection
subset data: sampling might hurt if highly skewed data
feature selection: principal component analysis, heuristic
search
 Pre-processing: cleaning
name/address cleaning, different meanings (annual, yearly),
duplicate removal, supplying missing values
 Transformation:
map complex objects, e.g. time-series data, to
features, e.g. frequency
 Choosing mining task and mining method:
 Result evaluation and Visualization:
Knowledge discovery is an iterative process
Relationship with other
fields
Overlaps with machine learning, statistics,
artificial intelligence, databases, visualization
but places more stress on:
scalability to large numbers of features and instances
algorithms and architectures (statistics and machine
learning provide the foundations of the methods and
formulations)
automation for handling large, heterogeneous data
Some basic operations
Predictive:
Regression
Classification
Collaborative Filtering
Descriptive:
Clustering / similarity matching
Association rules and variants
Deviation detection
Classification
(Supervised learning)
Classification
Given old data about customers and
payments, predict new applicant’s loan
eligibility.
[Diagram: attributes of previous customers (Age, Salary, Profession, Location, Customer type) feed a Classifier, which produces decision rules such as "Salary > 5 L, Prof. = Exec"; applying the rules to a new applicant's data yields Good/Bad]
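A decision rule like the one above can be read directly as code. A toy sketch; combining the two tests with AND, and reading "5 L" as 500,000 rupees, are assumptions:

```python
def loan_eligibility(salary, profession):
    """Toy decision rule of the kind a classifier might mine from
    previous customers: high-salary executives are good risks."""
    if salary > 500_000 and profession == "Exec":
        return "good"
    return "bad"

print(loan_eligibility(600_000, "Exec"))   # good
print(loan_eligibility(600_000, "Clerk"))  # bad
```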
Classification methods
Goal: predict class C = f(x1, x2, ..., xn)
Regression: (linear or any other polynomial)
a*x1 + b*x2 + c = C
Nearest neighbor
Decision tree classifier: divide decision space
into piecewise constant regions.
Probabilistic/generative models
Neural networks: partition by non-linear
boundaries
Nearest neighbor
Define proximity between instances, find
neighbors of new instance and assign
majority class
Case based reasoning: when attributes
are more complicated than real-valued.
• Cons
– Slow during application.
– No feature selection.
– Notion of proximity vague
• Pros
+ Fast training
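A minimal nearest-neighbor classifier along these lines; using Euclidean proximity, k = 3, and toy 2-D instances are assumptions:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training instances (squared-Euclidean proximity)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((1, 1), "good"), ((1, 2), "good"), ((2, 1), "good"),
         ((8, 9), "bad"), ((9, 8), "bad")]
print(knn_predict(train, (1.5, 1.5)))  # good
```

Note the pros/cons above in action: there is no training phase at all, but every prediction scans and sorts the full training set.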
Tree where internal nodes
are simple decision rules on
one or more attributes and
leaf nodes are predicted
class labels.
Decision trees
[Figure: example tree with root test Salary < 1 M, internal tests Prof = teacher and Age < 30, and leaves labeled Good/Bad]
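Such a tree is just nested if-then-else rules. A hypothetical sketch using the node labels from the example figure; the exact branch structure is assumed, since only the labels survive:

```python
def classify(salary, profession, age):
    """Each root-to-leaf path of the example tree is one
    if-then-else rule over a piecewise-constant region."""
    if salary < 1_000_000:
        if profession == "teacher":
            return "Bad"
        return "Good"
    if age < 30:
        return "Bad"
    return "Good"

print(classify(500_000, "teacher", 40))  # Bad
```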
Decision tree classifiers
Widely used learning method
Easy to interpret: can be re-represented as if-
then-else rules
Approximates function by piecewise constant
regions
Does not require any prior knowledge of data
distribution, works well on noisy data.
Has been applied to:
classify medical patients based on the disease,
 equipment malfunction by cause,
loan applicant by likelihood of payment.
Pros and Cons of decision
trees
· Cons
- Cannot handle complicated
relationship between features
- simple decision boundaries
- problems with lots of missing
data
· Pros
+ Reasonable training
time
+ Fast application
+ Easy to interpret
+ Easy to implement
+ Can handle large
number of features
More information:
http://www.stat.wisc.edu/~limt/treeprogs.html
Neural network
Set of nodes connected by directed
weighted edges
[Figure: a basic NN unit with inputs x1, x2, x3 and weights w1, w2, w3 feeding a single output; and a more typical NN with input, hidden, and output node layers]
The basic unit computes o = 1 / (1 + e^(-y)), where y = Σ_{i=1..n} wi xi
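The basic unit, a weighted sum passed through the sigmoid squashing function, can be sketched as (the input and weight values are made up):

```python
import math

def nn_unit(xs, ws):
    """Basic NN unit: weighted sum y = sum(w_i * x_i),
    squashed by the sigmoid o = 1 / (1 + e^(-y))."""
    y = sum(w * x for w, x in zip(ws, xs))
    return 1.0 / (1.0 + math.exp(-y))

print(nn_unit([1.0, 0.5, -1.0], [0.2, 0.4, 0.1]))  # ≈ 0.574
```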
Neural networks
Useful for learning complex data like
handwriting, speech and image
recognition
Decision boundaries:
[Figure: boundaries learned by linear regression, a classification tree, and a neural network]
Pros and Cons of Neural
Network
· Cons
- Slow training time
- Hard to interpret
- Hard to implement: trial
and error for choosing
number of nodes
· Pros
+ Can learn more complicated
class boundaries
+ Fast application
+ Can handle large number of
features
Conclusion: use neural nets only if
decision trees / nearest neighbor fail.
Bayesian learning
Assume a probability model on generation of data.
Apply Bayes' theorem to find the most likely class as:
Naïve Bayes: assume attributes conditionally
independent given the class value
Easy to learn probabilities by counting,
useful in some domains e.g. text
predicted class: c = argmax_{cj} p(cj | d) = argmax_{cj} p(d | cj) p(cj) / p(d)
With the independence assumption:
c = argmax_{cj} (p(cj) / p(d)) Π_{i=1..n} p(ai | cj)
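Learning the probabilities by counting, then predicting with the naive product, can be sketched on a toy dataset; the weather-style examples are invented, and zero counts are not smoothed:

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """Learn p(c) and p(a_i | c) by counting; return a predictor
    that picks argmax over classes of p(c) * prod p(a_i | c)."""
    class_counts = Counter(c for _, c in examples)
    attr_counts = defaultdict(Counter)  # (position, class) -> value counts
    for attrs, c in examples:
        for i, a in enumerate(attrs):
            attr_counts[(i, c)][a] += 1
    n = len(examples)

    def predict(attrs):
        best, best_p = None, -1.0
        for c, cc in class_counts.items():
            p = cc / n  # p(c); p(d) is constant, so it is dropped
            for i, a in enumerate(attrs):
                p *= attr_counts[(i, c)][a] / cc  # p(a_i | c)
            if p > best_p:
                best, best_p = c, p
        return best

    return predict

examples = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
            (("rain", "mild"), "yes"), (("rain", "hot"), "yes")]
predict = train_nb(examples)
print(predict(("rain", "mild")))  # yes
```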
Clustering or
Unsupervised Learning
Clustering
Unsupervised learning when old data with class
labels not available e.g. when introducing a new
product.
Group/cluster existing customers based on time
series of payment history such that similar
customers fall in the same cluster.
Key requirement: Need a good measure of
similarity between instances.
Identify micro-markets and develop policies for
each
Applications
Customer segmentation e.g. for targeted
marketing
Group/cluster existing customers based on time
series of payment history such that similar customers
fall in the same cluster.
Identify micro-markets and develop policies for each
Collaborative filtering:
group based on common items purchased
Text clustering
Compression
Distance functions
Numeric data: Euclidean, Manhattan distances
Categorical data: 0/1 to indicate
presence/absence, followed by
Hamming distance (# of dissimilarities)
Jaccard coefficient: # of common 1s / (# of positions with at least one 1)
Data-dependent measures: similarity of A and B
depends on co-occurrence with C
Combined numeric and categorical data:
weighted normalized distance
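The two 0/1-vector measures can be sketched as:

```python
def hamming(a, b):
    """# of positions where the 0/1 vectors disagree (dissimilarity)."""
    return sum(x != y for x, y in zip(a, b))

def jaccard(a, b):
    """Shared 1s over positions where at least one vector has a 1."""
    both = sum(x and y for x, y in zip(a, b))
    either = sum(x or y for x, y in zip(a, b))
    return both / either if either else 0.0

a, b = [1, 0, 1, 1, 0], [1, 1, 0, 1, 0]
print(hamming(a, b))  # 2
print(jaccard(a, b))  # 0.5
```

Jaccard ignores positions where both vectors are 0, which matters for sparse data such as market baskets; Hamming counts those as agreement.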
Clustering methods
Hierarchical clustering
agglomerative vs divisive
single link vs complete link
Partitional clustering
distance-based: K-means
model-based: EM
density-based:
Partitional methods: K-means
Criterion: minimize the sum of squared distances
between each point and the centroid of its cluster
(or between each pair of points in the cluster)
Algorithm:
Select initial partition with K clusters:
random, first K, K separated points
Repeat until stabilization:
Assign each point to closest cluster center
Generate new cluster centers
Adjust clusters by merging/splitting
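The algorithm above as a minimal sketch, using random initial centers (the first of the three initialization options) and toy 2-D points:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Partitional clustering: repeatedly assign each point to its
    closest center, then recompute each center as its cluster's mean."""
    random.seed(seed)
    centers = random.sample(points, k)  # random initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d2.index(min(d2))].append(p)
        for j, cl in enumerate(clusters):  # update step
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

pts = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 8), (8, 9)]
print(sorted(kmeans(pts, 2)))
```

"Repeat until stabilization" is approximated here with a fixed iteration count; a fuller version would stop when assignments no longer change, and could add the merge/split adjustment mentioned above.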
Collaborative Filtering
Given database of user preferences, predict
preference of new user
Example: predict what new movies you will like
based on
your past preferences
others with similar past preferences
their preferences for the new movies
Example: predict what books/CDs a person may
want to buy
(and suggest it, or give discounts to tempt
customer)
Collaborative
recommendation
[Table: user × movie ratings; users Smita, Vijay, Mohan, Rajesh, Nina, Nitin; movies Rangeela, QSQT, 100 days, Anand, Sholay, Deewar, Vertigo; all of Nitin's ratings are unknown (?)]
• Possible approaches:
• Average vote along columns [same prediction for all]
• Weight vote based on similarity of likings [GroupLens]
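The similarity-weighted vote can be sketched with made-up ±1 ratings, since the actual table values are not recoverable from the slide; the agreement measure used here (mean product of ratings on commonly rated movies) is an illustrative assumption, not the exact GroupLens formula:

```python
def predict(ratings, target, movie):
    """Predict `target`'s rating of `movie` by weighting each other
    user's rating by their agreement with `target` on common movies."""
    score, norm = 0.0, 0.0
    for user, r in ratings.items():
        if user == target or movie not in r:
            continue
        common = [m for m in r if m in ratings[target]]
        if not common:
            continue
        sim = sum(r[m] * ratings[target][m] for m in common) / len(common)
        score += sim * r[movie]
        norm += abs(sim)
    return score / norm if norm else 0.0

ratings = {
    "Vijay": {"Sholay": 1, "Deewar": 1, "Vertigo": -1},
    "Smita": {"Sholay": -1, "Deewar": -1, "Vertigo": 1},
    "Nitin": {"Sholay": 1, "Deewar": 1},  # Vertigo unknown
}
print(predict(ratings, "Nitin", "Vertigo"))  # negative: Nitin agrees with Vijay
```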
Cluster-based approaches
External attributes of people and movies to
cluster
age, gender of people
actors and directors of movies.
[ May not be available]
Cluster people based on movie preferences
misses information about similarity of movies
Repeated clustering:
cluster movies based on people, then people based on
movies, and repeat
ad hoc, might smear out groups
Example of clustering
[Table: the raw user × movie matrix; users Smita, Vijay, Mohan, Rajesh, Nina, Nitin; movies Rangeela, QSQT, 100 days, Anand, Sholay, Deewar, Vertigo; Nitin's row unknown (?)]
[Table: the same matrix with movies reordered as Anand, QSQT, Rangeela, 100 days, Vertigo, Deewar, Sholay and users reordered as Vijay, Rajesh, Mohan, Nina, Smita, Nitin, so that similar users and similar movies form visible blocks]
Model-based approach
People and movies belong to unknown classes
Pk = probability a random person is in class k
Pl = probability a random movie is in class l
Pkl = probability of a class-k person liking a
class-l movie
Gibbs sampling: iterate
Pick a person or movie at random and assign to a
class with probability proportional to Pk or Pl
Estimate new parameters
Need statistics background to understand details
Association Rules
Association rules
Given set T of groups of items
Example: set of item sets purchased
Goal: find all rules on itemsets of the
form a-->b such that
 support of a and b > user threshold s
conditional probability (confidence) of b
given a > user threshold c
Example: Milk --> bread
Purchase of product A --> service B
Example transaction set T:
{milk, cereal}
{tea, milk}
{tea, rice, bread}
{cereal}
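Support and confidence can be computed directly over the example transaction set:

```python
# The four example transactions from the slide.
T = [{"milk", "cereal"}, {"tea", "milk"}, {"tea", "rice", "bread"}, {"cereal"}]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in T) / len(T)

def confidence(a, b):
    """Conditional probability of b given a: support(a ∪ b) / support(a)."""
    return support(a | b) / support(a)

print(support({"milk"}))                 # 0.5 (2 of 4 transactions)
print(confidence({"milk"}, {"cereal"}))  # 0.5
```

A rule a --> b is reported only when support({a, b}) exceeds the user threshold s and confidence(a, b) exceeds c.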
Variants
High confidence may not imply high
correlation
Use correlations: find the expected support,
and flag large departures from it as
interesting.
See statistical literature on contingency tables.
Still too many rules; need to prune...
Prevalent ≠ Interesting
Analysts already
know about prevalent
rules
Interesting rules are
those that deviate
from prior
expectation
Mining’s payoff is in
finding surprising
phenomena
[Cartoon: in 1995 an analyst is excited that "Milk and cereal sell together!"; by 1998 the same rule draws only "Zzzz..."]
What makes a rule
surprising?
Does not match prior
expectation
Correlation between
milk and cereal remains
roughly constant over
time
Cannot be trivially
derived from simpler
rules
Milk 10%, cereal 10%;
milk and cereal together 10% (expected 1%) … surprising
Eggs 10%;
milk, cereal and eggs together only 0.1% (expected 1%) … surprising!
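The arithmetic behind these examples: compare the observed support of a combination with the support expected if its parts occurred independently (the product of their supports):

```python
def surprise(observed, *part_supports):
    """Ratio of observed support to the support expected under
    independence, i.e. the product of the parts' supports."""
    expected = 1.0
    for s in part_supports:
        expected *= s
    return observed / expected

# Milk 10%, cereal 10%: expected together 1%, observed 10%.
print(surprise(0.10, 0.10, 0.10))   # ≈ 10x expected: surprising
# Milk-and-cereal 10%, eggs 10%: expected triple 1%, observed 0.1%.
print(surprise(0.001, 0.10, 0.10))  # ≈ 0.1x expected: also surprising
```

Departures far above or far below 1.0 are both interesting; a ratio near 1.0 means the rule is trivially derivable from its parts.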
Applications of fast
itemset counting
Find correlated events:
Applications in medicine: find redundant
tests
Cross selling in retail, banking
Improve predictive capability of classifiers
that assume attribute independence
 New similarity measures of categorical
attributes [Mannila et al, KDD 98]
Data Mining in Practice
Application Areas
Industry                 Application
Finance                  Credit card analysis
Insurance                Claims, fraud analysis
Telecommunication        Call record analysis
Transport                Logistics management
Consumer goods           Promotion analysis
Data service providers   Value-added data
Utilities                Power usage analysis
Why Now?
Data is being produced
Data is being warehoused
The computing power is available
The computing power is affordable
The competitive pressures are strong
Commercial products are available
Data Mining works with
Warehouse Data
Data warehousing provides the
enterprise with a memory
Data mining provides the
enterprise with intelligence
Usage scenarios
Data warehouse mining:
assimilate data from operational sources
mine static data
Mining log data
Continuous mining: example in process control
Stages in mining:
data selection → pre-processing: cleaning
→ transformation → mining → result
evaluation → visualization
Mining market
Around 20 to 30 mining tool vendors
Major tool players:
Clementine,
IBM’s Intelligent Miner,
SGI’s MineSet,
SAS’s Enterprise Miner.
All pretty much the same set of tools
Many embedded products:
fraud detection:
electronic commerce applications,
health care,
customer relationship management: Epiphany
Vertical integration:
Mining on the web
Web log analysis for site design:
what are popular pages,
what links are hard to find.
Electronic stores sales enhancements:
recommendations, advertisement:
Collaborative filtering: Net perception, Wisewire
Inventory control: what was a shopper
looking for and could not find?
OLAP Mining integration
OLAP (On Line Analytical Processing)
Fast interactive exploration of multidim.
aggregates.
Heavy reliance on manual operations for
analysis:
Tedious and error-prone on large
multidimensional data
Ideal platform for vertical integration of mining
but needs to be interactive instead of batch.
State of art in mining OLAP
integration
Decision trees [Information discovery, Cognos]
find factors influencing high profits
Clustering [Pilot software]
segment customers to define hierarchy on that dimension
Time series analysis [Seagate's Holos]
Query for various shapes along time, e.g. spikes, outliers
Multi-level Associations [Han et al.]
find association between members of dimensions
Sarawagi [VLDB2000]
Data Mining in Use
The US Government uses Data Mining to track
fraud
A Supermarket becomes an information broker
Basketball teams use it to track game strategy
Cross Selling
Target Marketing
Holding on to Good Customers
Weeding out Bad Customers
Some success stories
 Network intrusion detection using a combination of
sequential rule discovery and classification tree on 4 GB
DARPA data
Won over (manual) knowledge engineering approach
http://www.cs.columbia.edu/~sal/JAM/PROJECT/ provides good
detailed description of the entire process
 Major US bank: customer attrition prediction
First segment customers based on financial behavior: found 3
segments
Build attrition models for each of the 3 segments
40-50% of attritions were predicted == factor of 18 increase
 Targeted credit marketing: major US banks
find customer segments based on 13 months of credit balances
build another response model based on surveys
increased response rate 4 times, to 2%

More Related Content

PPT
Datamining intro-iep
aaryarun1999
 
PPTX
PhD Consortium ADBIS presetation.
Giuseppe Ricci
 
PPTX
PhD defense
Giuseppe Ricci
 
PDF
Tutorial Knowledge Discovery
SSSW
 
PPTX
The 8 Step Data Mining Process
Marc Berman
 
PDF
Machine Learning - Algorithms and simple business cases
Claudio Mirti
 
PDF
Predictive modeling
Prashant Mudgal
 
PPTX
01 Introduction to Data Mining
Valerii Klymchuk
 
Datamining intro-iep
aaryarun1999
 
PhD Consortium ADBIS presetation.
Giuseppe Ricci
 
PhD defense
Giuseppe Ricci
 
Tutorial Knowledge Discovery
SSSW
 
The 8 Step Data Mining Process
Marc Berman
 
Machine Learning - Algorithms and simple business cases
Claudio Mirti
 
Predictive modeling
Prashant Mudgal
 
01 Introduction to Data Mining
Valerii Klymchuk
 

What's hot (17)

PDF
Introduction to Data Mining
Kai Koenig
 
PPT
Data mining-2
Arun Verma
 
DOC
Figure 1
butest
 
PPTX
Delayed Rewards in the context of Reinforcement Learning based Recommender ...
Debmalya Biswas
 
PDF
Applications: Prediction
NBER
 
PPTX
Deep Reinforcement Learning based Recommendation with Explicit User-ItemInter...
Kishor Datta Gupta
 
DOC
BW article on professional respondents 2-23 (1)
Brett Watkins
 
PDF
The Use of Genetic Algorithm, Clustering and Feature Selection Techniques in ...
IJMIT JOURNAL
 
PDF
A Novel Hybrid Classification Approach for Sentiment Analysis of Text Document
IJECEIAES
 
PDF
USING NLP APPROACH FOR ANALYZING CUSTOMER REVIEWS
csandit
 
PDF
Applying Convolutional-GRU for Term Deposit Likelihood Prediction
VandanaSharma356
 
PDF
Controlling informative features for improved accuracy and faster predictions...
Damian R. Mingle, MBA
 
PPT
PhD defense - Exploiting distributional semantics for content-based and conte...
Victor Codina
 
PPT
Malhotra04
Uzair Javed Siddiqui
 
PDF
Distributed Representation-based Recommender Systems in E-commerce
Rakuten Group, Inc.
 
PPTX
Stock prediction using social network
Chanon Hongsirikulkit
 
PDF
Intent-Aware Temporal Query Modeling for Keyword Suggestion
Findwise
 
Introduction to Data Mining
Kai Koenig
 
Data mining-2
Arun Verma
 
Figure 1
butest
 
Delayed Rewards in the context of Reinforcement Learning based Recommender ...
Debmalya Biswas
 
Applications: Prediction
NBER
 
Deep Reinforcement Learning based Recommendation with Explicit User-ItemInter...
Kishor Datta Gupta
 
BW article on professional respondents 2-23 (1)
Brett Watkins
 
The Use of Genetic Algorithm, Clustering and Feature Selection Techniques in ...
IJMIT JOURNAL
 
A Novel Hybrid Classification Approach for Sentiment Analysis of Text Document
IJECEIAES
 
USING NLP APPROACH FOR ANALYZING CUSTOMER REVIEWS
csandit
 
Applying Convolutional-GRU for Term Deposit Likelihood Prediction
VandanaSharma356
 
Controlling informative features for improved accuracy and faster predictions...
Damian R. Mingle, MBA
 
PhD defense - Exploiting distributional semantics for content-based and conte...
Victor Codina
 
Distributed Representation-based Recommender Systems in E-commerce
Rakuten Group, Inc.
 
Stock prediction using social network
Chanon Hongsirikulkit
 
Intent-Aware Temporal Query Modeling for Keyword Suggestion
Findwise
 
Ad

Similar to Dwd mdatamining intro-iep (20)

PPT
Dwdm ppt for the btech student contain basis
nivatripathy93
 
PPTX
1. Introduction to Data Mining (12).pptx
Kiran119578
 
PPT
Data mining
pradeepa n
 
PPT
Cluster2
work
 
PDF
Data Mining algorithms PPT with Overview explanation.
promptitude123456789
 
PPTX
Customer Profiling using Data Mining
Suman Chatterjee
 
DOCX
Imtiaz khan data_science_analytics
imtiaz khan
 
PPT
Data Mining
shrapb
 
PPT
Lecture1
sumit621
 
PPT
1328cvkdlgkdgjfdkjgjdfgdfkgdflgkgdfglkjgld8679 - Copy.ppt
JITENDER773791
 
PPTX
5. Machine Learning.pptx
ssuser6654de1
 
PPT
Machine-Learning-Algorithms- A Overview.ppt
Prabu P
 
PPT
Machine-Learning-Algorithms- A Overview.ppt
Anusha10399
 
PDF
Chapter 4 Classification in data sience .pdf
AschalewAyele2
 
PPTX
Solving churn challenge in Big Data environment - Jelena Pekez
Institute of Contemporary Sciences
 
PPTX
Machine Learning in e commerce - Reboot
Marion DE SOUSA
 
PPTX
Data Mining
SHIKHA GAUTAM
 
PDF
BI Chapter 04.pdf business business business business
JawaherAlbaddawi
 
PPTX
Scikit - Algorithms98888888888888888888888.pptx
zxi09062025
 
PPTX
Introduction to data mining
Datamining Tools
 
Dwdm ppt for the btech student contain basis
nivatripathy93
 
1. Introduction to Data Mining (12).pptx
Kiran119578
 
Data mining
pradeepa n
 
Cluster2
work
 
Data Mining algorithms PPT with Overview explanation.
promptitude123456789
 
Customer Profiling using Data Mining
Suman Chatterjee
 
Imtiaz khan data_science_analytics
imtiaz khan
 
Data Mining
shrapb
 
Lecture1
sumit621
 
1328cvkdlgkdgjfdkjgjdfgdfkgdflgkgdfglkjgld8679 - Copy.ppt
JITENDER773791
 
5. Machine Learning.pptx
ssuser6654de1
 
Machine-Learning-Algorithms- A Overview.ppt
Prabu P
 
Machine-Learning-Algorithms- A Overview.ppt
Anusha10399
 
Chapter 4 Classification in data sience .pdf
AschalewAyele2
 
Solving churn challenge in Big Data environment - Jelena Pekez
Institute of Contemporary Sciences
 
Machine Learning in e commerce - Reboot
Marion DE SOUSA
 
Data Mining
SHIKHA GAUTAM
 
BI Chapter 04.pdf business business business business
JawaherAlbaddawi
 
Scikit - Algorithms98888888888888888888888.pptx
zxi09062025
 
Introduction to data mining
Datamining Tools
 
Ad

More from Ashish Kumar Thakur (13)

PPT
No sql databases
Ashish Kumar Thakur
 
PDF
Home automation using bluetooth - Aurdino BASED
Ashish Kumar Thakur
 
PDF
APRIORI Algorithm
Ashish Kumar Thakur
 
PDF
Digital logic degin, Number system
Ashish Kumar Thakur
 
DOC
Traveling salesman problem
Ashish Kumar Thakur
 
PPTX
Cse image processing ppt
Ashish Kumar Thakur
 
PPTX
A survey on artificial neural networks in cyber world
Ashish Kumar Thakur
 
PPTX
An event driven campus navigation system on andriod121
Ashish Kumar Thakur
 
PPTX
Ram ppt
Ashish Kumar Thakur
 
PDF
Number system
Ashish Kumar Thakur
 
PPTX
Data warehousing ppt
Ashish Kumar Thakur
 
PPTX
Objec oriented Analysis and design Pattern
Ashish Kumar Thakur
 
PPTX
Biomass conversion technologies
Ashish Kumar Thakur
 
No sql databases
Ashish Kumar Thakur
 
Home automation using bluetooth - Aurdino BASED
Ashish Kumar Thakur
 
APRIORI Algorithm
Ashish Kumar Thakur
 
Digital logic degin, Number system
Ashish Kumar Thakur
 
Traveling salesman problem
Ashish Kumar Thakur
 
Cse image processing ppt
Ashish Kumar Thakur
 
A survey on artificial neural networks in cyber world
Ashish Kumar Thakur
 
An event driven campus navigation system on andriod121
Ashish Kumar Thakur
 
Number system
Ashish Kumar Thakur
 
Data warehousing ppt
Ashish Kumar Thakur
 
Objec oriented Analysis and design Pattern
Ashish Kumar Thakur
 
Biomass conversion technologies
Ashish Kumar Thakur
 

Recently uploaded (20)

PDF
AI-Driven IoT-Enabled UAV Inspection Framework for Predictive Maintenance and...
ijcncjournal019
 
PDF
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
PPTX
FUNDAMENTALS OF ELECTRIC VEHICLES UNIT-1
MikkiliSuresh
 
PPTX
Victory Precisions_Supplier Profile.pptx
victoryprecisions199
 
DOCX
SAR - EEEfdfdsdasdsdasdasdasdasdasdasdasda.docx
Kanimozhi676285
 
PPTX
MULTI LEVEL DATA TRACKING USING COOJA.pptx
dollysharma12ab
 
PPT
Understanding the Key Components and Parts of a Drone System.ppt
Siva Reddy
 
PPTX
22PCOAM21 Session 1 Data Management.pptx
Guru Nanak Technical Institutions
 
PDF
LEAP-1B presedntation xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
hatem173148
 
PDF
EVS+PRESENTATIONS EVS+PRESENTATIONS like
saiyedaqib429
 
PDF
CAD-CAM U-1 Combined Notes_57761226_2025_04_22_14_40.pdf
shailendrapratap2002
 
PDF
All chapters of Strength of materials.ppt
girmabiniyam1234
 
PPTX
Civil Engineering Practices_BY Sh.JP Mishra 23.09.pptx
bineetmishra1990
 
PDF
Packaging Tips for Stainless Steel Tubes and Pipes
heavymetalsandtubes
 
PPTX
Tunnel Ventilation System in Kanpur Metro
220105053
 
PDF
67243-Cooling and Heating & Calculation.pdf
DHAKA POLYTECHNIC
 
PPTX
Information Retrieval and Extraction - Module 7
premSankar19
 
PDF
Machine Learning All topics Covers In This Single Slides
AmritTiwari19
 
PPTX
database slide on modern techniques for optimizing database queries.pptx
aky52024
 
PDF
67243-Cooling and Heating & Calculation.pdf
DHAKA POLYTECHNIC
 
AI-Driven IoT-Enabled UAV Inspection Framework for Predictive Maintenance and...
ijcncjournal019
 
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
FUNDAMENTALS OF ELECTRIC VEHICLES UNIT-1
MikkiliSuresh
 
Victory Precisions_Supplier Profile.pptx
victoryprecisions199
 
SAR - EEEfdfdsdasdsdasdasdasdasdasdasdasda.docx
Kanimozhi676285
 
MULTI LEVEL DATA TRACKING USING COOJA.pptx
dollysharma12ab
 
Understanding the Key Components and Parts of a Drone System.ppt
Siva Reddy
 
22PCOAM21 Session 1 Data Management.pptx
Guru Nanak Technical Institutions
 
LEAP-1B presedntation xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
hatem173148
 
EVS+PRESENTATIONS EVS+PRESENTATIONS like
saiyedaqib429
 
CAD-CAM U-1 Combined Notes_57761226_2025_04_22_14_40.pdf
shailendrapratap2002
 
All chapters of Strength of materials.ppt
girmabiniyam1234
 
Civil Engineering Practices_BY Sh.JP Mishra 23.09.pptx
bineetmishra1990
 
Packaging Tips for Stainless Steel Tubes and Pipes
heavymetalsandtubes
 
Tunnel Ventilation System in Kanpur Metro
220105053
 
67243-Cooling and Heating & Calculation.pdf
DHAKA POLYTECHNIC
 
Information Retrieval and Extraction - Module 7
premSankar19
 
Machine Learning All topics Covers In This Single Slides
AmritTiwari19
 
database slide on modern techniques for optimizing database queries.pptx
aky52024
 
67243-Cooling and Heating & Calculation.pdf
DHAKA POLYTECHNIC
 

Dwd mdatamining intro-iep

  • 1. An Introduction to Data Mining ASHISH KUMAR THAKUR 16951A0522
  • 2. Why Data Mining  Credit ratings/targeted marketing: Given a database of 100,000 names, which persons are the least likely to default on their credit cards? Identify likely responders to sales promotions  Fraud detection Which types of transactions are likely to be fraudulent, given the demographics and transactional history of a particular customer?  Customer relationship management: Which of my customers are likely to be the most loyal, and which are most likely to leave for a competitor? : Data Mining helps extract such information
  • 3. Data mining Process of semi-automatically analyzing large databases to find patterns that are: valid: hold on new data with some certainity novel: non-obvious to the system useful: should be possible to act on the item understandable: humans should be able to interpret the pattern Also known as Knowledge Discovery in Databases (KDD)
  • 4. Applications Banking: loan/credit card approval predict good customers based on old customers Customer relationship management: identify those who are likely to leave for a competitor. Targeted marketing: identify likely responders to promotions Fraud detection: telecommunications, financial transactions from an online stream of event identify fraudulent events Manufacturing and production: automatically adjust knobs when process parameter changes
  • 5. Applications (continued) Medicine: disease outcome, effectiveness of treatments analyze patient disease history: find relationship between diseases Molecular/Pharmaceutical: identify new drugs Scientific data analysis: identify new galaxies by searching for sub clusters Web site/store design and promotion: find affinity of visitor to pages and modify layout
  • 6. The KDD process  Problem fomulation  Data collection subset data: sampling might hurt if highly skewed data feature selection: principal component analysis, heuristic search  Pre-processing: cleaning name/address cleaning, different meanings (annual, yearly), duplicate removal, supplying missing values  Transformation: map complex objects e.g. time series data to features e.g. frequency  Choosing mining task and mining method:  Result evaluation and Visualization: Knowledge discovery is an iterative process
  • 7. Relationship with other fields Overlaps with machine learning, statistics, artificial intelligence, databases, visualization but more stress on scalability of number of features and instances stress on algorithms and architectures whereas foundations of methods and formulations provided by statistics and machine learning. automation for handling large, heterogeneous data
  • 8. Some basic operations Predictive: Regression Classification Collaborative Filtering Descriptive: Clustering / similarity matching Association rules and variants Deviation detection
  • 10. Classification Given old data about customers and payments, predict new applicant’s loan eligibility. Age Salary Profession Location Customer type Previous customers Classifier Decision rules Salary > 5 L Prof. = Exec New applicant’s data Good/ bad
  • 11. Classification methods Goal: Predict class Ci = f(x1, x2, .. Xn) Regression: (linear or any other polynomial) a*x1 + b*x2 + c = Ci. Nearest neighour Decision tree classifier: divide decision space into piecewise constant regions. Probabilistic/generative models Neural networks: partition by non-linear boundaries
  • 12. Define proximity between instances, find neighbors of new instance and assign majority class Case based reasoning: when attributes are more complicated than real-valued. Nearest neighbor • Cons – Slow during application. – No feature selection. – Notion of proximity vague • Pros + Fast training
  • 13. Tree where internal nodes are simple decision rules on one or more attributes and leaf nodes are predicted class labels. Decision trees Salary < 1 M Prof = teacher Good Age < 30 BadBad Good
  • 14. Decision tree classifiers Widely used learning method Easy to interpret: can be re-represented as if- then-else rules Approximates function by piece wise constant regions Does not require any prior knowledge of data distribution, works well on noisy data. Has been applied to: classify medical patients based on the disease,  equipment malfunction by cause, loan applicant by likelihood of payment.
  • 15. Pros and Cons of decision trees · Cons - Cannot handle complicated relationship between features - simple decision boundaries - problems with lots of missing data · Pros + Reasonable training time + Fast application + Easy to interpret + Easy to implement + Can handle large number of features More information: http://www.stat.wisc.edu/~limt/treeprogs.html
  • 16. Neural network Set of nodes connected by directed weighted edges Hidden nodes Output nodes x1 x2 x3 x1 x2 x3 w1 w2 w3 y n i ii e y xwo       1 1 )( )( 1   Basic NN unit A more typical NN
  • 17. Neural networks Useful for learning complex data like handwriting, speech and image recognition Neural networkClassification tree Decision boundaries: Linear regression
  • 18. Pros and Cons of Neural Network · Cons - Slow training time - Hard to interpret - Hard to implement: trial and error for choosing number of nodes · Pros + Can learn more complicated class boundaries + Fast application + Can handle large number of features Conclusion: Use neural nets only if decision-trees/NN fail.
  • 19. Bayesian learning Assume a probability model on generation of data.  Apply bayes theorem to find most likely class as: Naïve bayes: Assume attributes conditionally independent given class value Easy to learn probabilities by counting, Useful in some domains e.g. text )( )()|( max)|(max:classpredicted dp cpcdp dcpc jj c j c jj    n i ji j c cap dp cp c j 1 )|( )( )( max
  • 21. Clustering Unsupervised learning when old data with class labels not available e.g. when introducing a new product. Group/cluster existing customers based on time series of payment history such that similar customers in same cluster. Key requirement: Need a good measure of similarity between instances. Identify micro-markets and develop policies for each
  • 22. Applications Customer segmentation e.g. for targeted marketing Group/cluster existing customers based on time series of payment history such that similar customers in same cluster. Identify micro-markets and develop policies for each Collaborative filtering: group based on common items purchased Text clustering Compression
  • 23. Distance functions Numeric data: euclidean, manhattan distances Categorical data: 0/1 to indicate presence/absence followed by Hamming distance (# dissimilarity) Jaccard coefficients: #similarity in 1s/(# of 1s) data dependent measures: similarity of A and B depends on co-occurance with C. Combined numeric and categorical data: weighted normalized distance:
  • 24. Clustering methods Hierarchical clustering agglomerative Vs divisive single link Vs complete link Partitional clustering distance-based: K-means model-based: EM density-based:
  • 25. Partitional methods: K- means Criteria: minimize sum of square of distance Between each point and centroid of the cluster. Between each pair of points in the cluster Algorithm: Select initial partition with K clusters: random, first K, K separated points Repeat until stabilization: Assign each point to closest cluster center Generate new cluster centers Adjust clusters by merging/splitting
  • 26. Collaborative Filtering Given database of user preferences, predict preference of new user Example: predict what new movies you will like based on your past preferences others with similar past preferences their preferences for the new movies Example: predict what books/CDs a person may want to buy (and suggest it, or give discounts to tempt customer)
  • 27. Collaborative recommendation RangeelaQSQT 100 daysAnand Sholay Deewar Vertigo Smita Vijay Mohan Rajesh Nina Nitin ? ? ? ? ? ? •Possible approaches: • Average vote along columns [Same prediction for all] • Weight vote based on similarity of likings [GroupLens] RangeelaQSQT 100 daysAnand Sholay Deewar Vertigo Smita Vijay Mohan Rajesh Nina Nitin ? ? ? ? ? ?
  • 28. Cluster-based approaches External attributes of people and movies to cluster age, gender of people actors and directors of movies [may not be available] Cluster people based on movie preferences misses information about similarity of movies Repeated clustering: cluster movies based on people, then people based on movies, and repeat ad hoc, might smear out groups
  • 29. Example of clustering [Table: the same user-movie matrix with rows and columns reordered so that similar movies (Anand, QSQT, Rangeela, 100 Days vs. Vertigo, Deewar, Sholay) and similar viewers (Vijay, Rajesh, Mohan vs. Nina, Smita, Nitin) form blocks; '?' marks the preferences to predict]
  • 30. Model-based approach People and movies belong to unknown classes Pk = probability a random person is in class k Pl = probability a random movie is in class l Pkl = probability of a class-k person liking a class-l movie Gibbs sampling: iterate Pick a person or movie at random and assign to a class with probability proportional to Pk or Pl Estimate new parameters Need statistics background to understand details
  • 32. Association rules Given a set T of groups of items Example: set of purchased item sets Goal: find all rules on itemsets of the form a --> b such that support of a and b > user threshold s conditional probability (confidence) of b given a > user threshold c Example: milk --> bread Purchase of product A --> service B Example T: {milk, cereal}, {tea, milk}, {tea, rice, bread}, {cereal}
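For a transaction set this small, the support/confidence definitions above can be applied by brute force (real miners such as Apriori prune the search; this sketch, with names of my choosing, just enumerates):

```python
from itertools import combinations

def find_rules(transactions, min_sup, min_conf):
    # transactions: list of sets of items
    # support(X) = fraction of transactions containing X
    # confidence(a -> b) = support(a union b) / support(a)
    n = len(transactions)

    def support(items):
        return sum(items <= t for t in transactions) / n

    all_items = set().union(*transactions)
    rules = []
    for size in range(2, len(all_items) + 1):
        for itemset in combinations(sorted(all_items), size):
            s = support(set(itemset))
            if s < min_sup:
                continue  # itemset not frequent enough
            for k in range(1, size):
                for lhs in combinations(itemset, k):
                    conf = s / support(set(lhs))
                    if conf >= min_conf:
                        rules.append((set(lhs), set(itemset) - set(lhs), s, conf))
    return rules
```

On the slide's example transactions, milk --> cereal comes out with support 0.25 and confidence 0.5.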
  • 33. Variants High confidence may not imply high correlation Use correlations: find expected support and flag large departures from it as interesting (see statistical literature on contingency tables) Still too many rules, need to prune...
  • 34. Prevalent ≠ Interesting Analysts already know about prevalent rules Interesting rules are those that deviate from prior expectation Mining's payoff is in finding surprising phenomena 1995: "Milk and cereal sell together!" 1998: "Milk and cereal sell together!" Zzzz...
  • 35. What makes a rule surprising? Does not match prior expectation Correlation between milk and cereal remains roughly constant over time Cannot be trivially derived from simpler rules Milk 10%, cereal 10%; milk and cereal 10% ... surprising (expected 1% under independence) Eggs 10%; milk, cereal and eggs 0.1% ... surprising! (expected 1%)
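The milk-and-cereal arithmetic above is the lift measure: compare observed joint support with the support expected under independence. A small sketch (function name is mine):

```python
def lift(support_ab, support_a, support_b):
    # under independence, expected support of {a, b} is support(a) * support(b);
    # lift far from 1.0 marks a surprising (correlated) rule
    return support_ab / (support_a * support_b)

# milk 10%, cereal 10%: independence predicts 1% together,
# so an observed 10% joint support is a tenfold departure
surprise = lift(0.10, 0.10, 0.10)
```

The same comparison explains the eggs example: given milk-and-cereal at 10% and eggs at 10%, the trivially expected three-way support is 1%, so an observed 0.1% is surprising in the other direction.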
  • 36. Applications of fast itemset counting Find correlated events: Applications in medicine: find redundant tests Cross-selling in retail, banking Improve predictive capability of classifiers that assume attribute independence New similarity measures of categorical attributes [Mannila et al., KDD 98]
  • 37. Data Mining in Practice
  • 38. Application Areas (Industry: Application) Finance: credit card analysis Insurance: claims, fraud analysis Telecommunication: call record analysis Transport: logistics management Consumer goods: promotion analysis Data service providers: value-added data Utilities: power usage analysis
  • 39. Why Now? Data is being produced Data is being warehoused The computing power is available The computing power is affordable The competitive pressures are strong Commercial products are available
  • 40. Data Mining works with Warehouse Data Data warehousing provides the enterprise with a memory Data mining provides the enterprise with intelligence
  • 41. Usage scenarios Data warehouse mining: assimilate data from operational sources mine static data Mining log data Continuous mining: example in process control Stages in mining:  data selection  pre-processing: cleaning  transformation  mining  result evaluation  visualization
  • 42. Mining market Around 20 to 30 mining tool vendors Major tool players: Clementine, IBM's Intelligent Miner, SGI's MineSet, SAS's Enterprise Miner All offer pretty much the same set of tools Many embedded products: fraud detection (electronic commerce applications, health care), customer relationship management (Epiphany)
  • 43. Vertical integration: Mining on the web Web log analysis for site design: what are the popular pages, what links are hard to find Electronic store sales enhancements: recommendations, advertisement Collaborative filtering: Net Perception, Wisewire Inventory control: what was a shopper looking for and could not find
  • 44. OLAP Mining integration OLAP (On Line Analytical Processing) Fast interactive exploration of multidimensional aggregates Heavy reliance on manual operations for analysis: tedious and error-prone on large multidimensional data Ideal platform for vertical integration of mining, but needs to be interactive instead of batch
  • 45. State of the art in mining-OLAP integration Decision trees [Information Discovery, Cognos]: find factors influencing high profits Clustering [Pilot Software]: segment customers to define a hierarchy on that dimension Time series analysis [Seagate's Holos]: query for various shapes along time, e.g. spikes, outliers Multi-level associations [Han et al.]: find associations between members of dimensions Sarawagi [VLDB 2000]
  • 46. Data Mining in Use The US Government uses Data Mining to track fraud A Supermarket becomes an information broker Basketball teams use it to track game strategy Cross Selling Target Marketing Holding on to Good Customers Weeding out Bad Customers
  • 47. Some success stories  Network intrusion detection using a combination of sequential rule discovery and classification trees on 4 GB of DARPA data Won over the (manual) knowledge engineering approach http://www.cs.columbia.edu/~sal/JAM/PROJECT/ provides a good detailed description of the entire process  Major US bank: customer attrition prediction First segment customers based on financial behavior: found 3 segments Build attrition models for each of the 3 segments 40-50% of attritions were predicted, a factor-of-18 increase  Targeted credit marketing: major US banks Find customer segments based on 13 months of credit balances Build another response model based on surveys Increased response 4 times, to 2%

Editor's Notes

  • #5: Any area where large amounts of historic data that if understood better can help shape future decisions.
  • #9: Each topic is a talk..
  • #44: Absolute: $40M, expected to grow 10 times by 2000 (Forrester Research)
  • #46: OLAP refers to Online Analytical Processing. I always wondered what the analytical part in OLAP products was. A bunch of aggregates and simple group-bys on sums and averages is not analysis. The products give interactive speed for selects/drill-downs/roll-ups (no joins); the "analysis" is done manually, and they meet the "Online" part of the promise by pre-computing the aggregates. They offer bare-bones, RISC-like functionality with which analysts do most of the work manually. This talk is about investigating whether we can do some of the analysis too. When you have 5 dimensions, with an average 3-level hierarchy on each, aggregating more than a million rows, manual exploration can get tedious. The goal is to add more complex operations; although called mining, think of them more like CISC functionalities. Mining products provide the analysis part, but they do it batched rather than online. The greater success of OLAP means people find this form of interactive analysis quite attractive.
  • #47: Little integration; here are a few exceptions. People are starting to wake up to this possibility, and here are some examples I have found by web-surfing. Decision trees are most common; Information Discovery claimed to be the only serious integrator [DBMS, April '98]. Clustering is used by some to define new product hierarchies. Of course, a rich set of time-series functions, especially for forecasting, was always there. New charting software: 80/20, A-B-C analysis, quadrant plotting. Universities: Jiawei Han. The previous approach has been to bring mining operations into OLAP: look at mining operations and choose what fits. My approach has been to reflect on what people do with the cube metaphor and the drill-down, roll-up based exploration, and see if there is anything there that can be automated. Discuss my work first.