Text Categorization as a Graph Classification Problem
1
Outline
Section 1 Introduction
Section 2 Review of the related work
Section 3 Preliminary concepts
Section 4 Proposed approaches
Section 5 Experimental evaluation
Section 6 Conclusion
References
2
1. What is text mining?
2. Bag-of-words and its issues
3. Graph-of-words - A new approach
Introduction
3
Introduction
What is text mining?
Search engines
Understand user’s queries. E.g. What is Google?
Find matching websites or documents (ranking).
Product recommendation
Understand product description.
Understand product reviews.
4
Introduction
Bag-of-words and its issues
Definition
A text (such as a sentence or a document) is represented as the bag (multiset)
of its words.
5
Introduction
Bag-of-words and its issues
Example
“He likes watching action movies, she likes watching romantic movies”
⇒ [ “He”, “likes”, “watching”, “action”, “movies”, “she”, “likes”, “watching”,
“romantic”, “movies” ].
The sentence contains 7 distinct words; indexing them in order of first
appearance, it can be represented by a 7-entry count vector: [ 1, 2, 2, 1, 2, 1, 1 ]
6
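To make the example concrete, here is a minimal sketch of this counting in plain Python (tokenization is simplified to whitespace splitting; a real pipeline would also normalize punctuation and case):

  # Minimal bag-of-words sketch using only the standard library.
  from collections import Counter

  sentence = "He likes watching action movies she likes watching romantic movies"
  tokens = sentence.split()

  counts = Counter(tokens)             # the bag (multiset) of words
  vocab = list(dict.fromkeys(tokens))  # distinct words, in order of first appearance
  vector = [counts[w] for w in vocab]

  print(vocab)   # ['He', 'likes', 'watching', 'action', 'movies', 'she', 'romantic']
  print(vector)  # [1, 2, 2, 1, 2, 1, 1]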
Introduction
Bag-of-words and its issues
Problems
There are millions of n-gram features when dealing with thousands of news
articles, yet only a few hundred are actually present in each article, and
only tens of class labels.
N-grams fail to capture word inversion and subset matching (e.g., “article
about news” vs. “news article”).
7
Introduction
Graph-of-words - A new approach
8
Consider the task of text categorization as a graph
classification problem.
Represent textual documents as graph-of-words
instead of traditional n-gram bag-of-words.
Extract more discriminative features that
correspond to long-distance n-grams through
frequent subgraph mining.
Introduction
Graph-of-words - A new approach
9
Summary (a Python sketch of this pipeline follows below):
1. Construct a graph-of-words for each document
in the collection
2. For each graph from step 1, extract its main
core (for cost-effectiveness)
3. Find all frequent subgraphs of size n in the
set of main cores obtained in step 2
4. Remove isomorphic subgraphs to reduce the
total number of features
5. Finally, extract n-gram features from the
remaining text
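A high-level sketch of these five steps in Python. build_graph_of_words is sketched under Preliminary Concepts below; gspan_mine and remove_isomorphic are hypothetical helpers standing in for the gSpan miner and the isomorphism-based deduplication described in later slides:

  # High-level sketch of the five-step pipeline (assumes networkx).
  import networkx as nx

  def pipeline(documents, window_size=4, min_sup=5):
      graphs = [build_graph_of_words(d, window_size) for d in documents]  # step 1
      cores = [nx.k_core(g) for g in graphs]   # step 2: keep each graph's main core
      frequent = gspan_mine(cores, min_sup)    # step 3: mine frequent subgraphs
      features = remove_isomorphic(frequent)   # step 4: drop isomorphic duplicates
      return features                          # step 5: used as long-distance n-gram features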
● Subgraph feature mining on graph-of-words representations by Markov et
al. (2007)
Kudo and Matsumoto (2004), Matsumoto et al. (2005), Jiang et al. (2010) and
Arora et al. (2010) suggested using parse and dependency tree
representations for text categorization, but the support value (i.e. the total
number of features) was not discussed, which can potentially lead to millions
of subgraphs on standard datasets.
Review of the related work
10
1. Graph-of-words model
2. Subgraph isomorphism
3. K-core and main core
Preliminary Concepts
11
Definition
An undirected graph G = (V, E), where
V is the set of vertices, representing the unique terms of the document
E is the set of edges, representing co-occurrences between terms
within a fixed-size sliding window
12
Preliminary Concepts
Graph-of-words model
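A minimal construction sketch, assuming the networkx library and whitespace tokenization (the preprocessing used in the paper, e.g. stopword removal and stemming, is omitted):

  # Build a graph-of-words: nodes are unique terms, edges link terms
  # that co-occur within a fixed-size sliding window.
  import networkx as nx

  def build_graph_of_words(text, window_size=4):
      tokens = text.split()
      g = nx.Graph()
      g.add_nodes_from(set(tokens))                # one node per unique term
      for i, term in enumerate(tokens):
          for other in tokens[i + 1 : i + window_size]:
              if other != term:
                  g.add_edge(term, other)          # co-occurrence within the window
      return g

  g = build_graph_of_words("he likes watching action movies she likes watching romantic movies")
  print(sorted(g.nodes()))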
Definition
Given two graphs G and H, an isomorphism of G and H is a bijection f between the
vertex sets of G and H such that any two vertices u and v of G are adjacent in G if
and only if f(u) and f(v) are adjacent in H.
Example
13
Preliminary Concepts
Subgraph isomorphism
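The networkx library exposes both checks directly; a small illustrative sketch (not the implementation used in the paper):

  import networkx as nx
  from networkx.algorithms.isomorphism import GraphMatcher

  G = nx.path_graph(4)                                   # 0-1-2-3
  H = nx.relabel_nodes(G, {0: "a", 1: "b", 2: "c", 3: "d"})
  print(nx.is_isomorphic(G, H))  # True: relabeling is a bijection preserving adjacency

  # Subgraph isomorphism: is a triangle contained in a larger graph?
  big = nx.complete_graph(5)
  tri = nx.cycle_graph(3)
  print(GraphMatcher(big, tri).subgraph_is_isomorphic())  # True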
Definition
A subgraph H = (V’, E’) induced by the subset of vertices V’ ⊆ V and the subset of
edges E’ ⊆ E of graph G = (V, E) is called a k-core, where k is an integer, if and
only if H is the maximal subgraph satisfying ∀ v ∈ V’, deg(v) ≥ k.
k-core: a maximal connected subgraph whose vertices all have degree at least k
within that subgraph.
main core: the k-core with the largest k.
Preliminary Concepts
K-core and main core
14
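In networkx, core_number and k_core compute exactly these notions; a small sketch on a toy graph:

  import networkx as nx

  # A triangle (1,2,3) with a pendant path 3-4-5.
  g = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)])

  print(nx.core_number(g))          # {1: 2, 2: 2, 3: 2, 4: 1, 5: 1}
  print(nx.k_core(g, k=2).nodes())  # the 2-core: nodes 1, 2, 3
  print(nx.k_core(g).nodes())       # k omitted -> the main core (largest k)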
Example
Fig. Two 3-cores of a graph
Preliminary Concepts
K-core and main core
15
1. Unsupervised feature mining using gSpan
2. Find frequent subgraphs using gSpan
3. Unsupervised support selection
4. Considered classifiers
5. Multiclass scenario
6. Main core mining using gSpan
Proposed approaches
16
Idea
● Consider the task of text categorization as a graph classification problem
● Represent textual documents as graphs-of-words, then extract
subgraph features to train a graph classifier.
● Each document is a separate graph-of-words and the collection of
documents thus corresponds to a set of graphs.
Proposed approaches
Unsupervised feature mining using gSpan
17
Given
● D = {G0, G1, G2, ..., GN} a graph dataset
● support(g) the number of graphs in D in which g is a subgraph
● minSup the minimum support threshold
Problem
Find all subgraphs g such that support(g) ≥ minSup
Proposed approaches
Find frequent subgraphs using gSpan
18
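For concreteness, a naive support computation under these definitions might look as follows; a brute-force sketch assuming networkx, included only to make the definitions concrete (gSpan exists precisely to avoid this exhaustive isomorphism testing):

  import networkx as nx
  from networkx.algorithms.isomorphism import GraphMatcher

  def support(g, dataset):
      # Number of graphs in the dataset that contain g as a subgraph.
      return sum(GraphMatcher(G, g).subgraph_is_isomorphic() for G in dataset)

  def is_frequent(g, dataset, min_sup):
      return support(g, dataset) >= min_sup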
Frequent subgraph: a subgraph that occurs in multiple graphs of D (at least minSup of them)
Proposed approaches
Find frequent subgraphs using gSpan
19
Baseline solution
● Enumerate all the subgraphs and test for isomorphism throughout the
collection ⇒ very expensive
Proposed solution
● Use gSpan (graph-based Substructure pattern mining)
Proposed approaches
Find frequent subgraphs using gSpan
20
gSpan idea:
1. For each graph, build a lexicographic order of all the edges using depth-first
search (DFS) traversal
2. Assign to each graph its unique minimum DFS code.
3. Based on all these DFS codes, construct a hierarchical search tree at the
collection level.
4. By pre-order traversal of this tree, gSpan discovers all frequent subgraphs
with the required support.
Proposed approaches
Find frequent subgraphs using gSpan
21
Note:
● Given two graphs G and G’,
G is isomorphic to G’ if and only if minDFS(G) = minDFS(G’)
A lower support results in:
1. more features
2. longer mining
3. longer feature vector generation
4. longer learning.
Proposed approaches
Find frequent subgraphs using gSpan
22
Given
D = {G0, G1, G2, ..., GN} a graph dataset
support(g) denotes the number of graphs in D in which g is a subgraph
minSup denotes the minimum support threshold
Proposed approaches
Unsupervised support selection (Select best minSup)
23
Situation
The classifier can only improve its goodness of fit with more features
⇒ It is likely that the lowest support will lead to the best test accuracy
As the support decreases, the number of features increases slowly up to a
point where it increases exponentially
⇒ This makes both the feature vector generation and the learning expensive,
especially with multiple classes.
Proposed approaches
Unsupervised support selection (Select best minSup)
24
Problem
Select the best minSup
Solution
Use the elbow method
Proposed approaches
Unsupervised support selection (Select best minSup)
25
Elbow method
Example: selecting the number of clusters in k-means clustering
Choose the number of clusters so that adding
another cluster does not give a much better
model of the data
Proposed approaches
Unsupervised support selection (Select best minSup)
26
Elbow method
In our case:
Choose a minSup such that decreasing this value by one unit would:
not give much better accuracy
but increase the number of features significantly
Proposed approaches
Unsupervised support selection (Select best minSup)
27
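One possible encoding of this rule, assuming we have already mined the number of features at a few candidate supports; the growth threshold and the curve values are illustrative, not taken from the paper:

  def choose_min_sup(curve, max_growth=2.0):
      # curve: list of (support, n_features), sorted by decreasing support.
      # Stop just before the feature count starts exploding.
      for (sup, n), (next_sup, next_n) in zip(curve, curve[1:]):
          if next_n > max_growth * n:  # one more step would blow up the feature space
              return sup
      return curve[-1][0]

  curve = [(10, 300), (8, 450), (6, 700), (4, 1200), (2, 90000)]
  print(choose_min_sup(curve))  # 4: going from support 4 to 2 multiplies the features by 75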
Standard baseline classifiers
K-nearest neighbors (kNN) (Larkey and Croft, 1996)
Naive Bayes (NB) (McCallum and Nigam, 1998)
Linear Support Vector Machines (SVM) (Joachims, 1998)
Proposed approaches
Considered classifiers
28
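All three baselines are available in scikit-learn; a sketch assuming feature matrices X_train/X_test and labels y_train/y_test (hyperparameters shown are illustrative defaults, not the paper's settings):

  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.naive_bayes import MultinomialNB
  from sklearn.svm import LinearSVC

  classifiers = {
      "kNN": KNeighborsClassifier(n_neighbors=5),
      "NB": MultinomialNB(),
      "SVM": LinearSVC(),  # linear SVM, the strongest baseline in the paper
  }

  # for name, clf in classifiers.items():
  #     clf.fit(X_train, y_train)
  #     print(name, clf.score(X_test, y_test))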
Problem
A single support value might lead to some classes generating a tremendous
number of features (hundreds of thousands) and some others only a few (a few
hundred subgraphs)
⇒ An extremely low support is needed to include discriminative features for
these minority classes
⇒ Resulting in an exponential number of features because of the majority
classes.
Proposed approaches
Multiclass scenario
29
Solution
Mine frequent subgraphs per class using the same relative support (in %)
Then aggregate the per-class feature sets into a global one, at the cost of a
supervised process (but one that still avoids cross-validation).
Proposed approaches
Multiclass scenario
30
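A sketch of this per-class strategy, reusing the hypothetical gspan_mine and remove_isomorphic helpers from the pipeline sketch above:

  def mine_per_class(graphs, labels, rel_sup=0.05):
      # Same relative support inside every class, then a global union.
      features = []
      for c in set(labels):
          class_graphs = [g for g, y in zip(graphs, labels) if y == c]
          min_sup = max(1, int(rel_sup * len(class_graphs)))  # relative -> absolute
          features.extend(gspan_mine(class_graphs, min_sup))
      return remove_isomorphic(features)  # deduplicate across classes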
Problem
The number of features (subgraphs) extracted when mining frequent
subgraphs directly is very large!
How can we extract discriminative features while maintaining word dependence
and retaining as much classification information as possible?
Solution
Reduce the graphs’ size by keeping only their densest subgraphs.
Proposed approaches
Main core mining using gSpan
31
Implementation
Main cores are extracted with the Batagelj-Zaveršnik algorithm, which computes
the k-core decomposition in time linear in the number of edges; the resulting
cores are then mined with gSpan.
Proposed approaches
Main core mining using gSpan
32
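A sketch of this graph-reduction step using networkx, whose core_number/k_core routines follow the same linear-time Batagelj-Zaveršnik decomposition:

  import networkx as nx

  def to_main_cores(graphs):
      # Reduce every document graph to its main core before mining.
      for g in graphs:
          g.remove_edges_from(nx.selfloop_edges(g))  # k_core rejects self-loops
      return [nx.k_core(g) for g in graphs]          # k omitted -> main core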
1. Datasets
2. Results
3. Unsupervised support selection
4. Distributions of mined n-grams
Experimental evaluation
33
Experimental evaluation
Datasets
34
● WebKB: 4 most frequent categories among labeled web pages from
various CS departments
(2,803 for training and 1,396 for test )
● R8: 8 most frequent categories of Reuters-21578, a set of labeled news
articles from the 1987 Reuters newswire
(5,485 for training and 2,189 for test )
● LingSpam: 2,893 emails classified as spam or legitimate messages
(10 sets for 10-fold cross validation )
● Amazon: 8,000 product reviews over four different sub-collections
(books, DVDs, electronics and kitchen appliances) classified as positive
or negative
(1,600 for training and 400 for test )
Experimental evaluation
Datasets
35
● Multi-class document categorization: WebKB and R8
● Spam detection: LingSpam
● Opinion mining: Amazon
so as to cover all the main subtasks of text categorization
Table 1: Total number of features (n-grams or subgraphs) vs. number of features present only in main
cores along with the reduction of the dimension of the feature space on all four datasets.
36
Experimental evaluation
Results
Table 2: Test accuracy and macro-average F1-score on four standard datasets. Bold font marks the best
performance in a column; * indicates statistical significance at p < 0.05 using a micro sign test with regard
to the SVM baseline of the same column. MC corresponds to unsupervised feature selection using the
main core of each graph-of-words to extract n-gram and subgraph features. gSpan mining support
values are 1.6% (WebKB), 7% (R8), 4% (LingSpam) and 0.5% (Amazon).
37
Experimental evaluation
Results
Figure 2: Distribution of non-zero n-gram feature values before and after unsupervised feature selection
(main core retention) on the R8 dataset.
38
Experimental evaluation
Results
Figure 3: Number of subgraph features/accuracy in test per support (%) on WebKB (left) and R8 (right)
datasets: in black, the selected support value chosen via the elbow method and in red, the accuracy in
test for the SVM baseline.
Experimental evaluation
Unsupervised support selection
39
Figure 4: Distribution of n-grams (standard and long-distance ones) among all the features on WebKB
dataset.
Experimental evaluation
Distribution of mined n-grams
40
Figure 5: Distribution of n-grams (standard and long-distance ones) among the top 5% most
discriminative features for SVM on WebKB dataset.
Experimental evaluation
Distribution of mined n-grams
41
Conclusion
A new graph-of-words approach for text mining.
Considers the problem as graph classification.
Achieved:
Extraction of more discriminative features that correspond to long-distance
n-grams through frequent subgraph mining
42
References
Text Categorization as a Graph Classification Problem (François Rousseau, Emmanouil Kiagias, Michalis Vazirgiannis)
http://www.aclweb.org/anthology/P15-1164
gSpan: Graph-Based Substructure Pattern Mining (Xifeng Yan and Jiawei Han)
http://cs.ucsb.edu/~xyan/papers/gSpan-short.pdf
Determining the number of clusters in a data set - The elbow method
https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
Graph isomorphism
https://en.wikipedia.org/wiki/Graph_isomorphism
43