Text classification in PHP
Who am I ?
Glenn De Backer (twitter: @glenndebacker)
Web developer @ Dx-Solutions
32 years old, originally from Bruges, now
living in Meulebeke
Interested in machine learning, (board) games
and electronics, and I have a bit of a creative bone…
Blog: http://www.simplicity.be
What will we cover today ?
What is text classification
NLP terminology
Bayes theorem
Some PHP code
What is text classification ?
Text classification is the process of
assigning classes to documents
This can be done manually or by using
machine learning (algorithmically)
Today's talk will be about classifying text
using a supervised machine learning
algorithm: Naive Bayes
Supervised vs unsupervised
machine learning ?
Supervised means, in simple terms,
that we feed our
algorithm examples of data together
with what they represent



Free gift card -> spam

The server is down -> ham
Unsupervised means that we work
with algorithms that find hidden
structure in unlabelled data, for
example by clustering documents
Some possible use cases
Spam detection (classic)
Assigning categories, topics, genres, subjects, …
Determining authorship
Gender classification
Sentiment analysis
Identifying languages
…
Personal project

Nieuws zonder politiek ("news without politics")
Fun project from 2010
Related to Belgium's 589 days without an elected government.
We had a lot of politics-related non-news items that
I wanted to filter out as an experiment.
A news aggregator that fetched news from different
Flemish newspapers
Classified those items into political and non-political
news
Personal project

Wuk zeg je ? ("what are you saying?" in West Flemish)
Fun project released at the end of 2015
Inspired by a contest run by the province of
West Flanders to find foreign words that
sound West-Flemish
Can recognise the West-Flemish dialect… but
also Dutch, French and English
Uses character n-grams instead of words
NLP terminology
Tokenization
Before any real text processing can be done, we need to
perform tokenization.
Tokenization is the task of dividing text into words,
sentences, symbols or other elements called tokens.
In the literature these are often called features instead of tokens.
N-grams
N-grams are sequences of tokens of
length N
They can be words, combinations of words,
characters, …
Depending on the size, an n-gram is also
called a unigram (1 item), a bigram (2
items) or a trigram (3 items).
Character n-grams are very well suited for
language classification
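Character n-grams are easy to produce in plain PHP. A minimal sketch of my own (not from the talk; the helper name char_ngrams is made up for illustration, and it ignores multi-byte characters for simplicity):

// minimal character n-gram helper (illustrative, not part of NlpTools)
function char_ngrams($text, $n = 3) {
    $text  = strtolower($text);
    $grams = array();
    for ($i = 0, $len = strlen($text); $i <= $len - $n; $i++) {
        $grams[] = substr($text, $i, $n);
    }
    return $grams;
}

// character trigrams of a West-Flemish phrase
print_r(char_ngrams("wuk zeg je"));
// "wuk", "uk ", "k z", " ze", "zeg", "eg ", "g j", " je"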
Stop words
Are words (or features) that
are particularly common in a text
corpus,
for example the, and, on, in, …
They are considered uninformative
A list of stop words is used to
remove or ignore words from
the document we are processing
Optional but recommended
Stemming
Stemming is the process of reducing words to their word stem,
base or root form.
Not a required step, but it can certainly help in reducing the
number of features and improving the task of classifying text
(e.g. speed or quality)
The most widely used is the Porter stemmer, which (via its
Snowball variants) has support for English, French, Dutch, …
Bag Of Words (BOW) model
Is a simple representation
of text as features
These can be words, combinations
of words, sounds, …
A BOW model contains a
vocabulary together with a
count for each vocabulary item
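As a quick illustration (my own sketch, not from the slides), PHP's built-in array_count_values already gives you a bag-of-words count for a list of tokens:

// tokens after tokenization (and optional stop-word removal)
$tokens = array("php", "server-side", "scripting", "language", "php");
// the bag of words: a vocabulary with counts, word order is ignored
print_r(array_count_values($tokens));
// roughly: Array ( [php] => 2 [server-side] => 1 [scripting] => 1 [language] => 1 )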
Training / test set
A training set is simply a collection of
labeled data used to train the classifier.



Free gift card -> spam

The server is down -> ham
A test set is simply used to test the accuracy
of our classifier
A typical flow
PHP is a server-side
scripting language designed
for web development
A typical flow
PHP | is | a | server-side |
scripting | language | designed
| for | web | development
A typical flow
PHP | server-side | scripting |
language | designed | web |
development
A typical flow
PHP : 1
server-side : 1
scripting : 1

language : 1
designed : 1
web : 1
development : 1
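The whole flow above fits in a few lines of plain PHP. A minimal sketch of my own (the NlpTools equivalents follow later in the talk): lowercase, tokenize on whitespace, drop stop words, count what is left.

$text = "PHP is a server-side scripting language designed for web development";
$stopWords = array("is", "a", "for");

// 1. lowercase and tokenize on whitespace
$tokens = preg_split('/\s+/', strtolower($text));
// 2. remove stop words
$tokens = array_diff($tokens, $stopWords);
// 3. count the remaining tokens (the bag of words)
print_r(array_count_values($tokens));
// php, server-side, scripting, language, designed, web, development: each with count 1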
Bayes theorem
Some history trivia
Discovered by the British
minister Thomas Bayes in
the 1740s.
Rediscovered independently
by the French scholar Pierre-
Simon Laplace, who gave it
its modern mathematical
form.
Alan Turing used it to help decode
the German Enigma cipher,
which had a big influence on
the outcome of World War II.
Bayes theorem
In probability theory and statistics, Bayes'
theorem describes the probability of an
event based on conditions that might
relate to that event.
E.g. how probable it is that an article is
about sports, based on certain
words that the article contains.
Naive Bayes
Naive Bayes classifiers are a family of
simple probabilistic classifiers based on
applying Bayes' theorem
The naive part is that they
strongly assume independence between
features (words, in our case)
Bayes and text classification
We can rewrite the standard Bayes formula for classification as:

P(C | D) = P(D | C) × P(C) / P(D)

where C is the class…
and D is the document
We can drop P(D) because it is a constant for a given
document, leaving P(C | D) ∝ P(D | C) × P(C). This is a very
common simplification when using Naive Bayes for
classification problems.
Probability of a class

P(C) = Dc / Dt

where Dc is the number of documents in
our training set that have this class…
and Dt is the total number of documents
in our training set
Probability of a class
given a document

P(C | D) ∝ P(C) × P(w1 | C) × P(w2 | C) × … × P(wn | C)

where the wx are the words of our text:
what is the (joint) probability of word 1,
word 2, word 3, … given our class?
Enough abstract
formulas for today,
2 simplified examples
We have the following data*
word      good   bad   total
server       5     6      11
crashed      2    14      16
updated      9     1      10
new          8     1       9
total       24    22      46
* in reality your data will contain a lot more words and higher counts
word      good   bad   total
server       5     6      11
crashed      2    14      16
…            …     …       …
total       24    22      46
The server has crashed
(We applied a stopword filter that removes the words “the” and “has”)
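Filling in the arithmetic with my own worked numbers (not from the slides), assuming equal class priors and estimating P(word | class) as the word's count divided by the class total (24 for good, 22 for bad):

P(good | "server crashed") ∝ P(server | good) × P(crashed | good) = 5/24 × 2/24 ≈ 0.017
P(bad  | "server crashed") ∝ P(server | bad)  × P(crashed | bad)  = 6/22 × 14/22 ≈ 0.174

Since 0.174 > 0.017, the sentence is classified as bad.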
word      good   bad   total
server       5     6      11
updated      9     1      10
new          8     1       9
…            …     …       …
total       24    22      46
The new server is updated
(We applied a stopword filter that removes the words “the” and “is”)
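Again with my own worked numbers, under the same assumptions (equal priors, counts from the table above):

P(good | "new server updated") ∝ 8/24 × 5/24 × 9/24 ≈ 0.026
P(bad  | "new server updated") ∝ 1/22 × 6/22 × 1/22 ≈ 0.0006

Since 0.026 > 0.0006, this sentence is classified as good. Note that a word with a zero count would wipe out the whole product, which is why real implementations add smoothing (e.g. Laplace / add-one smoothing).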
NLP in PHP
NlpTools
NlpTools is a library for natural language
processing written in PHP
It has classes for classification, tokenization,
stemming, clustering, topic modeling, …
Released under the WTFPL licence ("Do
what you want")
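Before running the snippets that follow, NlpTools has to be installed and autoloaded. A minimal setup sketch; the Composer package name and the namespaces below are from memory, so treat them as assumptions and check the NlpTools documentation:

// install (assumed package name, verify on Packagist):
//   composer require nlp-tools/nlp-tools

require __DIR__ . '/vendor/autoload.php';

// class namespaces used in the following examples
use NlpTools\Tokenizers\WhitespaceTokenizer;
use NlpTools\Documents\TokensDocument;
use NlpTools\Documents\TrainingSet;
use NlpTools\Utils\StopWords;
use NlpTools\Stemmers\PorterStemmer;
use NlpTools\FeatureFactories\DataAsFeatures;
use NlpTools\Models\FeatureBasedNB;
use NlpTools\Classifiers\MultinomialNBClassifier;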
Tokenizing a sentence
// text we will be converting into tokens
$text = "PHP is a server side scripting language.";
// initialize a whitespace tokenizer
$tokenizer = new WhitespaceTokenizer();
// print array of tokens
print_r($tokenizer->tokenize($text));
Dealing with stop words
// text we will be converting into tokens
$text = "PHP is a server side scripting language.";
// define a list of stop words
$stop = new StopWords(array("is", "a", "as"));
// initialize Whitespace tokenizer
$tokenizer = new WhitespaceTokenizer();
// init token document
$doc = new TokensDocument($tokenizer->tokenize($text));
// apply our stopwords
$doc->applyTransformation($stop);
// print filtered tokens
print_r($doc->getDocumentData());
Stemming words
// init PorterStemmer
$stemmer = new PorterStemmer();
// stemming variants of upload
printf("%sn", $stemmer->stem("uploading"));
printf("%sn", $stemmer->stem("uploaded"));
printf("%sn", $stemmer->stem("uploads"));
// stemming variants of delete
printf("%sn", $stemmer->stem("delete"));
printf("%sn", $stemmer->stem("deleted"));
printf("%sn", $stemmer->stem("deleting"));
Classification (training 1/2)
$training = array(
array('us','new york is a hell of a town'),
array('us','the statue of liberty'),
array('us','new york is in the united states'),
array('uk','london is in the uk'),
array('uk','the big ben is in london'),
…
);
// hold our training documents
$trainingSet = new TrainingSet();
// our tokenizer
$tokenizer = new WhitespaceTokenizer();
// the feature factory we will be working with
$features = new DataAsFeatures();
Classification (training 2/2)
// iterate over training array
foreach ($training as $trainingDocument){
// add to our training set
$trainingSet->addDocument(
// class
$trainingDocument[0],
// document
new TokensDocument($tokenizer->tokenize($trainingDocument[1]))
);
}
// train our Naive Bayes Model
$bayesModel = new FeatureBasedNB();
$bayesModel->train($features, $trainingSet);
Classification (classifying)
$testSet = array(
array('us','i want to see the statue of liberty'),
array('uk','i saw the big ben yesterday'),
…
);
// init our Naive Bayes Class using the features and our model
$classifier = new MultinomialNBClassifier($features, $bayesModel);
// iterate over our test set
foreach ($testSet as $testDocument){
// predict our sentence
$prediction = $classifier->classify(
array('us','uk'), // the classes that can be predicted
new TokensDocument($tokenizer->tokenize($testDocument[1])) // the sentence
);
printf("sentence: %s | class: %s | predicted: %sn”,
$testDocument[1], $testDocument[0], $prediction );
}
Some tips
It is best practice to split your data into a training and a test
set instead of training on your whole dataset!
If you train your classifier on the whole dataset it can
happen that it is very accurate on that dataset but
performs badly on unseen data; this is called overfitting
in machine learning.
There isn't one best split, but 80-20 (Pareto principle) or 70-30
are safe ratios (a small splitting sketch follows these tips).
The numbers tell the tale! There are multiple ways of measuring
how accurately your classifier performs, but precision and recall
are a good start! - http://www.kdnuggets.com/faq/precision-recall.html
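A minimal way to do that train/test split in plain PHP (my own sketch; the split_dataset helper and the 0.8 ratio are illustrative, not part of NlpTools):

// $labelled is an array of array($class, $sentence) pairs,
// e.g. the $training array from the earlier slides
function split_dataset(array $labelled, $trainRatio = 0.8) {
    shuffle($labelled);                           // randomise before splitting
    $cut = (int) floor(count($labelled) * $trainRatio);
    return array(
        'train' => array_slice($labelled, 0, $cut),
        'test'  => array_slice($labelled, $cut),
    );
}

$sets = split_dataset($training);
// train on $sets['train'], measure precision/recall on $sets['test']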

Some online PHP resources
http://www.php-nlp-tools.com/ - The
homepage of NlpTools
http://www.phpir.com - Contains a lot of
tutorials regarding information retrieval in
PHP
https://github.com/camspiers/statistical-classifier - An
alternative Bayes classifier that also supports SVM
Reading material
Code examples are written in Java and Python, but the concepts
can easily be applied in other languages…
PHP NLP projects released
as open source
php-dutch-stemmer: a PHP class that stems Dutch
words, based on Porter's algorithm.
https://github.com/simplicitylab/php-dutch-stemmer
php-luhn-summarize: a class that provides a basic
implementation of Luhn's algorithm, which
can automatically create a summary of a given text.
https://github.com/simplicitylab/php-luhn-summarize

http://www.slideshare.net/GlennDeBacker
https://github.com/simplicitylab/Talks
https://joind.in/talk/0d9b0
Thank you !
