Words in Space
A Visual Exploration of Distance, Documents, and
Distributions for Text Analysis
PyData NYC
2018
Dr. Rebecca Bilbro
Head of Data Science, ICX Media
Co-creator, Scikit-Yellowbrick
Author, Applied Text Analysis with Python
@rebeccabilbro
Machine Learning Review
The Machine Learning Problem:
Given a set of n samples of data such that each sample is
represented by more than a single number (e.g. multivariate
data that has several attributes or features), create a model
that is able to predict unknown properties of each sample.
Spatial interpretation:
Given data points in a bounded,
high-dimensional space, define
decision regions for any point
in that space.
Instances are composed of features that make up our dimensions.
Feature space is the n-dimensional space where our variables live (not
including the target).
Feature extraction is the art of creating a space with decision
boundaries.
Example
Target
Y ≡ Thickness of car tires after some testing period
Variables
X₁ ≡ distance travelled in test
X₂ ≡ time duration of test
X₃ ≡ amount of chemical C in tires
The feature space is R³, or more accurately, the positive quadrant in R³, as all
the X variables can only be positive quantities.
Domain knowledge about tires might suggest that the speed the vehicle was
moving at is important, hence we generate another variable, X₄ (this is the
feature extraction part):
X₄ = X₁ / X₂ ≡ the speed of the vehicle during testing.
This extends our old feature space into a new one, the positive part of R⁴.
A mapping is a function, ϕ, from R³ to R⁴:
ϕ(x₁, x₂, x₃) = (x₁, x₂, x₃, x₁/x₂)
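A minimal sketch of this feature extraction step with NumPy (the measurement values are made up for illustration):

```python
import numpy as np

# Hypothetical tire tests: one row per test
# columns: distance (X1), duration (X2), chemical C (X3)
X = np.array([
    [1200.0, 20.0, 3.1],
    [800.0,  10.0, 2.7],
    [1500.0, 30.0, 3.5],
])

# Feature extraction: X4 = X1 / X2, the average speed during the test
speed = X[:, 0] / X[:, 1]

# The mapping phi from R^3 to R^4: append the derived feature
X_new = np.column_stack([X, speed])
print(X_new.shape)  # (3, 4)
```

The model never sees "speed" in the raw data; the new dimension exists only because domain knowledge suggested it.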
Modeling Non-Numeric Data
Real-world data is often not
represented numerically
out of the box (e.g. text,
images), so some
transformation must be
applied before we can do
machine learning.
Tricky Part
Machine learning relies on our ability to imagine data as
points in space, where the relative closeness of any two
points is a measure of their similarity.
So...when we transform those non-numeric features into
numeric ones, how should we quantify the distance
between instances?
Many ways of quantifying “distance” (or similarity):
● Euclidean distance: often the default for numeric data
● Cosine similarity: a common rule of thumb for text data
With text, our choice of distance metric is very
important! Why?
Challenges of Modeling Text Data
● Very high dimensional
○ One dimension for every word (token) in the corpus!
● Sparsely distributed
○ Documents vary in length!
○ Most instances (documents) may be mostly zeros!
● Has some features that are more important than others
○ E.g. the “of” dimension vs. the “basketball” dimension when clustering sports articles.
● Has some feature variations that matter more than others
○ E.g. freq(tree) vs. freq(horticulture) in classifying gardening books.
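These properties are easy to see by vectorizing a toy corpus (a sketch using scikit-learn's CountVectorizer; the example sentences are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "basketball players practice free throws",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)  # sparse document-term matrix

# One dimension for every token in the corpus...
print(X.shape)
# ...and each document row is mostly zeros
print(X.nnz / (X.shape[0] * X.shape[1]))  # fraction of nonzero entries
```

Even three short sentences produce a dozen dimensions; a real corpus produces tens of thousands, nearly all zero for any given document.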
Help!
● Extends the Scikit-Learn API.
● Enhances the model selection process.
● Tools for feature visualization, visual
diagnostics, and visual steering.
● Not a replacement for other visualization
libraries.
Yellowbrick
Feature
Analysis
Algorithm
Selection
Hyperparameter
Tuning
Model selection is iterative, but can be steered!
TSNE (t-distributed Stochastic Neighbor
Embedding)
1. Apply SVD (or PCA) to reduce
dimensionality (for efficiency).
2. Embed vectors using probability
distributions from both the original
dimensionality and the decomposed
dimensionality.
3. Cluster and visualize similar
documents in a scatterplot.
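The three steps above can be sketched with scikit-learn directly (a toy corpus; in this talk the TSNEVisualizer wraps a pipeline like this for you):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

docs = [
    "the chef seared the steak",
    "braise the lamb with garlic",
    "the striker scored a late goal",
    "the keeper saved a penalty kick",
    "a gripping novel with a twist",
    "the author signed paperback copies",
]

# 1. Vectorize, then decompose for efficiency
X = TfidfVectorizer().fit_transform(docs)
X_svd = TruncatedSVD(n_components=5, random_state=0).fit_transform(X)

# 2-3. Embed into 2D with t-SNE; the result is what gets scatterplotted
embedding = TSNE(n_components=2, perplexity=2, init="random",
                 random_state=0).fit_transform(X_svd)
print(embedding.shape)  # (6, 2)
```

Note that perplexity must be smaller than the number of documents; the tiny value here is only for the toy corpus.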
Three Example Datasets
Hobbies corpus
● From the Baleen project
● 448 newspaper/blog articles
● 5 classes: gaming, cooking, cinema, books, sports
● Doc length (in words): 532 avg, 14564 max, 1 min
Farm Ads corpus
● From the UCI Repository
● 4144 ads represented as a list of metadata tags
● 2 classes: accepted, not accepted
● Doc length (in words): 270 avg, 5316 max, 1 min
Dresses Attributes Sales corpus
● From the UCI Repository
● 500 dresses represented as features: neckline, waistline, fabric, size, season
● Doc length (in words): 11 avg, 11 max, 11 min
Euclidean Distance
Euclidean distance is the straight-line distance between 2 points in Euclidean
(metric) space.
tsne = TSNEVisualizer(metric="euclidean")
tsne.fit(docs, labels)
tsne.poof()
[Scatterplot: Doc 1 at (7, 14) and Doc 2 at (20, 19), axes from 5 to 25.]
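Using the two documents from the figure, the computation is straightforward (a sketch with SciPy):

```python
from scipy.spatial.distance import euclidean

doc1 = (7, 14)
doc2 = (20, 19)

# sqrt((7-20)^2 + (14-19)^2) = sqrt(194)
d = euclidean(doc1, doc2)
print(d)  # about 13.93
```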
Euclidean Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Cityblock (Manhattan) Distance
Manhattan distance between two points is computed as the sum of the absolute
differences of their Cartesian coordinates.
tsne = TSNEVisualizer(metric="cityblock")
tsne.fit(docs, labels)
tsne.poof()
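For the same two points as in the Euclidean example, the Manhattan computation is (a SciPy sketch):

```python
from scipy.spatial.distance import cityblock

# |7 - 20| + |14 - 19| = 13 + 5
d = cityblock((7, 14), (20, 19))
print(d)  # 18
```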
Cityblock (Manhattan) Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Chebyshev Distance
Chebyshev distance is the L∞-norm of the difference between two points, a special
case of the Minkowski distance where p goes to infinity. It is also known as
chessboard distance.
tsne = TSNEVisualizer(metric="chebyshev")
tsne.fit(docs, labels)
tsne.poof()
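With the same two points, Chebyshev keeps only the largest coordinate-wise difference (a SciPy sketch):

```python
from scipy.spatial.distance import chebyshev

# max(|7 - 20|, |14 - 19|) = max(13, 5)
d = chebyshev((7, 14), (20, 19))
print(d)  # 13
```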
Chebyshev Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Minkowski Distance
Minkowski distance is a generalization of Euclidean, Manhattan, and Chebyshev
distance, and defines distance between points in a normalized vector space as the
generalized Lp-norm of their difference.
tsne = TSNEVisualizer(metric="minkowski")
tsne.fit(docs, labels)
tsne.poof()
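A sketch showing how the parameter p interpolates between the previous metrics (same two points as before):

```python
from scipy.spatial.distance import minkowski

u, v = (7, 14), (20, 19)

d1 = minkowski(u, v, p=1)    # Manhattan: 18
d2 = minkowski(u, v, p=2)    # Euclidean: about 13.93
d24 = minkowski(u, v, p=24)  # approaches Chebyshev (13) as p grows
print(d1, d2, d24)
```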
Minkowski Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Mahalanobis Distance
A multidimensional generalization
of the distance between a point
and a distribution of points.
tsne = TSNEVisualizer(metric="mahalanobis", method='exact')
tsne.fit(docs, labels)
tsne.poof()
Think: shifting and rescaling coordinates with respect to distribution. Can help find
similarities between different-length docs.
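A sketch of the idea with SciPy (mahalanobis takes the inverse covariance matrix VI; with an identity covariance it reduces to Euclidean distance; the point cloud here is synthetic):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# With identity covariance, Mahalanobis == Euclidean
d_id = mahalanobis((0, 0), (3, 4), np.eye(2))
print(d_id)  # 5.0

# With an estimated covariance, coordinates are rescaled by the
# distribution: the x-axis has variance ~4, so a gap of 2 along x
# counts as roughly one "unit" of spread
rng = np.random.default_rng(0)
points = rng.multivariate_normal([0, 0], [[4, 0], [0, 1]], size=500)
VI = np.linalg.inv(np.cov(points.T))
d_scaled = mahalanobis((2, 0), (0, 0), VI)
print(d_scaled)  # roughly 1, versus a Euclidean distance of 2
```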
Mahalanobis Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Cosine “Distance”
Cosine “distance” is the cosine of the angle between two doc vectors. The more
parallel, the more similar. Corrects for length variations (angles rather than
magnitudes). Considers only non-zero elements (efficient for sparse vectors!).
Note: Cosine distance is not technically a distance measure because it doesn’t
satisfy the triangle inequality.
tsne = TSNEVisualizer(metric="cosine")
tsne.fit(docs, labels)
tsne.poof()
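A sketch showing why cosine corrects for length: doubling every term count leaves the angle, and hence the cosine distance, unchanged:

```python
from scipy.spatial.distance import cosine, euclidean

short_doc = [1, 2, 0, 1]
long_doc = [2, 4, 0, 2]  # the same document, twice as long

d_euc = euclidean(short_doc, long_doc)
d_cos = cosine(short_doc, long_doc)
print(d_euc)  # nonzero: the magnitudes differ
print(d_cos)  # essentially 0: same direction
```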
Cosine “Distance”
Hobbies Corpus Ads Corpus Dresses Corpus
Canberra Distance
Canberra distance is a weighted version of Manhattan distance. It is often used for
data scattered around an origin, as it is biased for measures around the origin and
very sensitive for values close to zero.
tsne = TSNEVisualizer(metric="canberra")
tsne.fit(docs, labels)
tsne.poof()
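A sketch of that sensitivity with toy vectors: the same absolute difference of 1 contributes far more near zero than at large values:

```python
from scipy.spatial.distance import canberra

# |0-1|/(0+1) + |100-101|/(100+101) = 1 + 1/201
d = canberra([0, 100], [1, 101])
print(d)  # about 1.005: almost all of it comes from the near-zero term
```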
Canberra Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Jaccard Distance
Jaccard distance measures the dissimilarity between finite sets: one
minus the quotient of the size of their intersection and the size of
their union. More effective for detecting things like document
duplication.
tsne = TSNEVisualizer(metric="jaccard")
tsne.fit(docs, labels)
tsne.poof()
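A sketch with documents encoded as boolean presence vectors over a shared vocabulary (the vectors are invented):

```python
from scipy.spatial.distance import jaccard

doc_a = [True, True, False, True]  # contains terms {0, 1, 3}
doc_b = [True, False, True, True]  # contains terms {0, 2, 3}

# 1 - |intersection| / |union| = 1 - 2/4
d = jaccard(doc_a, doc_b)
print(d)  # 0.5
```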
Jaccard Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Hamming Distance
Hamming distance between two strings is the number of positions at which the
corresponding symbols are different. Measures minimum substitutions required to
change one string into the other.
tsne = TSNEVisualizer(metric="hamming")
tsne.fit(docs, labels)
tsne.poof()
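A sketch with a classic string pair; note that SciPy returns the *fraction* of differing positions, so multiply by the length to recover the count:

```python
from scipy.spatial.distance import hamming

a = list("karolin")
b = list("kathrin")

frac = hamming(a, b)
print(frac * len(a))  # 3 of 7 positions differ
```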
Hamming Distance
Hobbies Corpus Ads Corpus Dresses Corpus
Other Yellowbrick Text Visualizers
Intercluster
Distance
Maps
Token
Frequency
Distribution
Dispersion
Plot
“Overview first, zoom and filter, then
details-on-demand”
- Ben Shneiderman
Thank you!
