ENTERPRISE-SCALE
TOPOLOGICAL DATA ANALYSIS
USING SPARK
Anshuman Mishra, Lawrence Spracklen
Alpine Data
What we’ll talk about
•  What’s TDA and why should you care
•  Deep dive into Mapper and bottlenecks
•  Betti Mapper - scaling Mapper to the enterprise
Can anyone recognize this?
We built the first open-source scalable
implementation of TDA Mapper
•  Our implementation of Mapper beats a naïve
version on Spark by 8x-11x* for moderate to large
datasets
•  8x: avg. 305 s for Betti vs. non-completion in 2400 s for
Naïve (100,000 x 784 dataset)
•  11x: avg. 45 s for Betti vs. 511 s for Naïve (10,000 x 784
dataset)
•  We used a novel combination of locality-sensitive
hashing on Spark to increase performance
TDA AND MAPPER: WHY SHOULD
WE CARE?
Conventional ML carries the “curse
of dimensionality”
•  As d à∞, all data points are packed away into
corners of a corresponding d-dimensional
hypercube, with little to separate them
•  Instance learners start to choke
•  Detecting anomalies becomes tougher
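This concentration effect is easy to demonstrate numerically (a quick illustration of ours, not from the deck): as dimensionality grows, pairwise distances between uniform random points bunch together, so their relative spread collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_spread(d, m=200):
    """Std/mean of pairwise Euclidean distances for m uniform points in d dims."""
    X = rng.random((m, d))
    sq = (X ** 2).sum(axis=1)
    # Squared distances via the Gram trick; clip tiny negatives from round-off
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    dist = np.sqrt(d2[np.triu_indices(m, k=1)])
    return dist.std() / dist.mean()

# Relative spread shrinks sharply as d grows: points become nearly equidistant
print(relative_spread(2), relative_spread(1000))
```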
How does TDA (Mapper) help?
•  “Topological Methods for the Analysis of High Dimensional
Data Sets and 3D Object Recognition”, G. Singh, F. Memoli, G.
Carlsson, Eurographics Symposium on Point-Based Graphics
(2007)
•  Algorithm consumes a dataset and generates a
topological summary of the whole dataset
•  Summary can help identify localized structures in
high-dimensional data
Some examples of Mapper outputs
DEEP DIVE INTO MAPPER
Mapper: The 30,000 ft. view
(diagram: M x N dataset → two M x 1 filter values → M x M distance matrix → clustered topological network)
Mapper: 1. Choose a Distance Metric
The first step is to choose a distance metric for
the M x N dataset, in order to compute an
M x M distance matrix.
This will be used to capture similarity between
data points.
Some examples of distance metrics are
Euclidean, Hamming, and cosine.
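As a concrete sketch of this step (toy data, plain NumPy rather than the deck's code), the M x M Euclidean distance matrix is:

```python
import numpy as np

# Toy M x N dataset: M = 4 data points, N = 3 features (illustrative only)
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 0.0]])

# M x M Euclidean distance matrix via broadcasting
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

print(dist.shape)   # (4, 4)
print(dist[0, 1])   # 1.0 -- distance between points 0 and 1
```

Swapping the metric (Hamming, cosine, etc.) only changes how `dist` is filled; the rest of the pipeline is unchanged.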
Mapper: 2. Compute filter functions
Next, filter functions (aka lenses) are chosen
to map each data point to a single value on the
real line.
These filter functions can be based on:
-  Raw features
-  Statistics – mean, median, variance, etc.
-  Geometry – distance to closest data point,
distance to furthest data point, etc.
-  ML algorithm outputs
Usually two such functions are computed on the
dataset, yielding two M x 1 vectors.
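Two common lenses from the list above — one statistical, one geometric — can be sketched as (variable names are ours, not the deck's):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 10))                                      # M x N toy dataset
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # M x M distances

# Lens 1 (statistic): mean of each data point's features -> M x 1
lens_mean = X.mean(axis=1)

# Lens 2 (geometry): distance to the closest *other* data point -> M x 1
d = dist.copy()
np.fill_diagonal(d, np.inf)        # ignore self-distance
lens_nn = d.min(axis=1)

print(lens_mean.shape, lens_nn.shape)   # (100,) (100,)
```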
Mapper: 3. Apply cover & overlap
Next, the range of each filter is “chopped up”
into overlapping segments or intervals using
two parameters: cover and overlap.
-  Cover (aka resolution) controls how many
intervals each filter range will be chopped
into (e.g. 40 or 100)
-  Overlap controls the degree of overlap
between adjacent intervals (e.g. 20%)
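A sketch of chopping a lens range into overlapping intervals (parameter names are ours):

```python
def cover_intervals(lo, hi, n_intervals, overlap_frac):
    """Split [lo, hi] into n_intervals equal segments, each widened so
    adjacent intervals share overlap_frac of a segment's width."""
    width = (hi - lo) / n_intervals
    pad = width * overlap_frac / 2.0
    return [(lo + i * width - pad, lo + (i + 1) * width + pad)
            for i in range(n_intervals)]

ivals = cover_intervals(0.0, 10.0, n_intervals=5, overlap_frac=0.2)
print(ivals[0], ivals[1])   # adjacent intervals overlap
```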
Mapper: 4. Compute Cartesians
The next step is to compute the Cartesian
products of the range intervals from the
previous step and assign the original data
points to the resulting two-dimensional regions
based on their filter values.
Note that these two-dimensional regions will
overlap due to the overlap parameter set in the
previous step; in other words, there will be
points in common between regions.
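Assigning points to the overlapping 2-D regions (Cartesian products of the two lenses' intervals) might look like this sketch (helper names are ours):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
lens1 = rng.random(50)                 # first M x 1 filter values
lens2 = rng.random(50)                 # second M x 1 filter values

def cover_intervals(lo, hi, n, overlap):
    w = (hi - lo) / n
    pad = w * overlap / 2.0
    return [(lo + i * w - pad, lo + (i + 1) * w + pad) for i in range(n)]

iv1 = cover_intervals(0.0, 1.0, 4, 0.2)
iv2 = cover_intervals(0.0, 1.0, 4, 0.2)

# region id -> indices of points whose (lens1, lens2) falls in that 2-D region
regions = {}
for (i, (a1, b1)), (j, (a2, b2)) in product(enumerate(iv1), enumerate(iv2)):
    mask = (lens1 >= a1) & (lens1 <= b1) & (lens2 >= a2) & (lens2 <= b2)
    idx = np.flatnonzero(mask)
    if idx.size:
        regions[(i, j)] = idx

# Every point lands in at least one region; overlap zones add duplicates,
# so the total membership count is >= the number of points
total = sum(len(v) for v in regions.values())
print(len(regions), total)
```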
Mapper: 5. Perform clustering
The penultimate stage in the Mapper algorithm
is to perform clustering in the original high-
dimensional space for each (overlapping)
region.
Each cluster will be represented by a node;
since regions overlap, some clusters will have
points in common, and their corresponding
nodes will be connected by an unweighted
edge.
The kind of clustering performed is immaterial.
Our implementation uses DBSCAN.
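The deck uses DBSCAN for this step; as a dependency-free toy stand-in, the sketch below clusters a region by connected components under a distance threshold (single-linkage), which is in the same density-based spirit:

```python
import numpy as np

def threshold_clusters(points, idx, eps):
    """Cluster the points whose row indices are in `idx`: two points join the
    same cluster if some chain of points <= eps apart connects them.
    A toy stand-in for the DBSCAN step, not the deck's implementation."""
    idx = list(idx)
    parent = {i: i for i in idx}

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in idx:                      # union points within eps of each other
        for b in idx:
            if a < b and np.linalg.norm(points[a] - points[b]) <= eps:
                parent[find(a)] = find(b)

    clusters = {}
    for i in idx:
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(threshold_clusters(pts, [0, 1, 2], eps=1.0))  # two clusters
```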
Mapper: 6. Build TDA network
Finally, by joining nodes in topological space
(i.e. clusters in feature space) that have points
in common, one can derive a topological
network in the form of a graph.
Graph coloring can then be performed to capture
localized behavior in the dataset and derive
hidden insights from the data.
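Building the graph then amounts to one node per cluster and an edge whenever two clusters share a point — a sketch using plain dicts rather than a graph library (toy cluster contents):

```python
from itertools import combinations

# Clusters from the previous step: node id -> set of data-point indices
# (hypothetical values; in practice these come from clustering each region)
clusters = {
    "A": {0, 1, 2},
    "B": {2, 3},      # shares point 2 with A
    "C": {4, 5},      # shares nothing
}

# Connect any two clusters with a non-empty intersection
edges = [(u, v) for u, v in combinations(clusters, 2)
         if clusters[u] & clusters[v]]

print(edges)   # [('A', 'B')]
```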
Open source Mapper implementations
•  Python:
–  Python Mapper, Müllner and Babu: http://danifold.net/mapper/
–  Proof-of-concept Mapper in a Kaggle notebook, @mlwave:
https://www.kaggle.com/triskelion/digit-recognizer/mapping-digits-with-a-t-sne-
lens/notebook
•  R:
–  TDAmapper package
•  Matlab:
–  Original mapper implementation
(screenshots: Alpine TDA, R and Python implementations)
Mapper: Computationally expensive!
(diagram: the same pipeline as before, dominated by the M x M distance matrix)
O(N²) is prohibitive for large datasets.
Single-node open source Mappers choke on large datasets (generously
defined as > 10k data points with > 100 columns).
Rolling our own Mapper..
•  Our Mapper implementation
–  Built on PySpark 1.6.1
–  Called Betti Mapper
–  Named after Enrico Betti, a famous topologist
Multiple ways to scale Mapper
1.  Naïve Spark implementation
✓  Write the Mapper algorithm using (Py)Spark RDDs
–  Distance matrix computation still performed over the entire dataset on
the driver node
2.  Down-sampling / landmarking (+ Naïve Spark)
✓  Obtain a manageable number of samples from the dataset
–  Unreasonable to assume global distribution profiles are captured
by the samples
3.  LSH prototyping!
What came first?
•  We use Mapper to detect structure in high-dimensional data using the
concept of similarity.
•  BUT we need to measure similarity so we can sample efficiently.
•  We could use stratified sampling, but then what about
•  Unlabeled data?
•  Anomalies and outliers?
•  LSH is a low-cost first pass that captures similarity cheaply and helps
scale Mapper
Locality sensitive hashing by random projection
•  We draw random vectors with the same
dimensionality as the dataset and compute their
dot products with each data point
•  If a dot product is > 0, mark it as 1, else 0
•  The random vectors serve to slice the feature
space into bins
•  The series of projection bits can be packed
into a single hash number
•  We have found good results by setting the # of
random vectors to: floor(log2 M)
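A sketch of this SimHash-style scheme in NumPy (names are ours): sign bits of random projections are packed into one integer hash per point, using floor(log2 M) random vectors as the deck suggests.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 1000, 784
X = rng.random((M, N))                     # M x N toy dataset

k = int(np.floor(np.log2(M)))              # number of random vectors: floor(log2 M)
R = rng.standard_normal((N, k))            # random projection vectors

bits = (X @ R > 0).astype(np.uint64)       # M x k sign bits
weights = 2 ** np.arange(k, dtype=np.uint64)
hashes = bits @ weights                    # one integer hash (bin id) per point

print(hashes.shape, int(hashes.max()) < 2 ** k)   # (1000,) True
```

Points whose projections agree on every sign bit land in the same bin, so each hash value identifies one slice of feature space.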
Scaling with LSH Prototyping on Spark
1.  Use Locality Sensitive Hashing
(SimHash / Random Projection)
to drop data points into bins
2.  Compute “prototype” points for
each bin corresponding to bin
centroid
–  can also use median to make
prototyping more robust
3.  Use binning information to
compute topological network:
distMxM => distBxB, where B is no. of
prototype points (1 per bin)
ü Fastest scalable implementation
ü # of random vectors controls #
of bins and therefore fidelity of
topological representation
ü LSH binning tends to select
similar points (inter-bin distance >
intra-bin distance)
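Steps 1–2 above can be sketched as follows (our names and conventions, continuing the hashing sketch; centroid prototypes, with the median variant noted in a comment):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 500, 32
X = rng.random((M, N))                     # M x N toy dataset

# 1. LSH binning: sign bits of random projections -> integer bin id per point
k = int(np.floor(np.log2(M)))
R = rng.standard_normal((N, k))
bins = ((X @ R > 0).astype(np.int64) @ (2 ** np.arange(k))).astype(int)

# 2. One "prototype" per non-empty bin: the centroid of its members
#    (swap .mean for np.median for the more robust variant)
prototypes = np.stack([X[bins == b].mean(axis=0) for b in np.unique(bins)])

# 3. Mapper then runs on the B x N prototypes instead of the M x N points,
#    shrinking the distance matrix from M x M to B x B
B = prototypes.shape[0]
print(B, "<=", M)
```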
Betti Mapper
(diagram: M x N dataset → LSH → B x N prototypes → B x B distance matrix of
prototypes; the effective point-to-point distance is
D(p1, p2) = D(bin(p1), bin(p2)), so the full M x M matrix is never built)
IMPLEMENTATION
PERFORMANCE
Using pyspark
•  Simple to “sparkify” an existing Python Mapper
implementation
•  Leverage the rich Python ML ecosystem to the
greatest extent
–  Modify only the computational bottlenecks
•  Numpy/Scipy is essential
•  Turnkey Anaconda deployment on CDH
Naïve performance
(chart: projected runtime vs. row count, from seconds at 10K rows to decades at 1B rows)
•  4 TFLOP/s GPGPU (100% util)
•  5K columns
•  Euclidean distance
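The chart's endpoints are easy to sanity-check with a back-of-envelope FLOP count (our arithmetic, assuming roughly 3N floating-point operations per pairwise Euclidean distance):

```python
def naive_distance_years(rows, cols=5_000, flops_per_sec=4e12):
    """Rough time to fill an M x M Euclidean distance matrix:
    ~M^2/2 pairs, ~3N ops per pair, on a 4 TFLOP/s device at 100% util."""
    flops = (rows ** 2 / 2) * 3 * cols
    seconds = flops / flops_per_sec
    return seconds / (365.25 * 24 * 3600)

print(naive_distance_years(10_000) * 365.25 * 24 * 3600)  # ~0.2 seconds
print(naive_distance_years(1_000_000_000))                # ~60 years
```

Quadratic growth in row count takes the naïve approach from well under a second at 10K rows to decades at a billion rows, matching the chart's axes.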
Our Approach
Build and test three implementations of Mapper
1.  Naïve Mapper on Spark
2.  Mapper on Spark with sampling (5%, 10%, 25%)
3.  Betti Mapper: LSH + Mapper (8v, 12v, 16v)
Test Hardware
MacBook Pro, mid-2014
•  2.5 GHz Intel® Core i7
•  16 GB 1600 MHz DDR3
•  512 GB SSD
Spark Cluster on Amazon EC2
•  Instance type: r3.large
•  Node: 2 vCPU, 15 GB RAM, 32 GB SSD
•  4 workers, 1 driver
•  250 GB SSD EBS as persistent HDFS
•  Amazon Linux, Anaconda 64-bit 4.0.0,
PySpark 1.6.1
Spark Configuration
•  --driver-memory 8g
•  --executor-memory 12g (each)
•  --executor-cores 2
•  No. of executors: 4
Dataset Configuration
Filename          Size (M x N)                Size (bytes)
MNIST_1k.csv      1,000 rows x 784 cols       1.83 MB
MNIST_10k.csv     10,000 rows x 784 cols      18.3 MB
MNIST_100k.csv    100,000 rows x 784 cols     183 MB
MNIST_1000k.csv   1,000,000 rows x 784 cols   1830 MB
The datasets are sampled with replacement from the
original MNIST dataset available for download using
Python’s scikit-learn library (mldata module)
Test Harness
•  Runs test cases on cluster
•  Test case:
–  <mapper type, dataset size, no. of vectors>
•  Terminates when runtime exceeds 40 minutes
Some DAG Snapshots
Graph coloring by median digit; clustering and node assignment
(results chart: runs marked X exceeded the 40-minute limit)
Future Work
•  Test other LSH schemes
•  Optimize Spark code and leverage existing
codebases for distributed linear algebra routines
•  Incorporate as a machine learning model on the
Alpine Data platform
(screenshot: Alpine Spark TDA)
Key Takeaways
•  Scaling Mapper algorithm is non-trivial but
possible
•  Gaining control over fidelity of representation is
key to gaining insights from data
•  An open source implementation of Betti Mapper will
be made available after code cleanup!
References
•  “Topological Methods for the Analysis of High Dimensional Data Sets and 3D
Object Recognition”, G. Singh, F. Memoli, G. Carlsson, Eurographics Symposium
on Point-Based Graphics (2007)
•  “Extracting insights from the shape of complex data using topology”, P. Y. Lum,
G. Singh, A. Lehman, T. Ishkanov, M. Vejdemo-Johansson, M. Alagappan, J.
Carlsson, G. Carlsson, Nature Scientific Reports (2013)
•  “Online generation of locality sensitive hash signatures”, B. Van Durme, A. Lall,
Proceedings of the ACL 2010 Conference Short
Papers (2010)
•  PySpark documentation: http://spark.apache.org/docs/latest/api/python/
Acknowledgements
•  Rachel Warren
•  Anya Bida
Alpine is Hiring
•  Platform engineers
•  UX engineers
•  Build engineers
•  Ping me : lawrence@alpinenow.com
Q & (HOPEFULLY) A
THANK YOU.
anshuman@alpinenow.com
lawrence@alpinenow.com
