Data Science and Machine
Learning with Anaconda
PyData Barcelona
May 20, 2017
Travis E. Oliphant
Co-founder, President, Chief Data Scientist
A bit about me
• PhD 2001 from Mayo Clinic in Biomedical
Engineering (Ultrasound and MRI)
• MS/BS degrees in Elec. Comp. Engineering
• Creator and Developer of SciPy (1999-2009)
• Professor at BYU (2001-2007)
• Author of NumPy (2005-2012)
• Started Numba (2012)
• Founding Chair of NumFOCUS / PyData
• Previous Python Software Foundation Director
• Co-founder of Continuum Analytics
• CEO => President, Chief Data Scientist
Company + Community
2012 - Created Two Orgs for Sustainable Open Source
Enterprise software, support and services to empower people who change the world to get rapid insight from all of their data, built on open source that we contribute to and sustain.
NumFOCUS is a 501(c)(3) nonprofit that supports and promotes world-class, innovative, open source scientific computing.
Community-led non-profit organization
Independent Board of Directors:
• Andy Terrel (President)
• Ralf Gommers (Secretary)
• Didrik Pinte (Treasurer)
• Lorena Barba
• Matthew Turk
• Jennifer Klay
https://www.numfocus.org/community/donate/
Data Science is growing, with Python leading the way. Probably the biggest reason is Machine Learning!
A neural network with several layers, trained with ~130,000 images, matched trained dermatologists with 91% area under the sensitivity-specificity curve.
Keys:
• Access to Data
• Access to Software
• Access to Compute
Automatically annotating and indexing your images in Dropbox
https://blogs.dropbox.com/tech/2017/04/creating-a-modern-ocr-pipeline-using-computer-vision-and-deep-learning/
Augmented Reality: Translating Service
Google Translate via Image
Not always perfect…
Anaconda makes this machine learning
“magic” accessible to mere mortals
An explosion of innovation is happening in software
Managing this innovation reproducibly is what Anaconda excels at
Anaconda’s coverage of the Python for Data ecosystem helps you with both
the fun part:
• modeling, predicting, classifying, visualizing
…and the not so fun part:
• feature labeling, data-cleaning, data-extraction, scaling, deploying
Bringing Technology Together
Data Science Workflow
(diagram: Getting Data feeds Understand Data via Notebooks and Exploratory Data Analysis and Viz; Models support Understand World; Data Products such as Reports, Microservices, Dashboards, and Applications drive Decisions and Actions, which in turn generate New Data)
(diagram: …DATA from sources such as HDFS, SQL Server, Teradata, Cassandra, S3, and flat files is unified behind a Virtual Data Lake / Data Catalogue; Open Data Science turns it into Insight, Decisions, Actions, Results, and Creations, delivered as Applications, Dashboards, Notebooks, and MicroServices)
Empowering the Data Science Team
• Set up your environment locally (single node) on any platform, or on a cluster
• Use the same expression to query data no matter where it lives, whether HDFS or databases (Blaze)
• Scale your data processing without changing frameworks or paradigms (dask, numba)
• Present and tell your data story to decision makers (Bokeh)
• Build large-scale, meaningful, interactive data visualizations (Bokeh + datashader)
• Deploy your interactive analytical/predictive applications (anaconda project)
Themes: Collaboration, Big Data, Sharing insights, Deployment
Starts with Conda
Conda Sandboxing Technology
(diagram: several independent environments managed by conda, mixing versions side by side, e.g. Python v2.7 and v3.4, NumPy v1.10 and v1.11, Pandas v0.16 and v0.18, Jupyter, R and R Essentials)
• Language independent
• Platform independent
• No special privileges required
• No VMs or containers
• Enables: Reproducibility, Collaboration, Scaling
"conda - package everything"
221 W. 6th Street
Suite #1550
Austin, TX 78701
+1 512.222.5440
info@continuum.io
@ContinuumIO
$ anaconda-project run plot --show
conda install tensorflow
Some conda features
• Excellent support for “system-level” environments — like having mini VMs but
much lighter weight than docker (micro containers)
• Minimizes code-copies (uses hard/soft links if possible)
• Simple format: binary tar-ball + metadata
• Metadata allows static analysis of dependencies
• Easy to create multiple “channels” which are repositories for packages
• User installable (no root privileges needed)
• Integrates very well with pip and other language-specific package managers.
• Cross Platform
Basic Conda Usage

Install a package:             conda install sympy
List all installed packages:   conda list
Search for packages:           conda search llvm
Create a new environment:      conda create -n py3k python=3
Remove a package:              conda remove nose
Get help:                      conda install --help
Advanced Conda Usage

Install a package in an environment:           conda install -n py3k sympy
Update all packages:                           conda update --all
Export list of packages:                       conda list --export packages.txt
Install packages from an export:               conda install --file packages.txt
See package history:                           conda list --revisions
Revert to a revision:                          conda install --revision 23
Remove unused packages and cached tarballs:    conda clean -pt
Environments
• Environments are simple: just link the package to a different directory
• Hard-links are very cheap, and very fast, even on Windows
• Conda environments are completely independent installations of everything
• No fiddling with PYTHONPATH or sym-linking site-packages; "activating" an environment sets everything up so it *just works*

Create:    conda create -n py3k python=3.5
Unix:      source activate py3k
Windows:   activate py3k
More Sophisticated Environments

A lightweight, isolated sandbox to manage your dependencies in a single file (environment.yml) and allow reproducibility of your project.

$ conda env create
$ source activate ENV_NAME
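A minimal environment.yml might look like the following sketch; the environment name and package choices here are illustrative, not from the talk:

    name: ENV_NAME
    channels:
      - defaults
    dependencies:
      - python=3.5
      - numpy
      - pandas

Running conda env create in the directory containing this file builds the environment it describes.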
Python Ecosystem
(logo collage of Python ecosystem projects, including Numba, Bokeh, and Keras)
Machine Learning Explosion
Scikit-Learn
Tensorflow
Keras
XGBoost
theano
lasagne
caffe/caffe2
torch
mxnet / minpy
neon
CNTK
DAAL
Chainer
Dynet
Apache Singa
Shogun
https://github.com/josephmisiti/awesome-machine-learning#python-general-purpose
http://deeplearning.net/software_links/
http://scikit-learn.org/stable/related_projects.html
NumPy
Without NumPy

# functions.py
from math import sin, pi

def sinc(x):
    if x == 0:
        return 1.0
    else:
        pix = pi*x
        return sin(pix)/pix

def step(x):
    if x > 0:
        return 1.0
    elif x < 0:
        return 0.0
    else:
        return 0.5

>>> import functions as f
>>> xval = [x/3.0 for x in range(-10,10)]
>>> yval1 = [f.sinc(x) for x in xval]
>>> yval2 = [f.step(x) for x in xval]

Python is a great language but needed a way to operate quickly and cleanly over multi-dimensional arrays.
With NumPy

# functions2.py
from numpy import sin, pi, vectorize
import functions as f

vsinc = vectorize(f.sinc)
def sinc(x):
    pix = pi*x
    val = sin(pix)/pix
    val[x==0] = 1.0
    return val

vstep = vectorize(f.step)
def step(x):
    y = x*0.0
    y[x>0] = 1
    y[x==0] = 0.5
    return y

>>> import functions2 as f
>>> from numpy import *
>>> x = r_[-10:10]/3.0
>>> y1 = f.sinc(x)
>>> y2 = f.step(x)

Offers N-D arrays, element-by-element functions, and basic random numbers, linear algebra, and FFT capability for Python.
http://numpy.org
Fiscally sponsored by NumFOCUS
NumPy: an Array Extension of Python
• Data: the array object
– slicing and shaping
– data-type map to Bytes
• Fast Math (ufuncs):
– vectorization
– broadcasting
– aggregations
NumPy Array: Key Attributes
• dtype
• shape
• ndim
• strides
• data
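A quick look at these attributes on a small array:

>>> import numpy as np
>>> a = np.arange(12, dtype=np.float64).reshape(3, 4)
>>> a.dtype
dtype('float64')
>>> a.shape
(3, 4)
>>> a.ndim
2
>>> a.strides      # bytes to step per dimension: 4 columns * 8 bytes, then 8 bytes
(32, 8)
>>> a.data         # buffer exposing the raw bytes
<memory at 0x...>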
NumPy Slicing (Selection)

Given the 6x6 array a:

 0  1  2  3  4  5
10 11 12 13 14 15
20 21 22 23 24 25
30 31 32 33 34 35
40 41 42 43 44 45
50 51 52 53 54 55

>>> a[0,3:5]
array([3, 4])

>>> a[4:,4:]
array([[44, 45],
       [54, 55]])

>>> a[:,2]
array([ 2, 12, 22, 32, 42, 52])

>>> a[2::2,::2]
array([[20, 22, 24],
       [40, 42, 44]])
Summary
• Provides a foundational N-dimensional array composed of homogeneous elements of a particular "dtype"
• The set of dtypes is extensive (but not very extensible)
• Arrays can be sliced and diced with simple syntax to provide easy manipulation and selection
• Provides fast and powerful math, statistics, and linear algebra functions that operate over arrays
• Utilities for sorting, and for reading and writing data, are also provided
SciPy
SciPy: "Distribution of Python Numerical Tools masquerading as a Library"

Name         Description
cluster      KMeans and Vector Quantization
fftpack      Discrete Fourier Transform
integrate    Numerical Integration
interpolate  Interpolation routines
io           Data Input and Output
linalg       Fast Linear algebra
misc         Utilities
ndimage      N-dimensional Image processing
odr          Orthogonal Distance Regression
optimize     Constrained and Unconstrained Optimization
signal       Signal Processing Tools
sparse       Sparse Matrices and Algebra
spatial      Spatial Data Structures and Algorithms
special      Special functions (e.g. Bessel)
stats        Statistical Functions and Distributions
Matplotlib
a powerful plotting engine

import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt

np.random.seed(0)

# example data
mu = 100     # mean of distribution
sigma = 15   # standard deviation of distribution
x = mu + sigma * np.random.randn(437)
num_bins = 50

fig, ax = plt.subplots()

# the histogram of the data
n, bins, patches = ax.hist(x, num_bins, normed=1)

# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
ax.plot(bins, y, '--')
ax.set_xlabel('Smarts')
ax.set_ylabel('Probability density')
ax.set_title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')

# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.show()
a powerful plotting engine

import matplotlib.pyplot as plt
import scipy.misc as misc

im = misc.face()
ax = plt.imshow(im)
plt.title('Raccoon Face of size %d x %d' % im.shape[:2])
plt.savefig('face.png')
a powerful plotting engine

import matplotlib.pyplot as plt
import numpy as np

with plt.xkcd():
    fig = plt.figure()
    ax = fig.add_axes((0.1, 0.2, 0.8, 0.7))
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    plt.xticks([])
    plt.yticks([])
    ax.set_ylim([-30, 10])

    data = np.ones(100)
    data[70:] -= np.arange(30)

    plt.annotate('THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
                 xy=(70, 1),
                 arrowprops=dict(arrowstyle='->'),
                 xytext=(15, -10))

    plt.plot(data)
    plt.xlabel('time')
    plt.ylabel('my overall health')
    fig.text(0.5, 0.05,
             '"Stove Ownership" from xkcd by Randall Munroe',
             ha='center')
a powerful plotting engine

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np

fig = plt.figure()
ax = fig.gca(projection='3d')

# Make data.
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)

# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)

# Customize the z axis.
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))

# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
Pandas
Easy Data Wrangling
• Adds indexes and labels to 1-d and 2-d NumPy arrays
(Series and DataFrame)
• Many convenience functions and methods to manipulate
messy data-sets including time-series.
• Powerful indexing with automatic data alignment.
• Easy handling of missing data.
• Allows easy joining and merging Data Sets
• Pivots and reshaping (split-apply-combine)
• Powerful group-by operations with summarization
• Builtin visualization using labels and indexes
Easy Data Wrangling
• Series Data Structure
  • built for 1-dimensional series data
  • homogeneous data
  • two arrays: one of data, and another which is the index, which can be a homogeneous array of any type such as integers, objects, or date-times
• DataFrame
  • built for 2-dimensional collections of tabular data (think Excel sheet)
  • heterogeneous data comprised of multiple Series
  • includes an index column allowing sophisticated selection and alignment
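A tiny sketch of both structures (the values here are made up):

>>> import pandas as pd
>>> s = pd.Series([1.0, 2.5, 4.0], index=['a', 'b', 'c'])   # 1-d, homogeneous, labeled index
>>> s['b']
2.5
>>> df = pd.DataFrame({'price': [1.0, 2.5], 'qty': [3, 7]}, index=['a', 'b'])
>>> df.loc['a', 'price']                                    # select by index label and column
1.0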
Easy Data Wrangling
medals = pd.read_csv('data/medals.csv', index_col='name')
medals.head()
gold = medals['medal'] == 'gold'
won = medals['count'] > 0
medals.loc[gold & won, 'count'].sort_values().plot(kind='bar', figsize=(12,8))
Easy Data Wrangling
google = pd.read_csv('data/goog.csv', index_col='Date', parse_dates=True)
google.info()
google.head()
google.describe()
Easy Data Wrangling

import numpy as np
import pandas as pd

df = pd.read_excel("data/pbpython/salesfunnel.xlsx")
df.head()

table = pd.pivot_table(df,
                       index=["Manager", "Rep", "Product"],
                       values=["Price", "Quantity"],
                       aggfunc=[np.sum, np.mean])
Machine Learning
Machine Learning made easy
• Supervised Learning: uses "labeled" data to train a model
  • Regression: the predicted variable is continuous
  • Classification: the predicted variable is discrete (but c.f. Logistic "Regression")
• Unsupervised Learning
  • Clustering: discover categories in the data
  • Density Estimation: determine a representation of the data
  • Dimensionality Reduction: represent data with fewer variables or feature vectors
• Reinforcement Learning: "goal-oriented" learning (e.g. drive a car)
• Deep Learning: neural networks with many layers
• Semi-supervised Learning: use some labeled data for training
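Each of these is a few lines with scikit-learn; for instance, a minimal unsupervised clustering sketch (the data and cluster count are made up for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2),             # one synthetic blob at the origin
               rng.randn(50, 2) + [5, 5]])   # a second blob offset to (5, 5)

km = KMeans(n_clusters=2, random_state=0).fit(X)
print(km.labels_[:5])        # discovered category per point
print(km.cluster_centers_)   # estimated cluster centers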
Mathematical representation

Supervised Learning:  y = f(x, θ)

x       Input data or "feature vectors"
y       Labels for training
θ       Parameters that determine the model; training is the process of estimating these parameters
ŷ       Predicted outputs:  ŷ = f(x, θ)
f(·,·)  Learning model; may be part of a family of models, with hyper-parameters selecting the specific model
Unsupervised Learning

Auto-encoding:  y = f(x, θ), with the labels for training set equal to the input

x        Input data or "feature vectors"
θ        Parameters of the model
f(·,·)   Underlying model
g(·) = f(x, ·)   Data-specific model
x̂ = g(θ̂)        Estimated data

The auto-encoded model now represents the data in a lower-dimensional space.
Model parameters, or estimated data, can be used as feature-vectors.
The network can "de-noise" future inputs (project new inputs onto the data-space).
Supervised Deep Learning

Same structure:  y = f(x, θ)

x   Input data or "feature vectors"
y   Labels for training
θ   All the weights w_ijk between layers
ŷ   Predicted outputs:  ŷ = f(x, θ)

Each layer i computes

    z_ij = g( Σ_k w_ijk · z_(i-1)k ),   with the sigmoid  g(u) = 1 / (1 + e^(-u))
Unsupervised Deep Learning

Auto-encoding:  y = f(x, θ), with the labels for training set equal to the input

x   Input data or "feature vectors"
θ   All the weights w_ijk between layers

Each layer i computes

    z_ij = g( Σ_k w_ijk · z_(i-1)k ),   with the sigmoid  g(u) = 1 / (1 + e^(-u))

The auto-encoded network now represents the data in a lower-dimensional space.
Outputs of hidden layers (or the weights) can be used as feature-vectors.
The network can "de-noise" future inputs (project new inputs onto the data-space).
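The layer recursion above is short to write out directly; a minimal NumPy sketch (layer sizes and random weights are arbitrary, for illustration only):

import numpy as np

def g(u):
    # sigmoid activation: g(u) = 1 / (1 + e^(-u))
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, weights):
    # z_i = g(W_i @ z_{i-1}); weights[i] has shape (n_out, n_in)
    z = x
    for W in weights:
        z = g(W @ z)
    return z

rng = np.random.RandomState(0)
weights = [rng.randn(5, 8), rng.randn(3, 5)]   # two layers: 8 -> 5 -> 3
x = rng.randn(8)
y_hat = forward(x, weights)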
Basic scikit-learn experience
1) Create or Load Data
>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> digits = datasets.load_digits()
A Scikit-learn dataset is a “dictionary-like” object with input data
stored as the .data attribute and labels stored as the .target attribute
.data attribute is always a 2D array (n_samples, n_features)
May need to extract features from raw data to produce data scikit-learn can use
Basic scikit-learn experience

2) Choose Model (or Build Pipeline of Models)

>>> from sklearn import svm
>>> clf = svm.SVC(gamma=0.001, C=100.)

Most models are model-families and have "hyper-parameters" that specify the specific model function. Good values for these can be found via grid-search and cross-validation (an easy target for parallelization). Here "gamma" and "C" are hyper-parameters.

Many choices of models: http://scikit-learn.org/stable/supervised_learning.html
Basic scikit-learn experience
3) Train the Model
>>> clf.fit(data[:-1], labels[:-1])
Models have a “fit” method which updates the parameters-to-be-estimated in
the model in-place so that after fitting the model is “trained”
For validation and scoring you need to leave out some of the data to use later. Cross-validation (e.g. k-fold) techniques can also be parallelized easily. Here we "leave one out" (or use n folds).
Basic scikit-learn experience
4) Predict new values
>>> clf.predict(data[-1:])
Prediction of new data uses the trained parameters of the model. Cross-validation can be used to understand how sensitive the model is to different partitions of the data.
>>> from sklearn.model_selection import cross_val_score
>>> scores = cross_val_score(clf, data, target, cv=10)
array([ 0.96…, 1. …, 0.96… , 1. ])
Jupyter
The Jupyter Notebook is an open-source
web application that allows you to create
and share documents that contain live
code, equations, visualizations and
explanatory text. Uses include: data
cleaning and transformation, numerical
simulation, statistical modeling, machine
learning and much more.
Collaborative Executable Notebooks
• Data lineage
• Interactive visualizations
• Advanced notebook extensions
Scaling Up and Out with Numba and Dask
Scale Up vs Scale Out
• Scale Up (bigger nodes): big memory and many cores / GPU box (Numba)
• Scale Out (more nodes): many commodity nodes in a cluster (Dask, Blaze)
• Best of both (e.g. GPU cluster)
Scaling Up! Optimized Python with JIT compilation from Numba
Numba
• Started in 2012
• Release 33 (0.33) in May
• Version 1.0 coming in 2017
• Particularly suited to Numeric computing
• Lots of features!!!
• Ahead of Time Compilation
• Wide community adoption and use
conda install numba
Example: Filter an array

(the slide shows code demonstrating: array allocation; looping over ndarray x as an iterator; using numpy math functions; returning a slice of the array; and the Numba @jit decorator, where nopython=True is not required)

2.7x speedup over NumPy!
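The slide's code is an image; a sketch of what such a Numba-compiled filter can look like (the smoothing kernel here is illustrative, not the slide's exact code):

import numpy as np
from numba import jit

@jit(nopython=True)            # nopython=True is optional; plain @jit also works
def smooth(x):
    out = np.empty_like(x)     # array allocation inside compiled code
    out[0] = x[0]
    out[-1] = x[-1]
    for i in range(1, len(x) - 1):                    # loop over the ndarray
        out[i] = (x[i-1] + x[i] + x[i+1]) / 3.0       # numpy-style math
    return out[1:-1]           # return a slice of the array

x = np.random.randn(10000)
y = smooth(x)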
Image Processing

@jit('void(f8[:,:], f8[:,:], f8[:,:])')
def filter(image, filt, output):
    M, N = image.shape
    m, n = filt.shape
    for i in range(m//2, M-m//2):
        for j in range(n//2, N-n//2):
            result = 0.0
            for k in range(m):
                for l in range(n):
                    result += image[i+k-m//2, j+l-n//2]*filt[k, l]
            output[i, j] = result

~1500x speed-up
Numba Compatibility
Does not replace the standard Python interpreter (all of your existing Python libraries are still available).
New GPU Data Frame project (pyGDF) — GOAI
Scaling Out with Dask (integrates with, but doesn't depend on, Hadoop)
Dask
• Started as part of Blaze in early 2014.
• General parallel programming engine
• Flexible and therefore highly suited for
• Commodity Clusters
• Advanced Algorithms
• Wide community adoption and use
conda install dask
pip install dask[complete] distributed --upgrade
Moving from small data to big data
(diagram: a spectrum from Small Data to Big Data; Numba appears on the small-data side)
Dask: From User Interaction to Execution
(diagram, featuring delayed)
Dask: Parallel Data Processing
• Synthetic views of NumPy ndarrays
• Synthetic views of Pandas DataFrames, with HDFS support
• DAG construction and workflow manager
Overview of Dask

Dask is a Python parallel computing library that is:
• Familiar: implements parallel NumPy and Pandas objects
• Fast: optimized for demanding numerical applications
• Flexible: for sophisticated and messy algorithms
• Scales up: runs resiliently on clusters of 100s of machines
• Scales down: pragmatic in a single process on a laptop
• Interactive: responsive and fast for interactive data science

Dask complements the rest of Anaconda. It was developed with NumPy, Pandas, and scikit-learn developers.
Dask Collections: Familiar Expressions and API

x.T - x.mean(axis=0)                 # Dask array (mimics NumPy)
df.groupby(df.index).value.mean()    # Dask dataframe (mimics Pandas)
b.map(json.loads).foldby(...)        # Dask bag (collection of data)

def load(filename): ...
def clean(data): ...
def analyze(result): ...             # Dask delayed (wraps custom code)
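A sketch of how dask.delayed can wrap custom functions like those stubs into a lazy task graph (the function bodies and filename are placeholders):

import dask

@dask.delayed
def load(filename):
    return open(filename).read()

@dask.delayed
def clean(data):
    return data.strip().lower()

@dask.delayed
def analyze(result):
    return len(result)

task = analyze(clean(load('data.txt')))   # builds a DAG; nothing has run yet
print(task.compute())                     # executes the graph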
Dask Dataframes

Pandas:
>>> import pandas as pd
>>> df = pd.read_csv('iris.csv')
>>> df.head()
   sepal_length  sepal_width  petal_length  petal_width      species
0           5.1          3.5           1.4          0.2  Iris-setosa
1           4.9          3.0           1.4          0.2  Iris-setosa
2           4.7          3.2           1.3          0.2  Iris-setosa
3           4.6          3.1           1.5          0.2  Iris-setosa
4           5.0          3.6           1.4          0.2  Iris-setosa
>>> max_sepal_length_setosa = df[df.species == 'setosa'].sepal_length.max()
5.7999999999999998

Dask (same API, across many CSVs):
>>> import dask.dataframe as dd
>>> ddf = dd.read_csv('*.csv')
>>> ddf.head()
(same output as above)
…
>>> d_max_sepal_length_setosa = ddf[ddf.species == 'setosa'].sepal_length.max()
>>> d_max_sepal_length_setosa.compute()
5.7999999999999998
Dask Graphs: Example Machine Learning Pipeline
Example 1: Using Dask DataFrames on a cluster with CSV data
• Built from Pandas DataFrames
• Match Pandas interface
• Access data from HDFS, S3, local, etc.
• Fast, low latency
• Responsive user interface

(diagram: one Pandas DataFrame per month, January 2016 through May 2016, combined into a single Dask DataFrame)
Dask Arrays

NumPy:
>>> import numpy as np
>>> np_ones = np.ones((5000, 1000))
>>> np_ones
array([[ 1.,  1.,  1., ...,  1.,  1.,  1.],
       ...,
       [ 1.,  1.,  1., ...,  1.,  1.,  1.]])
>>> np_y = np.log(np_ones + 1)[:5].sum(axis=1)
>>> np_y
array([ 693.14718056,  693.14718056,  693.14718056,  693.14718056,  693.14718056])

Dask (chunked, larger than memory):
>>> import dask.array as da
>>> da_ones = da.ones((5000000, 1000000), chunks=(1000, 1000))
>>> da_ones.compute()
array([[ 1.,  1.,  1., ...,  1.,  1.,  1.],
       ...,
       [ 1.,  1.,  1., ...,  1.,  1.,  1.]])
>>> da_y = da.log(da_ones + 1)[:5].sum(axis=1)
>>> np_da_y = np.array(da_y)    # result fits in memory
array([ 693.14718056,  693.14718056,  693.14718056, ...,  693.14718056])

# If the result doesn't fit in memory
>>> da_y.to_hdf5('myfile.hdf5', 'result')
Example 2: Using Dask Arrays with global temperature data
• Built from NumPy n-dimensional arrays
• Matches NumPy interface (subset)
• Solve medium-large problems
• Complex algorithms

(diagram: many NumPy arrays tiled together into one Dask array)
Dask Schedulers: Distributed Scheduler

(diagram: a Client on the user's machine (laptop) connects to a Scheduler, which farms work out to many Workers on the same network)
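Connecting is brief; a sketch (the scheduler address is a placeholder):

from dask.distributed import Client

client = Client('tcp://scheduler-host:8786')   # attach to a running scheduler
# or, with no arguments, start a local scheduler and workers in-process:
client = Client()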
Cluster Architecture Diagram

(diagram: a Client Machine connects to a Head Node, which manages several Compute Nodes)
Using Anaconda and Dask on your Cluster
• Single machine with multiple threads or processes
• On a cluster with SSH (dcluster)
• Resource management: YARN (knit), SGE, Slurm
• On the cloud with Amazon EC2 (dec2)
• On a cluster with Anaconda for cluster management: manage multiple conda environments and packages on bare-metal or cloud-based clusters
High Performance Hadoop

Bottom line: 2X-100X faster overall performance
• Interact with data in HDFS and Amazon S3 natively from Python
• Distributed computations without the JVM and Python/Java serialization
• Framework for easy, flexible parallelism using directed acyclic graphs (DAGs)
• Interactive, distributed computing with in-memory persistence/caching

Bottom line: leverage Python and R with Spark
(diagram: batch processing via PySpark and SparkR and interactive processing via Ibis/Impala go through YARN and the JVM; the Python and R ecosystem of NumPy, Pandas, and 720+ packages reads and writes HDFS natively, with high-performance interactive and batch processing via MPI)
Scheduler Visualization with Bokeh
Look at all of the data with Bokeh's datashader. Decouple the data-processing from the visualization. Visualize arbitrarily large data.

Numba + Dask
• E.g. Open Street Map data: about 3 billion GPS coordinates
  https://blog.openstreetmap.org/2012/04/01/bulk-gps-point-data/
• This image was rendered in <5 seconds on a standard MacBook with 16 GB RAM
• Renders in less than a second on several 128GB Amazon EC2 instances
Categorical data: 2010 US Census
• One point per person
• 300 million total
• Categorized by race
• Interactive rendering with Numba+Dask
• No pre-tiling
Interactive Data Visualization Apps with Bokeh
Interactive Data Visualization
• Interactive viz, widgets, and tools
• Versatile high level graphics
• Streaming, dynamic, large data
• Optimized for the browser
• No Javascript
• With or without a server
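A minimal sketch of the Python-only API (the data is made up):

from bokeh.plotting import figure, output_file, show

output_file('lines.html')      # render to a static HTML file, no server needed
p = figure(title='Simple line example',
           x_axis_label='x', y_axis_label='y')
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
show(p)                        # opens the interactive plot in a browser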
Rapid Prototyping Visual Apps
• Python interface
• R interface
• Smart plotting
Plotting Billions of Points and Map Integration with Datashader
Datashader: Rendering a Billion Points of Data
• datashader provides a fast, configurable visualization pipeline for faithfully revealing even very large datasets
• Each of these visualizations requires just a few lines of code and no magic numbers to adjust by trial and error
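A sketch of that pipeline on synthetic data (column names and sizes are made up):

import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

n = 10**7                        # ten million synthetic points
df = pd.DataFrame({'x': np.random.randn(n),
                   'y': np.random.randn(n)})

canvas = ds.Canvas(plot_width=800, plot_height=600)
agg = canvas.points(df, 'x', 'y')    # aggregate points onto a pixel grid
img = tf.shade(agg, how='log')       # map aggregate counts to colors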
(Datashader example gallery)
Data Visualization and Applications made easy with HoloViews
HoloViews: Stop plotting your data
• Exploring data can be tedious if you use a plotting library directly, because you will need to specify details about your data (units, dimensions, names, etc.) every time you construct a new type of plot.
• With HoloViews, you instead annotate your data once, and then flexible plotting comes for free: HoloViews objects just display themselves, alone or in any combination.
• It's now easy to lay out subfigures, overlay traces, facet or animate a multidimensional dataset, and sample or aggregate to reduce dimensionality, preserving the metadata each time so the results visualize themselves.

HoloViews makes it simple to create beautiful interactive Bokeh or Matplotlib visualizations of complex data.
# From the nyc_taxi-paramnb notebook (link below); imports reconstructed for
# context, and df (the taxi trip DataFrame) is loaded earlier in the notebook.
import param, paramnb
import holoviews as hv
import geoviews as gv
from bokeh.models import WMTSTileSource
from colorcet import cm
from holoviews.operation.datashader import datashade
from holoviews.streams import RangeXY

tiles = gv.WMTS(WMTSTileSource(url='https://server.arcgisonline.com/ArcGIS/rest/services/'
                                   'World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'))
tile_options = dict(width=800, height=475, xaxis=None, yaxis=None,
                    bgcolor='black', show_grid=False)
passenger_counts = sorted(df.passenger_count.unique().tolist())

class Options(hv.streams.Stream):
    alpha = param.Magnitude(default=0.75, doc="Alpha value for the map opacity")
    colormap = param.ObjectSelector(default=cm["fire"], objects=cm.values())
    plot = param.ObjectSelector(default="pickup", objects=["pickup", "dropoff"])
    passengers = param.ObjectSelector(default=1, objects=passenger_counts)

    def make_plot(self, x_range=None, y_range=None, **kwargs):
        map_tiles = tiles(style=dict(alpha=self.alpha), plot=tile_options)
        df_filt = df[df.passenger_count == self.passengers]
        points = hv.Points(gv.Dataset(df_filt, kdims=[self.plot+'_x', self.plot+'_y'], vdims=[]))
        taxi_trips = datashade(points, width=800, height=475,
                               x_sampling=1, y_sampling=1,
                               cmap=self.colormap, element_type=gv.Image,
                               dynamic=False, x_range=x_range, y_range=y_range)
        return map_tiles * taxi_trips

selector = Options(name="")
paramnb.Widgets(selector, callback=selector.update)
hv.DynamicMap(selector.make_plot, kdims=[], streams=[selector, RangeXY()])

Data Widgets and Applications from Jupyter Notebooks!
https://anaconda.org/jbednar/nyc_taxi-paramnb/notebook
JupyterLab (Where Jupyter is heading)
JupyterLab
• IDE
• Extensible
• Notebook -> Applications
More Than Just Notebooks
Building Blocks: File Browser, Notebooks, Text Editor, Terminal, Output, Widgets
A completely modular architecture
More Related Content

PDF
Scaling PyData Up and Out
Travis Oliphant
 
PDF
London level39
Travis Oliphant
 
PDF
Scale up and Scale Out Anaconda and PyData
Travis Oliphant
 
PDF
PyData Boston 2013
Travis Oliphant
 
PDF
Fast and Scalable Python
Travis Oliphant
 
PDF
Bids talk 9.18
Travis Oliphant
 
PDF
PyData Texas 2015 Keynote
Peter Wang
 
PDF
Python as the Zen of Data Science
Travis Oliphant
 
Scaling PyData Up and Out
Travis Oliphant
 
London level39
Travis Oliphant
 
Scale up and Scale Out Anaconda and PyData
Travis Oliphant
 
PyData Boston 2013
Travis Oliphant
 
Fast and Scalable Python
Travis Oliphant
 
Bids talk 9.18
Travis Oliphant
 
PyData Texas 2015 Keynote
Peter Wang
 
Python as the Zen of Data Science
Travis Oliphant
 

What's hot (16)

KEY
PyData Introduction
Travis Oliphant
 
PPTX
Python for Big Data Analytics
Edureka!
 
PDF
Accelerating Data Analysis of Brain Tissue Simulations with Apache Spark with...
Databricks
 
PPTX
Intro to Python Data Analysis in Wakari
Karissa Rae McKelvey
 
PDF
Bringing an AI Ecosystem to the Domain Expert and Enterprise AI Developer wit...
Databricks
 
PDF
Listening at the Cocktail Party with Deep Neural Networks and TensorFlow
Databricks
 
PDF
High Performance Machine Learning in R with H2O
Sri Ambati
 
PPTX
Session 2
HarithaAshok3
 
PDF
Scipy 2011 Time Series Analysis in Python
Wes McKinney
 
PDF
Deep Learning with MXNet - Dmitry Larko
Sri Ambati
 
PDF
Introduction to Apache Flink - Fast and reliable big data processing
Till Rohrmann
 
PPTX
Arun Rathinasabapathy, Senior Software Engineer, LexisNexis at MLconf ATL 2016
MLconf
 
PPTX
Getting started with TensorFlow
ElifTech
 
PDF
Data science apps: beyond notebooks
Natalino Busa
 
PDF
A real time architecture using Hadoop and Storm @ FOSDEM 2013
Nathan Bijnens
 
PDF
PyData: Past, Present Future (PyData SV 2014 Keynote)
Peter Wang
 
PyData Introduction
Travis Oliphant
 
Python for Big Data Analytics
Edureka!
 
Accelerating Data Analysis of Brain Tissue Simulations with Apache Spark with...
Databricks
 
Intro to Python Data Analysis in Wakari
Karissa Rae McKelvey
 
Bringing an AI Ecosystem to the Domain Expert and Enterprise AI Developer wit...
Databricks
 
Listening at the Cocktail Party with Deep Neural Networks and TensorFlow
Databricks
 
High Performance Machine Learning in R with H2O
Sri Ambati
 
Session 2
HarithaAshok3
 
Scipy 2011 Time Series Analysis in Python
Wes McKinney
 
Deep Learning with MXNet - Dmitry Larko
Sri Ambati
 
Introduction to Apache Flink - Fast and reliable big data processing
Till Rohrmann
 
Arun Rathinasabapathy, Senior Software Engineer, LexisNexis at MLconf ATL 2016
MLconf
 
Getting started with TensorFlow
ElifTech
 
Data science apps: beyond notebooks
Natalino Busa
 
A real time architecture using Hadoop and Storm @ FOSDEM 2013
Nathan Bijnens
 
PyData: Past, Present Future (PyData SV 2014 Keynote)
Peter Wang
 
Ad

Similar to PyData Barcelona Keynote (14)

PDF
PLOTCON NYC: Interactive Visual Statistics on Massive Datasets
Plotly
 
PPTX
Python for Data Science with Anaconda
Travis Oliphant
 
PDF
My First 100 days with a Cassandra Cluster
Gustavo Rene Antunez
 
PPTX
Webinar - Analyzing Video
Turi, Inc.
 
PDF
Keynote at Converge 2019
Travis Oliphant
 
PDF
Enabling Python to be a Better Big Data Citizen
Wes McKinney
 
PPTX
Ceph, Open Source, and the Path to Ubiquity in Storage - AACS Meetup 2014
Patrick McGarry
 
PDF
Cloud Platform for IoT
Naoto Umemori
 
PDF
Building a Distributed & Automated Open Source Program at Netflix
All Things Open
 
PDF
Netflix Open Source: Building a Distributed and Automated Open Source Program
aspyker
 
PDF
Microservices bell labs_kulak_final
Ty Le
 
PPTX
Spark Streaming the Industrial IoT
Jim Haughwout
 
PPTX
Modern Software Development
Angel Conde Manjon
 
PDF
Cytoscape: Now and Future
Keiichiro Ono
 
PLOTCON NYC: Interactive Visual Statistics on Massive Datasets
Plotly
 
Python for Data Science with Anaconda
Travis Oliphant
 
My First 100 days with a Cassandra Cluster
Gustavo Rene Antunez
 
Webinar - Analyzing Video
Turi, Inc.
 
Keynote at Converge 2019
Travis Oliphant
 
Enabling Python to be a Better Big Data Citizen
Wes McKinney
 
Ceph, Open Source, and the Path to Ubiquity in Storage - AACS Meetup 2014
Patrick McGarry
 
Cloud Platform for IoT
Naoto Umemori
 
Building a Distributed & Automated Open Source Program at Netflix
All Things Open
 
Netflix Open Source: Building a Distributed and Automated Open Source Program
aspyker
 
Microservices bell labs_kulak_final
Ty Le
 
Spark Streaming the Industrial IoT
Jim Haughwout
 
Modern Software Development
Angel Conde Manjon
 
Cytoscape: Now and Future
Keiichiro Ono
 
Ad

More from Travis Oliphant (12)

PDF
Array computing and the evolution of SciPy, NumPy, and PyData
Travis Oliphant
 
PDF
SciPy Latin America 2019
Travis Oliphant
 
PDF
PyCon Estonia 2019
Travis Oliphant
 
PDF
Standardizing arrays -- Microsoft Presentation
Travis Oliphant
 
PPTX
Scaling Python to CPUs and GPUs
Travis Oliphant
 
PDF
Anaconda and PyData Solutions
Travis Oliphant
 
PDF
Continuum Analytics and Python
Travis Oliphant
 
PDF
Effectively using Open Source with conda
Travis Oliphant
 
PDF
Blaze: a large-scale, array-oriented infrastructure for Python
Travis Oliphant
 
PDF
Numba: Array-oriented Python Compiler for NumPy
Travis Oliphant
 
KEY
Numba lightning
Travis Oliphant
 
KEY
Numba
Travis Oliphant
 
Array computing and the evolution of SciPy, NumPy, and PyData
Travis Oliphant
 
SciPy Latin America 2019
Travis Oliphant
 
PyCon Estonia 2019
Travis Oliphant
 
Standardizing arrays -- Microsoft Presentation
Travis Oliphant
 
Scaling Python to CPUs and GPUs
Travis Oliphant
 
Anaconda and PyData Solutions
Travis Oliphant
 
Continuum Analytics and Python
Travis Oliphant
 
Effectively using Open Source with conda
Travis Oliphant
 
Blaze: a large-scale, array-oriented infrastructure for Python
Travis Oliphant
 
Numba: Array-oriented Python Compiler for NumPy
Travis Oliphant
 
Numba lightning
Travis Oliphant
 

Recently uploaded (20)

PPTX
Presentation about variables and constant.pptx
kr2589474
 
PDF
49785682629390197565_LRN3014_Migrating_the_Beast.pdf
Abilash868456
 
PPTX
The-Dawn-of-AI-Reshaping-Our-World.pptxx
parthbhanushali307
 
PPTX
Can You Build Dashboards Using Open Source Visualization Tool.pptx
Varsha Nayak
 
PDF
Key Features to Look for in Arizona App Development Services
Net-Craft.com
 
PDF
Protecting the Digital World Cyber Securit
dnthakkar16
 
PDF
advancepresentationskillshdhdhhdhdhdhhfhf
jasmenrojas249
 
PDF
Summary Of Odoo 18.1 to 18.4 : The Way For Odoo 19
CandidRoot Solutions Private Limited
 
PDF
Applitools Platform Pulse: What's New and What's Coming - July 2025
Applitools
 
PPTX
Role Of Python In Programing Language.pptx
jaykoshti048
 
PPT
Activate_Methodology_Summary presentatio
annapureddyn
 
PPTX
ASSIGNMENT_1[1][1][1][1][1] (1) variables.pptx
kr2589474
 
PPT
Why Reliable Server Maintenance Service in New York is Crucial for Your Business
Sam Vohra
 
PDF
Generating Union types w/ Static Analysis
K. Matthew Dupree
 
PPTX
Explanation about Structures in C language.pptx
Veeral Rathod
 
PDF
Using licensed Data Loss Prevention (DLP) as a strategic proactive data secur...
Q-Advise
 
PPTX
Visualising Data with Scatterplots in IBM SPSS Statistics.pptx
Version 1 Analytics
 
PPTX
AI-Ready Handoff: Auto-Summaries & Draft Emails from MQL to Slack in One Flow
bbedford2
 
PPTX
classification of computer and basic part of digital computer
ravisinghrajpurohit3
 
PDF
49784907924775488180_LRN2959_Data_Pump_23ai.pdf
Abilash868456
 
Presentation about variables and constant.pptx
kr2589474
 
49785682629390197565_LRN3014_Migrating_the_Beast.pdf
Abilash868456
 
The-Dawn-of-AI-Reshaping-Our-World.pptxx
parthbhanushali307
 
Can You Build Dashboards Using Open Source Visualization Tool.pptx
Varsha Nayak
 
Key Features to Look for in Arizona App Development Services
Net-Craft.com
 
Protecting the Digital World Cyber Securit
dnthakkar16
 
advancepresentationskillshdhdhhdhdhdhhfhf
jasmenrojas249
 
Summary Of Odoo 18.1 to 18.4 : The Way For Odoo 19
CandidRoot Solutions Private Limited
 
Applitools Platform Pulse: What's New and What's Coming - July 2025
Applitools
 
Role Of Python In Programing Language.pptx
jaykoshti048
 
Activate_Methodology_Summary presentatio
annapureddyn
 
ASSIGNMENT_1[1][1][1][1][1] (1) variables.pptx
kr2589474
 
Why Reliable Server Maintenance Service in New York is Crucial for Your Business
Sam Vohra
 
Generating Union types w/ Static Analysis
K. Matthew Dupree
 
Explanation about Structures in C language.pptx
Veeral Rathod
 
Using licensed Data Loss Prevention (DLP) as a strategic proactive data secur...
Q-Advise
 
Visualising Data with Scatterplots in IBM SPSS Statistics.pptx
Version 1 Analytics
 
AI-Ready Handoff: Auto-Summaries & Draft Emails from MQL to Slack in One Flow
bbedford2
 
classification of computer and basic part of digital computer
ravisinghrajpurohit3
 
49784907924775488180_LRN2959_Data_Pump_23ai.pdf
Abilash868456
 

PyData Barcelona Keynote

  • 1. © 2016 Continuum Analytics - Confidential & Proprietary© 2016 Continuum Analytics - Confidential & Proprietary Data Science and Machine Learning with Anaconda PyData Barcelona May 20, 2017 Travis E. Oliphant Co-founder, President, Chief Data Scientist
  • 2. © 2016 Continuum Analytics - Confidential & Proprietary A bit about me • PhD 2001 from Mayo Clinic in Biomedical Engineering (Ultrasound and MRI) • MS/BS degrees in Elec. Comp. Engineering • Creator and Developer of SciPy (1999-2009) • Professor at BYU (2001-2007) • Author of NumPy (2005-2012) • Started Numba (2012) • Founding Chair of NumFOCUS / PyData • Previous Python Software Foundation Director • Co-founder of Continuum Analytics • CEO => President, Chief Data Scientist SciPy
  • 3. © 2016 Continuum Analytics - Confidential & Proprietary Company 2012 - Created Two Orgs for Sustainable Open Source Community Enterprise software, support and services to empower people who change the world to get rapid insight from all of their data — built on open- source that we contribute to and sustain.
  • 4. © 2016 Continuum Analytics - Confidential & Proprietary NumFOCUS is a 501(c)(3) nonprofit that supports and promotes world- class, innovative, open source scientific computing. Community-led non-profit organization Independent Board of Directors: • Andy Terrel (President) • Ralf Gommers (Secretary) • Didrik Pinte (Treasurer) • Lorena Barba • Matthew Turk • Jennifer Klay https://www.numfocus.org/community/donate/
  • 5. © 2016 Continuum Analytics - Confidential & Proprietary Data Science is growing with Python leading the way probably the biggest reason is Machine Learning!
  • 6. © 2016 Continuum Analytics - Confidential & Proprietary 6 Neural network with several layers trained with ~130,000 images. Matched trained dermatologists with 91% area under sensitivity- specificity curve. Keys: • Access to Data • Access to Software • Access to Compute
  • 7. © 2016 Continuum Analytics - Confidential & Proprietary https://blogs.dropbox.com/tech/2017/04/creating-a-modern-ocr-pipeline- using-computer-vision-and-deep-learning/ Automatically annotating and indexing your images in Dropbox
  • 8. © 2016 Continuum Analytics - Confidential & Proprietary Augmented Reality: Translating Service Google Translate via Image Not always perfect…
  • 9. © 2016 Continuum Analytics - Confidential & Proprietary Anaconda makes this machine learning “magic” accessible to mere mortals An explosion of innovation is happening in software Managing this innovation reproducibly is what Anaconda excels at Anaconda’s coverage of the Python for Data ecosystem helps you with both the fun part: • modeling, predicting, classifying, visualizing …and the not so fun part: • feature labeling, data-cleaning, data-extraction, scaling, deploying
  • 10. © 2016 Continuum Analytics - Confidential & Proprietary
  • 11. © 2016 Continuum Analytics - Confidential & Proprietary Bringing Technology Together
  • 12. © 2016 Continuum Analytics - Confidential & Proprietary Data Science Workflow New Data Notebooks Understand Data Getting Data Understand World Reports Microservices Dashboards Applications Decisions and Actions Models Exploratory Data Analysis and Viz Data Products
  • 13. © 2016 Continuum Analytics - Confidential & Proprietary …DATA Open Data Science Insight Decisions Actions Results Creations … Applications Dashboards Notebooks MicroServices HDFS SQL Server Teradata Cassandra S3 Flat Files Virtual Data Lake / Data Catalogue
  • 14. © 2016 Continuum Analytics - Confidential & Proprietary
  • 15. © 2016 Continuum Analytics - Confidential & Proprietary Empowering the Data Science Team
  • 16. © 2016 Continuum Analytics - Confidential & Proprietary • Setup your environment locally (single node) on any platform or a cluster HDFSDatabases • Use same expression to query data no matter where it lives Blaze • Scale your data processing, without chaning frameworks or paradigms dask • Present and tell your data story to decision makers Bokeh + datashader • Build large scale meaningful interactive data visualizations anaconda project • Deploy your interactive analytical/ predictive application Sharing insights Big Data Collaboration Deployment numba
  • 18. © 2017 Continuum Analytics - Confidential & Proprietary 18 A Python v2.7 Conda Sandboxing Technology B Python v3.4 Pandas v0.18 Jupyter C R R Essentials conda NumPy v1.11 NumPy v1.10 Pandas v0.16 • Language independent • Platform independent • No special privileges required • No VMs or containers • Enables: • Reproducibility • Collaboration • Scaling “conda – package everything”
  • 19. © 2016 Continuum Analytics - Confidential & Proprietary 221 W. 6th Street Suite #1550 Austin, TX 78701 +1 512.222.5440 [email protected] @ContinuumIO © 2016 Continuum Analytics - Confidential & Proprietary $ anaconda-project run plot —show conda install tensorflow
  • 20. © 2016 Continuum Analytics - Confidential & Proprietary Some conda features • Excellent support for “system-level” environments — like having mini VMs but much lighter weight than docker (micro containers) • Minimizes code-copies (uses hard/soft links if possible) • Simple format: binary tar-ball + metadata • Metadata allows static analysis of dependencies • Easy to create multiple “channels” which are repositories for packages • User installable (no root privileges needed) • Integrates very well with pip and other language-specific package managers. • Cross Platform
  • 21. © 2016 Continuum Analytics - Confidential & Proprietary Basic Conda Usage Install a package conda install sympy List all installed packages conda list Search for packages conda search llvm Create a new environment conda create -n py3k python=3 Remove a package conda remove nose Get help conda install --help
  • 22. © 2016 Continuum Analytics - Confidential & Proprietary Advanced Conda Usage Install a package in an environment conda install -n py3k sympy Update all packages conda update --all Export list of packages conda list --export packages.txt Install packages from an export conda install --file packages.txt See package history conda list --revisions Revert to a revision conda install --revision 23 Remove unused packages and cached tarballs conda clean -pt
  • 23. © 2016 Continuum Analytics - Confidential & Proprietary • Environments are simple: just link the package to a different directory • Hard-links are very cheap, and very fast — even on Windows. • Conda environments are completely independent installations of everything • No fiddling with PYTHONPATH or sym-linking site-packages — “Activating” an environment sets everything up so it *just works* • Unix:
 • Windows: source activate py3k Environments conda create -n py3k python=3.5 activate py3k
  • 24. © 2016 Continuum Analytics - Confidential & Proprietary lightweight isolated sandbox to manage your dependencies in a single file and allow reproducibility of your project environment.yml $ conda env create $ source activate ENV_NAME $ More Sophisticated Environments
  • 26. © 2016 Continuum Analytics - Confidential & Proprietary 26 Numba Bokeh Keras
  • 27. © 2016 Continuum Analytics - Confidential & Proprietary Machine Learning Explosion Scikit-Learn Tensorflow Keras XGBoost theano lasagne caffe/caffe2 torch mxnet / minpy neon CNTK DAAL Chainer Dynet Apache Singa Shogun https://github.com/josephmisiti/awesome-machine-learning#python-general-purpose http://deeplearning.net/software_links/ http://scikit-learn.org/stable/related_projects.html
  • 29. © 2016 Continuum Analytics - Confidential & Proprietary Without NumPyfrom math import sin, pi def sinc(x): if x == 0: return 1.0 else: pix = pi*x return sin(pix)/pix def step(x): if x > 0: return 1.0 elif x < 0: return 0.0 else: return 0.5 functions.py >>> import functions as f >>> xval = [x/3.0 for x in range(-10,10)] >>> yval1 = [f.sinc(x) for x in xval] >>> yval2 = [f.step(x) for x in xval] Python is a great language but needed a way to operate quickly and cleanly over multi- dimensional arrays.
  • 30. © 2016 Continuum Analytics - Confidential & Proprietary With NumPyfrom numpy import sin, pi from numpy import vectorize import functions as f vsinc = vectorize(f.sinc) def sinc(x): pix = pi*x val = sin(pix)/pix val[x==0] = 1.0 return val vstep = vectorize(f.step) def step(x): y = x*0.0 y[x>0] = 1 y[x==0] = 0.5 return y >>> import functions2 as f >>> from numpy import * >>> x = r_[-10:10]/3.0 >>> y1 = f.sinc(x) >>> y2 = f.step(x) functions2.py Offers N-D array, element-by-element functions, and basic random numbers, linear algebra, and FFT capability for Python http://numpy.org Fiscally sponsored by NumFOCUS
  • 31. © 2016 Continuum Analytics - Confidential & Proprietary NumPy: an Array Extension of Python • Data: the array object – slicing and shaping – data-type map to Bytes • Fast Math (ufuncs): – vectorization – broadcasting – aggregations
  • 32. © 2016 Continuum Analytics - Confidential & Proprietary shape NumPy Array Key Attributes • dtype • shape • ndim • strides • data
  • 33. © 2016 Continuum Analytics - Confidential & Proprietary NumPy Slicing (Selection) >>> a[0,3:5] array([3, 4]) >>> a[4:,4:] array([[44, 45], [54, 55]]) >>> a[:,2] array([2,12,22,32,42,52]) 50 51 52 53 54 55 40 41 42 43 44 45 30 31 32 33 34 35 20 21 22 23 24 25 10 11 12 13 14 15 0 1 2 3 4 5 >>> a[2::2,::2] array([[20, 22, 24], [40, 42, 44]])
  • 34. © 2016 Continuum Analytics - Confidential & Proprietary Summary • Provides foundational N-dimensional array composed of homogeneous elements of a particular “dtype” • The dtype of the elements is extensive (but not very extensible) • Arrays can be sliced and diced with simple syntax to provide easy manipulation and selection. • Provides fast and powerful math, statistics, and linear algebra functions that operate over arrays. • Utilities for sorting, reading and writing data also provided.
  • 36. © 2016 Continuum Analytics - Confidential & Proprietary SciPy “Distribution of Python Numerical Tools masquerading as a Library” Name Description cluster KMeans and Vector Quantization fftpack Discrete Fourier Transform integrate Numerical Integration interpolate Interpolation routines io Data Input and Output linalg Fast Linear algebra misc Utilities ndimage N-dimensional Image processing Name Description odr Orthogonal Distance Regression optimize Constrained and Unconstrained Optimization signal Signal Processing Tools sparse Sparse Matrices and Algebra spatial Spatial Data Structures and Algorithms special Special functions (e.g. Bessel) stats Statistical Functions and Distributions
  • 38. © 2016 Continuum Analytics - Confidential & Proprietary a powerful plotting engine import numpy as np import matplotlib.mlab as mlab import matplotlib.pyplot as plt np.random.seed(0) # example data mu = 100 # mean of distribution sigma = 15 # standard deviation of distribution x = mu + sigma * np.random.randn(437) num_bins = 50 fig, ax = plt.subplots() # the histogram of the data n, bins, patches = ax.hist(x, num_bins, normed=1) # add a 'best fit' line y = mlab.normpdf(bins, mu, sigma) ax.plot(bins, y, '--') ax.set_xlabel('Smarts') ax.set_ylabel('Probability density') ax.set_title(r'Histogram of IQ: $mu=100$, $ sigma=15$') # Tweak spacing to prevent clipping of ylabel fig.tight_layout() plt.show()
  • 39. © 2016 Continuum Analytics - Confidential & Proprietary a powerful plotting engine import matplotlib.pyplot as plt import scipy.misc as misc im = misc.face() ax = plt.imshow(im) plt.title('Racoon Face of size %d x %d' % im.shape[:2]) plt.savefig(‘face.png’)
  • 40. © 2016 Continuum Analytics - Confidential & Proprietary a powerful plotting engine import matplotlib.pyplot as plt import numpy as np with plt.xkcd(): fig = plt.figure() ax = fig.add_axes((0.1, 0.2, 0.8, 0.7)) ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') plt.xticks([]) plt.yticks([]) ax.set_ylim([-30, 10]) data = np.ones(100) data[70:] -= np.arange(30) plt.annotate('THE DAY I REALIZEDnI COULD COOK BACONnWHENEVER I WANTED’, xy=(70, 1), arrowprops=dict(arrowstyle=‘->'), xytext=(15, -10)) plt.plot(data) plt.xlabel('time') plt.ylabel('my overall health') fig.text(0.5, 0.05, '"Stove Ownership" from xkcd by Randall Monroe', ha='center')
  • 41. © 2016 Continuum Analytics - Confidential & Proprietary a powerful plotting engine from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import numpy as np fig = plt.figure() ax = fig.gca(projection='3d') # Make data. X = np.arange(-5, 5, 0.25) Y = np.arange(-5, 5, 0.25) X, Y = np.meshgrid(X, Y) R = np.sqrt(X**2 + Y**2) Z = np.sin(R) # Plot the surface. surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0, antialiased=False) # Customize the z axis. ax.set_zlim(-1.01, 1.01) ax.zaxis.set_major_locator(LinearLocator(10)) ax.zaxis.set_major_formatter(FormatStrFormatter( '%.02f')) # Add a color bar which maps values to colors. fig.colorbar(surf, shrink=0.5, aspect=5) plt.show()
  • 43. © 2016 Continuum Analytics - Confidential & Proprietary Easy Data Wrangling • Adds indexes and labels to 1-d and 2-d NumPy arrays (Series and DataFrame) • Many convenience functions and methods to manipulate messy data-sets including time-series. • Powerful indexing with automatic data alignment. • Easy handling of missing data. • Allows easy joining and merging Data Sets • Pivots and reshaping (split-apply-combine) • Powerful group-by operations with summarization • Builtin visualization using labels and indexes
  • 44. © 2016 Continuum Analytics - Confidential & Proprietary Easy Data Wrangling • Series Data Structure • built for 1-dimensional series data • homogeneous data • Two arrays. One of data and another which is the index that can be a homogeneous array of any type like integers, objects, or date-times. • DataFrame • built for 2-dimensional collections of tabular data (think Excel sheet) • heterogeneous data comprised of multiple Series • includes an index column allowing sophisticated selection and alignment
  • 45. © 2016 Continuum Analytics - Confidential & Proprietary Easy Data Wrangling medals = pd.read_csv('data/medals.csv', index_col='name') medals.head() gold = medals['medal'] == 'gold' won = medals['count'] > 0 medals.loc[gold & won, 'count'].sort_values().plot(kind='bar', figsize=(12,8))
  • 46. © 2016 Continuum Analytics - Confidential & Proprietary Easy Data Wrangling google = pd.read_csv('data/goog.csv', index_col='Date', parse_dates=True) google.info() google.head() google.describe()
  • 47. © 2016 Continuum Analytics - Confidential & Proprietary Easy Data Wrangling df = pd.read_excel("data/pbpython/salesfunnel.xlsx") df.head() table = pd.pivot_table(df, index=["Manager","Rep","Product"], values=["Price","Quantity"], aggfunc=[np.sum,np.mean])
  • 49. © 2016 Continuum Analytics - Confidential & Proprietary Machine Learning made easy • Supervised Learning — uses “labeled” data to train a model • Regression — predicted variable is continuous • Classification — predicted variable is discrete (but c.f. Logistic “Regression”) • Unsupervised Learning • Clustering — discover categories in the data • Density Estimation — determine representation of data • Dimensionality Reduction — represent data with fewer variables or feature vectors • Reinforcement Learning — “goal-oriented” learning (e.g. drive a car) • Deep Learning — neural networks with many layers • Semi-supervised Learning (use some labeled data for training)
  • 50. © 2016 Continuum Analytics - Confidential & Proprietary Mathematical representation y = f (x, ✓)Supervised Learning x y ✓ Input Data or “feature vectors” Parameters that determine the model Training is the process of estimating these parameters Labels for training ˆy Predicted outputs ˆy = f (x, ✓) f (·, ·) Learning model. May be part of a family of models with hyper- parameters selecting the specific model
• 51. © 2016 Continuum Analytics - Confidential & Proprietary Unsupervised Learning: Auto-encoding

  $y = f(x, \theta)$, with the labels $y$ for training set equal to the input $x$

  • $x$: input data or "feature vectors"
  • $\theta$: parameters of the model
  • $f(\cdot, \cdot)$: the underlying model; $g(\cdot) = f(x, \cdot)$: the data-specific model; $\hat{x} = g(\hat{\theta})$: the estimated data
  • The auto-encoded model now represents the data in a lower-dimensional space
  • Model parameters, or the estimated data, can be used as feature vectors
  • The network can "de-noise" future inputs (project new inputs onto the data space)
• 53. © 2016 Continuum Analytics - Confidential & Proprietary Supervised Deep Learning: same structure

  $y = f(x, \theta)$

  • $x$: input data or "feature vectors"
  • $y$: labels for training
  • $\theta$: all the weights $w_{ijk}$ between layers
  • $\hat{y} = f(x, \hat{\theta})$: predicted outputs
  • Layer $i$ computes $z_{ij} = g\left(\sum_k w_{ijk}\, z_{i-1,k}\right)$ with $g(u) = \frac{1}{1 + e^{-u}}$
  (a minimal forward-pass sketch follows)
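A minimal sketch of the layer recurrence above in NumPy; the layer sizes and random weights are illustrative assumptions, and no training is shown:

    import numpy as np

    def g(u):
        # logistic activation: g(u) = 1 / (1 + exp(-u))
        return 1.0 / (1.0 + np.exp(-u))

    def forward(x, weights):
        # one weight matrix W_i per layer i: z_i = g(W_i @ z_{i-1})
        z = x
        for W in weights:
            z = g(W @ z)
        return z

    # illustrative shapes (assumptions): 4 inputs -> 3 hidden units -> 2 outputs
    weights = [np.random.randn(3, 4), np.random.randn(2, 3)]
    y_hat = forward(np.random.randn(4), weights)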
• 54. © 2016 Continuum Analytics - Confidential & Proprietary Unsupervised Deep Learning: Auto-encoding

  $y = f(x, \theta)$, with the labels for training set equal to the input

  • $x$: input data or "feature vectors"
  • $\theta$: all the weights $w_{ijk}$ between layers, where $z_{ij} = g\left(\sum_k w_{ijk}\, z_{i-1,k}\right)$ and $g(u) = \frac{1}{1 + e^{-u}}$
  • The auto-encoded network now represents the data in a lower-dimensional space
  • Outputs of the hidden layers (or the weights) can be used as feature vectors
  • The network can "de-noise" future inputs (project new inputs onto the data space)
• 56. © 2016 Continuum Analytics - Confidential & Proprietary Basic scikit-learn experience — 1) Create or Load Data

    >>> from sklearn import datasets
    >>> iris = datasets.load_iris()
    >>> digits = datasets.load_digits()

  A scikit-learn dataset is a "dictionary-like" object with the input data stored in the .data attribute and the labels stored in the .target attribute. The .data attribute is always a 2D array of shape (n_samples, n_features). You may need to extract features from raw data to produce data scikit-learn can use.
• 57. © 2016 Continuum Analytics - Confidential & Proprietary Basic scikit-learn experience — 2) Choose Model (or Build a Pipeline of Models)

    >>> from sklearn import svm
    >>> clf = svm.SVC(gamma=0.001, C=100.)

  Most models are model families with "hyper-parameters" that select the specific model function; here "gamma" and "C" are hyper-parameters. Good values for these can be found via grid search and cross-validation (an easy target for parallelization). Many choices of models: http://scikit-learn.org/stable/supervised_learning.html
• 58. © 2016 Continuum Analytics - Confidential & Proprietary Basic scikit-learn experience — 3) Train the Model

    >>> clf.fit(data[:-1], labels[:-1])

  Models have a "fit" method that updates the parameters-to-be-estimated in place, so that after fitting the model is "trained". For validation and scoring you need to hold out some of the data to use later; cross-validation techniques (e.g. k-fold) can also be parallelized easily. Here we "leave one out" (or n-fold).
• 59. © 2016 Continuum Analytics - Confidential & Proprietary Basic scikit-learn experience — 4) Predict New Values

    >>> clf.predict(data[-1:])

  Prediction on new data uses the trained parameters of the model. Cross-validation can be used to understand how sensitive the model is to different partitions of the data:

    >>> from sklearn.model_selection import cross_val_score
    >>> scores = cross_val_score(clf, data, target, cv=10)
    array([ 0.96…, 1. …, 0.96… , 1. ])

  (the four steps are assembled into one runnable sketch below)
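The four steps above, assembled into one self-contained sketch using the digits dataset and the hyper-parameter values shown on the slides:

    from sklearn import datasets, svm
    from sklearn.model_selection import cross_val_score

    digits = datasets.load_digits()                    # 1) load data
    clf = svm.SVC(gamma=0.001, C=100.)                 # 2) choose model + hyper-parameters
    clf.fit(digits.data[:-1], digits.target[:-1])      # 3) train, holding out one sample
    print(clf.predict(digits.data[-1:]))               # 4) predict the held-out sample
    print(cross_val_score(clf, digits.data, digits.target, cv=10))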
  • 61. © 2016 Continuum Analytics - Confidential & Proprietary The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
• 63. © 2016 Continuum Analytics - Confidential & Proprietary Collaborative Executable Notebooks
  • Data lineage
  • Interactive visualizations
  • Advanced notebook extensions
  • 64. 64 Scaling Up and Out with Numba and Dask
• 65. © 2016 Continuum Analytics - Confidential & Proprietary Scale Up vs Scale Out
  • Scale up (bigger nodes): a big-memory, many-core / GPU box; Numba targets this axis
  • Scale out (more nodes): many commodity nodes in a cluster; Dask targets this axis
  • Best of both (e.g. a GPU cluster); Blaze spans both
  • 66. 66 Scaling Up! Optimized Python with JIT compilation from Numba
• 67. © 2016 Continuum Analytics - Confidential & Proprietary Numba
  • Started in 2012
  • Release 33 (0.33) in May
  • Version 1.0 coming in 2017
  • Particularly suited to numeric computing
  • Lots of features!!!
  • Ahead-of-Time compilation
  • Wide community adoption and use

    conda install numba
• 68. © 2016 Continuum Analytics - Confidential & Proprietary Example: Filter an array — 2.7x speedup over NumPy! (The slide shows a screenshot of code annotated with: array allocation; looping over ndarray x as an iterator; using numpy math functions; returning a slice of the array; the Numba decorator, nopython=True not required.)
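The slide's code is only an image; below is a hedged reconstruction of what such an annotated filter might look like. The function name, threshold logic, and test data are assumptions, not the original code:

    from numba import jit
    import numpy as np

    @jit                                    # Numba decorator (nopython=True not required)
    def filter_below(arr, threshold):
        out = np.empty_like(arr)            # array allocation
        n = 0
        for x in arr:                       # looping over ndarray x as an iterator
            if np.abs(x) < threshold:       # using numpy math functions
                out[n] = x
                n += 1
        return out[:n]                      # returning a slice of the array

    filter_below(np.random.randn(1_000_000), 1.0)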
• 69. © 2016 Continuum Analytics - Confidential & Proprietary Image Processing — ~1500x speed-up

    from numba import jit

    @jit('void(f8[:,:], f8[:,:], f8[:,:])')
    def filter(image, filt, output):
        M, N = image.shape
        m, n = filt.shape
        for i in range(m//2, M-m//2):
            for j in range(n//2, N-n//2):
                result = 0.0
                for k in range(m):
                    for l in range(n):
                        result += image[i+k-m//2, j+l-n//2] * filt[k, l]
                output[i, j] = result
• 70. © 2016 Continuum Analytics - Confidential & Proprietary Numba Compatibility — Numba does not replace the standard Python interpreter (all of your existing Python libraries are still available)
  • 71. © 2016 Continuum Analytics - Confidential & Proprietary New GPU Data Frame project (pyGDF) — GOAI
  • 72. 72 Scaling Out with Dask (integrates with but doesn’t depend on Hadoop)
• 73. © 2016 Continuum Analytics - Confidential & Proprietary Dask
  • Started as part of Blaze in early 2014
  • General parallel programming engine
  • Flexible, and therefore highly suited for commodity clusters and advanced algorithms
  • Wide community adoption and use

    conda install dask
    pip install dask[complete] distributed --upgrade
• 74. © 2016 Continuum Analytics - Confidential & Proprietary Moving from small data to big data (diagram: a spectrum from small data to big data, with Numba at the small-data, single-machine end)
• 75. © 2016 Continuum Analytics - Confidential & Proprietary Dask: From User Interaction to Execution (diagram: user code, e.g. via delayed, is turned into a task graph that the scheduler executes)
• 76. © 2016 Continuum Analytics - Confidential & Proprietary Dask: Parallel Data Processing
  • Synthetic views of NumPy ndarrays
  • Synthetic views of Pandas DataFrames, with HDFS support
  • DAG construction and workflow manager
• 77. © 2016 Continuum Analytics - Confidential & Proprietary Overview of Dask — Dask is a Python parallel computing library that is:
  • Familiar: implements parallel NumPy and Pandas objects
  • Fast: optimized for demanding numerical applications
  • Flexible: handles sophisticated and messy algorithms
  • Scales up: runs resiliently on clusters of 100s of machines
  • Scales down: pragmatic in a single process on a laptop
  • Interactive: responsive and fast for interactive data science
  Dask complements the rest of Anaconda. It was developed with NumPy, Pandas, and scikit-learn developers.
• 78. © 2016 Continuum Analytics - Confidential & Proprietary Dask Collections: Familiar Expressions and API
  • Dask array (mimics NumPy): x.T - x.mean(axis=0)
  • Dask dataframe (mimics Pandas): df.groupby(df.index).value.mean()
  • Dask bag (collection of data): b.map(json.loads).foldby(...)
  • Dask delayed (wraps custom code): def load(filename): ... / def clean(data): ... / def analyze(result): ...
  (see the delayed sketch below)
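A minimal sketch of the delayed interface for the custom-code case; load, clean, and analyze are the hypothetical stage names from the slide, and their bodies here are placeholders:

    import dask

    @dask.delayed
    def load(filename):
        return open(filename).read()

    @dask.delayed
    def clean(data):
        return data.strip().lower()

    @dask.delayed
    def analyze(result):
        return len(result)

    task = analyze(clean(load('data.txt')))   # builds a task graph; nothing runs yet
    task.compute()                            # executes the DAG, in parallel where possible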
• 79. © 2016 Continuum Analytics - Confidential & Proprietary Dask Dataframes

    >>> import pandas as pd
    >>> df = pd.read_csv('iris.csv')
    >>> df.head()
       sepal_length  sepal_width  petal_length  petal_width      species
    0           5.1          3.5           1.4          0.2  Iris-setosa
    1           4.9          3.0           1.4          0.2  Iris-setosa
    2           4.7          3.2           1.3          0.2  Iris-setosa
    3           4.6          3.1           1.5          0.2  Iris-setosa
    4           5.0          3.6           1.4          0.2  Iris-setosa
    >>> max_sepal_length_setosa = df[df.species == 'setosa'].sepal_length.max()
    5.7999999999999998

    >>> import dask.dataframe as dd
    >>> ddf = dd.read_csv('*.csv')
    >>> ddf.head()   # same output as above
    …
    >>> d_max_sepal_length_setosa = ddf[ddf.species == 'setosa'].sepal_length.max()
    >>> d_max_sepal_length_setosa.compute()
    5.7999999999999998
  • 80. © 2016 Continuum Analytics - Confidential & Proprietary Dask Graphs: Example Machine Learning Pipeline
• 81. © 2016 Continuum Analytics - Confidential & Proprietary Example 1: Using Dask DataFrames on a cluster with CSV data
  • Built from Pandas DataFrames
  • Matches the Pandas interface
  • Access data from HDFS, S3, local disk, etc.
  • Fast, low latency
  • Responsive user interface
  (diagram: a Dask DataFrame composed of monthly Pandas DataFrames, January to May 2016)
• 82. © 2016 Continuum Analytics - Confidential & Proprietary Dask Arrays

    >>> import numpy as np
    >>> np_ones = np.ones((5000, 1000))
    >>> np_ones
    array([[ 1.,  1.,  1., ...,  1.,  1.,  1.],
           [ 1.,  1.,  1., ...,  1.,  1.,  1.],
           [ 1.,  1.,  1., ...,  1.,  1.,  1.],
           ...,
           [ 1.,  1.,  1., ...,  1.,  1.,  1.],
           [ 1.,  1.,  1., ...,  1.,  1.,  1.],
           [ 1.,  1.,  1., ...,  1.,  1.,  1.]])
    >>> np_y = np.log(np_ones + 1)[:5].sum(axis=1)
    >>> np_y
    array([ 693.14718056,  693.14718056,  693.14718056,  693.14718056,  693.14718056])

    >>> import dask.array as da
    >>> da_ones = da.ones((5000000, 1000000), chunks=(1000, 1000))
    >>> da_ones.compute()
    array([[ 1.,  1.,  1., ...,  1.,  1.,  1.],
           ...,
           [ 1.,  1.,  1., ...,  1.,  1.,  1.]])
    >>> da_y = da.log(da_ones + 1)[:5].sum(axis=1)
    >>> np_da_y = np.array(da_y)   # fits in memory
    array([ 693.14718056,  693.14718056,  693.14718056,  693.14718056, …,  693.14718056])
    # If the result doesn't fit in memory:
    >>> da_y.to_hdf5('myfile.hdf5', 'result')
• 83. © 2016 Continuum Analytics - Confidential & Proprietary Example 2: Using Dask Arrays with global temperature data
  • Built from NumPy n-dimensional arrays
  • Matches the NumPy interface (subset)
  • Solves medium-to-large problems
  • Handles complex algorithms
  (diagram: a Dask Array composed of NumPy array chunks)
• 84. © 2016 Continuum Analytics - Confidential & Proprietary Dask Schedulers: Distributed Scheduler (diagram: a Client on the user's machine, e.g. a laptop, talks to a Scheduler, which coordinates Workers on the same network; a connection sketch follows)
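A minimal sketch of connecting to the distributed scheduler; the scheduler address, file path, and column names are placeholder assumptions:

    from dask.distributed import Client
    import dask.dataframe as dd

    # connect to a running scheduler (address is a placeholder)
    client = Client('tcp://scheduler-address:8786')
    # Client() with no arguments instead starts a local scheduler and workers

    ddf = dd.read_csv('hdfs:///data/2016-*.csv')   # hypothetical path
    ddf.groupby('key').value.mean().compute()      # executed across the workers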
• 85. © 2016 Continuum Analytics - Confidential & Proprietary Cluster Architecture Diagram (a Client Machine connects to a Head Node, which fronts several Compute Nodes)
• 86. © 2016 Continuum Analytics - Confidential & Proprietary Using Anaconda and Dask on your Cluster
  • Single machine with multiple threads or processes
  • On a cluster with SSH (dcluster)
  • Resource management: YARN (knit), SGE, Slurm
  • On the cloud with Amazon EC2 (dec2)
  • On a cluster with Anaconda for cluster management: manage multiple conda environments and packages on bare-metal or cloud-based clusters
• 87. © 2016 Continuum Analytics - Confidential & Proprietary High Performance Hadoop
  • Interact with data in HDFS and Amazon S3 natively from Python
  • Distributed computations without the JVM & Python/Java serialization
  • Framework for easy, flexible parallelism using directed acyclic graphs (DAGs)
  • Interactive, distributed computing with in-memory persistence/caching
  • Leverage Python & R with Spark
  Bottom line: 2X-100X faster overall performance than going through YARN and the JVM
  (diagram: batch, interactive, and high-performance/MPI processing over HDFS, via Ibis, Impala, PySpark & SparkR, and the Python & R ecosystem with 720+ packages and native read & write of NumPy, Pandas, …)
  • 88. © 2016 Continuum Analytics - Confidential & Proprietary Scheduler Visualization with Bokeh
• 89. © 2016 Continuum Analytics - Confidential & Proprietary Numba + Dask: Look at all of the data with Bokeh's datashader
  • Decouple the data processing from the visualization; visualize arbitrarily large data
  • E.g. Open Street Map data: about 3 billion GPS coordinates — https://blog.openstreetmap.org/2012/04/01/bulk-gps-point-data/
  • This image was rendered in <5 seconds on a standard MacBook with 16 GB RAM
  • Renders in less than a second on several 128GB Amazon EC2 instances
• 90. © 2016 Continuum Analytics - Confidential & Proprietary Categorical data: 2010 US Census
  • One point per person, 300 million total
  • Categorized by race
  • Interactive rendering with Numba + Dask
  • No pre-tiling
• 92. © 2016 Continuum Analytics - Confidential & Proprietary Interactive Data Visualization (Bokeh)
  • Interactive viz, widgets, and tools
  • Versatile high-level graphics
  • Streaming, dynamic, large data
  • Optimized for the browser, with no JavaScript required
  • With or without a server
• 93. © 2016 Continuum Analytics - Confidential & Proprietary Rapid Prototyping of Visual Apps
  • Python interface
  • R interface
  • Smart plotting
  • 94. 94 Plotting Billions of Points and Map Integration with Datashader
• 95. © 2016 Continuum Analytics - Confidential & Proprietary Datashader: Rendering a Billion Points of Data
  • datashader provides a fast, configurable visualization pipeline for faithfully revealing even very large datasets
  • Each of these visualizations requires just a few lines of code and no magic numbers to adjust by trial and error
  (a minimal pipeline sketch follows)
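A minimal sketch of the datashader pipeline just described. The DataFrame df and its coordinate column names are assumptions (any table of point coordinates works):

    import datashader as ds
    import datashader.transfer_functions as tf

    canvas = ds.Canvas(plot_width=800, plot_height=475)   # fixed-size raster
    agg = canvas.points(df, 'pickup_x', 'pickup_y')       # aggregate every point into pixels
    img = tf.shade(agg, cmap=['lightblue', 'darkblue'], how='log')   # auto-ranged shading

The key design point: aggregation happens over all the data at raster resolution, so no subsampling or hand-tuned alpha values are needed.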
  • 96. © 2016 Continuum Analytics - Confidential & Proprietary 96 Datashader
• 97. Data Visualization and Applications made easy with HoloViews
• 98. © 2016 Continuum Analytics - Confidential & Proprietary HoloViews: Stop plotting your data
  • Exploring data can be tedious if you use a plotting library directly, because you must specify details about your data (units, dimensions, names, etc.) every time you construct a new type of plot.
  • With HoloViews, you instead annotate your data once, and then flexible plotting comes for free — HoloViews objects simply display themselves, alone or in any combination.
  • It becomes easy to lay out subfigures, overlay traces, facet or animate a multidimensional dataset, and sample or aggregate to reduce dimensionality, with the metadata preserved each time so the results visualize themselves. HoloViews makes it simple to create beautiful interactive Bokeh or Matplotlib visualizations of complex data (see the sketch below).
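A minimal sketch of "annotate once, plot for free"; the sine/cosine data and dimension names are illustrative assumptions:

    import numpy as np
    import holoviews as hv
    hv.extension('bokeh')

    xs = np.linspace(0, 10, 100)
    # annotate the data once with its dimensions
    curve = hv.Curve((xs, np.sin(xs)), kdims=['x'], vdims=['amplitude'])
    scatter = hv.Scatter((xs, np.cos(xs)), kdims=['x'], vdims=['amplitude'])

    curve * scatter   # overlay traces in one plot
    curve + scatter   # lay out subfigures side by side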
• 99. © 2016 Continuum Analytics - Confidential & Proprietary Data Widgets and Applications from Jupyter Notebooks! https://anaconda.org/jbednar/nyc_taxi-paramnb/notebook

    # Imports as assumed from the linked notebook; df is the NYC taxi
    # DataFrame loaded earlier in that notebook.
    import param, paramnb
    import holoviews as hv
    import geoviews as gv
    from colorcet import cm
    from bokeh.models import WMTSTileSource
    from holoviews.streams import RangeXY
    from holoviews.operation.datashader import datashade

    tiles = gv.WMTS(WMTSTileSource(url='https://server.arcgisonline.com/ArcGIS/rest/services/'
                                       'World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'))
    tile_options = dict(width=800, height=475, xaxis=None, yaxis=None,
                        bgcolor='black', show_grid=False)

    passenger_counts = sorted(df.passenger_count.unique().tolist())

    class Options(hv.streams.Stream):
        alpha = param.Magnitude(default=0.75, doc="Alpha value for the map opacity")
        colormap = param.ObjectSelector(default=cm["fire"], objects=cm.values())
        plot = param.ObjectSelector(default="pickup", objects=["pickup", "dropoff"])
        passengers = param.ObjectSelector(default=1, objects=passenger_counts)

        def make_plot(self, x_range=None, y_range=None, **kwargs):
            map_tiles = tiles(style=dict(alpha=self.alpha), plot=tile_options)
            df_filt = df[df.passenger_count == self.passengers]
            points = hv.Points(gv.Dataset(df_filt,
                                          kdims=[self.plot+'_x', self.plot+'_y'],
                                          vdims=[]))
            taxi_trips = datashade(points, width=800, height=475,
                                   x_sampling=1, y_sampling=1, cmap=self.colormap,
                                   element_type=gv.Image, dynamic=False,
                                   x_range=x_range, y_range=y_range)
            return map_tiles * taxi_trips

    selector = Options(name="")
    paramnb.Widgets(selector, callback=selector.update)
    hv.DynamicMap(selector.make_plot, kdims=[], streams=[selector, RangeXY()])
• 101. © 2016 Continuum Analytics - Confidential & Proprietary JupyterLab
  • IDE
  • Extensible
  • Notebook -> Applications
  • 102. © 2016 Continuum Analytics - Confidential & Proprietary More Than Just Notebooks
• 103. © 2016 Continuum Analytics - Confidential & Proprietary Building Blocks: File Browser, Notebooks, Text Editor, Terminal, Output, Widgets
  • 104. © 2016 Continuum Analytics - Confidential & Proprietary A completely modular architecture