Regularization for Deep Learning
Goodfellow, Bengio, & Courville (2016) Deep Learning, Chap 7.
Shigeru ONO (Insight Factory)
DL Reading Group: 2020/08
TOC
1 7.1 Parameter Norm Penalties
2 7.2 Norm Penalties as Constrained Optimization
3 7.3 Regularization and Under-Constrained Problems
4 7.4 Dataset Augmentation
5 7.5 Noise Robustness
6 7.6 Semi-Supervised Learning
7 7.7 Multitask Learning
8 7.8 Early Stopping
9 7.9 Parameter Tying and Parameter Sharing
10 7.10 Sparse Representation
11 7.11 Bagging and Other Ensemble Methods
12 7.12 Dropout
13 7.13 Adversarial Training
14 7.14 Tangent Distance, Tangent Prop and Manifold Tangent Classifier
(introduction)
Regularization:
any modification we make to a learning algorithm that is intended to reduce
its generalization error
possibly at the expense of increasing training error
In the context of DL, most regularization strategies are based on regularizing
estimators
Possible situations (see Chap. 5):
(1) the model family excluded the true DGP (underfitting)
(2) the model family matched the true DGP
(3) the model family included the true DGP but also many other possible DGPs (overfitting)
The goal of regularization is to take the model from (3) into (2). But...
In most applications of DL, the true DGP is outside the model family (=(1)).
Controlling the complexity of the model is thus not a matter of finding the model of the
right size, but of finding an appropriately regularized large model whose generalization
error is minimized.
7.1 Parameter Norm Penalties
Adding a parameter norm penalty Ω(θ) to the objective function J.
˜J(θ; X, y) = J(θ; X, y) + αΩ(θ)
α (≥ 0): a hyperparameter that weights the relative contribution of Ω relative to J.
For NN, we typically choose Ω that penalizes only w (the weights of the
affine transformation at each layer).
It is reasonable to use the same α at all layers.
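A minimal sketch of this setup (a hypothetical mean-squared-error loss in NumPy, chosen only for illustration), showing the penalty applied to the weights w but not to the bias b:

```python
import numpy as np

def penalized_objective(w, b, X, y, alpha):
    """J~(theta; X, y) = J(theta; X, y) + alpha * Omega(theta), penalizing only w."""
    J = np.mean((X @ w + b - y) ** 2)   # unregularized loss J (here: mean squared error)
    omega = 0.5 * np.sum(w ** 2)        # norm penalty Omega(w); the bias b is left unpenalized
    return J + alpha * omega

X, y = np.random.rand(20, 3), np.random.rand(20)
print(penalized_objective(np.zeros(3), 0.0, X, y, alpha=0.1))
```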
7.1.1 L² Parameter Regularization
Ω(θ) = ½ ||w||²₂
aka. weight decay, ridge regression, Tikhonov regularization.
Bayesian interpretation: MAP inference with a Gaussian prior on the weights.
(See 5.6.1)
Total objective function:
˜J(w; X, y) = J(w; X, y) + (α/2) w⊤w
Parameter gradient:
∇w˜J(w; X, y) = αw + ∇wJ(w; X, y)
What happens in a single gradient step? ... The learning rule is modified to
shrink w by a constant factor.
w ← (1 − ϵα)w − ϵ∇wJ(w; X, y)
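As a sketch of this update (plain gradient descent with an illustrative toy gradient; the names and values are not from the slides):

```python
import numpy as np

def step_with_weight_decay(w, grad_J, eps, alpha):
    """One gradient step on the L2-regularized objective:
    w <- (1 - eps*alpha) * w - eps * grad_J(w),
    i.e. the usual step preceded by multiplicative shrinkage of w."""
    return (1.0 - eps * alpha) * w - eps * grad_J(w)

# toy loss J(w) = 0.5 * ||w - 3||^2, so grad_J(w) = w - 3
w = np.array([10.0, -4.0])
for _ in range(200):
    w = step_with_weight_decay(w, lambda v: v - 3.0, eps=0.1, alpha=0.1)
print(w)   # converges near 3/(1 + alpha) ~= 2.73: pulled toward 3, shrunk toward 0
```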
7.1.1 L² Parameter Regularization
What happens over the entire course of training? (in general)
Unregularized:
Let w∗ be the weights that minimize the unregularized objective function:
w∗ = arg min_w J(w)
Make a quadratic approximation to J(w) in the neighborhood of w∗:
ˆJ(θ) = J(w∗) + ½ (w − w∗)⊤ H (w − w∗)
where H is the Hessian matrix of J with respect to w evaluated at w∗.
The minimum of ˆJ occurs where its gradient ∇wˆJ(w) = H(w − w∗) is 0.
7.1.1 L² Parameter Regularization
(Cont’d)
Regularized:
Let ˜w be the weights that minimize the regularized objective function ˜J.
The minimum of ˜J occurs where α˜w + H(˜w − w∗) = 0.
It follows that ˜w = (H + αI)⁻¹Hw∗.
H is real and symmetric, so we can take an eigenvalue decomposition H = QΛQ⊤.
˜w = Q(Λ + αI)⁻¹ΛQ⊤w∗, i.e. weight decay rescales w∗ along the axes defined by the
eigenvectors of H: the component aligned with the i-th eigenvector is scaled by λᵢ/(λᵢ + α).
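A quick NumPy check of this identity (H and w∗ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
H = A @ A.T + np.eye(3)                 # a symmetric positive definite "Hessian"
w_star = rng.normal(size=3)
alpha = 0.5

# direct form: w~ = (H + alpha*I)^{-1} H w*
w_tilde = np.linalg.solve(H + alpha * np.eye(3), H @ w_star)

# eigendecomposition form: H = Q Lambda Q^T, w~ = Q (Lambda + alpha*I)^{-1} Lambda Q^T w*
lam, Q = np.linalg.eigh(H)
w_tilde_eig = Q @ np.diag(lam / (lam + alpha)) @ Q.T @ w_star

print(np.allclose(w_tilde, w_tilde_eig))   # True: component i is scaled by lam_i / (lam_i + alpha)
```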
7.1.1 L² Parameter Regularization
What happens over the entire course of training? (in the case of linear regression)
Unregularized:
Cost function: (Xw − y)⊤(Xw − y)
Solution: w = (X⊤X)⁻¹X⊤y
Regularized:
Cost function: (Xw − y)⊤(Xw − y) + ½ αw⊤w
Solution: w = (X⊤X + αI)⁻¹X⊤y
i.e. the regularization causes the learning algorithm to "perceive" the input X as having
higher variance than it really has.
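A small NumPy sketch of both closed-form solutions on made-up data (the data and α are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
alpha = 1.0

w_ols   = np.linalg.solve(X.T @ X, X.T @ y)                      # (X^T X)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)  # (X^T X + alpha I)^{-1} X^T y

print(w_ols)
print(w_ridge)   # shrunk toward zero relative to the unregularized solution
```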
7.1.2 L¹ Regularization
Ω(θ) = ||w||₁ = ∑ᵢ |wᵢ|
Total objective function :
˜J(w; X, y) = J(w; X, y) + α||w||1
Parameter gradient:
∇w˜J(w; X, y) = α sign(w) + ∇wJ(w; X, y)
It does not admit a clean algebraic solution.
For a simple linear model with a quadratic cost function, ∇wˆJ(w) = H(w − w∗).
7.1.2 L¹ Regularization
(Cont’d)
Assume the Hessian is diagonal, H = diag([H11, . . . , Hnn]) (i.e. no correlation
between the input features)
Then we have a quadratic approximation of the cost function:
ˆJ(w; X, y) = J(w∗; X, y) + ∑ᵢ [ ½ Hᵢᵢ(wᵢ − w∗ᵢ)² + α|wᵢ| ]
The solution is:
wᵢ = sign(w∗ᵢ) max( |w∗ᵢ| − α/Hᵢᵢ , 0 )
Consider the situation where w∗ᵢ > 0 for all i. Then:
When w∗ᵢ ≤ α/Hᵢᵢ, the optimal value is wᵢ = 0.
When w∗ᵢ > α/Hᵢᵢ, the optimal value is just shifted toward zero by a distance α/Hᵢᵢ.
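This per-coordinate rule is soft thresholding; a sketch under the diagonal-Hessian assumption (the numbers are illustrative):

```python
import numpy as np

def l1_solution(w_star, H_diag, alpha):
    """w_i = sign(w*_i) * max(|w*_i| - alpha / H_ii, 0)  (soft thresholding)."""
    return np.sign(w_star) * np.maximum(np.abs(w_star) - alpha / H_diag, 0.0)

w_star = np.array([2.0, 0.3, -1.5, 0.05])
H_diag = np.array([1.0, 1.0,  2.0, 1.0])
print(l1_solution(w_star, H_diag, alpha=0.5))
# [ 1.5   0.   -1.25  0.  ]  -> small coordinates are driven exactly to zero (sparsity)
```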
7.1.2 L¹ Regularization
(Cont’d)
In short, the solution is more sparse (i.e. some parameters have an optimal value of zero).
It has been used as a feature selection mechanism. E.g. LASSO
Bayesian interpretation: MAP inference with an isotropic Laplace prior on the weights.
7.2 Norm Penalties as Constrained Optimization
We can think of the penalties as constraints.
Cost function:
˜J(θ; X, y) = J(θ; X, y) + αΩ(θ)
If we wanted to constrain Ω(θ) to be less than some constant k, we could construct a
generalized Lagrangian
L(θ, α; X, y) = J(θ; X, y) + α(Ω(θ) − k)
The solution is
θ∗ = arg min_θ max_{α, α≥0} L(θ, α)
We can fix α at its optimal value α∗:
θ∗ = arg min_θ L(θ, α∗) = arg min_θ J(θ; X, y) + α∗Ω(θ)
This is the same as the problem of minimizing ˜J.
7.2 Norm Penalties as Constrained Optimization
Sometimes we may wish to use explicit constraints rather than penalties:
when we know the appropriate value of k
when the penalties can cause optimization to get stuck in local minima
corresponding to small θ.
when we wish to impose some stability on the optimization procedure
Approach:
Srebro & Shraibman (2005): constraining the norm of each column of the
weight matrix of a layer
7.3 Regularization and Under-Constrained Problems
Sometimes regularization is necessary for ML problems to be properly defined.
when the problem depends on (X⊤X)⁻¹ but X⊤X is singular.
when the problem has no closed-form solution. E.g., logistic regression applied to a
problem where the classes are linearly separable: if a weight vector w achieves perfect
classification, 2w will also achieve perfect classification, with higher likelihood.
7.4 Dataset Augmentation
Idea: Create fake data and add it to the training set.
an effective technique particularly for object recognition. E.g. translating the
training images a few pixels in each direction.
Injecting noise into the input of a NN can also be seen as a form of augmentation; it can improve the robustness of NNs.
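A minimal sketch of translation-based augmentation (using np.roll for brevity; a real pipeline would typically pad or crop rather than wrap around, and the shift amounts are illustrative):

```python
import numpy as np

def translate(img, dx, dy):
    """Shift a 2-D image by (dx, dy) pixels (vacated pixels wrap around here)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def augment(images, shifts=(-2, 2)):
    """Add translated copies of every training image to the training set."""
    fake = [translate(img, dx, dy) for img in images for dx in shifts for dy in shifts]
    return np.concatenate([images, np.stack(fake)], axis=0)

images = np.random.rand(10, 28, 28)   # toy "training set"
print(augment(images).shape)          # (10 + 10*4, 28, 28)
```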
7.5 Noise Robustness
Idea: Add noise to the weights.
It can be interpreted as a stochastic implementation of Bayesian inference over the
weights.
The noise reflects our uncertainty about the model weights.
It can also be interpreted as equivalent to a more traditional form of
regularization.
Suppose we wish to train a function ˆy(x) using the least-squares cost function
J = E_p(x,y)[(ˆy(x) − y)²]
Assume that we also include a random perturbation ϵW ∼ N(ϵ; 0, ηI) of the network weights.
The objective function becomes ˜J_W = E_p(x,y,ϵW)[(ˆy_ϵW(x) − y)²]
For small η, this is equivalent to J with a regularization term η E_p(x,y)[||∇W ˆy(x)||²].
This pushes the model into regions where it is relatively insensitive to small variations
in the weights, finding points that are not merely minima, but minima surrounded by flat
regions.
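A sketch of weight-noise injection for a linear model trained by gradient descent (η, the learning rate and the data are all illustrative; the gradient is evaluated at the perturbed weights and applied to the clean ones):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)
w = np.zeros(5)
eta, lr = 0.01, 0.05

for _ in range(500):
    eps_w = rng.normal(scale=np.sqrt(eta), size=w.shape)   # eps_W ~ N(0, eta*I)
    grad = 2 * X.T @ (X @ (w + eps_w) - y) / len(y)        # gradient at the perturbed weights
    w -= lr * grad                                          # update the clean weights
print(np.mean((X @ w - y) ** 2))                            # small residual error
```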
7.5.1 Injecting Noise at the Output Targets
Idea: Explicitly model the noise on the y labels.
label smoothing: regularize a model based on a softmax with k output values
by replacing classification target
0 with ϵ/(k − 1)
1 with 1 − ϵ
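A minimal sketch of label smoothing for hard one-hot targets (ϵ = 0.1 and k = 3 are illustrative):

```python
import numpy as np

def smooth_labels(targets, k, eps=0.1):
    """Replace one-hot targets: 1 -> 1 - eps, 0 -> eps / (k - 1)."""
    one_hot = np.eye(k)[targets]
    return one_hot * (1.0 - eps) + (1.0 - one_hot) * eps / (k - 1)

print(smooth_labels(np.array([0, 2]), k=3))
# [[0.9   0.05  0.05 ]
#  [0.05  0.05  0.9  ]]
```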
7.6 Semi-Supervised Learning
Idea: Use both unlabeled examples (from P(x)) and labeled examples (from P(x, y))
in order to estimate P(y|x)
In the context of DL, semi-supervised learning usually refers to learning a
representation h = f(x).
The goal is to learn a representation so that examples from the same class
have similar representations.
Construct models in which a generative model of either P(x) or P(x, y) shares
parameters with a discriminative model of P(y|x)
One can find a better trade-off between the two types of criteria:
The supervised criterion: − log P(y|x)
The unsupervised (generative) criterion: − log P(x) or − log P(x, y)
7.7 Multitask Learning
Idea: Pool the examples arising out of several tasks
The model can be divided into two parts:
Task-specific parameters
Generic parameters, shared across the tasks
It can improve generalization and generalization error bounds
7.8 Early Stopping
Idea: Obtain a model with the parameters at the point in time with the lowest
validation set error (rather than with the latest parameters in the training process)
the most commonly used form of regularization in DL
can be interpreted as a hyperparameter selection algorithm, where the hyperparameter is
the number of training steps
requires a validation set, which is not fed to the model
One can perform extra training (where all training data is used) after initial
learning (with early stopping). Two basic strategies:
Initialize the model again and retrain on all the data (for the same number of
steps as the first round)
Keep the parameters and continue training (but now using all the data). This second
strategy is not as well behaved.
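A sketch of the idea with a patience-based stopping criterion (model, train_step and val_error are placeholders for your own training code, not anything defined in the slides):

```python
import copy

def train_with_early_stopping(model, train_step, val_error, max_steps, patience):
    """Return the parameters from the point in training with the lowest validation error."""
    best_err, best_model, since_best = float("inf"), copy.deepcopy(model), 0
    for step in range(max_steps):
        train_step(model)                    # one epoch (or a few minibatch updates)
        err = val_error(model)               # error on the held-out validation set
        if err < best_err:
            best_err, best_model, since_best = err, copy.deepcopy(model), 0
        else:
            since_best += 1
            if since_best >= patience:       # stop after `patience` steps without improvement
                break
    return best_model, best_err
```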
7.8 Early Stopping
How early stopping acts as regularizer:
Restricting both the number of iterations and the learning rate limits the volume of
parameter space reachable from the initial parameter value.
In a simple linear model with a quadratic error function and simple gradient descent,
early stopping is equivalent to L² regularization. [...skipped...]
7.9 Parameter Tying and Parameter Sharing
Sometimes we may know there should be some dependencies between the
parameters.
Parameter Tying:
E.g. two models performing the same classification task but with different
input distributions:
ˆy(A) = f(w(A), x),    ˆy(B) = f(w(B), x)
We believe the model parameters should be close to each other
We can use a penalty Ω(w(A), w(B)) = ||w(A) − w(B)||²₂
Parameter Sharing:
force sets of parameters to be equal
can lead to significant reduction of memory
The most popular use: convolutional neural network (CNNs) (See Chap.9)
7.10 Sparse Representation
Idea: place a penalty on the activations of the units (rather than on the parameters)
Norm penalty regularization of representation:
h: sparse representation of the data x
add a norm penalty on the representation Ω(h) to the loss function J:
˜J(θ; X, y) = J(θ; X, y) + αΩ(h)
We can use an L¹ penalty Ω(h) = ||h||₁ or other types of penalties
Orthogonal matching pursuit (OMP-k):
encodes x with the h that solves the constrained optimization problem
arg min_{h, ||h||₀ < k} ||x − Wh||²
where ||h||₀ is the number of nonzero entries of h
OMP-1 can be a very effective feature extractor for DL
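A minimal sketch of the representation penalty for a one-hidden-layer encoder/decoder with a reconstruction error as J (all shapes and names are illustrative):

```python
import numpy as np

def sparse_rep_loss(x, W, V, alpha):
    """J~ = J + alpha * ||h||_1: the penalty is on the activations h, not on the parameters."""
    h = np.maximum(0.0, W @ x)           # representation h = f(x) (ReLU encoder)
    x_hat = V @ h                        # linear decoder, so J is a reconstruction error
    J = np.sum((x_hat - x) ** 2)
    return J + alpha * np.sum(np.abs(h))

rng = np.random.default_rng(3)
x, W, V = rng.normal(size=8), rng.normal(size=(16, 8)), rng.normal(size=(8, 16))
print(sparse_rep_loss(x, W, V, alpha=0.1))
```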
7.11 Bagging and Other Ensemble Methods
Ensemble methods
combine several models (trained separately) in order to reduce generalization
error
an example of a general strategy called model averaging
On average, the ensemble will perform at least as well as any of its members.
If the members make independent errors, the ensemble will perform
significantly better.
Bagging (bootstrap aggregating)
construct k different datasets of the same size by sampling with replacement
from the original dataset (see the sketch after this list)
Model i is trained on dataset i
Boosting:
construct an ensemble with higher capacity than the individual models
Boosting of NN: incrementally add NN to the ensemble
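A minimal sketch of building the bootstrap replicates referenced in the bagging bullet above (toy data; each replicate repeats some examples and omits others):

```python
import numpy as np

def bootstrap_datasets(X, y, k, seed=7):
    """Build k datasets of the same size as (X, y) by sampling with replacement;
    model i would then be trained on dataset i."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=(k, len(X)))
    return [(X[i], y[i]) for i in idx]

X, y = np.arange(10).reshape(10, 1), np.arange(10)
for Xi, yi in bootstrap_datasets(X, y, k=3):
    print(yi)
```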
7.12 Dropout
Background:
Bagging involves training multiple models and evaluating them on each test
example.
This seems impractical when each model is a large NN.
Dropout can be thought of as a method of making bagging practical.
What is Dropout?
train the ensemble consisting of all subnetworks that can be formed by removing
nonoutput units from a base network
In many cases, we can remove a unit by multiplying its output value by zero.
Let µ be a binary mask vector, which is applied to all the input and hidden units.
train them with a minibatch-based algorithm
Each time we load an example into a minibatch, we randomly sample µ and
apply it.
Typically, an input unit is included with probability 0.8, and a hidden unit is
included with 0.5.
Run forward propagation, back-propagation, and the learning update.
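A sketch of one training-time forward pass with a freshly sampled binary mask µ (a two-layer ReLU network with illustrative shapes; inclusion probabilities 0.8 / 0.5 as above):

```python
import numpy as np

rng = np.random.default_rng(4)

def dropout_forward(x, W1, W2, p_in=0.8, p_hidden=0.5, train=True):
    """Forward pass with a sampled binary mask on the input and hidden units."""
    if train:
        x = x * rng.binomial(1, p_in, size=x.shape)        # drop input units w.p. 1 - p_in
    h = np.maximum(0.0, W1 @ x)
    if train:
        h = h * rng.binomial(1, p_hidden, size=h.shape)    # drop hidden units w.p. 1 - p_hidden
    return W2 @ h

x = rng.normal(size=10)
W1, W2 = rng.normal(size=(20, 10)), rng.normal(size=(5, 20))
print(dropout_forward(x, W1, W2))   # back-propagation and the learning update would follow
```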
7.12 Dropout
How to make a prediction:
At training time, µ is sampled from the probability distribution p(µ)
Each submodel defined by µ defines a probability distribution p(y|x, µ)
To make a prediction from all submodels, we can use the arithmetic mean:
∑_µ p(µ) p(y|x, µ)
But the geometric mean performs better. Let ˜p_ensemble(y|x) be the geometric mean of
the p(y|x, µ).
˜p_ensemble(y|x) is not guaranteed to be a probability distribution. We must renormalize:
p_ensemble(y|x) = ˜p_ensemble(y|x) / ∑_y′ ˜p_ensemble(y′|x)
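A small sketch of the renormalized geometric mean over a handful of submodel distributions (the submodel outputs are made up, and the sampled masks are weighted uniformly for simplicity):

```python
import numpy as np

def geometric_mean_ensemble(p_sub):
    """p_sub: (n_submodels, n_classes), each row a distribution p(y | x, mu).
    Returns the renormalized geometric mean p_ensemble(y | x)."""
    p_tilde = np.exp(np.mean(np.log(p_sub), axis=0))   # unnormalized geometric mean
    return p_tilde / p_tilde.sum()                      # renormalize over y

p_sub = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.8, 0.1, 0.1]])
print(geometric_mean_ensemble(p_sub))   # sums to 1
```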
7.12 Dropout
Weight scaling inference rule:
We can approximate pensemble by evaluating p(y|x) in one model
This model uses all units, but with the weights going out of unit i multiplied
by the probability of including unit i
if the inclusion probability of a unit is 1/2, the weights going out of the unit are
multiplied by 1/2 at the end of training, or equivalently the state of the unit is
multiplied by 2 during training
There is not yet any theoretical argument for this rule, but empirically it
performs well
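A sketch of weight-scaled inference for the same kind of two-layer network as above (shapes and probabilities are illustrative):

```python
import numpy as np

def predict_weight_scaled(x, W1, W2, p_in=0.8, p_hidden=0.5):
    """Use all units, but multiply the weights going out of each unit by the
    probability of including that unit during training."""
    h = np.maximum(0.0, (W1 * p_in) @ x)     # scale weights going out of the input units
    return (W2 * p_hidden) @ h               # scale weights going out of the hidden units

rng = np.random.default_rng(5)
x = rng.normal(size=10)
W1, W2 = rng.normal(size=(20, 10)), rng.normal(size=(5, 20))
print(predict_weight_scaled(x, W1, W2))
```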
7.12 Dropout
Advantages of dropout:
very computationally cheap
it does not significantly limit the type of model or training procedure
Limitations:
it reduces the effective capacity of a model. To offset this effect, we must
increase the size of the model.
it is less effective when extremely few labeled training examples are available.
When additional unlabeled data is available, unsupervised feature learning
can gain an advantage over dropout.
7.12 Dropout
fast dropout:
analytical approximations to the sum over all submodels
a more principled approach than the weight scaling inference rule.
Interpretation of dropout:
an experiment using "dropout boosting"
use exactly the same mask noise as dropout
trains the entire ensemble to jointly (not independently) maximize the
log-likelihood on the training set
shows almost no regularization effect
This demonstrates that dropout works as a form of bagging: the mask noise by itself,
without the bagging-style independent training of each submodel, provides almost no
regularization.
7.12 Dropout
Other approaches inspired by dropout:
DropConnect: each product between a single scalar weight and a single
hidden unit state is considered a unit that can be dropped
Stochastic pooling: build ensembles of CNNs
real-valued mask: multiplying the weights by µ ∼ N(1, I) can outperform
dropout
Another view of dropout:
Dropout regularizes each hidden unit to be not merely a good feature but a feature that
is good in many contexts.
Masking can be seen as a form of highly intelligent, adaptive destruction of the
information content of the input (rather than destruction of the raw input). It allows
the model to make use of all the knowledge about the input distribution that it has
acquired so far.
7.13 Adversarial Training
Adversarial example:
an input x′ near a data point x such that the model output is very different at x′
In many cases, a human observer cannot tell the difference between x and x′
One of the causes of these examples is excessive linearity in NN. The value of
a linear function can change very rapidly if it has numerous inputs.
7.13 Adversarial Training
Adversarial Training:
training on adversarially perturbed examples from the training set
a way of explicitly introducing a local constancy prior into the NN
Virtual adversarial example:
Suppose the model assigns some label ˆy at a point x which has no true label.
We can seek an adversarial example x′ that causes the model to output a label y′ (≠ ˆy)
We can then train the model to assign the same label to x and x′
This encourages the model to learn a function that is robust to small changes
This provides a means of semi-supervised learning
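The slides do not spell out how x′ is constructed; one common construction from this literature is a fast-gradient-sign-style perturbation. A sketch for logistic regression (the loss gradient is written out by hand; ϵ and the data are illustrative):

```python
import numpy as np

def fgsm_example(x, y, w, b, eps=0.1):
    """x' = x + eps * sign(grad_x loss(x, y)) for a logistic-regression model."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                      # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(6)
w, b = rng.normal(size=20), 0.0
x, y = rng.normal(size=20), 1.0
x_adv = fgsm_example(x, y, w, b)
# adversarial (or virtual adversarial) training would add (x_adv, label) to the training set
print(np.max(np.abs(x_adv - x)))   # perturbation bounded by eps
```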
7.14 Tangent Distance, Tangent Prop and Manifold Tangent Classifier
Manifold hypothesis:
the data lies near a low-dimensional manifold
Tangent distance algorithm
non-parametric nearest neighbor algorithm, where the distance between
points x1 and x2 is the distance between the manifolds M1 and M2 to which
they respectively belong
approximate Mi by its tangent plane at xi
The user has to specify the tangent vectors
7.14 Tangent Distance, Tangent Prop and Manifold Tangent Classifier
Tangent prop algorithm:
[...skipped...]
double backprop:
[...skipped...]