Problem setup
- Input-output pairs: the training data are given as pairs of an input and its desired (target) output
- Representing the output: one-hot vector
- $y_i = \frac{\exp(z_i)}{\sum_j \exp(z_j)}$
- With two classes, softmax reduces to the sigmoid (quick check below)
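A quick check of the two-class claim (not in the original notes, but a standard identity): with two pre-activations $z_1, z_2$,

$$ y_1 = \frac{e^{z_1}}{e^{z_1} + e^{z_2}} = \frac{1}{1 + e^{-(z_1 - z_2)}} = \sigma(z_1 - z_2), $$

so a two-class softmax is exactly a sigmoid applied to the difference of the two logits.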
- Divergence: the divergence (loss) function must be differentiable
- For real-valued output vectors, the (scaled) $L_2$ divergence:
- $\operatorname{Div}(Y, d) = \frac{1}{2}\lVert Y - d \rVert^{2} = \frac{1}{2}\sum_i (y_i - d_i)^2$
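One reason for the $\frac{1}{2}$ scaling: the derivative with respect to each output component reduces to the plain error term,

$$ \frac{\partial \operatorname{Div}(Y, d)}{\partial y_i} = y_i - d_i, $$

which is exactly the quantity fed back into the output layer during backpropagation.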
- For a binary classifier:
- $\operatorname{Div}(Y, d) = -d \log Y - (1 - d)\log(1 - Y)$
- Note: the derivative is not zero even when $Y = d$, yet training with this divergence can converge very quickly (see the derivative below)
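Differentiating the binary cross-entropy makes this note concrete:

$$ \frac{\partial \operatorname{Div}(Y, d)}{\partial Y} = -\frac{d}{Y} + \frac{1 - d}{1 - Y}. $$

For example, with $d = 1$ this is $-\frac{1}{Y}$, which equals $-1$ (not $0$) at $Y = d = 1$: the loss keeps pushing $Y$ toward the target right up to the boundary.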
- For multi-class classification:
- $\operatorname{Div}(Y, d) = -\sum_i d_i \log y_i = -\log y_c$, where $c$ is the correct class (since $d$ is one-hot)
- If $y_c < 1$, the slope with respect to $y_c$ is negative, which indicates that increasing $y_c$ will reduce the divergence
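This follows directly from the derivative with respect to the correct-class output:

$$ \frac{\partial \operatorname{Div}(Y, d)}{\partial y_c} = -\frac{1}{y_c} < 0 \quad \text{for } 0 < y_c \leq 1. $$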
Train the network
Distributed chain rule
$$ y = f\left(g_1(x), g_2(x), \ldots, g_M(x)\right) $$

$$ \frac{dy}{dx} = \frac{\partial f}{\partial g_1(x)} \frac{d g_1(x)}{dx} + \frac{\partial f}{\partial g_2(x)} \frac{d g_2(x)}{dx} + \cdots + \frac{\partial f}{\partial g_M(x)} \frac{d g_M(x)}{dx} $$
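A tiny worked example of the distributed chain rule (my own illustration): take $y = f(g_1, g_2) = g_1 g_2$ with $g_1(x) = x^2$ and $g_2(x) = \sin x$. Then

$$ \frac{dy}{dx} = \frac{\partial f}{\partial g_1}\frac{d g_1}{dx} + \frac{\partial f}{\partial g_2}\frac{d g_2}{dx} = g_2 \cdot 2x + g_1 \cdot \cos x = 2x \sin x + x^2 \cos x, $$

which matches differentiating $x^2 \sin x$ directly.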
Backpropagation
- For each layer we calculate $\frac{\partial \operatorname{Div}}{\partial y_i}$, $\frac{\partial \operatorname{Div}}{\partial z_i}$, and $\frac{\partial \operatorname{Div}}{\partial w_{ij}}$
- For the output layer:
- It is easy to calculate $\frac{\partial \operatorname{Div}}{\partial y_i^{(N)}}$ directly from the divergence
- So: $\frac{\partial \operatorname{Div}}{\partial z_i^{(N)}} = f_N'\!\left(z_i^{(N)}\right) \frac{\partial \operatorname{Div}}{\partial y_i^{(N)}}$
- $\frac{\partial \operatorname{Div}}{\partial w_{ij}^{(N)}} = \frac{\partial z_j^{(N)}}{\partial w_{ij}^{(N)}} \frac{\partial \operatorname{Div}}{\partial z_j^{(N)}}$, where $\frac{\partial z_j^{(N)}}{\partial w_{ij}^{(N)}} = y_i^{(N-1)}$
- Pass on to the previous layer (see the code sketch after this list):
- $z_j^{(N)} = \sum_i w_{ij}^{(N)} y_i^{(N-1)}$, so $\frac{\partial z_j^{(N)}}{\partial y_i^{(N-1)}} = w_{ij}^{(N)}$
- $\frac{\partial \operatorname{Div}}{\partial y_i^{(N-1)}} = \sum_j w_{ij}^{(N)} \frac{\partial \operatorname{Div}}{\partial z_j^{(N)}}$
- $\frac{\partial \operatorname{Div}}{\partial z_i^{(N-1)}} = f_{N-1}'\!\left(z_i^{(N-1)}\right) \frac{\partial \operatorname{Div}}{\partial y_i^{(N-1)}}$
- $\frac{\partial \operatorname{Div}}{\partial w_{ij}^{(N-1)}} = y_i^{(N-2)} \frac{\partial \operatorname{Div}}{\partial z_j^{(N-1)}}$
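A minimal NumPy sketch of one backward step through a fully connected layer, following the equations above; the function and variable names (`layer_backward`, `y_prev`, `dDiv_dy`, etc.) are my own, and the sigmoid is only an illustrative activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_backward(dDiv_dy, z, y_prev, W, f_prime):
    """One backward step for a layer with z_j = sum_i W[i, j] * y_prev[i], y = f(z)."""
    dDiv_dz = f_prime(z) * dDiv_dy           # dDiv/dz_i = f'(z_i) * dDiv/dy_i
    dDiv_dW = np.outer(y_prev, dDiv_dz)      # dDiv/dw_ij = y_prev_i * dDiv/dz_j
    dDiv_dy_prev = W @ dDiv_dz               # dDiv/dy_prev_i = sum_j w_ij * dDiv/dz_j
    return dDiv_dz, dDiv_dW, dDiv_dy_prev

# Illustrative use with a sigmoid activation
W = np.random.randn(4, 3)                    # 4 previous-layer units, 3 units here
y_prev = np.random.randn(4)
z = W.T @ y_prev
dDiv_dy = np.random.randn(3)                 # pretend gradient from the layer above
dDiv_dz, dDiv_dW, dDiv_dy_prev = layer_backward(
    dDiv_dy, z, y_prev, W, lambda z: sigmoid(z) * (1 - sigmoid(z)))
```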
Special cases
Vector activations
- Vector activations: all outputs are functions of all inputs
- So the derivatives need to change a little:
$$ \frac{\partial \operatorname{Div}}{\partial z_i^{(k)}} = \sum_j \frac{\partial \operatorname{Div}}{\partial y_j^{(k)}} \frac{\partial y_j^{(k)}}{\partial z_i^{(k)}} $$
- Note: derivatives of scalar activations are just a special case of vector activations:
$$ \frac{\partial y_j^{(k)}}{\partial z_i^{(k)}} = 0 \quad \text{for } i \neq j $$
- For example, Softmax:
$$ y_i^{(k)} = \frac{\exp\left(z_i^{(k)}\right)}{\sum_j \exp\left(z_j^{(k)}\right)} $$

$$ \frac{\partial \operatorname{Div}}{\partial z_i^{(k)}} = \sum_j \frac{\partial \operatorname{Div}}{\partial y_j^{(k)}} \frac{\partial y_j^{(k)}}{\partial z_i^{(k)}}, \qquad \frac{\partial y_j^{(k)}}{\partial z_i^{(k)}} = \begin{cases} y_i^{(k)}\left(1 - y_i^{(k)}\right) & i = j \\ -\,y_i^{(k)} y_j^{(k)} & i \neq j \end{cases} $$
- Using the Kronecker delta ($\delta_{ij} = 1$ if $i = j$, $0$ if $i \neq j$), this can be written compactly:
$$ \frac{\partial \operatorname{Div}}{\partial z_i^{(k)}} = \sum_j \frac{\partial \operatorname{Div}}{\partial y_j^{(k)}}\, y_i^{(k)}\left(\delta_{ij} - y_j^{(k)}\right) $$
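A small NumPy sketch of this softmax backward rule (function names are mine; `dDiv_dy` stands for the gradient of the divergence with respect to the softmax outputs):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())                 # shift for numerical stability
    return e / e.sum()

def softmax_backward(dDiv_dy, y):
    """dDiv/dz_i = sum_j dDiv/dy_j * y_i * (delta_ij - y_j)."""
    return y * (dDiv_dy - np.dot(dDiv_dy, y))

z = np.array([1.0, 2.0, 0.5])
y = softmax(z)
dDiv_dy = np.array([0.3, -0.1, 0.4])        # pretend upstream gradient
# Cross-check against the explicit Jacobian dy_j/dz_i = y_i * (delta_ij - y_j)
J = np.diag(y) - np.outer(y, y)
assert np.allclose(softmax_backward(dDiv_dy, y), J @ dDiv_dy)
```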
Multiplicative networks
- Some types of networks have multiplicative combination (instead of additive combination), e.g. a unit of the form $o_i^{(k)} = y_j^{(k-1)} y_l^{(k-1)}$ (see the sketch after this list)
- Seen in networks such as LSTMs, GRUs, attention models, etc.
- So the derivatives need to change
$$ \frac{\partial \operatorname{Div}}{\partial o_i^{(k)}} = \sum_j w_{ij}^{(k+1)} \frac{\partial \operatorname{Div}}{\partial z_j^{(k+1)}} $$
$$ \frac{\partial \operatorname{Div}}{\partial y_j^{(k-1)}} = \frac{\partial o_i^{(k)}}{\partial y_j^{(k-1)}} \frac{\partial \operatorname{Div}}{\partial o_i^{(k)}} = y_l^{(k-1)} \frac{\partial \operatorname{Div}}{\partial o_i^{(k)}} $$
- A layer of multiplicative combination is a special case of vector activation
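A minimal sketch for a single multiplicative unit of the assumed form $o = y_j \cdot y_l$ (LSTM and GRU gates are built from such products; the function names are my own):

```python
def mul_forward(y_j, y_l):
    """Multiplicative combination: o = y_j * y_l."""
    return y_j * y_l

def mul_backward(dDiv_do, y_j, y_l):
    """Each factor's gradient is the upstream gradient times the other factor."""
    return y_l * dDiv_do, y_j * dDiv_do     # dDiv/dy_j, dDiv/dy_l

o = mul_forward(0.7, -1.2)
dDiv_dyj, dDiv_dyl = mul_backward(0.5, 0.7, -1.2)   # (-0.6, 0.35)
```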
Non-differentiable activations
- Activation functions are sometimes not actually differentiable
- The RELU (Rectified Linear Unit)
- And its variants: leaky RELU, randomized leaky RELU
- The “max” function
- Subgradient: at $x_0$, any vector $v$ satisfying $f(x) - f(x_0) \geq v^{T}(x - x_0)$ for all $x$
- The subgradient is a direction in which the function is guaranteed to increase
- If the function is differentiable at $x$, the subgradient is the gradient
- But the gradient is not always the subgradient
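A sketch of the usual practical convention for ReLU, which is not differentiable at $0$: pick one valid subgradient there (here $0$). This is an illustration of the convention, not a prescription from the notes:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_subgradient(z):
    # Any value in [0, 1] is a valid subgradient at z == 0; we simply pick 0.
    return (z > 0).astype(float)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))               # [0. 0. 3.]
print(relu_subgradient(z))   # [0. 0. 1.]
```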
Vector formulation
- Define the vectors and matrices for each layer: inputs $\mathbf{y}_{k-1}$, weights $W_k$, bias $\mathbf{b}_k$, pre-activations $\mathbf{z}_k$, outputs $\mathbf{y}_k$
Forward pass
- $\mathbf{y}_0 = \mathbf{x}$, and for each layer $k$: $\mathbf{z}_k = W_k \mathbf{y}_{k-1} + \mathbf{b}_k$, $\mathbf{y}_k = f_k(\mathbf{z}_k)$; the network output is $Y = \mathbf{y}_N$
Backward pass
- Chain rule
- $\mathbf{y} = \boldsymbol{f}(\boldsymbol{g}(\mathbf{x}))$
- Let $\mathbf{z} = \boldsymbol{g}(\mathbf{x})$, $\mathbf{y} = \boldsymbol{f}(\mathbf{z})$
- So $J_{\mathbf{y}}(\mathbf{x}) = J_{\mathbf{y}}(\mathbf{z})\, J_{\mathbf{z}}(\mathbf{x})$
- For scalar functions:
- $D = f(W\mathbf{y} + \mathbf{b})$
- Let $\mathbf{z} = W\mathbf{y} + \mathbf{b}$, $D = f(\mathbf{z})$
- $\nabla_{\mathbf{y}} D = \nabla_{\mathbf{z}} D\, J_{\mathbf{z}}(\mathbf{y})$
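For the affine step $\mathbf{z} = W\mathbf{y} + \mathbf{b}$ this Jacobian is just the weight matrix, since $\frac{\partial z_i}{\partial y_j} = W_{ij}$; treating gradients as row vectors,

$$ J_{\mathbf{z}}(\mathbf{y}) = W \quad\Longrightarrow\quad \nabla_{\mathbf{y}} D = \nabla_{\mathbf{z}} D \; W. $$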
- So, for the backward pass:
- $\nabla_{\mathbf{z}_N} \operatorname{Div} = \nabla_{Y} \operatorname{Div}\; \nabla_{\mathbf{z}_N} Y$
- $\nabla_{\mathbf{y}_{N-1}} \operatorname{Div} = \nabla_{\mathbf{z}_N} \operatorname{Div}\; \nabla_{\mathbf{y}_{N-1}} \mathbf{z}_N$
- $\nabla_{W_N} \operatorname{Div} = \mathbf{y}_{N-1}\, \nabla_{\mathbf{z}_N} \operatorname{Div}$
- $\nabla_{\mathbf{b}_N} \operatorname{Div} = \nabla_{\mathbf{z}_N} \operatorname{Div}$
- For each layer:
- First compute $\nabla_{\mathbf{y}} \operatorname{Div}$
- Then compute $\nabla_{\mathbf{z}} \operatorname{Div}$
- Finally compute $\nabla_{W} \operatorname{Div}$ and $\nabla_{\mathbf{b}} \operatorname{Div}$ (vectorized sketch below)
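A compact NumPy sketch of the vectorized backward pass for a whole network; the layer storage, shapes, toy network, and the squared-error divergence are my own assumptions, not from the notes:

```python
import numpy as np

def forward(x, Ws, bs, fs):
    """Forward pass: z_k = W_k y_{k-1} + b_k, y_k = f_k(z_k)."""
    ys, zs = [x], []
    for W, b, f in zip(Ws, bs, fs):
        z = W @ ys[-1] + b
        zs.append(z)
        ys.append(f(z))
    return ys, zs

def backward(ys, zs, Ws, f_primes, dDiv_dY):
    """Backward pass: nabla_z, then nabla_W / nabla_b, then nabla_{y_{k-1}}."""
    grads_W, grads_b = [], []
    dDiv_dy = dDiv_dY
    for k in reversed(range(len(Ws))):
        dDiv_dz = f_primes[k](zs[k]) * dDiv_dy     # nabla_z Div (scalar activations)
        grads_W.append(np.outer(dDiv_dz, ys[k]))   # nabla_W Div, arranged to match W_k's shape
        grads_b.append(dDiv_dz)                    # nabla_b Div
        dDiv_dy = Ws[k].T @ dDiv_dz                # nabla_{y_{k-1}} Div
    return grads_W[::-1], grads_b[::-1]

# Tiny 2 -> 3 -> 1 network: tanh hidden layer, identity output
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
bs = [np.zeros(3), np.zeros(1)]
fs = [np.tanh, lambda z: z]
f_primes = [lambda z: 1 - np.tanh(z) ** 2, lambda z: np.ones_like(z)]

x, d = np.array([0.5, -1.0]), np.array([0.2])
ys, zs = forward(x, Ws, bs, fs)
grads_W, grads_b = backward(ys, zs, Ws, f_primes, ys[-1] - d)  # (1/2)||Y - d||^2 loss
```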
Training
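The notes do not spell out the update rule here, but a minimal plain-gradient-descent sketch using the `forward`/`backward` helpers (and `Ws`, `bs`, `fs`, `f_primes`) from the sketch above; the toy data, learning rate `eta`, and epoch count are illustrative choices:

```python
import numpy as np

training_data = [(np.array([0.5, -1.0]), np.array([0.2])),
                 (np.array([1.5, 0.3]), np.array([-0.4]))]
eta = 0.1

for epoch in range(100):
    for x, d in training_data:
        ys, zs = forward(x, Ws, bs, fs)
        grads_W, grads_b = backward(ys, zs, Ws, f_primes, ys[-1] - d)
        for k in range(len(Ws)):
            Ws[k] -= eta * grads_W[k]              # W <- W - eta * dDiv/dW
            bs[k] -= eta * grads_b[k]              # b <- b - eta * dDiv/db
```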
Analogy to forward pass: the backward pass mirrors the forward pass, propagating derivatives through the same connections in reverse (multiplying by the weights and by the activation derivatives $f'(z)$ instead of applying $f$)