Lecture 8. Generative Adversarial Network
 GANs were first introduced by Ian Goodfellow et al. in 2014.
 They have been used to generate images, videos, poems, and some simple
conversation.
 Note: image processing is easy (all animals can do it); NLP is hard
(only humans can do it).
 This co-evolution approach might have far-reaching implications.
Bengio: this may hold the key to making computers a lot more
intelligent.
 Ian Goodfellow:
https://www.youtube.com/watch?v=YpdP_0-IEOw
 Radford (voices are also generated here):
https://www.youtube.com/watch?v=KeJINHjyzOU
 Tips for training GAN: https://github.com/soumith/ganhacks
Autoencoder
(Figure) An NN Encoder maps an image to a code, and an NN Decoder maps the
code back to an image; training makes the reconstruction as close as possible
to the input. Generation idea: keep only the NN Decoder and randomly generate
a vector to use as the code: does it decode to an image?
Autoencoder with 3 fully connected layers
Large → small: learn to compress.
Training: model.fit(X, X)
Cost function: Σk=1..N (xk − x'k)²
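A minimal sketch of the same setup in code (assuming Keras is available; the
784-dimensional input, layer sizes, and epoch count are illustrative choices,
not values from the slides):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data standing in for flattened 28x28 images, values in [0, 1].
X = np.random.rand(1000, 784).astype("float32")

# 3 fully connected layers: large -> small (compress) -> large again.
autoencoder = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),  # encoder
    layers.Dense(32, activation="relu"),                        # code
    layers.Dense(784, activation="sigmoid"),                    # decoder / reconstruction
])

# "mse" corresponds to the cost function sum_k (x_k - x'_k)^2 (up to a 1/N factor).
autoencoder.compile(optimizer="adam", loss="mse")

# Training: the target is the input itself, i.e. model.fit(X, X).
autoencoder.fit(X, X, epochs=5, batch_size=64)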
Auto-encoder
(Figure) With a 2D code, sweep the code over roughly [−1.5, 1.5] in each
dimension and feed each point to the NN Decoder, to see what image each
region of the code space generates.
Auto-encoder vs. VAE
(Figure, top) Plain auto-encoder: input → NN Encoder → code → NN Decoder → output.
(Figure, bottom) VAE: the NN Encoder maps the input to means (m1, m2, m3) and
(σ1, σ2, σ3); noise (e1, e2, e3) is sampled from a normal distribution; each
σi is passed through exp, multiplied with ei, and added to mi, giving the code
ci = exp(σi)·ei + mi (c1, c2, c3), which the NN Decoder maps to the output.
Minimize the reconstruction error, and also minimize
Σi=1..3 [exp(σi) − (1 + σi) + (mi)²].
This term constrains the σi; σi approaching 0 is good.
Auto-Encoding Variational Bayes, https://arxiv.org/abs/1312.6114
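A small NumPy sketch of the reparameterization step and the penalty above (a
hand-rolled illustration, not code from the paper; the 3-dimensional code
follows the slide, and treating σi as a log-variance-style output is my
reading of the formula):

import numpy as np

rng = np.random.default_rng(0)

# Pretend the NN Encoder produced these for one input.
m = np.array([0.5, -1.0, 0.2])       # m1, m2, m3
sigma = np.array([0.1, -0.3, 0.0])   # sigma1..3 (log-variance-like outputs)

# Sample e1..e3 from a standard normal and reparameterize:
e = rng.standard_normal(3)
c = np.exp(sigma) * e + m            # ci = exp(sigma_i) * e_i + m_i, fed to the NN Decoder

# Penalty from the slide; it is smallest when sigma_i -> 0 and m_i -> 0.
penalty = np.sum(np.exp(sigma) - (1.0 + sigma) + m ** 2)
print(c, penalty)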
Problems of VAE
 It does not really try to simulate real images.
(Figure) The NN Decoder maps a code to an output that should be as close as
possible to the target. One output is one pixel different from the target and
still looks realistic; another is also one pixel different from the target
but looks fake. VAE treats these the same, because its loss only measures
per-pixel distance.
Gradual and step-wise generation
(Figure) NN Generator v1 → Discriminator v1, NN Generator v2 → Discriminator
v2, NN Generator v3 → Discriminator v3. The discriminators compare generated
images against real images; these discriminators are binary classifiers.
GAN – Learn a discriminator
(Figure) NN Generator v1 (something like the Decoder in VAE) takes a randomly
sampled vector and produces generated images, labeled 0 0 0 0; real images
sampled from the DB are labeled 1 1 1 1. Discriminator v1 takes an image and
outputs 1/0 (real or fake).
GAN – Learn a generator
(Figure) Randomly sample a vector, feed it to NN Generator v1, and pass the
generated image to Discriminator v1, which currently outputs 0.13. Update the
parameters of the generator so that its output is classified as “real” (as
close to 1 as possible, target 1.0). Generator + Discriminator = a network:
use gradient descent to update the parameters in the generator, but fix the
discriminator. Train the generator part only; do not train the discriminator
here. This yields Generator v2. The two have opposite objectives.
Generating 2nd-element (anime-style, 二次元) character figures
Source of images: https://zhuanlan.zhihu.com/p/24767059
From Dr. HY Lee’s notes.
DCGAN: https://github.com/carpedm20/DCGAN-tensorflow
You can use the post above to start a project (but it is in Chinese).
GAN – generating 2nd-element (anime) character figures
(Figures) Generated samples after 100, 1,000, 2,000, 5,000, 10,000, 20,000,
and 50,000 training rounds. The 100-round run is fast; I think you can use
your CPU.
Next few images are from Goodfellow’s lecture.
(Figure) Traditional mean-squared error gives an averaged, blurry prediction;
the last two are by deep learning approaches.
(Figure) Similar to word embedding (DCGAN paper).
(Figure) 256x256 high-resolution pictures by the Plug and Play generative network.
(Figure) From natural language to pictures.
Deriving GAN
During the rest of this lecture, we will go
through the original ideas and derive GAN.
I will avoid the continuous case and stick
to simple explanations.
Maximum Likelihood Estimation
 Given a data distribution Pdata(x)
 We use a distribution PG(x;θ) parameterized by θ to
approximate it
 E.g. PG(x;θ) is a Gaussian Mixture Model, where θ contains
means and variances of the Gaussians.
 We wish to find θ s.t. PG(x;θ) is close to Pdata(x)
 In order to do this, we can sample
{x1,x2, … xm} from Pdata(x)
 The likelihood of generating these
xi’s under PG is
L = Πi=1..m PG(xi; θ)
 Then we can find θ* maximizing L.
KL (Kullback-Leibler) divergence
 Discrete:
DKL(P||Q) = Σi P(i) log[P(i)/Q(i)]
 Continuous:
DKL(P||Q) = ∫−∞..∞ p(x) log[p(x)/q(x)] dx
 Explanations:
Entropy: −Σi P(i) log P(i), the expected code length (also optimal)
Cross entropy: −Σi P(i) log Q(i), the expected coding
length when using the code that is optimal for Q
DKL = Σi P(i) log[P(i)/Q(i)] = Σi P(i)[log P(i) − log Q(i)], the extra bits
JSD(P||Q) = ½ DKL(P||M) + ½ DKL(Q||M), M = ½(P+Q), a symmetrized KL
* JSD = Jensen-Shannon Divergence
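A quick numerical check of these definitions for discrete distributions (a
hand-rolled sketch; natural logs are used, so divergences are in nats):

import numpy as np

def kl(p, q):
    # D_KL(P||Q) = sum_i P(i) log[P(i)/Q(i)]; terms with P(i)=0 contribute 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jsd(p, q):
    # JSD(P||Q) = 1/2 KL(P||M) + 1/2 KL(Q||M), with M = (P+Q)/2.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5, 0.0]
q = [0.1, 0.4, 0.5]
print(kl(p, q))   # note KL is asymmetric: kl(p, q) != kl(q, p) in general
print(jsd(p, q))  # JSD is symmetric and bounded by log 2

# Distributions with disjoint support reach the maximum JSD = log 2:
print(jsd([1.0, 0.0], [0.0, 1.0]), np.log(2))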
Maximum Likelihood Estimation
θ* = arg maxθ Πi=1..m PG(xi; θ)
= arg maxθ log Πi=1..m PG(xi; θ)
= arg maxθ Σi=1..m log PG(xi; θ), {x1, ..., xm} sampled from Pdata(x)
≈ arg maxθ Ex~P_data [log PG(x; θ)] --- this is the (negative) cross entropy
= arg maxθ { Ex~P_data [log PG(x; θ)] − Ex~P_data [log Pdata(x)] }, since the
subtracted term does not depend on θ
= arg minθ KL(Pdata(x) || PG(x; θ)) --- this is the KL divergence
Note: if PG is a Gaussian mixture model, the best θ still gives a mixture of
Gaussians, which can only generate a few blobs; thus the maximum likelihood
approach above does not work well.
Next we will introduce GAN, which changes PG itself rather than just
estimating PG’s parameters. We will find the best PG, which can be more
complicated and structured, to approximate Pdata.
https://blog.openai.com/generative-models/
(Figure) Instead of a Gaussian mixture, let’s use an NN G with parameters θ
as PG(x; θ): z is drawn from a prior distribution in a smaller-dimension
space, and G maps it to x in the larger-dimension data space, whose
distribution should approximate Pdata(x).
PG(x) = ∫z Pprior(z) I[G(z)=x] dz
How to compute the likelihood? This integral is hard to compute for an
arbitrary network G.
Basic Idea of GAN
 Generator G
 G is a function, input z, output x
 Given a prior distribution Pprior(z), a probability distribution
PG(x) is defined by function G
 Discriminator D
 D is a function, input x, output scalar
 Evaluate the “difference” between PG(x) and Pdata(x)
 In order for D to find the difference between Pdata and PG,
we need a cost function V(G,D):
G* = arg minG maxD V(G,D)
Note: we are changing the distribution PG itself, not just updating
its parameters (as in the maximum likelihood case), since it is
hard to learn PG by maximum likelihood.
Basic Idea    G* = arg minG maxD V(G,D)
(Figure) Candidate generators G1, G2, G3, each with its own curve V(G1,D),
V(G2,D), V(G3,D) over the choice of D; the peak of each curve is maxD V(G,D).
Given a generator G, maxD V(G,D) evaluates the
“difference” between PG and Pdata.
Pick V so that this maximum becomes the JSD:
V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
Pick the G such that PG is most similar to Pdata, i.e. the G with the
smallest maxD V(G,D).
maxD V(G,D), where G* = arg minG maxD V(G,D)
 Given G, what is the optimal D* maximizing V(G,D)?
V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
= Σx [ Pdata(x) log D(x) + PG(x) log(1−D(x)) ]
 Given x, the optimal D*(x) maximizes the integrand
f(D) = a log D + b log(1−D), with a = Pdata(x) and b = PG(x) ⇒ D* = a/(a+b)
(assuming D(x) can take any value here).
Thus: D*(x) = Pdata(x) / (Pdata(x) + PG(x))
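Filling in the calculus step the slide skips (maximizing f pointwise in D):

\[
f(D) = a\log D + b\log(1-D),\qquad
f'(D) = \frac{a}{D} - \frac{b}{1-D} = 0
\;\Longrightarrow\; a(1-D) = bD
\;\Longrightarrow\; D^{*} = \frac{a}{a+b}
= \frac{P_{data}(x)}{P_{data}(x)+P_{G}(x)},
\]
and since \(f''(D) = -a/D^2 - b/(1-D)^2 < 0\), this critical point is a maximum.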
maxD V(G,D), G* = arg minG maxD V(G,D)
(Figure) For each generator Gi, the curve V(Gi,D) peaks at its own optimal
discriminator, e.g. V(G1,D1*) with
D1*(x) = Pdata(x) / (Pdata(x)+PG_1(x)) and
D2*(x) = Pdata(x) / (Pdata(x)+PG_2(x));
the height of the peak V(Gi,Di*) is the “difference” between PG_i and Pdata.
maxD V(G,D), with V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
maxD V(G,D)
= V(G,D*), where D*(x) = Pdata / (Pdata + PG) and 1−D*(x) = PG / (Pdata + PG)
= Ex~P_data log D*(x) + Ex~P_G log(1−D*(x))
= Σx [ Pdata(x) log D*(x) + PG(x) log(1−D*(x)) ]
= Σx [ Pdata(x) log (Pdata(x)/2M(x)) + PG(x) log (PG(x)/2M(x)) ], M = ½(Pdata+PG)
= −2 log 2 + DKL(Pdata||M) + DKL(PG||M)
= −2 log 2 + 2 JSD(Pdata || PG),
where JSD(P||Q) = Jensen-Shannon divergence
= ½ DKL(P||M) + ½ DKL(Q||M)
with M = ½(P+Q), and
DKL(P||Q) = Σ P(x) log [P(x)/Q(x)]
Summary:
 Generator G, Discriminator D
 Looking for G* such that G* = arg minG maxD V(G,D), with
V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
 Given G, maxD V(G,D)
= −2 log 2 + 2 JSD(Pdata(x) || PG(x))
 What is the optimal G? It is the G that makes the JSD
smallest (= 0):
PG(x) = Pdata(x)
Algorithm    G* = arg minG maxD V(G,D)
 To find the best G minimizing the loss function L(G) = maxD V(G,D):
θG ← θG − η ∂L(G)/∂θG, where θG are the parameters defining G
 Solved by gradient descent; having a max inside the loss is OK. Consider a
simple case:
f(x) = max {D1(x), D2(x), D3(x)}
(Figure) Three curves D1(x), D2(x), D3(x); if Di(x) is the max in a region,
then the gradient of f in that region is dDi(x)/dx.
Algorithm    G* = arg minG maxD V(G,D), L(G) = maxD V(G,D)
 Given G0
 Find D0* maximizing V(G0,D)
(V(G0,D0*) gives the JS divergence between Pdata(x) and PG0(x))
 θG ← θG − η ∂V(G,D0*)/∂θG → obtaining G1 (decreases the JSD)
 Find D1* maximizing V(G1,D)
(V(G1,D1*) gives the JS divergence between Pdata(x) and PG1(x))
 θG ← θG − η ∂V(G,D1*)/∂θG → obtaining G2 (decreases the JSD)
 And so on …
In practice …    V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
 Given G, how do we compute maxD V(G,D)?
Sample {x1, …, xm} from Pdata (positive examples, which D must accept).
Sample {x*1, …, x*m} from the generator PG (negative examples, which D must
reject).
Maximize:
V’ = 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
This is exactly what a binary classifier does when it minimizes
cross-entropy: the output is D(x); if x is a positive example, minimize
−log D(x); if x is a negative example, minimize −log(1−D(x)).
D is a binary classifier (can be deep) with parameters θd.
Positive examples: {x1, x2, …, xm} from Pdata(x).
Negative examples: {x*1, x*2, …, x*m} from PG(x).
Maximizing
V’ = 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
is the same as minimizing the cross-entropy loss L = −V’ of a binary
classifier with output f(x) = D(x): minimize −log f(x) if x is a positive
example, and minimize −log(1−f(x)) if x is a negative example.
Algorithm
Initialize θd for D and θg for G.
In each training iteration:
Learning D (repeat k times):
 Sample m examples {x1, x2, …, xm} from the data distribution Pdata(x).
 Sample m noise samples {z1, …, zm} from a simple prior Pprior(z).
 Obtain generated data {x*1, …, x*m}, x*i = G(zi).
 Update discriminator parameters θd to maximize
V’ ≈ 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
θd ← θd + η ∇V’(θd) (gradient ascent)
(Since D is not trained to optimality, this can only find a lower bound of
maxD V(G,D), i.e. of the JSD.)
Learning G (only once):
 Sample another m noise samples {z1, z2, …, zm} from the prior Pprior(z),
with x*i = G(zi).
 Update generator parameters θg to minimize
V’ = 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
(the first term does not depend on θg)
θg ← θg − η ∇V’(θg) (gradient descent)
Ian Goodfellow’s comment: learning G is also done only once per iteration.
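A minimal PyTorch sketch of this training loop on toy data (my illustration
of the algorithm, not code from the lecture; the network sizes, learning
rate, k, and the 2-D toy data distribution are all arbitrary choices):

import torch
import torch.nn as nn

torch.manual_seed(0)
dim_z, dim_x, m, k, eta, eps = 8, 2, 64, 1, 1e-3, 1e-8

G = nn.Sequential(nn.Linear(dim_z, 32), nn.ReLU(), nn.Linear(32, dim_x))
D = nn.Sequential(nn.Linear(dim_x, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(D.parameters(), lr=eta)
opt_g = torch.optim.Adam(G.parameters(), lr=eta)

def sample_data(m):  # stand-in for P_data: a shifted 2-D Gaussian blob
    return torch.randn(m, dim_x) * 0.5 + torch.tensor([2.0, 0.0])

for it in range(200):                  # training iterations
    for _ in range(k):                 # --- learning D, repeated k times ---
        x = sample_data(m)             # x_i ~ P_data
        z = torch.randn(m, dim_z)      # z_i ~ P_prior
        x_fake = G(z).detach()         # x*_i = G(z_i); D's step must not touch G
        # maximize V' = mean log D(x) + mean log(1 - D(x*))  ->  minimize -V'
        loss_d = -(torch.log(D(x) + eps).mean()
                   + torch.log(1 - D(x_fake) + eps).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- learning G, only once ---
    z = torch.randn(m, dim_z)
    # minimize mean log(1 - D(G(z))); the log D(x_i) term does not depend on theta_g
    loss_g = torch.log(1 - D(G(z)) + eps).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()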
Objective Function for Generator
in Real Implementation
Theoretical objective (from V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]):
minimize V = Ex~P_G[log(1−D(x))]. Training is slow at the beginning, because
log(1−D(x)) is nearly flat when D confidently rejects the generated x.
Real implementation: minimize V = Ex~P_G[−log D(x)], i.e. label x from PG as
positive when updating the generator; this gives a stronger gradient early in
training.
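In the PyTorch sketch above, this change is just a swap of the generator's
loss line (same hypothetical names as in that sketch):

# Original minimax loss: gradient is tiny while D(G(z)) is still near 0.
loss_g = torch.log(1 - D(G(z)) + eps).mean()

# Non-saturating loss used in real implementations: treat G(z) as "positive".
loss_g = -torch.log(D(G(z)) + eps).mean()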
Some issues in training GAN
M. Arjovsky, L. Bottou, Towards principled
methods for training generative adversarial
networks, 2017.
Evaluating JS divergence
Martin Arjovsky, Léon Bottou, Towards Principled Methods for Training
Generative Adversarial Networks, 2017, arXiv preprint.
(Figure) The discriminator is too strong: for all three generators it quickly
becomes near-perfect, so the estimated JS divergence carries no information
about which generator is better.
Evaluating JS divergence
(Figure) The JS divergence estimated by the discriminator tells us little: it
looks similar for a weak generator and a strong generator.
https://arxiv.org/abs/1701.07875
Reason 1. We approximate the expectations by sampling.
V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
≈ 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
With finite samples, an over-fitted discriminator can output 1 for all
positive examples and 0 for all negative examples, so the estimate of
maxD V(G,D) goes to 0 and, via maxD V(G,D) = −2 log 2 + 2 JSD(Pdata || PG),
the estimated JSD is log 2, as if Pdata and PG differed completely.
Weaken your discriminator? But can a weak discriminator compute the JS
divergence?
Reason 2. The nature of the data.
V = Ex~P_data [log D(x)] + Ex~P_G[log(1−D(x))]
≈ 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
maxD V(G,D) = −2 log 2 + 2 JSD(Pdata || PG)
Pdata(x) and PG(x) have very little overlap in the high-dimensional space, so
the discriminator can separate them almost perfectly: it outputs 1 on real
data and 0 on generated data, the sampled estimate of V is ≈ 0, and the
estimated JSD sits at log 2.
(Figure) Both in the theoretical estimation and in the GAN implementation’s
estimation, the overlap is ≈ 0.
Evolution    http://www.guokr.com/post/773890/
(Figure) Evolution needs to be smooth:
PG_0(x) → PG_50(x) → PG_100(x), each step moving closer to Pdata(x).
JSD(PG_0 || Pdata) = log 2: no overlap with Pdata.
JSD(PG_50 || Pdata) = log 2: better, but still no overlap, so according to
the JSD it is “not really better”, and the generator gets no reward for the
improvement.
JSD(PG_100 || Pdata) = 0: only now, when PG overlaps Pdata, does the JSD drop.
One simple solution: add noise
 Add some artificial noise to the inputs of the discriminator.
 Make the labels noisy for the discriminator.
Then Pdata(x) and PG(x) have some overlap, and the discriminator cannot
perfectly separate real and generated data.
The noise needs to decay over time.
Mode Collapse
(Figure) The generated distribution covers only a part of the data
distribution. Sometimes this is hard to tell, since one sees only what is
generated, but not what is missed. (Example: many generated samples converge
to the same faces.)
Mode Collapse Example
(Figure) Pdata is a mixture of 8 Gaussian distributions. What we want: the
generator covers all 8 modes. In reality: it collapses onto a few modes.
Text to Image, by conditional GAN
Text to Image - Results
(Figure) Generated images for the caption "red flower with black center".
Project topic: code and data are all on the web, many possibilities!
From CY Lee's lecture.
Algorithm (WGAN)
Initialize θd for D and θg for G.
In each training iteration:
Learning D (repeat k times):
 Sample m examples {x1, x2, …, xm} from the data distribution Pdata(x).
 Sample m noise samples {z1, …, zm} from a simple prior Pprior(z).
 Obtain generated data {x*1, …, x*m}, x*i = G(zi).
 Update discriminator parameters θd to maximize
V’ ≈ 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
θd ← θd + η ∇V’(θd) (gradient ascent plus weight clipping)
Learning G (only once):
 Sample another m noise samples {z1, z2, …, zm} from the prior Pprior(z),
with x*i = G(zi).
 Update generator parameters θg to minimize
V’ = 1/m Σi=1..m log D(xi) + 1/m Σi=1..m log(1−D(x*i))
θg ← θg − η ∇V’(θg) (gradient descent)
Ian Goodfellow’s comment: learning G is also done only once per iteration.
(In the actual WGAN objective the logs and the sigmoid on D’s output are
dropped, i.e. D maximizes 1/m Σ D(xi) − 1/m Σ D(x*i), matching the
Wasserstein objective explained below; weight clipping keeps D roughly
1-Lipschitz.)
Experimental Results
(Figure) Approximating a mixture of Gaussians by a single mixture.
WGAN Background
We have seen that JSD does not give
GAN a smooth and continuous
improvement curve.
We would like to find another distance
which gives that.
This is the Wasserstein Distance or earth
mover’s distance.
Earth Mover’s Distance
 Consider one distribution P as a pile of earth (total
amount of earth is 1), and another distribution Q (another
pile of earth) as the target.
 The “earth mover’s distance” or “Wasserstein distance”
is the average distance the earth mover has to move the
earth under an optimal plan.
(Figure) Earth Mover’s Distance: the best plan to move pile P onto pile Q;
d marks how far the earth is moved.
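A tiny numerical illustration for the 1-D case (assuming SciPy is available;
scipy.stats.wasserstein_distance computes exactly this optimal-plan average
distance in one dimension):

from scipy.stats import wasserstein_distance

# Two "piles of earth": point masses at given positions with given weights.
p_pos, p_w = [0.0, 1.0], [0.5, 0.5]
q_pos, q_w = [2.0, 3.0], [0.5, 0.5]

# The optimal plan shifts each half of the pile 2 units to the right.
print(wasserstein_distance(p_pos, q_pos, p_w, q_w))  # 2.0

# Unlike the JSD, the distance keeps shrinking as the piles move closer,
# even while their supports still do not overlap at all.
for shift in [3.0, 2.0, 1.0, 0.0]:
    q = [x + shift for x in p_pos]
    print(shift, wasserstein_distance(p_pos, q, p_w, p_w))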
JS vs. Earth Mover’s Distance
(Figure) PG_0, PG_50, …, PG_100 move progressively closer to Pdata, at
distances d0, then d50, until they finally overlap.
JS(PG_0, Pdata) = log 2, JS(PG_50, Pdata) = log 2, JS(PG_100, Pdata) = 0:
the JSD stays at log 2 until the distributions overlap.
W(PG_0, Pdata) = d0, W(PG_50, Pdata) = d50, W(PG_100, Pdata) = 0:
the Wasserstein distance decreases smoothly the whole way.
Explaining WGAN
Let W be the Wasserstein distance.
W(Pdata, PG) = maxD is 1-Lipschitz [Ex~P_data D(x) − Ex~P_G D(x)]
where a function f is a k-Lipschitz function if
||f(x1) − f(x2)|| ≤ k ||x1 − x2||.
How to guarantee this? Weight clipping: after every
parameter update, if w > c then set w = c; if w < −c, then set w = −c.
(Figure) WGAN will provide a gradient to push PG towards Pdata.
Blue: D(x) for the original GAN; green: D(x) for WGAN.
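In code, weight clipping is a few lines after each discriminator update; a
sketch in the same style as the earlier PyTorch loop (it reuses D, x, x_fake,
and opt_d from that sketch, with D's final Sigmoid removed; c = 0.01 is the
clipping value used in the WGAN paper):

import torch

c = 0.01  # clipping threshold

# WGAN critic update: no logs, no sigmoid on D's output.
loss_d = -(D(x).mean() - D(x_fake).mean())   # maximize E[D(x)] - E[D(G(z))]
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Weight clipping: force every parameter of D into [-c, c] after the update.
with torch.no_grad():
    for w in D.parameters():
        w.clamp_(-c, c)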
Earth Mover’s Distance examples: (Figure) results with a multi-layer perceptron.