
Generative Adversarial Nets
Ian J. Goodfellow∗, Jean Pouget-Abadie†, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair‡, Aaron Courville, Yoshua Bengio§
Département d’informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7
Abstract
We propose a new framework for estimating generative models via an adversar-
ial process, in which we simultaneously train two models: a generative model G
that captures the data distribution, and a discriminative model D that estimates
the probability that a sample came from the training data rather than G. The train-
ing procedure for G is to maximize the probability of D making a mistake. This
framework corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the training data
distribution and D equal to 1/2 everywhere. In the case where G and D are defined
by multilayer perceptrons, the entire system can be trained with backpropagation.
There is no need for any Markov chains or unrolled approximate inference net-
works during either training or generation of samples. Experiments demonstrate
the potential of the framework through qualitative and quantitative evaluation of
the generated samples.
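For concreteness, the two-player minimax game referred to in the abstract is the value function developed later in the paper (here $p_{\mathrm{data}}$ denotes the data distribution and $p_z$ the prior on the generator's input noise $z$):

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big].
\]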
1 Introduction
The promise of deep learning is to discover rich, hierarchical models [2] that represent probability
distributions over the kinds of data encountered in artificial intelligence applications, such as natural
images, audio waveforms containing speech, and symbols in natural language corpora. So far, the
most striking successes in deep learning have involved discriminative models, usually those that
map a high-dimensional, rich sensory input to a class label [14, 20]. These striking successes have
primarily been based on the backpropagation and dropout algorithms, using piecewise linear units
[17, 8, 9] which have a particularly well-behaved gradient. Deep generative models have had less
of an impact, due to the difficulty of approximating many intractable probabilistic computations that
arise in maximum likelihood estimation and related strategies, and due to the difficulty of leveraging
the benefits of piecewise linear units in the generative context. We propose a new generative model
estimation procedure that sidesteps these difficulties.¹
In the proposed adversarial nets framework, the generative model is pitted against an adversary: a
discriminative model that learns to determine whether a sample is from the model distribution or the
data distribution. The generative model can be thought of as analogous to a team of counterfeiters,
trying to produce fake currency and use it without detection, while the discriminative model is
analogous to the police, trying to detect the counterfeit currency. Competition in this game drives
both teams to improve their methods until the counterfeits are indistinguishable from the genuine
articles.
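A minimal sketch of this alternating game is shown below. It is not the paper's implementation: the network sizes, optimizer settings, and toy Gaussian "data" source are assumptions made only for illustration, and the generator update uses the log D(G(z)) heuristic that the paper also discusses.

```python
# Minimal sketch of adversarial training: alternate updates of a
# discriminator D and a generator G. All hyperparameters here are
# illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

noise_dim, data_dim, batch = 16, 2, 64

# G maps noise z to a fake sample; D outputs the probability that its
# input came from the data rather than from G.
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.SGD(G.parameters(), lr=0.01)
opt_D = torch.optim.SGD(D.parameters(), lr=0.01)
bce = nn.BCELoss()

def sample_data(n):
    # Stand-in for the true data distribution (a shifted Gaussian).
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    real = sample_data(batch)
    z = torch.randn(batch, noise_dim)
    fake = G(z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_D.backward()
    opt_D.step()

    # Generator step: push D(G(z)) toward 1, i.e. maximize D's mistakes.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(batch, 1))
    loss_G.backward()
    opt_G.step()
```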
∗ Ian Goodfellow is now a research scientist at Google, but did this work earlier as a UdeM student.
† Jean Pouget-Abadie did this work while visiting Université de Montréal from Ecole Polytechnique.
‡ Sherjil Ozair is visiting Université de Montréal from Indian Institute of Technology Delhi.
§ Yoshua Bengio is a CIFAR Senior Fellow.
¹ All code and hyperparameters available at http://www.github.com/goodfeli/adversarial