Spectral Normalization for Generative Adversarial Networks
Abstract
One of the challenges in the study of generative adversarial networks is the instability of their training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR-10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images, and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
1 Introduction
Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been enjoying considerable success as a framework of generative models in recent years, and they have been applied to numerous types of tasks and datasets (Radford et al., 2016; Salimans et al., 2016; Ho & Ermon, 2016; Li et al., 2017). In a nutshell, GANs are a framework for producing a model distribution that mimics a given target distribution, and they consist of a generator that produces the model distribution and a discriminator that distinguishes the model distribution from the target. The concept is to train the model distribution and the discriminator in turn, with the goal of reducing the difference between the model distribution and the target distribution as measured by the best discriminator possible at each step of the training. GANs have been drawing attention in the machine learning community not only for their ability to learn highly structured probability distributions but also for their theoretically interesting aspects. For example, Nowozin et al. (2016), Uehara et al. (2016), and Mohamed & Lakshminarayanan (2017) revealed that training the discriminator amounts to training a good estimator of the density ratio between the model distribution and the target. This perspective opens the door to the methods of implicit models (Mohamed & Lakshminarayanan, 2017; Tran et al., 2017), which can be used to carry out variational optimization without direct knowledge of the density function.
A persisting challenge in the training of GANs is the performance control of the discriminator. In high-dimensional spaces, the density ratio estimation by the discriminator is often inaccurate and unstable during training, and generator networks fail to learn the multimodal structure of the target distribution. Even worse, when the supports of the model distribution and the target distribution are disjoint, there exists a discriminator that can perfectly distinguish the model distribution from the target (Arjovsky & Bottou, 2017). Once such a discriminator is produced, the training of the generator comes to a complete stop, because the derivative of the so-produced discriminator with respect to the input turns out to be 0. This motivates us to introduce some form of restriction on the choice of the discriminator.
In this paper, we propose a novel weight normalization method called spectral normalization that can stabilize the training of discriminator networks. Our normalization enjoys the following favorable properties.
• The Lipschitz constant is the only hyper-parameter to be tuned, and the algorithm does not require intensive tuning of this hyper-parameter for satisfactory performance.
• Implementation is simple and the additional computational cost is small.
In fact, our normalization method functioned well even without tuning the Lipschitz constant, which is its only hyper-parameter. In this study, we provide explanations of the effectiveness of spectral normalization for GANs against other regularization techniques, such as weight normalization (Salimans & Kingma, 2016), weight clipping (Arjovsky et al., 2017), and gradient penalty (Gulrajani et al., 2017). We also show that, in the absence of complementary regularization techniques (e.g., batch normalization, weight decay, and feature matching on the discriminator), spectral normalization can improve the sheer quality of the generated images more than weight normalization and gradient penalty.
2 Method
In this section, we will lay the theoretical groundwork for our proposed method. Let us consider a
simple discriminator made of a neural network of the following form, with the input x:
$$f(x, \theta) = W^{L+1} a_L(W^L(a_{L-1}(W^{L-1}(\cdots a_1(W^1 x)\cdots)))), \qquad (1)$$
where $\theta := \{W^1, \ldots, W^L, W^{L+1}\}$ is the set of learning parameters, $W^l \in \mathbb{R}^{d_l \times d_{l-1}}$, $W^{L+1} \in \mathbb{R}^{1 \times d_L}$, and $a_l$ is an element-wise non-linear activation function. We omit the bias term of each layer for simplicity. The final output of the discriminator is given by
$$D(x, \theta) = A(f(x, \theta)), \qquad (2)$$
where $A$ is an activation function corresponding to the divergence or distance measure of the user's choice. The standard formulation of GANs is given by
$$\min_G \max_D V(G, D),$$
where the min and max over $G$ and $D$ are taken over the sets of generator and discriminator functions, respectively. The conventional form of $V(G, D)$ (Goodfellow et al., 2014) is given by $\mathbb{E}_{x \sim q_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{x' \sim p_G}[\log(1 - D(x'))]$, where $q_{\mathrm{data}}$ is the data distribution and $p_G$ is the (model) generator distribution to be learned through the adversarial min-max optimization. The activation function $A$ that is used in the $D$ of this expression is some continuous function with range $[0, 1]$ (e.g., the sigmoid function). It is known that, for a fixed generator $G$, the optimal discriminator for this form of $V(G, D)$ is given by $D^*_G(x) := q_{\mathrm{data}}(x)/(q_{\mathrm{data}}(x) + p_G(x))$.
The machine learning community has recently been pointing out that the function space from which the discriminators are selected crucially affects the performance of GANs. A number of works (Uehara et al., 2016; Qi, 2017; Gulrajani et al., 2017) advocate the importance of Lipschitz continuity in assuring the boundedness of statistics. For example, the optimal discriminator of GANs in the standard formulation above takes the form
$$D^*_G(x) = \frac{q_{\mathrm{data}}(x)}{q_{\mathrm{data}}(x) + p_G(x)} = \mathrm{sigmoid}(f^*(x)), \quad \text{where } f^*(x) = \log q_{\mathrm{data}}(x) - \log p_G(x), \qquad (3)$$
Here, $\|f\|_{\mathrm{Lip}}$ denotes the smallest value $M$ such that $\|f(x) - f(x')\| / \|x - x'\| \le M$ for any $x, x'$, with the norm being the $\ell_2$ norm.
While input-based regularizations allow for relatively easy formulations based on samples, they also suffer from the fact that they cannot impose regularization on the space outside of the supports of the generator and data distributions without introducing somewhat heuristic means. The method we introduce in this paper, called spectral normalization, aims to skirt this issue by normalizing the weight matrices using the technique devised by Yoshida & Miyato (2017).
Our spectral normalization controls the Lipschitz constant of the discriminator function $f$ by literally constraining the spectral norm of each layer $g : h_{\mathrm{in}} \mapsto h_{\mathrm{out}}$. By definition, the Lipschitz norm $\|g\|_{\mathrm{Lip}}$ is equal to $\sup_h \sigma(\nabla g(h))$, where $\sigma(A)$ is the spectral norm of the matrix $A$ (the $L_2$ matrix norm of $A$):
$$\sigma(A) := \max_{h: h \neq 0} \frac{\|Ah\|_2}{\|h\|_2} = \max_{\|h\|_2 \le 1} \|Ah\|_2, \qquad (6)$$
which is equivalent to the largest singular value of $A$. Therefore, for a linear layer $g(h) = Wh$, the norm is given by $\|g\|_{\mathrm{Lip}} = \sup_h \sigma(\nabla g(h)) = \sup_h \sigma(W) = \sigma(W)$. If the Lipschitz norm of the activation function $\|a_l\|_{\mathrm{Lip}}$ is equal to 1,¹ we can use the inequality $\|g_1 \circ g_2\|_{\mathrm{Lip}} \le \|g_1\|_{\mathrm{Lip}} \cdot \|g_2\|_{\mathrm{Lip}}$ to observe the following bound on $\|f\|_{\mathrm{Lip}}$:
$$\|f\|_{\mathrm{Lip}} \le \prod_{l=1}^{L+1} \sigma(W^l). \qquad (7)$$
Our spectral normalization normalizes the spectral norm of the weight matrix $W$ so that it satisfies the Lipschitz constraint $\sigma(W) = 1$:
$$\bar{W}_{\mathrm{SN}}(W) := W / \sigma(W). \qquad (8)$$
If we normalize each $W^l$ using (8), we can appeal to the inequality (7) and the fact that $\sigma(\bar{W}_{\mathrm{SN}}(W)) = 1$ to see that $\|f\|_{\mathrm{Lip}}$ is bounded from above by 1.
Here, we would like to emphasize the difference between our spectral normalization and the spectral norm "regularization" introduced by Yoshida & Miyato (2017). Unlike our method, spectral norm "regularization" penalizes the spectral norm by adding an explicit regularization term to the objective function. Their method is fundamentally different from ours in that they do not attempt to 'set' the spectral norm to a designated value. Moreover, when we reorganize the derivative of our normalized cost function and rewrite our objective function (12), we see that our method is augmenting the cost function with a sample-data-dependent regularization function. Spectral norm regularization, on the other hand, imposes sample-data-independent regularization on the cost function, just like L2 regularization and Lasso.
As we mentioned above, the spectral norm $\sigma(W)$ that we use to regularize each layer of the discriminator is the largest singular value of $W$. If we naively apply singular value decomposition to compute $\sigma(W)$ at each round of the algorithm, the algorithm can become computationally heavy. Instead, we can use the power iteration method to estimate $\sigma(W)$ (Golub & Van der Vorst, 2000; Yoshida & Miyato, 2017). With the power iteration method, we can estimate the spectral norm with very little additional computational time relative to the full computational cost of vanilla GANs. Please see Appendix A for the detailed method and Algorithm 1 for a summary of the actual spectral normalization algorithm.
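Below is a minimal NumPy sketch of the power-iteration estimate described above; the function name, the random test matrix, and the sanity check against a full SVD are ours and not part of the released implementation.

```python
import numpy as np

def estimate_spectral_norm(W, u=None, n_iter=1):
    """Estimate the largest singular value of W by power iteration.

    Reusing the returned u across SGD steps (the 'recycling' of
    Appendix A) lets a single iteration per update track sigma(W),
    since W changes only slightly between updates.
    """
    if u is None:
        u = np.random.randn(W.shape[0])
        u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # approximates sigma(W), cf. Eq. (19)
    return sigma, u

# Sanity check against a full SVD (reference only; not used in training)
W = np.random.randn(64, 128)
sigma, _ = estimate_spectral_norm(W, n_iter=100)
print(sigma, np.linalg.svd(W, compute_uv=False)[0])
```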
¹ For example, ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011) and leaky ReLU (Maas et al., 2013) satisfy the condition, and many popular activation functions satisfy a K-Lipschitz constraint for some predefined K as well.
that matches the target distribution only at a select few features. Weight clipping (Arjovsky et al., 2017) also suffers from the same pitfall.
Our spectral normalization, on the other hand, does not suffer from such a conflict of interest. Note that the Lipschitz constant of a linear operator is determined only by the maximum singular value. In other words, the spectral norm is independent of rank. Thus, unlike weight normalization, our spectral normalization allows the parameter matrix to use as many features as possible while satisfying the local 1-Lipschitz constraint. Our spectral normalization leaves more freedom in choosing the number of singular components (features) to feed to the next layer of the discriminator.
Brock et al. (2016) introduced orthonormal regularization on each weight to stabilize the training of GANs. In their work, Brock et al. (2016) augmented the adversarial objective function by adding the following term:
$$\|W^T W - I\|_F^2. \qquad (14)$$
While this seems to serve the same purpose as spectral normalization, orthonormal regularization is mathematically quite different from our spectral normalization because orthonormal regularization destroys the information about the spectrum by setting all the singular values to one. On the other hand, spectral normalization only scales the spectrum so that its maximum will be one.
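For reference, a minimal sketch of the penalty in Eq. (14); the helper name is ours, and we assume the weight is stored as a 2-D array whose columns index the input dimension.

```python
import numpy as np

def orthonormal_penalty(W):
    """||W^T W - I||_F^2 as in Eq. (14). Driving this term to zero pushes
    every singular value of W toward 1, whereas spectral normalization
    only rescales the spectrum so that its maximum is 1."""
    gram = W.T @ W
    return np.sum((gram - np.eye(W.shape[1])) ** 2)
```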
Gulrajani et al. (2017) used the gradient penalty method in combination with WGAN. In their work, they placed a K-Lipschitz constraint on the discriminator by augmenting the objective function with a regularizer that rewards the function for having a local 1-Lipschitz constant (i.e., $\|\nabla_{\hat{x}} f\|_2 = 1$) at discrete sets of points of the form $\hat{x} := \epsilon \tilde{x} + (1 - \epsilon) x$ generated by interpolating a sample $\tilde{x}$ from the generative distribution and a sample $x$ from the data distribution. While this rather straightforward approach does not suffer from the problems we mentioned above regarding the effective dimension of the feature space, it has an obvious weakness of being heavily dependent on the support of the current generative distribution. As a matter of course, the generative distribution and its support gradually change in the course of the training, and this can destabilize the effect of such regularization. In fact, we empirically observed that a high learning rate can destabilize the performance of WGAN-GP. On the contrary, our spectral normalization regularizes the function on the operator space, and the effect of the regularization is more stable with respect to the choice of the batch. Training with our spectral normalization does not easily destabilize with an aggressive learning rate. Moreover, WGAN-GP requires more computational cost than our spectral normalization with single-step power iteration, because the computation of $\|\nabla_{\hat{x}} f\|_2$ requires one whole round of forward and backward propagation. In the appendix section, we compare the computational cost of the two methods for the same number of updates.
4 Experiments
In order to evaluate the efficacy of our approach and investigate the reasons behind its efficacy, we conducted a set of extensive experiments on unsupervised image generation on CIFAR-10 (Torralba et al., 2008) and STL-10 (Coates et al., 2011), and compared our method against other normalization techniques. To see how our method fares on a large dataset, we also applied our method to the ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015). This section is structured as follows. First, we will discuss the objective functions we used to train the architectures, and then we will describe the optimization settings we used in the experiments. We will then explain two performance measures on the images to evaluate the images produced by the trained generators. Finally, we will summarize our results on CIFAR-10, STL-10, and ImageNet.
As for the architecture of the discriminator and generator, we used convolutional neural networks. Also, for the evaluation of the spectral norm for the convolutional weight $W \in \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{in}} \times h \times w}$, we treated the operator as a 2-D matrix of dimension $d_{\mathrm{out}} \times (d_{\mathrm{in}} h w)$.³ We trained the parameters of the generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 3 in the appendix section for more details of the architectures.
³ Note that, since we are conducting the convolution discretely, the spectral norm will depend on the size of the stride and padding. However, the answer will only differ by some predefined K.
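The sketch below illustrates this reshaping convention on a hypothetical kernel; the naive full SVD is used here only for clarity, whereas the experiments rely on the power-iteration estimate of Appendix A.

```python
import numpy as np

# Hypothetical convolution kernel of shape (d_out, d_in, h, w)
W_conv = np.random.randn(128, 64, 3, 3)

# Flatten to a 2-D matrix of shape d_out x (d_in * h * w) and take its
# largest singular value as the spectral norm of the layer.
W_mat = W_conv.reshape(W_conv.shape[0], -1)
sigma = np.linalg.svd(W_mat, compute_uv=False)[0]

# Normalized kernel actually used in the convolution
W_bar = W_conv / sigma
```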
For all methods other than WGAN-GP, we used the following standard objective function for the adversarial loss:
$$V(G, D) := \mathbb{E}_{x \sim q_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))], \qquad (15)$$
where $z \in \mathbb{R}^{d_z}$ is a latent variable, $p(z)$ is the standard normal distribution $\mathcal{N}(0, I)$, and $G : \mathbb{R}^{d_z} \to \mathbb{R}^{d_0}$ is a deterministic generator function. We set $d_z$ to 128 for all of our experiments. For the updates of $G$, we used the alternate cost $-\mathbb{E}_{z \sim p(z)}[\log(D(G(z)))]$ proposed by Goodfellow et al. (2014), as used in Goodfellow et al. (2014) and Warde-Farley & Bengio (2017). For the updates of $D$, we used the original cost defined in (15). We also tested the performance of the algorithm with the so-called hinge loss, which is given by
$$V_D(\hat{G}, D) = \mathbb{E}_{x \sim q_{\mathrm{data}}(x)}\left[\min(0, -1 + D(x))\right] + \mathbb{E}_{z \sim p(z)}\left[\min\left(0, -1 - D(\hat{G}(z))\right)\right], \qquad (16)$$
$$V_G(G, \hat{D}) = -\mathbb{E}_{z \sim p(z)}\left[\hat{D}(G(z))\right], \qquad (17)$$
respectively for the discriminator and the generator. Optimizing these objectives is equivalent to minimizing the so-called reverse KL divergence $\mathrm{KL}[p_g \| q_{\mathrm{data}}]$. This type of loss has already been proposed and used in Lim & Ye (2017) and Tran et al. (2017). The algorithm based on the hinge loss also showed good performance when evaluated with the inception score and FID. For Wasserstein GANs with gradient penalty (WGAN-GP) (Gulrajani et al., 2017), we used the following objective function: $V(G, D) := \mathbb{E}_{x \sim q_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z \sim p(z)}[D(G(z))] - \lambda \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2]$, where the regularization term is the one we introduced in Appendix D.4.
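For concreteness, a small sketch of the hinge objectives (16) and (17) written as minimization losses over raw discriminator outputs; the function names are ours.

```python
import numpy as np

def discriminator_hinge_loss(d_real, d_fake):
    """Negative of Eq. (16): minimized by the discriminator, given its
    raw (pre-activation) outputs on real and generated samples."""
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))

def generator_hinge_loss(d_fake):
    """Eq. (17): the generator is trained to raise the discriminator's
    output on its own samples."""
    return -np.mean(d_fake)
```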
For quantitative assessment of generated examples, we used inception score (Salimans et al., 2016)
and Fréchet inception distance (FID) (Heusel et al., 2017). Please see Appendix B.1 for the details
of each score.
In this section, we report the accuracy of spectral normalization (we use the abbreviation SN-GAN for spectrally normalized GANs) during the training, and the dependence of the algorithm's performance on the hyperparameters of the optimizer. We also compare the performance quality of the algorithm against those of other regularization/normalization techniques for the discriminator networks, including: weight clipping (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), batch normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), weight normalization (WN) (Salimans & Kingma, 2016), and orthonormal regularization (orthonormal) (Brock et al., 2016). In order to evaluate the stand-alone efficacy of the gradient penalty, we also applied the gradient penalty term to the standard adversarial loss of GANs (15). We refer to this method as 'GAN-GP'. For weight clipping, we followed the original work of Arjovsky et al. (2017) and set the clipping constant c to 0.01 for the convolutional weight of each layer. For the gradient penalty, we set λ to 10, as suggested in Gulrajani et al. (2017). For orthonormal regularization, we initialized each weight of D with a randomly selected orthonormal operator and trained GANs with the objective function augmented with the regularization term used in Brock et al. (2016). For all comparative studies throughout, we excluded the multiplier parameter γ in the weight normalization method, as well as in the batch normalization and layer normalization methods. This was done in order to prevent the methods from overtly violating the Lipschitz condition. When we experimented with different multiplier parameters, we were in fact not able to achieve any improvement.
For optimization, we used the Adam optimizer (Kingma & Ba, 2015) in all of our experiments. We tested with 6 settings for (1) ndis, the number of updates of the discriminator per update of the generator, and (2) the learning rate α and the first- and second-order momentum parameters (β1, β2) of Adam. We list the details of these settings in Table 1 in the appendix section. Out of these 6 settings, A, B, and C are the settings used in previous representative works. The purpose of the settings D, E, and F is to evaluate the performance of the algorithms implemented with more aggressive learning rates. For the details of the architectures of the convolutional networks deployed in the generator and the discriminator, we refer the readers to Table 3 in the appendix section. The number of updates for the GAN generator was 100K for all experiments, unless otherwise noted.
Firstly, we inspected the spectral norm of each layer during the training to make sure that our spectral normalization procedure is indeed serving its purpose. As we can see in Figure 9 in Appendix C.1,
Table 1: Hyper-parameter settings we tested in our experiments. †, ‡ and ⋆ are the hyperparameter settings following Gulrajani et al. (2017), Warde-Farley & Bengio (2017) and Radford et al. (2016), respectively.

Setting   α        β1    β2     ndis
A†        0.0001   0.5   0.9    5
B‡        0.0001   0.5   0.999  1
C⋆        0.0002   0.5   0.999  1
D         0.001    0.5   0.9    5
E         0.001    0.5   0.999  5
F         0.001    0.9   0.999  5
[Figure 1 bar charts: inception scores under settings A–F for Weight clip., GAN-GP, WGAN-GP, BN, LN, WN, Orthonormal, and SN.]
Figure 1: Inception scores on CIFAR-10 and STL-10 with different methods and hyperparameters
(higher is better).
the spectral norms of these layers float in the 1–1.05 region throughout the training. Please see Appendix C.1 for more details.
In Figures 1 and 2 we show the inception scores of each method with the settings A–F. We can see that spectral normalization is relatively robust to aggressive learning rates and momentum parameters. WGAN-GP fails to train good GANs at high learning rates and high momentum parameters on both CIFAR-10 and STL-10. Orthonormal regularization performed poorly for setting E on STL-10, but performed slightly better than our method with the optimal setting. These results suggest that our method is more robust than other methods with respect to changes in the training settings. Also, the optimal performance of weight normalization was inferior to both WGAN-GP and spectral normalization on STL-10, which consists of more diverse examples than CIFAR-10. The best scores of spectral normalization are better than those of almost all other methods on both CIFAR-10 and STL-10.
In Table 2, we show the inception scores of the different methods with optimal settings on the CIFAR-10 and STL-10 datasets. We see that SN-GANs performed better than almost all contemporaries with the optimal settings. SN-GANs performed even better with the hinge loss (17).⁴ For training with the same number of iterations, SN-GANs fell behind orthonormal regularization on STL-10. For a more detailed comparison between orthonormal regularization and spectral normalization, please see Section 4.1.2.
In Figure 6 we show the images produced by the generators trained with WGAN-GP, weight normalization, and spectral normalization. SN-GANs were consistently better than GANs with weight normalization in terms of the quality of the generated images. To be more precise, as we mentioned in Section 3, the set of images generated with spectral normalization was clearer and more diverse than the images produced with weight normalization. We can also see that WGAN-GP failed to train good GANs with high learning rates and high momentums (D, E, and F). The generated images
⁴ As for STL-10, we also ran SN-GANs for twice as many iterations because it did not seem to converge. Even so, this elongated training still completes before WGAN-GP with the original number of iterations, because the optimal setting of SN-GANs (setting B, ndis = 1) is computationally light.
[Figure 2 bar charts: FID (log scale) under settings A–F for Weight clip., GAN-GP, WGAN-GP, BN, LN, WN, Orthonormal, and SN.]
(a) CIFAR-10 (b) STL-10
Figure 2: FIDs on CIFAR-10 and STL-10 with different methods and hyperparameters (lower is
better).
Table 2: Inception scores and FIDs with unsupervised image generation on CIFAR-10. † (Radford
et al., 2016) (experimented by Yang et al. (2017)), ‡ (Yang et al., 2017), ∗ (Warde-Farley & Bengio,
2017), †† (Gulrajani et al., 2017)
with GAN-GP, batch normalization, and layer normalization are shown in Figure 12 in the appendix section.
We also compared our algorithm against multiple benchmark methods and summarized the results in the bottom half of Table 2. We also tested the performance of our method on the ResNet-based GANs used in Gulrajani et al. (2017). Please note that the methods listed there differ in both optimization methods and model architecture. Please see Tables 4 and 5 in the appendix section for the detailed network architectures. Our implementation of our algorithm was able to perform better than almost all of its predecessors.
⁵ For our ResNet experiments, we trained the same architecture with multiple random seeds for weight initialization and produced models with different parameters. We then generated 5000 images 10 times and computed the average inception score for each model. The values for ResNet in the table are the mean and standard deviation of the scores computed over the set of models trained with different seeds.
[Figure 3 plots: squared singular values (scaled so the largest equals 1) versus singular value index for layers 1–7 of the discriminator, comparing WC, WN, and SN. Panels: (a) CIFAR-10, (b) STL-10.]
Figure 3: Squared singular values of weight matrices trained with different methods: weight clipping (WC), weight normalization (WN), and spectral normalization (SN). We scaled the singular values so that the largest singular value is equal to 1. For WN and SN, we calculated the singular values of the normalized weight matrices.
Singular values analysis on the weights of the discriminator D. In Figure 3, we show the squared singular values of the weight matrices in the final discriminator D produced by each method using the parameters that yielded the best inception score. As we predicted in Section 3, the singular values of the first to fifth layers trained with weight clipping and weight normalization concentrate on a few components. That is, the weight matrices of these layers tend to be rank deficient. On the other hand, the singular values of the weight matrices in those layers trained with spectral normalization are more broadly distributed. When the goal is to distinguish a pair of probability distributions on a low-dimensional nonlinear data manifold embedded in a high-dimensional space, rank deficiencies in lower layers can be especially fatal. Outputs of lower layers have gone through only a few sets of rectified linear transformations, which means that they tend to lie in a space that is linear in most parts. Marginalizing out many features of the input distribution in such a space can result in an oversimplified discriminator. We can actually confirm the effect of this phenomenon on the generated images, especially in Figure 6b. The images generated with spectral normalization are more diverse and complex than those generated with weight normalization.
Training time. On CIFAR-10, SN-GANs are slightly slower than weight normalization (about 110–120% computational time), but significantly faster than WGAN-GP. As we mentioned in Section 3, WGAN-GP is slower than other methods because WGAN-GP needs to calculate the gradient of the gradient norm $\|\nabla_x D\|_2$. For STL-10, the computational time of SN-GANs is almost the same as that of vanilla GANs, because the relative computational cost of the power iteration (18) is negligible compared to the cost of forward and backward propagation (the image size of STL-10, 48 × 48, is larger than that of CIFAR-10). Please see Figure 10 in the appendix section for the actual computational time.
In order to highlight the difference between our spectral normalization and orthonormal regularization, we conducted an additional set of experiments. As we explained in Section 3, orthonormal regularization is different from our method in that it destroys the spectral information and puts equal emphasis on all feature dimensions, including the ones that 'shall' be weeded out in the training process. To see the extent of its possibly detrimental effect, we experimented by increasing the dimension of the feature space,⁶
[Figure 4 plot: inception score on STL-10 (about 7.9–8.6) versus the relative size of the feature map dimension at the final layer (0.5–8.0, original = 1.0) for SN-GANs and orthonormal regularization.]
Figure 4: The effect on the performance on STL-10 induced by changing the feature map dimension of the final layer. The width of the highlighted region represents the standard deviation of the results over multiple seeds of weight initialization. Orthonormal regularization does not perform well with a large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions, including the ones that are unnecessary. For the setting of the optimizer's hyper-parameters, we used setting C, which was optimal for orthonormal regularization.
[Figure 5 plot: inception score (10–22) versus generator iteration (up to 4.5 × 10^5) for SN-GANs and orthonormal regularization.]
Figure 5: Learning curves for conditional image generation in terms of Inception score for SN-
GANs and GANs with orthonormal regularization on ImageNet.
especially at the final layer (7th conv), for which training with our spectral normalization prefers a relatively small feature space (dimension < 100; see Figure 3b). As for the training settings, we selected the parameters for which orthonormal regularization performed optimally. Figure 4 shows the result of our experiments. As we predicted, the performance of orthonormal regularization deteriorates as we increase the dimension of the feature maps at the final layer. Our SN-GANs, on the other hand, do not falter with this modification of the architecture. Thus, at least from this perspective, we may say that our method is more robust with respect to changes of the network architecture.
To show that our method remains effective on a large high-dimensional dataset, we also applied our method to the training of conditional GANs on the ILSVRC2012 dataset with 1000 classes, each consisting of approximately 1300 images, which we compressed to 128 × 128 pixels. Regarding the adversarial loss for conditional GANs, we used practically the same formulation used in Mirza & Osindero (2014), except that we replaced the standard GAN loss with the hinge loss (17). Please see Appendix B.3 for the details of the experimental settings.
⁶ More precisely, we simply increased the input dimension and the output dimension by the same factor. In Figure 4, 'relative size' = 1.0 implies that the layer structure is the same as the original.
GANs without normalization and GANs with layer normalization collapsed at the beginning of training and failed to produce any meaningful images. GANs with orthonormal regularization (Brock et al., 2016) and our spectral normalization, on the other hand, were able to produce images. The inception score of orthonormal regularization, however, plateaued around the 20K-th iteration, while SN kept improving even afterward (Figure 5). To our knowledge, our research is the first of its kind to succeed in producing decent images from the ImageNet dataset with a single pair of a discriminator and a generator (Figure 7). To measure the degree of mode collapse, we followed the footsteps of Odena et al. (2017) and computed the intra MS-SSIM (Odena et al., 2017) for pairs of independently generated GAN images of each class. We see that our SN-GANs (intra MS-SSIM = 0.101) suffer less from mode collapse than AC-GANs (intra MS-SSIM ∼ 0.25).
To ensure that the superiority of our method is not limited to our specific setting, we also compared the performance of SN-GANs against orthonormal regularization on conditional GANs with a projection discriminator (Miyato & Koyama, 2018) as well as on standard (unconditional) GANs. In our experiments, SN-GANs achieved better performance than orthonormal regularization in both settings (see Figure 13 in the appendix section).
5 Conclusion
This paper proposes spectral normalization as a stabilizer for the training of GANs. When we apply spectral normalization to GANs on image generation tasks, the generated examples are more diverse than those obtained with conventional weight normalization, and they achieve better or comparable inception scores relative to previous studies. The method imposes global regularization on the discriminator, as opposed to the local regularization introduced by WGAN-GP, and the two can possibly be used in combination. In future work, we would like to further investigate where our method stands amongst other methods on a more theoretical basis, and to experiment with our algorithm on larger and more complex datasets.
Acknowledgments
We would like to thank the members of Preferred Networks, Inc., particularly Shin-ichi Maeda,
Eiichi Matsumoto, Masaki Watanabe and Keisuke Yahata for insightful comments and discussions.
We also would like to thank anonymous reviewers and commenters on the OpenReview forum for
insightful discussions.
References
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks.
In ICLR, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML,
pp. 214–223, 2017.
Devansh Arpit, Yingbo Zhou, Bhargava U Kota, and Venu Govindaraju. Normalization propagation: A para-
metric technique for removing internal covariate shift in deep networks. In ICML, pp. 1168–1176, 2016.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016.
Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston. Neural photo editing with introspective
adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature
learning. In AISTATS, pp. 215–223, 2011.
Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Mod-
ulating early visual processing by language. In NIPS, pp. 6576–6586, 2017.
DC Dowson and BV Landau. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 12(3):450–455, 1982.
Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In
ICLR, 2017.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, pp.
315–323, 2011.
Gene H Golub and Henk A Van der Vorst. Eigenvalue computation in the 20th century. Journal of Computa-
tional and Applied Mathematics, 123(1):35–65, 2000.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672–2680, 2014.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
CVPR, pp. 770–778, 2016.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp
Hochreiter. GANs trained by a two time-scale update rule converge to a nash equilibrium. arXiv preprint
arXiv:1706.08500, 2017.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, pp. 4565–4573, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In ICML, pp. 448–456, 2015.
Kevin Jarrett, Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun. What is the best multi-stage
architecture for object recognition? In ICCV, pp. 2146–2153, 2009.
Kui Jia, Dacheng Tao, Shenghua Gao, and Xiangmin Xu. Improving training of deep neural networks via
singular value bounding. In CVPR, 2017.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue
generation. In EMNLP, pp. 2147–2159, 2017.
Jae Hyun Lim and Jong Chul Ye. Geometric GAN. arXiv preprint arXiv:1705.02894, 2017.
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic
models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784,
2014.
Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In ICLR, 2018.
Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. NIPS Workshop on
Adversarial Training, 2017.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814, 2010.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using
variational divergence minimization. In NIPS, pp. 271–279, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier
GANs. In ICML, pp. 2642–2651, 2017.
Guo-Jun Qi. Loss-sensitive generative adversarial networks on Lipschitz densities. arXiv preprint arXiv:1701.06264, 2017.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional
generative adversarial networks. In ICLR, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
Masaki Saito, Eiichi Matsumoto, and Shunta Saito. Temporal generative adversarial nets with singular value
clipping. In ICCV, 2017.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate train-
ing of deep neural networks. In NIPS, pp. 901–909, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved
techniques for training GANs. In NIPS, pp. 2226–2234, 2016.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan,
Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.
Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework
for deep learning. In Proceedings of workshop on machine learning systems (LearningSys) in the twenty-
ninth annual conference on neural information processing systems (NIPS), 2015.
Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonpara-
metric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30
(11):1958–1970, 2008.
Dustin Tran, Rajesh Ranganath, and David M Blei. Deep and hierarchical implicit models. arXiv preprint
arXiv:1702.08896, 2017.
Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial
nets from a density ratio estimation perspective. NIPS Workshop on Adversarial Training, 2016.
David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature
matching. In ICLR, 2017.
Sitao Xiang and Hao Li. On the effect of batch normalization and weight normalization in generative adversarial
networks. arXiv preprint arXiv:1704.03971, 2017.
Jianwei Yang, Anitha Kannan, Dhruv Batra, and Devi Parikh. LR-GAN: Layered recursive generative adver-
sarial networks for image generation. ICLR, 2017.
Yuichi Yoshida and Takeru Miyato. Spectral norm regularization for improving the generalizability of deep
learning. arXiv preprint arXiv:1705.10941, 2017.
(a) CIFAR-10
(b) STL-10
Figure 6: Generated images with different methods: WGAN-GP, weight normalization, and spectral normalization on CIFAR-10 and STL-10.
Figure 7: 128 × 128 pixel images generated by SN-GANs trained on the ILSVRC2012 dataset. The inception score is 21.1 ± 0.35.
We can then approximate the spectral norm of $W$ with the pair of so-approximated singular vectors:
$$\sigma(W) \approx \tilde{u}^T W \tilde{v}. \qquad (19)$$
If we use SGD for updating $W$, the change in $W$ at each update would be small, and hence so would the change in its largest singular value. In our implementation, we took advantage of this fact and reused the $\tilde{u}$ computed at each step of the algorithm as the initial vector in the subsequent step. In fact, with this 'recycling' procedure, one round of power iteration was sufficient in the actual experiments to achieve satisfactory performance. Algorithm 1 summarizes the computation of the spectrally normalized weight matrix $\bar{W}$ with this approximation. Note that this procedure is very computationally cheap even in comparison to the calculation of the forward and backward propagations on neural networks. Please see Figure 10 for the actual computational time with and without spectral normalization.
Algorithm 1.
• Initialize $\tilde{u}_l \in \mathbb{R}^{d_l}$ for $l = 1, \dots, L$ with a random vector (sampled from an isotropic distribution).
• For each update and each layer $l$:
  1. Apply the power iteration method to the unnormalized weight $W^l$:
     $$\tilde{v}_l \leftarrow (W^l)^T \tilde{u}_l / \|(W^l)^T \tilde{u}_l\|_2, \qquad (20)$$
     $$\tilde{u}_l \leftarrow W^l \tilde{v}_l / \|W^l \tilde{v}_l\|_2. \qquad (21)$$
  2. Calculate $\bar{W}_{\mathrm{SN}}$ with the spectral norm:
     $$\bar{W}^l_{\mathrm{SN}}(W^l) = W^l / \sigma(W^l), \quad \text{where } \sigma(W^l) = \tilde{u}_l^T W^l \tilde{v}_l. \qquad (22)$$
  3. Update $W^l$ with SGD on a mini-batch dataset $\mathcal{D}_M$ with a learning rate $\alpha$:
     $$W^l \leftarrow W^l - \alpha \nabla_{W^l} \ell(\bar{W}^l_{\mathrm{SN}}(W^l), \mathcal{D}_M). \qquad (23)$$
B Experimental Settings
B.1 Performance Measures
examples. On its own, the Fréchet distance (Dowson & Landau, 1982) is the 2-Wasserstein distance between two distributions $p_1$ and $p_2$, assuming they are both multivariate Gaussian distributions:
$$F(p_1, p_2) = \|\mu_{p_1} - \mu_{p_2}\|_2^2 + \mathrm{trace}\left(C_{p_1} + C_{p_2} - 2(C_{p_1} C_{p_2})^{1/2}\right), \qquad (24)$$
where $\{\mu_{p_1}, C_{p_1}\}$ and $\{\mu_{p_2}, C_{p_2}\}$ are the mean and covariance of samples from $p_1$ and $p_2$, respectively. If $f$ is the output of the final layer of the inception model before the softmax, the Fréchet inception distance (FID) between two distributions $p_1$ and $p_2$ on the images is the distance between $f \circ p_1$ and $f \circ p_2$. We computed the Fréchet inception distance between the true distribution and the generated distribution empirically over 10000 and 5000 samples. Multiple repetitions of the experiments did not exhibit any notable variations in this score.
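A minimal sketch of Eq. (24) given precomputed feature statistics (means and covariances of Inception features); the helper name is ours and SciPy is assumed to be available.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """2-Wasserstein distance between two Gaussians, Eq. (24). For FID,
    (mu, cov) are the mean and covariance of Inception features of the
    real and the generated images, respectively."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts arising
        covmean = covmean.real     # from numerical error in the matrix sqrt
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```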
For the comparative study, we experimented with the recent ResNet architecture of Gulrajani et al. (2017) as well as the standard CNN. For this additional set of experiments, we used Adam again for the optimization and used the same hyper-parameters as Gulrajani et al. (2017) (α = 0.0002, β1 = 0, β2 = 0.9, ndis = 5). For our SN-GANs, we doubled the feature map dimension in the generator from the original, because this modification achieved better results. Note that when we doubled the dimension of the feature maps for the WGAN-GP experiment, however, the performance deteriorated.
The images used in this set of experiments were resized to 128 × 128 pixels. The details of the architecture are given in Table 6. For the generator network of conditional GANs, we used conditional batch normalization (CBN) (Dumoulin et al., 2017; de Vries et al., 2017). Namely, we replaced the standard batch normalization layer with the CBN conditional on the label information y ∈ {1, . . . , 1000}. For the optimization, we used Adam with the same hyperparameters we used for ResNet on the CIFAR-10 and STL-10 datasets. We trained the networks with 450K generator updates, and applied linear decay to the learning rate after 400K iterations so that the rate would be 0 at the end.
Table 3: Standard CNN models for CIFAR-10 and STL-10 used in our experiments on image generation. The slopes of all lReLU functions in the networks are set to 0.1.

(a) Generator, Mg = 4 for SVHN and CIFAR-10, and Mg = 6 for STL-10:
    z ∈ R128 ∼ N (0, I)
    dense → Mg × Mg × 512
    4×4, stride=2 deconv. BN 256 ReLU
    4×4, stride=2 deconv. BN 128 ReLU
    4×4, stride=2 deconv. BN 64 ReLU
    3×3, stride=1 conv. 3 Tanh

(b) Discriminator, M = 32 for SVHN and CIFAR-10, and M = 48 for STL-10:
    RGB image x ∈ RM×M×3
    3×3, stride=1 conv 64 lReLU
    4×4, stride=2 conv 64 lReLU
    3×3, stride=1 conv 128 lReLU
    4×4, stride=2 conv 128 lReLU
    3×3, stride=1 conv 256 lReLU
    4×4, stride=2 conv 256 lReLU
    3×3, stride=1 conv. 512 lReLU
    dense → 1
Table 6: ResNet architectures for image generation on ImageNet dataset. For the generator of condi-
tional GANs, we replaced the usual batch normalization layer in the ResBlock with the conditional
batch normalization layer. As for the model of the projection discriminator, we used the same
architecture used in Miyato & Koyama (2018). Please see the paper for the details.
C Appendix Results
Figure 9 shows the spectral norm of each layer in the discriminator over the course of the training. The setting of the optimizer is C in Table 1 throughout the training. In fact, the norms do not deviate by more than 0.05 for the most part. As an exception, the 6th and 7th convolutional layers, which have the largest rank, deviate by more than 0.1 at the beginning of the training, but the norms of these layers also stabilize around 1 after some iterations.
[Figure 9 plot: σ(W̄) (0.95–1.20) versus update (0–100000) for convolutional layers conv0–conv6.]
Figure 9: Spectral norms of all seven convolutional layers in the standard CNN during the course of training on CIFAR-10.
[Figure 10 bar charts: seconds for 100 generator updates for WGAN-GP, WN, SN, and vanilla GANs. Panels: (a) CIFAR-10 (image size 32 × 32 × 3), (b) STL-10 (image size 48 × 48 × 3).]
Figure 11 shows the effect of ndis on the performance of weight normalization and spectral normalization. All results shown in Figure 11 follow setting D, except for the value of ndis. For WN, the performance deteriorates with larger ndis, which amounts to computing the minimax with better accuracy. Our SN does not suffer from this unintended effect.
[Figure 11 plot: inception score after 10000 generator updates versus ndis (1, 2, 5, 10, 20) for SN and WN.]
Figure 11: The effect of ndis on spectral normalization and weight normalization. The shaded
region represents the variance of the result over different seeds.
Figure 12: Generated images with GAN-GP, Layer Norm and Batch Norm on CIFAR-10
[Figure 13 plots: inception score versus generator iteration (up to 4.5 × 10^5) for SN-GANs and orthonormal regularization.]
Figure 13: Learning curves in terms of inception score for SN-GANs and GANs with orthonormal regularization on ImageNet. Figure (a) shows the results for the standard (unconditional) GANs, and figure (b) shows the results for the conditional GANs trained with the projection discriminator (Miyato & Koyama, 2018).
This section is dedicated to the comparative study of spectral normalization and other regularization methods for discriminators. In particular, we will show that contemporary regularizations, including weight normalization and weight clipping, implicitly impose constraints on the weight matrices that place an unnecessary restriction on the search space of the discriminator. More specifically, we will show that weight normalization and weight clipping unwittingly favor low-rank weight matrices. This can force the trained discriminator to be largely dependent on a select few features, rendering the algorithm able to match the model distribution with the target distribution only on a very low-dimensional feature space.
The weight normalization introduced by Salimans & Kingma (2016) is a method that normalizes the $\ell_2$ norm of each row vector in the weight matrix:⁸
$$\bar{W}_{\mathrm{WN}} := \left[\bar{w}_1^T, \bar{w}_2^T, \ldots, \bar{w}_{d_o}^T\right]^T, \quad \text{where } \bar{w}_i(w_i) := w_i / \|w_i\|_2, \qquad (25)$$
where $\bar{w}_i$ and $w_i$ are the $i$-th row vectors of $\bar{W}_{\mathrm{WN}}$ and $W$, respectively.
Still another technique to regularize the weight matrix is to use the Frobenius norm:
$$\bar{W}_{\mathrm{FN}} := W / \|W\|_F, \qquad (26)$$
where $\|W\|_F := \sqrt{\mathrm{tr}(W^T W)} = \sqrt{\sum_{i,j} w_{ij}^2}$.
Originally, these regularization techniques were invented with the goal of improving the generalization performance of supervised training (Salimans & Kingma, 2016; Arpit et al., 2016). However, recent works in the field of GANs (Salimans et al., 2016; Xiang & Li, 2017) found another raison d'être for them as regularizers of discriminators, and succeeded in improving the performance of the original.
⁸ In the original literature, weight normalization was introduced as a method for reparametrization of the form $\bar{W}_{\mathrm{WN}} := \left[\gamma_1 \bar{w}_1^T, \gamma_2 \bar{w}_2^T, \ldots, \gamma_{d_o} \bar{w}_{d_o}^T\right]^T$, where $\gamma_i \in \mathbb{R}$ is to be learned in the course of the training. In this work, we deal with the case $\gamma_i = 1$ so that we can assess the methods under the Lipschitz constraint.
These methods can in fact render the trained discriminator $D$ to be $K$-Lipschitz for some prescribed $K$ and achieve the desired effect to a certain extent. However, weight normalization (25) imposes the following implicit restriction on the choice of $\bar{W}_{\mathrm{WN}}$:
$$\sigma_1(\bar{W}_{\mathrm{WN}})^2 + \sigma_2(\bar{W}_{\mathrm{WN}})^2 + \cdots + \sigma_T(\bar{W}_{\mathrm{WN}})^2 = d_o, \quad \text{where } T = \min(d_i, d_o), \qquad (27)$$
where $\sigma_t(A)$ is the $t$-th singular value of matrix $A$. The above equation holds because $\sum_{t=1}^{\min(d_i, d_o)} \sigma_t(\bar{W}_{\mathrm{WN}})^2 = \mathrm{tr}(\bar{W}_{\mathrm{WN}} \bar{W}_{\mathrm{WN}}^T) = \sum_{i=1}^{d_o} \frac{w_i w_i^T}{\|w_i\|_2 \|w_i\|_2} = d_o$. Under this restriction, the norm $\|\bar{W}_{\mathrm{WN}} h\|_2$ for a fixed unit vector $h$ is maximized at $\|\bar{W}_{\mathrm{WN}} h\|_2 = \sqrt{d_o}$ when $\sigma_1(\bar{W}_{\mathrm{WN}}) = \sqrt{d_o}$ and $\sigma_t(\bar{W}_{\mathrm{WN}}) = 0$ for $t = 2, \ldots, T$, which means that $\bar{W}_{\mathrm{WN}}$ is of rank one. Using such $W$ corresponds to using only one feature to discriminate the model probability distribution from the target. Similarly, Frobenius normalization requires $\sigma_1(\bar{W}_{\mathrm{FN}})^2 + \sigma_2(\bar{W}_{\mathrm{FN}})^2 + \cdots + \sigma_T(\bar{W}_{\mathrm{FN}})^2 = 1$, and the same argument as above follows.
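The identity (27) is easy to check numerically on a random matrix (a sanity check of ours, not part of the paper's experiments):

```python
import numpy as np

d_o, d_i = 50, 100
W = np.random.randn(d_o, d_i)
W_wn = W / np.linalg.norm(W, axis=1, keepdims=True)   # Eq. (25)
s = np.linalg.svd(W_wn, compute_uv=False)
print(np.sum(s ** 2))   # ~= d_o = 50, as stated in Eq. (27)
```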
Here, we see a critical problem in these two regularization methods. In order to retain as much of the norm of the input as possible and hence to make the discriminator more sensitive, one would hope to make the norm of $\bar{W}_{\mathrm{WN}} h$ large. For weight normalization, however, this comes at the cost of reducing the rank and hence the number of features to be used by the discriminator. Thus, there is a conflict of interest between weight normalization and our desire to use as many features as possible to distinguish the generator distribution from the target distribution. The former interest often reigns over the other in many cases, inadvertently diminishing the number of features used by the discriminators. Consequently, the algorithm would produce a rather arbitrary model distribution that matches the target distribution only at a select few features.
Our spectral normalization, on the other hand, does not suffer from such a conflict of interest. Note that the Lipschitz constant of a linear operator is determined only by the maximum singular value. In other words, the spectral norm is independent of rank. Thus, unlike weight normalization, our spectral normalization allows the parameter matrix to use as many features as possible while satisfying the local 1-Lipschitz constraint. Our spectral normalization leaves more freedom in choosing the number of singular components (features) to feed to the next layer of the discriminator.
To see this more visually, we refer the reader to Figure 14. Note that spectral normalization allows for a wider range of choices than weight normalization.
[Figure 14 plot: squared singular values s² (0–1) versus index of s (0–49) for SN and WN.]
Figure 14: Visualization of the difference between spectral normalization (red) and weight normalization (blue) on possible sets of singular values. The possible sets of singular values are plotted in increasing order for weight normalization (blue) and for spectral normalization (red). For the set of singular values permitted under the spectral normalization condition, we scaled $\bar{W}_{\mathrm{WN}}$ by $1/\sqrt{d_o}$ so that its spectral norm is exactly 1. By the definition of weight normalization, the areas under the blue curves are all bound to be 1. Note that the range of choice for weight normalization is small.
In summary, weight normalization and Frobenius normalization favor skewed distributions of singular values, making the column spaces of the weight matrices lie in (approximately) low-dimensional vector spaces. On the other hand, our spectral normalization does not compromise the number of feature dimensions used by the discriminator. In fact, we will experimentally show that GANs trained with our spectral normalization can generate a synthetic dataset with wider variety and a higher inception score than the GANs trained with the other two regularization methods.
Still another regularization technique is weight clipping, introduced by Arjovsky et al. (2017) in their training of Wasserstein GANs. Weight clipping simply truncates each element of the weight matrices so that its absolute value is bounded above by a prescribed constant $c \in \mathbb{R}_+$. Unfortunately, weight clipping suffers from the same problem as weight normalization and Frobenius normalization. With weight clipping with truncation value $c$, the value $\|Wx\|_2$ for a fixed unit vector $x$ is maximized when the rank of $W$ is again one, and the training will again favor discriminators that use only a select few features. Gulrajani et al. (2017) refer to this problem as the capacity underuse problem. They also reported that the training of WGAN with weight clipping is slower than that of the original DCGAN (Radford et al., 2016).
One direct and straightforward way of controlling the spectral norm is to clip the singular values (Saito et al., 2017; Jia et al., 2017). This approach, however, is computationally heavy because one needs to implement singular value decomposition in order to compute all the singular values. A similar but less obvious approach is to parametrize $W \in \mathbb{R}^{d_o \times d_i}$ as follows from the get-go and train the discriminators with this constrained parametrization:
$$W := USV^T, \quad \text{subject to } U^T U = I, \; V^T V = I, \; \max_i S_{ii} = K. \qquad (28)$$
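For reference, a sketch of the naive singular value clipping mentioned above (helper name ours); it requires a full SVD at every application, which is exactly the computational burden pointed out in the text.

```python
import numpy as np

def clip_singular_values(W, K=1.0):
    """Clamp every singular value of W to at most K via a full SVD
    (cf. Saito et al., 2017; Jia et al., 2017)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.minimum(s, K)) @ Vt   # U diag(min(s, K)) V^T
```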
Recently, Gulrajani et al. (2017) introduced a technique to enhance the stability of the training of Wasserstein GANs (Arjovsky et al., 2017). In their work, they endeavored to place a $K$-Lipschitz constraint (5) on the discriminator by augmenting the adversarial loss function with the following regularizer:
$$\lambda \, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\left[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\right], \qquad (29)$$
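A minimal PyTorch sketch of the regularizer (29), assuming 4-D image batches and a callable discriminator; the function name is ours, and λ = 10 follows the setting used in our experiments.

```python
import torch

def gradient_penalty(discriminator, x_real, x_fake, lam=10.0):
    """WGAN-GP regularizer of Eq. (29) evaluated on interpolated points
    x_hat = eps * x_fake + (1 - eps) * x_real."""
    eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (eps * x_fake + (1.0 - eps) * x_real).requires_grad_(True)
    d_hat = discriminator(x_hat)
    grad, = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)
    grad_norm = grad.reshape(grad.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```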
Moreover, WGAN-GP requires more computational cost than our spectral normalization with single-step power iteration, because the computation of $\|\nabla_x D\|_2$ requires one whole round of forward and backward propagation. In Figure 10, we compare the computational cost of the two methods for the same number of updates.
Having said that, one shall not rule out the possibility that the gradient penalty can complement spectral normalization and vice versa, because these two methods regularize the discriminator by completely different means. In the experiment section, we actually confirmed that the combination of WGAN-GP and reparametrization with spectral normalization improves the quality of the generated examples over the baseline (WGAN-GP only).
We can take advantage of the regularization effect of the spectral normalization we saw above to develop another algorithm. Let us consider another parametrization of the weight matrix of the discriminator given by
$$\tilde{W} := \gamma \bar{W}_{\mathrm{SN}}, \qquad (32)$$
where $\gamma$ is a scalar variable to be learned. This parametrization compromises the 1-Lipschitz constraint at the layer of interest, but gives more freedom to the model while keeping the model from becoming degenerate. For this reparametrization, we need to control the Lipschitz condition by other means, such as the gradient penalty (Gulrajani et al., 2017). Indeed, we can think of analogous versions of the reparametrization by replacing $\bar{W}_{\mathrm{SN}}$ in (32) with $W$ normalized by other criteria. The extension of this form is not new. In Salimans & Kingma (2016), weight normalization was originally introduced in order to derive the reparametrization of the form (32) with $\bar{W}_{\mathrm{SN}}$ replaced by $\bar{W}_{\mathrm{WN}}$ and a vectorized $\gamma$.
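A minimal sketch of the reparametrization (32), using a full SVD for clarity (in practice σ(W) would again be estimated by power iteration); γ is a learned scalar and the helper name is ours.

```python
import numpy as np

def reparametrized_weight(W, gamma):
    """Eq. (32): W_tilde = gamma * W / sigma(W), with a learned scalar gamma."""
    sigma = np.linalg.svd(W, compute_uv=False)[0]
    return gamma * W / sigma
```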
In this part of the addendum, we experimentally compare the reparametrizations derived from the two different normalization methods (weight normalization and spectral normalization). We tested the reparametrization methods for the training of the discriminator of WGAN-GP. For the architecture of the network in WGAN-GP, we used the same CNN we used in the previous section. For the ResNet-based CNN, we used the same architecture provided by Gulrajani et al. (2017).⁹
Tables 7 and 8 summarize the results. We see that our method significantly improves the inception score from the baseline on the regular CNN, and slightly improves the score on the ResNet-based CNN. Figure 15 shows the learning curves of (a) the critic losses on the training and validation sets and (b) the inception scores with different reparametrization methods. We can see the beneficial effect of spectral normalization in the learning curve of the discriminator as well. We can verify in Figure 15a that the discriminator with spectral normalization overfits less to the training dataset than the discriminator without reparametrization and the one with weight normalization. The effect of overfitting can be observed in the inception score as well, and the final score with spectral normalization is better than the others. As for the best inception score achieved in the course of the training, spectral normalization achieved 7.28, whereas weight normalization and the vanilla parametrization achieved 7.04 and 6.69, respectively.
Table 7: Inception scores with different reparametrization methods on CIFAR-10 without label supervision. (*) We report N/A for the inception score and FID of Frobenius normalization because the training collapsed at an early stage.
Table 8: Inception scores and FIDs with different reparametrization methods on CIFAR-10 with label supervision, by auxiliary classifier (Odena et al., 2017).
Notice that, at least for the case $N(W) := \|W\|_F$ or $N(W) := \|W\|_2$, this gradient is given by
$$\nabla_{\bar{W}} V = k \nabla_W N, \qquad (38)$$
for some $k \in \mathbb{R}$.
[Figure 15 plots: (a) critic loss and (b) inception score versus generator updates (0–100000) for WGAN-GP, WGAN-GP w/ WN, and WGAN-GP w/ SN; the critic loss is shown on both the train and validation sets.]
Figure 15: Learning curves of (a) critic loss and (b) inception score with different reparametrization methods on CIFAR-10: weight normalization (WGAN-GP w/ WN), spectral normalization (WGAN-GP w/ SN), and parametrization-free (WGAN-GP).