Document worth reading: “The GAN Landscape: Losses, Architectures, Regularization, and Normalization”
Generative Adversarial Networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they have been successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of ‘tricks’. The success in many practical applications, coupled with the lack of a measure to quantify the failure modes of GANs, has resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond, fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on GitHub, and provide pre-trained models on TensorFlow Hub.
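Since the authors release pre-trained models on TensorFlow Hub, a minimal sketch of sampling from one of them might look like the following. The module URL, signature name, and latent dimension here are illustrative assumptions, not details taken from the paper; the actual modules are listed on tfhub.dev.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Placeholder URL: substitute an actual compare_gan module from tfhub.dev.
MODULE_URL = "https://tfhub.dev/google/compare_gan/some_model/1"

gan = hub.Module(MODULE_URL)

# Sample latent vectors; the latent dimension (128) is an assumption
# and depends on the specific pre-trained model.
z = tf.random_normal([16, 128])

# The "generator" signature name is an assumption; call
# gan.get_signature_names() to see what a given module exposes.
images = gan(z, signature="generator")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    samples = sess.run(images)  # e.g. a [16, H, W, 3] batch of images

print(samples.shape)
```

This uses the TF1-style `hub.Module` API, which matches the era of the paper's code release; newer TensorFlow versions would load such modules differently.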