This post provides a brief write-up of the paper Adversarial Examples Improve Image Recognition, which explains how to use an adversarial training setting to improve model training on large datasets such as ImageNet.
Self-supervised learning is a framework for learning representations of data using pretext tasks. A pretext task is a supervised learning problem constructed automatically from the input itself, so the labels come for free. Read more in the How? section.
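To make "labels for free" concrete, here is a minimal sketch of one classic pretext task, rotation prediction (in the style of Gidaris et al., not necessarily the example this post uses): each input image is rotated by a multiple of 90 degrees, and the rotation index itself becomes the label, so no human annotation is needed.

```python
import numpy as np

def rotation_pretext_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation id is the label."""
    xs, ys = [], []
    for img in images:
        for k in range(4):               # k quarter-turns
            xs.append(np.rot90(img, k))  # transformed input
            ys.append(k)                 # label derived from the transform, for free
    return np.stack(xs), np.array(ys)

imgs = np.zeros((2, 8, 8))               # two dummy "images"
x, y = rotation_pretext_batch(imgs)
print(x.shape, list(y))                  # (8, 8, 8) [0, 1, 2, 3, 0, 1, 2, 3]
```

A network trained to predict `y` from `x` must learn something about object orientation and shape, which is the representation the downstream task then reuses.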
This post is about an interesting paper by Arora et al. (2017). They explain why GAN generators and discriminators may fail to reach the correct equilibrium. The paper points out that the choice of distance metric in the training objective may not be suitable in practice, and that the theoretical assumptions behind the objective may not hold when training in practical domains. Finally, they present a new distance metric, motivated by pseudorandomness, to address this issue.
This post provides a summary of the paper by Berthelot et al. (2017). They propose a robust GAN architecture that uses the usual training procedure. To achieve stable convergence, they introduce an equilibrium concept balancing the generator and the discriminator. The results are much improved in terms of both image diversity and visual quality.
In this post I provide a summary of the paper by Zhang et al. that won the Best Paper award at ICLR 2017. It is quite informative for understanding why some neural networks generalize well while others do not. They provide detailed results measuring generalization error across various tests.
This post shows how to set up TensorBoard summaries for the layers of popular CNN architectures in TensorFlow. This not only helps with debugging but also provides insight into the inner workings of deep neural networks.
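As a hedged sketch of the idea (TF 2.x Keras API; the tiny model, layer names, and the log directory "logs/demo" are illustrative choices, not taken from the post): write a histogram summary per weight tensor plus a scalar, then inspect them in TensorBoard.

```python
import tensorflow as tf

# Illustrative tiny CNN; any Keras model's layers can be logged this way.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, name="logits"),
])

writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for layer in model.layers:
        for i, weight in enumerate(layer.weights):
            # One histogram per weight tensor (kernels and biases).
            tf.summary.histogram(f"{layer.name}/weight_{i}", weight, step=0)
    tf.summary.scalar("train/loss", 0.42, step=0)  # placeholder scalar
writer.flush()
```

Running `tensorboard --logdir logs/demo` then shows the weight distributions under the Histograms tab, which is where dead or exploding layers become visible.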