Understanding Generative Adversarial Networks
Presenter(s)
David Tse, Information Systems Laboratory, Department of Electrical Engineering, Stanford University

15th Annual Shannon Memorial Lecture
Understanding Generative Adversarial Networks
David Tse
Stanford University

Abstract

Stanford professor David Tse, recipient of the 2017 Claude E. Shannon Award, will deliver the 15th annual Shannon Memorial Lecture hosted by CMRR, the Qualcomm Institute, and its Information Theory and Applications Center (ITA). Claude Shannon invented information theory to understand the fundamental limits of communication, and it has since revolutionized the communication field. At the core of information theory is a research approach based on finding the simplest model in which to study a problem. Although conceived and cultivated in the context of communication, this approach has much broader applicability. In this talk, we illustrate it with our recent work on Generative Adversarial Networks (GANs).

GANs are a novel approach to the age-old problem of learning a probabilistic model from data. Learning is achieved by setting up a game between a generator, whose goal is to produce fake data close to the real data, and a discriminator, whose goal is to distinguish between the real and the fake data. Even though many increasingly complex GAN architectures have been proposed recently, several basic questions remain unanswered: 1) What is a general way to specify the loss function of a GAN? 2) What is the limiting solution of a GAN as the amount of data increases? 3) What is the generalization ability of a GAN? We answer these questions in the simplest setting of the problem. In the process, a connection is drawn between GANs, optimal transport theory, and rate-distortion theory.
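For reference, the two-player game described above is most often written as the minimax objective of Goodfellow et al. (2014), and the optimal-transport connection mentioned at the end is usually expressed through the Wasserstein-1 distance between the data distribution and the generated distribution. Neither formula appears in the abstract itself; the notation (generator G, discriminator D, data distribution p_data, noise distribution p_z, generated distribution p_G, set of couplings Pi) is standard but assumed here, and the general loss-function framework developed in the lecture may differ. A minimal sketch in LaTeX:

% Classical GAN game: the generator G maps noise z ~ p_z to samples,
% while the discriminator D tries to tell real samples from generated ones.
\[
  \min_{G}\,\max_{D}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
\]

% Optimal-transport view: the Wasserstein-1 distance between the data
% distribution and the generator's distribution p_G, taken over all
% couplings \pi of the two distributions.
\[
  W_1(p_{\mathrm{data}}, p_G)
  = \inf_{\pi \in \Pi(p_{\mathrm{data}},\, p_G)}
    \mathbb{E}_{(x,y) \sim \pi}\bigl[\|x - y\|\bigr]
\]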