Towards Robust Representation Learning and Beyond - Cihang Xie
posted on 19 October 2021


Abstract: Deep learning has transformed computer vision in the past few years. Fueled by powerful computational resources and massive amounts of data, deep networks achieve compelling, sometimes even superhuman, performance on a wide range of visual benchmarks. Nonetheless, these successes come with a caveat: deep networks are vulnerable to adversarial examples. The existence of adversarial examples reveals that the computations performed by current deep networks differ dramatically from those of human brains, yet it also provides opportunities for understanding and improving these models. In this talk, I will first show that the vulnerability of deep networks is a much more severe issue than previously thought: the threats from adversarial examples are ubiquitous and catastrophic. I will then discuss how to equip deep networks with robust representations for defending against adversarial examples. We approach the problem from the perspective of neural architecture design and show that incorporating architectural elements such as feature-level denoisers or smooth activation functions can effectively boost model robustness. The last part of the talk rethinks the value of adversarial examples: rather than treating them as a threat to deep networks, we show that adversarial examples can substantially improve the generalization ability of deep networks.
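
For readers unfamiliar with adversarial examples, here is a minimal PyTorch sketch of the classic fast gradient sign method (FGSM; Goodfellow et al., 2015), one simple way to craft such perturbations. The model, the 8/255 budget, and the [0, 1] image range are illustrative assumptions; the attacks considered in the talk are not necessarily this one.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example with the fast gradient sign method.

    A one-step attack: nudge each pixel in the direction that increases
    the classification loss, with per-pixel magnitude bounded by epsilon.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to a
    # valid image range so the result is still a plausible image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically imperceptible to humans, yet calling `fgsm_attack(model, images, labels)` on a standard classifier often flips its predictions, which is the vulnerability the first part of the talk examines.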
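As a rough illustration of the architectural angle, the sketch below swaps every ReLU in a network for a smooth activation. SiLU (x * sigmoid(x)) is one illustrative choice among the smooth activations studied in this line of work, and the ResNet-50 backbone is an assumption for the example, not the specific setup from the talk.

```python
import torch.nn as nn
import torchvision.models as tv_models

def smooth_activations(module):
    """Recursively replace every ReLU with a smooth activation (SiLU).

    Unlike ReLU, SiLU is differentiable everywhere; the intuition behind
    smooth adversarial training is that better-behaved gradients during
    the inner attack step lead to more robust trained models.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.SiLU(inplace=True))
        else:
            smooth_activations(child)
    return module

# Assumed backbone for illustration; any ReLU-based network works.
model = smooth_activations(tv_models.resnet50())
```

The resulting model can then be trained with a standard adversarial-training loop; only the activation function changes, which is what makes this a purely architectural intervention.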