While the typical behavior of stochastic systems is often deceptively oblivious to the tail distributions of the underlying uncertainties, the way rare events arise differs vastly depending on whether those tail distributions are light-tailed or heavy-tailed. Roughly speaking, in light-tailed settings, a system-wide rare event arises because everything goes wrong a little bit, as if the entire system had conspired to provoke the rare event (conspiracy principle), whereas in heavy-tailed settings, a system-wide rare event arises because a small number of components fail catastrophically (catastrophe principle). In the first part of this talk, I will present recent developments in the theory of large deviations for heavy-tailed stochastic processes at the sample-path level and rigorously characterize the catastrophe principle. In the second part, I will explore an intriguing connection between the catastrophe principle and a central mystery of modern AI—the unreasonably good generalization performance of deep neural networks.
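The contrast between the two principles can be seen in a quick Monte Carlo experiment. The sketch below (an illustration, not part of the talk; the distributions, sample sizes, and thresholds are chosen for convenience) conditions a sum of i.i.d. random variables on being unusually large and asks what fraction of the sum comes from the single largest term. With light (exponential) tails, the excess is spread across all terms; with heavy (Pareto) tails, one catastrophic term typically supplies most of it.

```python
import random

random.seed(0)

def max_share_given_large_sum(sampler, n=20, threshold=40.0, trials=100000):
    """Among trials where the sum of n i.i.d. samples exceeds the threshold,
    return the average fraction of the sum contributed by the largest sample."""
    share_total, hits = 0.0, 0
    for _ in range(trials):
        xs = [sampler() for _ in range(n)]
        s = sum(xs)
        if s > threshold:
            share_total += max(xs) / s
            hits += 1
    return share_total / hits if hits else float("nan")

# Light-tailed: Exp(1), mean 1; sum of 20 has mean 20, so >40 is rare.
light_share = max_share_given_large_sum(lambda: random.expovariate(1.0),
                                        n=20, threshold=40.0)

# Heavy-tailed: Pareto with tail index 1.5, mean 3; sum of 20 has mean 60,
# so >120 is comparably rare.
heavy_share = max_share_given_large_sum(lambda: random.paretovariate(1.5),
                                        n=20, threshold=120.0)

print(f"light-tailed max share: {light_share:.2f}")
print(f"heavy-tailed max share: {heavy_share:.2f}")
```

In the light-tailed case the largest term accounts for only a modest fraction of the conditioned sum (everything goes wrong a little), while in the heavy-tailed case a single term dominates it (one component fails catastrophically).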
This talk is based on the ongoing research in collaboration with Mihail Bazhba, Jose Blanchet, Bohan Chen, Sewoong Oh, Insuk Seo, Zhe Su, Xingyu Wang, and Bert Zwart.
Chang-Han Rhee is an Assistant Professor in Industrial Engineering and Management Sciences at Northwestern University. Before joining Northwestern, he was a postdoctoral researcher in the Stochastics Group at Centrum Wiskunde & Informatica and in Industrial & Systems Engineering and Biomedical Engineering at Georgia Tech. He received his Ph.D. in Computational and Mathematical Engineering from Stanford University. His research interests include applied probability, stochastic simulation, and statistical learning. He won the Outstanding Publication Award from the INFORMS Simulation Society in 2016 and the Best Student Paper Award (MS/OR focused) at the 2012 Winter Simulation Conference, and was a finalist in the 2013 INFORMS George Nicholson Student Paper Competition.