Place: Chamberlin 5280 (Zoom link also available for online participants who signed up on our mailing list)
Speaker: Ge Yang, IAIFI and MIT
Abstract: Generalization is a central problem in deep learning research because it directly affects how much data and compute it costs to achieve good performance. In other words, better generalization makes good performance more accessible. In this talk, we begin by looking at a few interesting situations where modern neural networks fail to generalize. We discuss the components responsible for these failures and ways to fix them. We then introduce continuous neural representations and neural fields as a unifying theme. As part of the roadmap, I will lay out key technical milestones and specific applications in control, reinforcement learning, and scene understanding.