Physical Society Colloquium
Unexpected lessons from neural networks built with symmetry
Department of Electrical Engineering and Computer Science, MIT
Physical data contains rich context that is difficult, awkward, or impossible to
articulate using off-the-shelf machine learning methods. For example, geometry
and geometric tensors used to describe physical systems are traditionally
challenging data types to use for machine learning because coordinates
and coordinate systems transform under the symmetries of 3D space. There are
many ways to imbue machine learning models with this context (e.g. input
representation, training schemes, constraining model structure); these approaches
vary in their flexibility and robustness. In this talk, I'll give examples of the
surprising consequences of building physical assumptions into the functional
forms of our machine learning models, i.e. imposing physical constraints
through the operations a model comprises.
Specifically, I'll discuss properties of Euclidean Neural Networks which are
constructed to preserve 3D Euclidean symmetry: 3D rotations, translations,
and inversion. Perhaps unsurprisingly, symmetry-preserving algorithms are
extremely data-efficient; they achieve better results with less
training data. More unexpectedly, Euclidean Neural Networks also act as
“symmetry-compilers”: they can only learn tasks that are
symmetrically well-posed, and they can also help uncover when there is
symmetry-implied missing information. I'll give examples of how these properties
can help us ask scientific questions and illuminate the full implications of
our assumptions. To conclude, I'll highlight some open questions in neural
networks relevant to representing physical systems.
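As a minimal illustration of the symmetry constraint described above (a toy sketch in plain NumPy, not tied to any particular library or to the specific networks discussed in the talk), the layer below is built only from relative position vectors and rotation-invariant scalar weights, and a numerical check confirms the equivariance condition f(Rx) = R f(x) for a random rotation R:

    import numpy as np

    def random_rotation(rng):
        """Sample a random proper 3D rotation matrix via QR decomposition."""
        q, r = np.linalg.qr(rng.normal(size=(3, 3)))
        q *= np.sign(np.diag(r))      # fix the sign ambiguity of the decomposition
        if np.linalg.det(q) < 0:      # ensure det(q) = +1 (a proper rotation)
            q[:, 0] *= -1
        return q

    def equivariant_layer(positions):
        """Toy equivariant map: each point outputs a distance-weighted sum of
        relative vectors to all other points. Because it uses only relative
        vectors and rotation-invariant weights, rotating the input points
        rotates the output vectors in the same way."""
        rel = positions[None, :, :] - positions[:, None, :]   # (N, N, 3) relative vectors
        dist = np.linalg.norm(rel, axis=-1)                   # rotation-invariant distances
        weights = np.exp(-dist)                               # invariant scalar weights
        return (weights[..., None] * rel).sum(axis=1)         # (N, 3) output vectors

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))       # 5 points in 3D
    R = random_rotation(rng)

    out_then_rotate = equivariant_layer(x) @ R.T      # rotate the outputs
    rotate_then_out = equivariant_layer(x @ R.T)      # rotate the inputs
    print(np.allclose(out_then_rotate, rotate_then_out))  # True: f(Rx) = R f(x)

Building only from relative vectors and invariant scalars is one simple way to guarantee equivariance by construction (the same layer is also translation-invariant and flips sign under inversion); E(3)-equivariant networks generalize this idea to higher-order geometric tensors.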
Friday, February 25th 2022, 15:30
Ernest Rutherford Physics Building, Keys Auditorium (room 112)