Keynote Lecture: Ellie Pavlick
Friday, August 15, 9:30 - 10:30 am, Room A0.01 (overflow A1.02, A1.03, C1.03)
What came first, the sum or the parts? Emergent compositionality in neural networks
Ellie Pavlick, Brown University
Associate Professor of Computer Science, Cognitive Science and Linguistics
Decades of research in cognitive science and AI have focused on compositionality as a hallmark property of the human mind. This focus can make it seem as though we must classify systems as either compositional or idiomatic, cognitive or associative. In this talk, I describe a set of related but distinct empirical studies of how neural networks achieve, or fail to achieve, compositional behavior. I argue that these findings point to a middle ground in which traditional “symbolic” compositionality can be seen as a special case which is emergent—but nonetheless qualitatively different—from a more general associative mechanism characteristic of neural networks.
Dr. Ellie Pavlick is an Associate Professor of Computer Science, Cognitive Science, and Linguistics at Brown University, and a Research Scientist at Google DeepMind. She leads the Language Understanding and Representation (LUNAR) Lab, which seeks to understand how language "works" and to build computational models which can understand language the way that humans do. Her lab's projects focus on language broadly construed, and often include the study of capacities more general than language, including conceptual representations, reasoning, learning, and generalization. They are interested in understanding how humans achieve these things, how computational models (especially large language models and similar types of "black box" AI systems) achieve these things, and what insights can be gained from comparing the two. They often collaborate with researchers in fields outside of computer science, including cognitive science, neuroscience, and philosophy.