
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Large Language Models Are Good In-Context Function Learners

Konstantinos Voudouris¹, Elif Akata¹, Eric Schulz²; ¹Helmholtz Zentrum München, ²Max Planck Institute for Biological Cybernetics

Presenter: Elif Akata

Human cognitive neuroscience has shown that humans can flexibly learn complex functions. Large Language Models (LLMs) are increasingly compared to humans in terms of their intelligence and behavioural sophistication. Here, we use a principled framework to examine whether LLMs can flexibly learn functions in-context. We find a human-like behavioural motif: LLMs are better at learning functions that are smoother, more predictable, and less noisy, and their in-context learning accuracy approaches the theoretical maximum in the limit.

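The paradigm the abstract describes, presenting an LLM with noisy input-output pairs from a latent function and scoring its numeric continuation at a held-out input, can be sketched as follows. This is a minimal illustration: the sine target, noise model, and prompt wording are assumptions for demonstration, not the authors' protocol.

```python
import numpy as np

def make_function_samples(n=10, noise_sd=0.1, seed=0):
    """Sample noisy (x, y) pairs from a smooth target function.
    f(x) = sin(x) is illustrative only, not the authors' function family."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    y = np.sin(x) + rng.normal(0.0, noise_sd, n)
    return x, y

def build_prompt(x, y, x_query):
    """Format the observed pairs as in-context examples, then ask the
    model to continue with a prediction at a held-out input."""
    lines = [f"x = {xi:.2f}, y = {yi:.2f}" for xi, yi in zip(x, y)]
    lines.append(f"x = {x_query:.2f}, y =")
    return "\n".join(lines)

if __name__ == "__main__":
    x, y = make_function_samples()
    prompt = build_prompt(x, y, x_query=np.pi / 2)
    print(prompt)
    # The prompt would be sent to an LLM through whatever completion API
    # is available; the numeric continuation it produces is its in-context
    # prediction, scored against the true value sin(pi/2) = 1.0.
```

Varying the number of in-context pairs and the noise level in a setup like this is one way to trace how prediction accuracy depends on function smoothness and noise, the comparison the abstract reports.
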
Topic Area: Language & Communication

Extended Abstract: Full Text PDF