Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Inducing bias from bilingual brains into language models

Anuja Negi1, Subba Reddy Oota2, Fatma Deniz1; 1Technische Universität Berlin, 2Inria

Presenter: Anuja Negi

Recent studies have shown that inducing bias from neural data in language models can enhance their ability to encode brain activity and improve performance on language tasks. However, these approaches have mainly focused on a single language. Given recent evidence that semantic representations are shared across languages in the bilingual brain, we ask whether brain-informed fine-tuning can reveal latent multilingual capabilities in language models. To test this, we fine-tune pretrained monolingual Transformer models (English and Chinese BERT) using fMRI data from bilingual individuals. We find that fine-tuning improves downstream performance not only in the language used for training but also in the other language, indicating cross-linguistic generalization. Furthermore, the fine-tuned model's encoding performance remains comparable when evaluated on other participants, suggesting that the brain bias introduced by fine-tuning is shared across bilingual individuals.
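The encoding-performance evaluation described above is typically done with a voxelwise encoding model: regularized linear regression maps language-model features for each stimulus to the measured fMRI responses, and prediction quality is scored per voxel on held-out data. The sketch below illustrates this idea on synthetic data; all names, shapes, and the use of closed-form ridge regression are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of a voxelwise encoding model: ridge regression from
# language-model features (e.g., BERT embeddings of stimuli) to fMRI
# responses, scored by per-voxel correlation on held-out data.
# Synthetic data stands in for real features and brain recordings.

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxelwise_corr(Y_true, Y_pred):
    """Pearson correlation computed independently for each voxel (column)."""
    yt = Y_true - Y_true.mean(axis=0)
    yp = Y_pred - Y_pred.mean(axis=0)
    denom = np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0)
    return (yt * yp).sum(axis=0) / denom

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 200, 50, 16, 30

# Simulate a linear stimulus-to-voxel mapping plus measurement noise.
W_true = rng.normal(size=(n_feat, n_vox))
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_vox))

# Fit on training stimuli, score per voxel on held-out stimuli.
W = ridge_fit(X_train, Y_train, alpha=10.0)
scores = voxelwise_corr(Y_test, X_test @ W)
print("mean held-out voxel correlation:", float(scores.mean()))
```

In practice, the same scoring procedure can be applied before and after brain-informed fine-tuning, and across different participants' data, to compare encoding performance as the abstract describes.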

Topic Area: Language & Communication

Extended Abstract: Full Text PDF