
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Variance explained by different model components does not behave like a Venn diagram: Why variance decomposition provides misleading intuitions

Jinkang Derrick Xiang¹, Marieke Mur¹, Jörn Diedrichsen²; ¹University of Western Ontario, ²Johns Hopkins University

Presenter: Jinkang Derrick Xiang

When explaining brain responses $Y$ with a set of predictors, the variance of $Y$ is often decomposed into portions explained by each predictor, as a reflection of their respective contributions. The explained variance is commonly visualized using Venn diagrams. This approach originates in Fisher's ANOVA, in which some of the variance of a variable $Y$ can be explained by orthogonal predictors, and the variance explained by the predictors together is the sum of the variance explained by each one alone. In neuroscience applications, however, the predictors are often correlated, which can cause the variance explained by two predictors jointly to be smaller than, equal to, or greater than the sum of what each explains alone. Variance is therefore not a fixed quantity of the data that can be partitioned into separate portions; it must be considered in the context of all model components. We provide an alternative to the commonly used Venn diagram for visualizing explained variance, and an analytical framework for quantitatively conducting model selection and comparison for RSA, PCM, and encoding models.
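The non-additivity claim is easy to verify numerically. The following is a minimal sketch (not taken from the paper; the simulated data, coefficients, and correlation strength are illustrative assumptions) showing that with correlated predictors the joint $R^2$ can fall below the sum of the individual $R^2$ values (shared variance counted twice) or exceed it (suppression):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two correlated predictors: corr(x1, x2) ≈ 0.8 (hypothetical values).
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)


def r2(X, y):
    """Fraction of the variance of y explained by OLS on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()


# Case 1: both predictors contribute with the same sign.
# Each one alone "claims" the variance they share, so the
# individual R² values overlap and sum to more than the joint R².
y = x1 + x2 + rng.standard_normal(n)
r2_1, r2_2 = r2(x1[:, None], y), r2(x2[:, None], y)
r2_joint = r2(np.column_stack([x1, x2]), y)
print(r2_1 + r2_2, ">", r2_joint)

# Case 2: suppression. y depends on the difference x1 - x2,
# which neither predictor captures well alone, so the joint R²
# exceeds the sum of the individual R² values.
y_sup = x1 - x2 + 0.1 * rng.standard_normal(n)
s2_1, s2_2 = r2(x1[:, None], y_sup), r2(x2[:, None], y_sup)
s2_joint = r2(np.column_stack([x1, x2]), y_sup)
print(s2_1 + s2_2, "<", s2_joint)
```

In both cases the Venn-diagram picture fails: in case 1 the "overlap" region would need to be subtracted, while in case 2 it would have to be negative, which a Venn diagram cannot depict.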

Topic Area: Methods & Computational Tools

Extended Abstract: Full Text PDF