I didn’t present at this year’s Society for Neuroscience meeting (though I was an author on a talk), but I did go, for two reasons: (1) networking and (2) an informal survey of “MVPA”.
In the context of neuroimaging, what is “MVPA”?
Well, MVPA stands for Multi-voxel pattern analysis. Or Multivariate pattern analysis. So what do those mean? Let’s break the terms down a bit. Both have “pattern analysis” in them. Pattern analysis typically involves some sort of statistical analysis of patterns — where patterns are defined as a set of traits, features, or variables that describe a whole bunch of observations.
Sometimes, these patterns are used as the basis for separating different (often known a priori) groups of observations. Other times it is for finding ways to group observations together based on common patterns.
Pattern analysis (PA) is implicitly multivariate. Thereby making one of the MVPAs—Multivariate Pattern Analysis—redundant in title.
Multivariate means that multiple dependent variables are modeled or analyzed in one go, as opposed to conducting many, many univariate tests. I’m stealing a quote from Haxby (2011, link) that succinctly gets to the advantages of multivariate:
MVP analysis can detect the features that underlie these representational distinctions at both the coarse and fine spatial scales, whereas conventional univariate analyses are only sensitive to the coarse spatial scale topographies.
With multivariate analysis, you’ll get a similar (or the same) perspective as univariate approaches — but now with the added bonus of a unique perspective only multivariate approaches can give you.
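To make that “added bonus” concrete, here is a toy numpy sketch — simulated numbers, not real fMRI data, and the two-voxel setup is purely illustrative. Each voxel carries only a weak effect on its own, but pooling the voxels into a single pattern-based statistic yields a larger effect than any univariate test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fMRI" data: n trials x 2 voxels, two conditions.
# Each voxel carries a weak effect of opposite sign, so any single
# univariate test sees only a small mean difference, while the
# two-voxel *pattern* separates the conditions more cleanly.
n = 500
a = rng.normal(loc=[+0.2, -0.2], scale=1.0, size=(n, 2))
b = rng.normal(loc=[-0.2, +0.2], scale=1.0, size=(n, 2))

def cohens_d(x, y):
    """Standardized mean difference (pooled SD)."""
    s = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return abs(x.mean() - y.mean()) / s

# Univariate: one effect size per voxel, considered in isolation.
d_univariate = [cohens_d(a[:, v], b[:, v]) for v in range(2)]

# Multivariate: project trials onto the mean-difference axis,
# i.e., use both voxels at once, then compute one effect size.
w = a.mean(axis=0) - b.mean(axis=0)
d_multivariate = cohens_d(a @ w, b @ w)

print(d_univariate, d_multivariate)
```

The multivariate effect size here exceeds either per-voxel effect size — the univariate perspective isn’t lost, it’s subsumed.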
Multi-voxel pattern analysis, though, is where things can get confusing. Multi-voxel does not imply multivariate — rather it is explicit with its title: a whole bunch of voxels used in some way. One such method, for example, is ridge regression. But… ridge regression is a univariate method. Thereby making the other MVPA—multi-voxel pattern analysis—sometimes contradictory in title.
There are some fantastic reviews on “MVPA” and multivariate analyses and pattern analyses for fMRI1, 2, 3, 4, so I won’t go into detail yet on what MVPA should be, but in general it is understood as (1) classification methods, (2) multivariate methods, or (3) a combination of the two.
So I “took to the streets”, if you will5, and conducted a small-scale survey of what “MVPA” means to neuroimagers. I went to as many posters and talks as I could that explicitly used the term “MVPA”, or happened to just stumble across them in the vast oceans of the poster section. I would then take note of exactly which technique was used. However, in most cases (for posters) exactly which technique was used was never explicitly written; rather only the 4 letters: MVPA. I would often have to ask “Which MVPA are you using?” (which in most cases was my sole question — and for that I probably seemed like a crazy person). Here’s what I found, broken down into 3 categories. Quotes are used to paraphrase responses.
Category 1: The Definitely Multivariate:
- Support Vector Machines
- “Multivariate pattern similarity analysis” (MPSA)
- Representational Similarity Analysis (RSA)
I’d like to note that MPSA and RSA are the same technique, which now falls under two (unnecessarily different) names — and because RSA is just multidimensional scaling (MDS), that makes three unnecessarily different names. This SfN is the first time I’d ever seen “MPSA” used in the imaging context.
Before we move on, let’s break this down. Google would suggest (on December 9, 2014) there are only 8 unique results using “MPSA” (most of which are related to one another). However, the exact phrase “Multivariate Pattern Similarity Analysis” can be traced to Ritchey et al. in 2012, then again in 2013 by Onat, and again by Kalm in 2013, until it was finally acronymized6 by Copara this year. And now (at least) twice at SfN. Hooray for confusingly renaming methods (nearly) as old as modern statistics themselves.
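Whatever you call it — RSA, MPSA, or MDS on a dissimilarity matrix — the workhorse is the same object. Here is a minimal numpy sketch (simulated patterns, not real data; the condition structure is made up for illustration) of building a representational dissimilarity matrix (RDM) from condition-wise voxel patterns:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical activity patterns: 4 conditions x 20 voxels.
# Conditions 0 and 1 share an underlying pattern; so do 2 and 3.
base1, base2 = rng.normal(size=20), rng.normal(size=20)
patterns = np.vstack([
    base1 + 0.1 * rng.normal(size=20),
    base1 + 0.1 * rng.normal(size=20),
    base2 + 0.1 * rng.normal(size=20),
    base2 + 0.1 * rng.normal(size=20),
])

# The RSA / "MPSA" workhorse: a representational dissimilarity
# matrix (RDM), here using 1 - Pearson correlation between the
# condition patterns. MDS would then embed this matrix.
rdm = 1 - np.corrcoef(patterns)

print(np.round(rdm, 2))
```

Conditions that share an underlying pattern end up close (small dissimilarity), the others far — that 4×4 matrix is the entire substrate of the analysis, no matter which of the three names ends up on the poster.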
Category 2: The Definitely Multi-voxel (and ambiguously not multivariate)
- “Haxby style correlations”
- Ridge regression
- Logistic regression
- Gaussian Naive Bayesian Classifier
I do find the phrase “Haxby style correlations” quite delightful. Why am I separating these techniques from the ones above? Well, these techniques usually rely on aggregating results from a series of univariate analyses, where the aggregation happens across voxels.
Before we move on to the third and most hilarious (or upsetting) category, I have a small aside: I couldn’t find any case of regularization performed correctly. Regularization is a nifty technique: it helps when your sample is too small to properly estimate all of your variables. The nifty-ness comes in by artificially inflating particular values to, essentially, pretend you have a bigger sample size. To quote Takane: the inflation of these values “works almost magically to provide estimates that are more stable than the ordinary [Least Squares] estimates.”
But there is a danger to inflating: overfitting. Which is why, in regularization methods, you have to search for the regularization parameter that strikes a compromise between a more stable solution and not overfitting. Often, this is done through a train–test paradigm like k-fold cross-validation.
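What that search looks like, in a numpy-only sketch (toy data; in practice you’d reach for something like scikit-learn’s cross-validated estimators rather than rolling your own): fit ridge regression at several candidate regularization strengths and keep the one with the best held-out error, instead of picking one arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression: 40 samples, 100 "voxels" -- more predictors than
# samples, so ordinary least squares is ill-posed and some amount
# of regularization is necessary.
n, p = 40, 100
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: (X'X + alpha*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def cv_error(alpha, k=5):
    """Mean squared held-out error over k folds for one alpha."""
    folds = np.array_split(np.arange(n), k)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        b = ridge_fit(X[train], y[train], alpha)
        errs.append(np.mean((y[test] - X[test] @ b) ** 2))
    return float(np.mean(errs))

# Search over candidate regularization strengths; the winner is the
# one that generalizes best to held-out folds.
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {a: cv_error(a) for a in alphas}
best = min(scores, key=scores.get)

print(best, scores)
```

The point isn’t the specific grid of alphas (which is made up here) — it’s that the parameter is chosen by held-out performance at all.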
At SfN, I found only the following: a single, arbitrarily chosen regularization parameter. Tikhonov would be furious.
Category 3: The Definitely Concerning (and ambiguously ambiguous)
- “Regularized regression”
- “MVPA Regression”
- “The MVPA toolbox”
I would follow up with something along the lines of “Do you happen to know which type of analysis?”, to which the response was usually just “The MVPA Toolbox”. I didn’t bother asking which MVPA toolbox.
At this point, you’re probably thinking: “Derek,
what you’ve just said is one of the most insanely idiotic things I’ve ever heard. At no point in your rambling, incoherent response was there anything that could even be considered a rational thought. Everyone [on this internet] is now dumber for having [read] it.
And you’re right. This post is merely a spewing of complaints with no apparent direction or solution. However, it will be the first in a series of posts over the coming months. There will be two types of posts: (1) examples of multivariate and similarly exotic neuroimaging analyses, in the hopes that (2) some sort of taxonomic structure can be derived — essentially a family tree of “MVPA” — with the hope that, some day, we can stop using those 4 letters in that particular sequence. So let’s hope I turn this complaint into something more useful!
1: Haxby, Connolly, & Guntupalli, 2014; 2: Pereira, Mitchell, & Botvinick, 2009; 3: McIntosh & Mišić, 2013; 4: Shinkareva, Wang, & Wedell, 2013.
5: You probably won’t, and shouldn’t, because sometimes I don’t make sense.
6: Not a real word.
10: I love footnotes.