To Bayes, or not to Bayes?

Bayesian Integration and the Size-Weight Illusion

It has been quite a while since I’ve posted, so I thought I’d start the semester with some interesting science, and hopefully find time to ponder, philosophize, and pontificate later in the semester.

At the 2013 annual meeting of the Society for Neuroscience (SfN), I had the pleasure of meeting many people doing all sorts of interesting science. One such person was (now Dr.) Megan Peters. I met Megan at the Translational and Computational Motor Control (TCMC) satellite event, which tends to attract the same crowd as the Society for the Neural Control of Movement (NCM). We’ve kept in touch since then, and I had the opportunity to listen to Megan rehearse her dissertation defense, and it was fascinating! Her work has some interesting overlaps with the theories we think about in our lab group, but I’ll have to leave talking about those things for after she publishes them! (If you like this post, see Megan’s webpage to keep up on her endeavors!)

As a general idea, Megan has been studying the nature of the size-weight illusion (SWI). Simply put, if you lift two objects of equal mass but different size, you will perceive the smaller object as heavier than the larger one. This can be measured by allowing a subject to lift a small object and calling it “10 weight units”, then giving the subject a larger object of equal mass to hold and asking them to describe how many weight units it feels like.

For her graduate work, Megan developed new approaches for analyzing the size-weight illusion and invested a lot of time exploring its relationship to Bayesian statistics in perception. Her poster at last year’s SfN conference in San Diego described one of the methods she used to analyze the relationship between true and perceived density of real-life objects (see this related paper she collaborated on with Jonathan Balzer and Stefano Soatto for more details).

The SWI has been described in some studies as “anti-Bayesian” (i.e., Bayesian inference doesn’t apply), but it’s not a closed issue as of yet. Bayesian inference (BI) refers to the ideal way to incorporate new information into your predictions about the state of things, and takes its name from Bayes’ Rule in basic probability theory. The gist of BI is the following: suppose that instead of maintaining a single point estimate of one of the states of a system (say, the average concentration of a drug or hormone in the bloodstream), you are instead maintaining a probability distribution for that state. A probability density function (PDF) is a reflection of what we think we know about the state and how confident we are about it. Suppose you can’t measure consistently and the measurements are noisy. What is the best (i.e., optimal) way to incorporate new information into the PDF we are maintaining?

If you have a model of how the system’s state gives rise to measurements, you can create a “likelihood function”. The likelihood relates each possible state to how likely we would have been to receive this particular measurement if the system were in that state. Bayes’ Rule basically states that your “posterior distribution” (your picture of the world that incorporates the new information) should be proportional to the product of your “prior distribution” (the picture before new information came in) and your likelihood function. A likelihood function can take the form of one of the well-known probability distributions (like Poisson, exponential, Gaussian, etc.) or a customized PDF, depending on what we think we know about the structure of the system.
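As a toy illustration of Bayes’ Rule in action (not any particular experiment — all numbers here are made up for the sketch), here is a grid-based update of a prior PDF by a Gaussian likelihood:

```python
import numpy as np

# Hypothetical example: estimating a concentration (arbitrary units)
# from one noisy measurement, on a discretized grid of candidate states.
states = np.linspace(0.0, 10.0, 1001)

# Prior belief: Gaussian-shaped, centered at 4.0 with sd 1.5.
prior = np.exp(-0.5 * ((states - 4.0) / 1.5) ** 2)
prior /= prior.sum()  # normalize to a valid probability mass function

# Likelihood: probability of observing measurement y = 6.0 given each
# candidate state, assuming Gaussian measurement noise with sd 1.0.
y = 6.0
likelihood = np.exp(-0.5 * ((y - states) / 1.0) ** 2)

# Bayes' Rule: posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

# The posterior mean falls between the prior mean (4.0) and the
# measurement (6.0), weighted by their relative precisions.
posterior_mean = (states * posterior).sum()
```

With these illustrative numbers, the posterior mean lands at roughly 5.4 — between the prior’s center and the measurement, pulled closer to whichever is more precise.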

This is pretty powerful stuff, and is capable of generating really good state estimates. In our lab, we use Bayesian estimation instead of traditional methods to filter our surface EMG signals. (See the figure below from one of my advisor’s SfN posters to get an idea of how an EMG time series is converted into the likelihood function p(abs(EMG)|Force) and is incorporated via Bayesian estimation into the PDF for force over time.) The well-known Kalman filter is a special case of Bayesian estimation that assumes the prior and likelihood are Gaussian and the system is linear (or can be approximated as such), which holds for many systems.
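To make the Kalman special case concrete, here is a minimal scalar measurement update. When prior and likelihood are both Gaussian, Bayes’ Rule has a closed form: the posterior is Gaussian with a precision-weighted mean. The numbers are illustrative only — this is not our EMG pipeline.

```python
def kalman_update(prior_mean, prior_var, meas, meas_var):
    """Fuse a Gaussian prior N(prior_mean, prior_var) with a noisy
    measurement meas whose noise variance is meas_var."""
    k = prior_var / (prior_var + meas_var)       # Kalman gain
    post_mean = prior_mean + k * (meas - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Example: prior belief N(4.0, 2.25), measurement 6.0 with noise variance 1.0.
mean, var = kalman_update(4.0, 2.25, 6.0, 1.0)
# The posterior mean sits between the prior mean and the measurement,
# and the posterior variance is smaller than either input variance:
# each new measurement makes us more confident.
```

Note that this closed-form update gives exactly the same answer as the grid-based calculation would for Gaussian inputs, but in two multiplications instead of thousands — which is why the Kalman filter is so popular when its assumptions hold.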

[Figure: Bayesian filtering PDFs, from one of my advisor’s SfN posters]

In the fields of perception and of sensorimotor control, there is a growing movement arguing that Bayesian estimation is how the brain functions: we somehow maintain probability distributions and update them by integrating new information with what we know about how noisy our measurements can be.

This model of perception is far from iron-clad, and the size-weight illusion has for many years been held up as a counterexample. In the size-weight illusion, your perception actually seems to exaggerate the unexpected sensory information. If you pick up two objects of different size without knowing anything about their masses, you would expect the larger object to be heavier, since that’s generally the case. If the two objects actually have equal mass, then the large object is lighter than your expectation. However, instead of perceiving the large object as “heavier than the small object, but lighter than I expected”, you perceive the larger object as lighter than the smaller object! The unexpected lightness of the large object becomes exaggerated in our perception. This seems to run counter to Bayesian inference, since Bayesian integration can only mitigate — never amplify — the perception of unexpected information. (See figure 1 of this paper to get an idea of what this looks like graphically.)
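A quick numerical sketch of why the illusion looks “anti-Bayesian”: with a Gaussian prior and likelihood, the fused estimate always lands between the prior expectation and the sensed value, so it can shrink a surprise but never flip its sign. All numbers here are made up for illustration; they don’t come from any of the studies mentioned.

```python
def fuse(prior_mean, prior_var, meas, meas_var):
    """Standard Gaussian (Bayesian) integration of a prior and a measurement."""
    k = prior_var / (prior_var + meas_var)
    return prior_mean + k * (meas - prior_mean)

prior_large = 12.0   # expected weight of the big object ("bigger = heavier")
sensed = 10.0        # both objects actually weigh 10 units

# Equal prior and sensory variances, for simplicity.
percept_large = fuse(prior_large, 1.0, sensed, 1.0)

# The Bayesian percept (11.0) stays between 10 and 12: the large object
# could feel "lighter than expected", but never lighter than the small
# object's 10 units. The illusion reports the opposite, which is why it
# has been called anti-Bayesian.
```

However you weight the prior and the senses, `percept_large` can approach 10 but never drop below it — capturing in two lines why the illusion’s sign reversal is the puzzling part.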

So is perception actually not Bayesian? Or Bayesian except for this illusion? Or is this illusion somehow obeying Bayesian integration? Learning about these questions has been my first foray into looking at Bayesian estimation as more than a great engineering tool, and Megan has been a wonderful guide on this adventure. I plan to start writing about how we use probability theory in our lab group for both science and engineering, but as Megan’s graduate work turns into more publications (and as conferences such as SfN approach), you’ll probably see more posts pop up on this blog about the role of Bayesian statistics in perception.

Stay tuned for more about sensorimotor control, Bayesian statistics, and life! Also, I’ve got a number of posts lined up for the USC Graduate School blog about how worthwhile it is to apply for fellowships as a grad student, so stay tuned there too!

