Thursday, September 15, 2011

Cloud computing in the brain




I meant to post on this earlier this summer but got distracted... work & life have a way of doing that.

This past July saw one of my most grueling projects finally come to a close. Published in the Journal of Neuroscience and titled “How each movement changes the next: an experimental and theoretical study of fast adaptive priors in reaching,” this paper goes well beyond the topic of simple motor control and into fundamental issues about how our brains learn from experience (which is why it took over 2.5 years to get through the review process!). In particular, we show how neurons in the brain might do something akin to cloud computing.

Confused? Let me explain.

First I'll have to cover a little hard science for some background. What my co-author and I found was that whenever we move, our brain keeps track of where we go. Over time, the motor control regions of the brain build an internal model of the recent movements we've made, and that memory gets used when we make future actions. For example, if I keep reaching for the soda can on my desk (and returning it to the same position), then my brain retains the history of all the reaches I've made to that same spot, and my reaches to the soda get a little more accurate. So in a sense, practice makes perfect.

Now, here's the kicker... if I suddenly want to reach for something else ("Hey, that cookie over there looks mighty tasty!") then my brain biases this new movement in the direction of the soda can. We don't really notice it that much, but it's detectable with the fancy machines we use to monitor your movements. The more often I reach for that soda can, the more biased my reach for the cookie gets.

It turns out that this type of learning is really sophisticated and follows what appear to be rigid statistical principles. What I mean is that our brain somehow encodes our recent actions as a prior probability distribution (think of the "bell curve" you've heard about). It then integrates this prior with all the incoming sensory information you're getting from your eyes, your hand, etc., and that integrated estimate is what guides your next reach. The stronger the prior, the more biased your future actions will be; the stronger the sensory input, the less biased you'll be.
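For the curious, that prior-plus-sensory-evidence integration can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not the actual model from the paper: the prior is a Gaussian centered on the practiced soda-can direction, the sensory estimate is a noisier Gaussian centered on the cookie, and the planned reach is the precision-weighted average of the two.

```python
# Toy sketch of Bayesian integration for reach planning.
# All numbers are made up for illustration; this is NOT the
# actual model from the paper.

def integrate(prior_mean, prior_var, sense_mean, sense_var):
    """Combine two Gaussians: the posterior mean is a
    precision-weighted average, and the posterior variance is
    smaller than either input (precisions add)."""
    prior_precision = 1.0 / prior_var
    sense_precision = 1.0 / sense_var
    w = prior_precision / (prior_precision + sense_precision)
    post_mean = w * prior_mean + (1.0 - w) * sense_mean
    post_var = 1.0 / (prior_precision + sense_precision)
    return post_mean, post_var

# Prior built from repeated reaches to the soda can at 0 degrees;
# the cookie actually sits at 30 degrees. A tight prior (variance 4)
# against noisy vision (variance 16) pulls the reach toward 0.
mean, var = integrate(0.0, 4.0, 30.0, 16.0)
print(mean)  # 6.0 -- biased toward the practiced direction
print(var)   # 3.2 -- more certain than either source alone
```

Notice the two trade-offs from the paragraph above fall straight out of the math: shrink the prior variance (more practice) and the bias grows; shrink the sensory variance (better vision) and the bias shrinks.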

For the math nerds out there, this is a form of adaptive Bayesian inference. I won't go into the awesome details of Bayesian statistics because I don't want to lose 90% of whoever it is that actually reads these posts ("Hi Mom!"). It's a branch of mathematics that's used to filter spam from your inbox, improve telescope images of the stars, optimize airline travel, make video games more difficult, and almost everything else that's cool these days. Needless to say, these are some pretty freaking sophisticated computations that our brains are doing almost effortlessly. And not just for some high-level cognitive process (I mean, this process worked in both Shakespeare's brain and in Pauly D's brain)... but for something as simple as reaching for a can of soda.

Okay... hopefully I haven't lost anybody.  Because here's the truly insane part.

Through simulations of neural tissue, my co-author and I found that this really cool mathematical computation likely happens through a form of "cloud computing" in the brain. For those of you who don't know, cloud computing is the process by which a computation is broken down into a set of little chunks and distributed to a whole bunch of computers that live in the vast ether of the internet (note: technically what I'm talking about is more similar to a computer cluster, but "cloud computing" is the hip new thing these days). You know those nasty things called "botnets" that take down servers in foreign countries or send all that spam to your inbox? They're cloud computing gone bad.

We found that a similar principle might work in the brain. There might not be a single neuron or group of neurons that stores this statistical prior. Instead, we were able to show how this memory can naturally emerge in the dynamics of the information passing between neurons. Thanks to Hebbian learning ("neurons that fire together wire together"), our brain is able to store little bits of information distributed across a mass of connected cells. Basically, populations of neurons remember their collective pattern of activity from the recent past. Over time, this distributed collective memory shapes the way the network responds to new inputs. Eventually, this learning exhibits very sophisticated properties that look almost exactly like human behavior, as well as the expectations of statistical theory. So the whole is, in fact, greater than the sum of its parts.
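Here's a tiny simulation of that "fire together, wire together" idea. Fair warning: this is the standard Hopfield-style textbook construction, not the actual network from our paper, and the sizes are arbitrary. The point is just that a repeated activity pattern ends up stored purely in the connections between units, with no single cell holding the memory, yet it still biases how the network responds to new input.

```python
import numpy as np

# Toy Hopfield-style sketch of Hebbian distributed memory.
# Standard textbook construction, NOT the simulation from the paper.

rng = np.random.default_rng(0)
n = 20

practiced = rng.choice([-1.0, 1.0], size=n)   # the repeated pattern

# Hebbian outer-product rule: each weight grows with the
# correlation between its two neurons' activity.
W = np.outer(practiced, practiced) / n
np.fill_diagonal(W, 0.0)                       # no self-connections

# A novel input that only partly overlaps the practiced pattern
novel = practiced.copy()
flipped = rng.choice(n, size=5, replace=False)
novel[flipped] *= -1.0                         # corrupt 5 of 20 units

# One pass through the recurrent dynamics pulls the network's
# response back toward the pattern stored in the weights.
recalled = np.sign(W @ novel)
print(np.array_equal(recalled, practiced))  # True
```

No individual weight "contains" the memory; it's the collective pattern across all 380 connections that pulls the corrupted input back, which is exactly the whole-greater-than-the-parts flavor described above.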

Let's put this all together, shall we? We've got a group of neurons in your brain building a complex statistical model of everything you just did. But this model isn't encoded in the activity of any one cell, or even in the response properties of a group of cells. Instead, it exists in the abstract dynamics of how these neurons talk to one another. Mind blown yet?

Now, to be completely honest, this is hardly the first time someone has come up with the idea that information is broken down and stored across a network of neurons (in fact, there's a formal name for this: "sparse coding"). But what's interesting from our study is that complicated, mathematically principled information can emerge naturally in the brain thanks to the fact that neurons are recurrently connected (i.e., they send information back and forth) and learn associatively. This dramatically increases the complexity of the information our brain can store. Information isn't just encoded in how the cells fire, but also in how they talk to each other as a group. It's a sort of meta-level form of information storage.

Let's end with this... our simulation used about 180 simulated neurons (with 32,400 connections), and we were able to do some pretty fancy mathematical processing. The human brain has more neurons than there are stars in the Milky Way, and each of those neurons makes thousands of connections on average. A conservative estimate puts it at about 100 billion neurons with about 100 trillion synaptic connections. Think about just how complex a biological computer this system could be!
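A quick back-of-the-envelope check on those numbers (my assumption here: the 32,400 connections correspond to all-to-all recurrent wiring among the 180 units, since 180 × 180 = 32,400):

```python
# Sanity check on the simulation's scale. Assumption: the 32,400
# connections quoted above are all-to-all recurrent wiring among
# 180 units (180 * 180 = 32,400, self-connections included).
sim_neurons = 180
sim_connections = sim_neurons ** 2
print(sim_connections)  # 32400

# All-to-all wiring grows with the SQUARE of the neuron count,
# which is why full connectivity is impossible at brain scale and
# real neurons settle for thousands of connections each instead.
for n in (180, 1_800, 18_000):
    print(n, "neurons ->", n ** 2, "connections")
```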

Yup.... crazy stuff like this is why I do science!

6 comments:

  1. This is completely awesome! I need to read it again to keep trying to fully make sense of it, but it's awesome! Thanks so much for explaining.

  2. Glad you liked it Dr. Ussishkin. Maybe I can explain more over cocktails at some point in the future.

  3. Hi, I'm a student of informatics engineering. This article is very informative, thanks for sharing :)