Neural Implants: A Wolf in Sheep’s Clothing

On Wednesday, March 11, USC was visited by Dr. Philip Alvelda, the current Program Manager of DARPA’s Biological Technologies Office. He gave a talk to a crowd of (primarily engineering) grad students and professors about “Cortical Modems”, DARPA’s vision of the future of assistive technology: a direct neural interface that costs $10. (You can see a similar presentation here, but be forewarned, they didn’t tape the slides! Humanity Plus magazine published an article about this Silicon Valley presentation, found here.) At first, I found his talk very inspiring, but as he progressed, I grew terrified by the ethical implications of the technology, and I wasn’t the only one with profound questions. During his talk, Dr. Alvelda espoused his commitment to the ethical issues involved and asked that researchers get involved in this aspect of the work (what we can do, versus what we should do). But his persistent dodging of the crowd’s ethical concerns left me deeply worried about the future that lies in store if this technology gains traction.

Currently, using optogenetics and optical imaging, scientists are able to observe the fluorescing of the ~85,000 neurons of the zebrafish brain in real time as they activate and deactivate. If we could get that level of resolution in particular human brain areas, such as primary visual (V1) or primary auditory (A1) cortex, we should be able to create interfaces with those parts of the brain. (This technology does not exist yet.)

The obvious clinical application of such a technology is in the field of prosthetics. Cochlear implants, for example, have been a tremendous boon to the deaf community (depending on whom you ask), but they have their limitations. They function by stimulating the auditory nerve directly, replacing the function of the hair cells. However, they don’t have the bandwidth of normal hearing: they allow a user to understand speech, but music is incomprehensible. And if the auditory nerve itself is damaged, cochlear implants can’t help.

Retinal prostheses (not yet commercially available) function in a similar fashion. If the ganglion cells of the eye and the optic nerve are intact, retinal prostheses can take over the function of the retina, but if the optic nerve is damaged, nothing can be done.

Neural interfaces, at the very least, could change the way we think about sensory prosthetics, but may have the potential to change the face of society in deeply frightening ways. Dr. Alvelda proposed that if we were able to create imaging technology that can observe brain activity at a high enough resolution, we would be able to read and write perceptual information directly from/into brain activity. In other words, we should be able to project images or sounds (or any other perception) directly to the brain! Restore sight to the blind and hearing to the deaf! (And balance to the clumsy?)

As stirring as this vision may sound, the frightening part enters when you consider how the development of this technology is likely to play out in practice. Excellent clinical technologies can be expensive to develop and often have a very limited market. For example, iDigitalTimes reported in 2013 that the i-Limb prosthesis from Touch Bionics Inc. can cost $38K–$120K, depending on how much of the arm is being replaced.

So how does one get funding for such a project? The easiest way is to expand the market to, well, everybody. A technology that anyone could benefit from is more likely to gain the attention of government funding agencies, not to mention the potential interest of venture capital. Aiding the clinical population becomes a side benefit, despite being the initial reason for developing the technology.

On one hand, you can start to see how this could begin to realize some of the coolest things science fiction writers and filmmakers have imagined about our future. Picture a world where augmented reality was integrated into your senses of sight and sound, rather than having to wear devices like the Oculus Rift, Google Glass, or headphones!

On the other hand, though, are the potential abuses of this technology. The abuse of human-machine interfaces and of genomics is an area that has been well explored in film and literature. The picture they paint is frightening, and raises a slew of ethical questions. Consider just the following examples:

  • In The Matrix, the minds of humanity are enslaved by visual and auditory neural implants. What’s to stop a tyrannical government (or terrorists) from hacking into our brains and controlling our minds?
  • In GATTACA, the advent of genetic technology leads to a society that practices eugenics and genetic discrimination. Would the advent of neural interfaces lead to a technological elite?
  • Similarly, in None So Blind, an award-winning short story by Joe Haldeman, a surgical operation is invented that dramatically increases human intelligence at the small cost of your eyeballs. The last line of the story is “The rest of us [i.e., not the protagonists] have to choose which kind of blindness to endure.” Aside from the creation of a technological elite, at what point is the cost of modification too high?

Any one of these worlds would be horrific to live in. It would be the kind of place that crushes the human spirit and foments rebellion and/or crime. (It could be argued that rebellion/crime are warranted when a ruling system is sufficiently corrupt!)

These are serious concerns that have worried the creative community for decades (if not centuries). They need to be taken seriously by researchers and by financial backers (be they public or private), and not dismissed lightly. The history of technology shows that there are always people willing to abuse new technologies, and that pioneering researchers tend to throw themselves headlong into a new field without enough caution about the consequences. (A recent case study: the horrific deaths associated with the first stages of gene therapy.) It’s not encouraging when someone asks a serious ethical question, like the concern about brain-hacking, only to be answered with “How is that different than the radio making me listen to ads I don’t want to hear?”

Neural interfaces are a potentially amazing solution for certain clinical populations. But the negative social ramifications are so powerful that it might be better if, just for once, the community of science and tech researchers says “This isn’t worth the risks.”

#SfN14 Wednesday: Rehabilitation for Movement Disorders

I’m going to continue posting about things I saw at #SfN14 in the upcoming weeks, both here and on the Medium.com site for PLOS’s coverage of the conference.

One of the two interesting posters I saw today presented unpublished work. But like my own work, it is unpublished because it is an in-progress engineering feat with tremendous potential. I was taken with the project, since the group is attacking a problem similar to our lab’s, with a different approach.

Developmental movement disorders are a lifelong issue for the patients diagnosed with them. They are not currently curable, but in many cases, treatments are available that can aid the lives of people with movement disorders.

In our lab, we are trying to develop treatments and assistive devices for children with dystonia, which has a relatively high incidence as pediatric movement disorders go. Currently available treatments tend to focus on either relieving symptoms (e.g., botox injections) or affecting brain activity (e.g., levodopa treatment, deep-brain stimulation). At the moment, we are doing a lot of analysis on how well deep-brain stimulation works for children with dystonia, as well as examining vibratory biofeedback therapy. We’ve also looked into non-invasive brain stimulation methods as a form of symptom relief, with mixed results.

The poster I saw today, however, was both striking in its approach and appealing to my sensibilities. The lab group, based at UCSD, is developing a form of biofeedback therapy that uses just about every non-invasive method we can get our hands on to treat subjects. (They are working with people who have Parkinson’s disease, but that disorder involves the same brain area as dystonia: the basal ganglia.) What they have accomplished so far is really less important than what they plan to accomplish: combining EEG, EMG, and haptic feedback from small, commercially available PHANTOM haptic robots with a virtual reality display for rehabilitation tasks.

Biofeedback therapy is already used in rehabilitation contexts, such as psychotherapy. BIOPAC, which had an exhibit at the conference this year, worked with USC’s Institute for Creative Technologies to develop Virtual Iraq, a multimodal virtual reality simulation of Iraq battle zones for the treatment of PTSD in our military veterans. In addition, there is overwhelming evidence that manipulating sensory feedback can facilitate motor learning. This approach also has the benefit of being non-invasive, which is something I feel strongly should be attempted whenever possible. Invasive procedures always carry risk and emotional trauma, and it is often unclear whether subjecting children to them is worth it.

I was really excited about the progress the research group has made on this project. While they are only currently examining how their setup can help patients with Parkinson’s, my feeling is that the technology has broader application in treating movement disorders.

My own exposure to dystonia research has been limited to pediatric cases, but there is one form of dystonia called focal hand dystonia (colloquially known as “writer’s cramp”) that forces the hand into abnormal postures whenever you try to make voluntary movements. This disorder is known to strike musicians and athletes, and can cripple careers. Normally, the only way to get rid of it is to stop playing and hope it goes away, but this technology has the potential to facilitate recovery. I’m eagerly looking forward to seeing how their work develops!

#SfN14 Day Two: sensorimotor learning

During the morning session of #SfN14 today, I was able to survey some of the posters in the “sensorimotor learning” poster session. One poster that grabbed my attention was presented by Peter Butcher, who works for Jordan Taylor at Princeton University. Their work was exploring what kinds of sensory feedback are actually associated with sensorimotor adaptation.

“Sensorimotor adaptation” refers to how we accustom ourselves to changes in the environment. It is often probed with experiments that perturb sensory feedback or apply unusual forces to subjects as they attempt to accomplish tasks. One of the most common setups for sensorimotor adaptation experiments is the “curl-field” paradigm: subjects hold a robotic arm during reaching tasks, but the arm is programmed to apply a velocity-dependent force perpendicular to the direction of movement, and subjects have to compensate for this force to accomplish their tasks.
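The force law at the heart of the curl-field paradigm is simple enough to sketch in a few lines. The sketch below is illustrative only; the gain value and function name are mine, not taken from any particular study:

```python
import numpy as np

def curl_field_force(velocity, gain=15.0):
    """Return a force perpendicular to the hand's velocity.

    The robot rotates the velocity vector 90 degrees and scales it,
    so faster movements feel a stronger sideways push. The gain
    (in N*s/m) is an illustrative value; its sign picks the curl
    direction (clockwise vs. counterclockwise).
    """
    vx, vy = velocity
    return np.array([-gain * vy, gain * vx])

# A hand moving straight ahead (+y) gets pushed sideways (-x);
# the force is always orthogonal to the movement itself.
force = curl_field_force((0.0, 0.3))
```

Because the force vanishes when the hand is still and grows with speed, subjects can’t simply brace against it; they have to learn a new mapping from intended movement to motor command, which is exactly the adaptation these experiments measure.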

In the experiment Peter presented, subjects were required to move a cursor into a target. Two types of feedback were given: either visual feedback at the end of the movement, or an integer score, where 100 means you’re in the target. Only the visual-feedback condition displayed adaptation; reward feedback did not cause the aftereffect you typically see when full visual feedback is restored. So the researchers asked: what is it about visual feedback that induces adaptation? They were able to determine that feedback about the direction of the target relative to your current position (without distance information) drives adaptation, while the converse (distance information without direction) does not seem to. (Of course, having both kinds of feedback together gave the best adaptation results.)
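To make the direction/distance split concrete, here is a minimal sketch of how the cursor-to-target error decomposes into the two feedback signals the poster contrasted. The function and variable names are mine, not from the study:

```python
import math

def feedback_components(cursor, target):
    """Split cursor-to-target error into direction and distance.

    Direction-only feedback (the angle toward the target) is the
    component found to drive adaptation; distance-only feedback
    (how far away the target is) was not, on its own.
    Names here are illustrative, not taken from the study.
    """
    dx = target[0] - cursor[0]
    dy = target[1] - cursor[1]
    direction = math.atan2(dy, dx)  # angle toward target, in radians
    distance = math.hypot(dx, dy)   # straight-line distance to target
    return direction, distance
```

The point of the decomposition is that the full visual error vector carries both components at once, so an experiment has to present one while withholding the other to tell which is doing the work.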

What does this mean about how we learn? From a rehab perspective, understanding which aspects drive learning and adaptation can help shape the way we train and retrain movements.

The Translational Session of the TCMC Satellite – Part 1 #SfN14

I was unfortunately unable to attend the afternoon “computational” session of the TCMC satellite, but I found the morning session very stimulating! Instead of going into detail on any of the talks, I think I’m going to summarize the findings they discussed, and I’ll go into more detail on another occasion (or upon request). I don’t know which of the authors was the speaker in some of the talks, but you can find the event’s schedule here.

There were six 20-minute talks, but I will only discuss two that I found interesting. The first was a study of whether motor memories are context-dependent. Previous research has shown that memory of fear conditioning in rats can be context-dependent, and the researchers of this study examined whether the state of the brain during motor learning can affect the development of motor memory. They manipulated brain state with transcranial direct current stimulation (tDCS), in which two electrodes are applied to the head and a current is run between them, flowing through the skull and into the brain. They used a very typical task for studies in my field, called a “curl-field task”: subjects push a robotic manipulandum forward, but the robot applies a velocity-dependent force perpendicular to the direction of movement. These researchers used tDCS to associate an up- or down-regulation of somatosensory cortex with curl fields in opposite directions. Then, in error-clamp trials (where only tDCS is applied and the manipulandum is constrained to move error-free, straight toward the target), the subjects performed the tasks as if the curl field associated with the applied type of tDCS were active!

During questions, a researcher who prefers transcranial magnetic stimulation (TMS) pointed out that with tDCS you don’t really know which brain areas you are affecting, because the current flows over relatively large areas of the brain. While this may be true, I was extremely interested in this study because in our lab, we have attempted to use tDCS to treat dystonia in a number of studies that have had mixed to negative results. If dystonia is related to a failure in motor learning, perhaps it is indeed possible to use tDCS to help patients retrain themselves?

Returning Absent Abilities: Developmental Dystonia Research and Device Development

The first few times I visited my advisor’s pediatric neurology clinic at Children’s Hospital Los Angeles as a new graduate student, I found the experience both beautiful and heart-wrenching. My advisor, Dr. Terry Sanger, specializes in pediatric movement disorders. At that point in my life, I’d had limited exposure to people who live with any sort of motor or cognitive disability. The kindness and caring of the doctors and the parents was deeply moving, but developmental movement disorders are a difficult challenge to live with.

BME on the Brain (Intro to #SfN14 Blogging)

(adapted from a previous post on Medium in the collection Collaborative Coverage of SfN 2014 by PLOS Neuro Community Bloggers)

I am where I am today, a neuroscience-oriented BME grad student, because of one of my father’s (and my) favorite movies: The Empire Strikes Back. (If you haven’t seen it, I’m about to say some spoilers, so skip this paragraph!) The struggle between good and evil resonates for any young child, but that’s not what caught my attention. After Luke escapes Darth Vader sans right hand, the close of the movie shows Luke having a prosthetic hand attached that looks and behaves exactly like the original. When I saw that, I said to myself, “I want to invent that someday”, and a biomedical engineer was born.

To Bayes, or not to Bayes?

Bayesian Integration and the Size-Weight Illusion

It has been quite a while since I’ve posted, so I thought I’d start the semester with some interesting science, and hopefully find time to ponder, philosophize, and pontificate later in the semester.

At the 2013 annual meeting of the Society for Neuroscience (SfN), I had the pleasure of meeting many people doing all sorts of interesting science. One such person was (now Dr.) Megan Peters. I met Megan at the Translational and Computational Motor Control (TCMC) satellite event, which tends to attract the same crowd as the Society for the Neural Control of Movement (NCM). We’ve kept in touch since then, and I had the opportunity to listen to Megan rehearse her dissertation defense; it was fascinating! Her work has some interesting overlaps with the theories we think about in our lab group, but I’ll have to leave talking about those things for after she publishes them! (If you like this post, see Megan’s webpage to keep up on her endeavors!)