There has been a huge amount of progress in the area of using brain probes to read our intentions and relay them to a limb that would otherwise be paralyzed (or, for that matter, to a prosthetic limb). I've written about related subjects in the past, both for EE Times and the IEE (now IET), but was really impressed to see that there has now been a full demonstration of the technology (albeit in a monkey) and that new clinical trials are underway to show how implanted brain probes can help real human patients. There's also an important trend towards wireless implants. If you're interested in this, you might want to read my new piece in New Scientist magazine on the subject.
Some cool research I didn't have space to cover in this feature is described in a new paper by Jose Carmena and Karunesh Ganguly about how our brains can learn to cope with this kind of implant. Until recently, most experiments with brain probes required regular, and often laborious, manual 'tweaking' of chip parameters. But it turns out that the brain can adapt to both the neural probe system (the part implanted in the brain that interprets the neural signals) and the behavior of whatever it is controlling. This is true whether the output is connected to something virtual, robotic, prosthetic, or part of the user's own damaged body.
What is critical is that the mapping between the brain signals and actuator movement stays constant over time: that is, you don't change the way the electronics interpret the signals or which muscles they control. If the mapping is held fixed, the brain will form a stable memory of how best to generate neural signals to drive the probe and whatever is connected to it, potentially allowing a patient to build up a sophisticated set of motor skills.
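To get an intuition for why a fixed mapping matters, here is a toy sketch (my own illustration, not the actual decoder from the paper). It assumes the simplest possible setup: a constant linear decoder that turns four simulated firing rates into a 2-D cursor velocity, and a stand-in for motor learning that nudges the firing rates by trial and error until the decoded movement matches the intended one. Because the weights never change, the learned firing pattern stays valid.

```python
# Hypothetical fixed linear decoder: 2-D cursor velocity from 4 simulated
# neurons. The weights W stand in for the brain-to-actuator mapping that,
# per the research described above, must stay constant over time.
W = [
    [0.5, -0.2],
    [0.1,  0.7],
    [-0.3, 0.4],
    [0.6,  0.1],
]

def decode(rates):
    """Map four firing rates to a (vx, vy) cursor velocity."""
    vx = sum(r * w[0] for r, w in zip(rates, W))
    vy = sum(r * w[1] for r, w in zip(rates, W))
    return vx, vy

def learn(target, steps=2000, lr=0.05):
    """Toy stand-in for motor learning: repeatedly adjust the firing
    rates to shrink the gap between the decoded velocity and the
    intended one. This only converges to a reusable pattern because
    W never changes between attempts."""
    rates = [0.0] * 4
    for _ in range(steps):
        vx, vy = decode(rates)
        ex, ey = target[0] - vx, target[1] - vy
        for i in range(4):
            # Nudge each rate in the direction that reduces the error
            # under the fixed mapping W.
            rates[i] += lr * (ex * W[i][0] + ey * W[i][1])
    return rates

rates = learn((1.0, -0.5))
vx, vy = decode(rates)
print(round(vx, 2), round(vy, 2))  # → 1.0 -0.5
```

If the weights in `W` were re-tweaked between sessions, the learned `rates` pattern would produce the wrong movement and the learning would have to start over, which is precisely the manual-recalibration burden the stable-mapping result removes.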
Of course, it makes sense that this would be true, given what we know about plasticity in the brain, but now we have direct evidence. It's yet another reason to believe that if we keep plugging away at these technologies we will eventually reap real benefits for real people.