When I first heard the story about the Feelspace or North belt, one of the things that excited me the most was that it demonstrated how we learn from all the stimuli we are regularly in contact with, even those that don't seem very special. We learn what does and doesn't 'feel right' from the swish of our winter coats to the sound of our shoes on the ground to the vibration of our cars. We don't just do passive sensing, but active: we interact with the world in routine ways and can tell something about our environment by the way it reacts to us.
Another thing that I thought was very interesting was that it showed how we already extend ourselves with technology. One of the things that occurred to me in the lecture where I first heard Peter König talk about this work was that taxi drivers might use their car as a kind of big North belt, and improve their sense of direction (in the car) by associating vibrations, the feeling of turns etc. with their position. So, for instance, if you put a London cabbie in a car, blindfolded, you might expect him or her to have a much better idea of where they were after five or ten minutes than your average driver. (London cabbies are good because they have to do the Knowledge, i.e. learn central London like the back of their hands, and they've been studied by neuroscientists who found they tend to have larger-than-normal hippocampi. See Maguire et al., PNAS 2000, if you want more.)
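The belt idea itself is simple enough to sketch. Just to make the principle concrete (this is an illustration only, not the actual feelSpace hardware or firmware, and the motor count is my assumption), imagine a ring of vibration motors around the waist, driven by a compass so that whichever motor currently points north buzzes:

    # Illustrative sketch of a "North belt": map a compass heading onto a
    # ring of vibration motors so that the one pointing north is active.
    # Motor count and geometry are assumptions, not the real device's specs.

    def active_motor(heading_deg: float, n_motors: int = 16) -> int:
        """Index of the motor closest to magnetic north.

        Motor 0 sits at the wearer's front; indices increase clockwise.
        heading_deg is the direction the wearer faces (0 = north, 90 = east).
        """
        sector = 360.0 / n_motors
        # Angle from the wearer's front round to magnetic north, in [0, 360).
        north_relative = (-heading_deg) % 360.0
        return round(north_relative / sector) % n_motors

    assert active_motor(0.0) == 0    # facing north: the front motor buzzes
    assert active_motor(90.0) == 12  # facing east: a left-side motor buzzes

The wearer never has to decode anything symbolic: north is simply 'the place that buzzes', much as a turn in the car is 'the way the seat pushes you'.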
Thinking about both of these, one thing seems critical: the laws of physics. Not understanding them, but internalising the bits that relate to our everyday interactions with the environment. My argument is simply that the more closely the artificial stimuli we feed into the body fit what we expect from this physical framework, the more quickly and easily we can digest that information. This is what I think an 'intuitive' display is.
So for instance, when I was at Charles Spence's lab in Oxford, we discussed his finding that the skin isn't a great means of getting information into the body: it can barely sense two separate stimuli at the same time, he said, never mind three. Interesting, but what kind of information is the skin supposed to be passing on? What should it be good at? We don't use our skin to 'see' shapes or to count; we generally use it to monitor things that are moving on our bodies: normal, benign things like hair and clothes, and less benign things like plants or animals that could do us damage.
Interestingly, Spence said he didn't have a model for what skin should be good at. As an experimental psychologist, his job is just to measure what it can and can't do and to try to find interactions that are useful. To me, this is unsatisfactory: without a model, the state-space of possible 'useful interactions' is enormous, and the chances of happening on the best ones are limited. Only with a clearer idea of what is going on can we start engineering more intuitive interfaces.
Check out the experiments by Ramachandran on body schema and getting people to sense a 'phantom nose'. It seems the brain is hardwired to treat any synchronous input as related and causal. I don't think there's anything special about people learning to use their inbuilt 'natural' senses - the developing brain must learn to use them in exactly the same way as artificial senses are learned.
We are probably primed to handle some of them (like sight and sound), but the brain will work with whatever input it gets, regardless of whether it's from an 'artificial' source. To the brain, your cochlea is just as external as a hearing aid or a North belt.
Posted by: Anonymous | Wednesday, 04 April 2007 at 11:14 AM