BrainStream: Controlling a wheelchair with your mind and subvocally-controlled, augmented-reality gestures

Sep 07, 2007 13:32


I've been waiting to write about this all morning (via BoingBoing). It seems a bunch of intrepid researchers at the Rehabilitation Institute of Chicago have taken work done on subvocal recognition (SVR) and applied it so that users can drive a wheelchair by "thinking" about where they want it to move.

(NB: I've contributed heavily to the SVR Wikipedia entry and liberally link to said service below. I acknowledge that most entries are not written by expert researchers, so YMMV.)

I first got excited about SVR after reading Cory Doctorow's "Down and Out in the Magic Kingdom," in which the protagonist makes phone calls and interacts with his PDA subvocally. That is to say, the device detects the electrical signals sent from his brain to his larynx that would normally be turned into speech in the vocal tract. NASA and DARPA have both done work on this in the past few years, with the subject wearing a collar that detects the signals.
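To make that concrete, here's a toy sketch (in Python) of the recognition step. It assumes the collar's signal processing has already reduced each silent utterance to a small feature vector; the vocabulary, the numbers, and the nearest-centroid approach are all my own invented stand-ins for whatever the real systems do.

```python
# Toy subvocal recognizer: map an incoming EMG feature vector to the
# closest known word. Everything here is hypothetical illustration.
import math

# Hypothetical training data: averaged feature vectors per silent word,
# using wheelchair-style commands as the vocabulary.
CENTROIDS = {
    "forward": [0.9, 0.1, 0.4],
    "left":    [0.2, 0.8, 0.3],
    "right":   [0.3, 0.2, 0.9],
    "stop":    [0.1, 0.1, 0.1],
}

def classify(features):
    """Return the word whose centroid is nearest to the incoming signal."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda word: dist(CENTROIDS[word], features))

# A subvocalized "left", as it might arrive from the sensor pipeline.
print(classify([0.25, 0.75, 0.35]))  # -> "left"
```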

I think my endgame with SVR is really what Cory had in mind in "Down and Out": being able not only to talk silently on the phone but also to control a PDA, surf the web, etc.

The phone hardware or service would convert the signals to speech. Think of it: ring-tones now, voice profiles in the future. Make your voice sound like anyone, in any language. Somehow I don't think the Intelligence Community would appreciate that; it makes it difficult to determine who is who on the phone and would kill Voice Stress Analysis. I'm guessing the FCC and/or the ITU would want some say in how that works. Other issues include protecting one's own voice profile and the damage this could do to voice-based biometrics.

I wouldn't worry; voice recognition as it stands today is still complete rubbish (kestrell, any thoughts on that from the accessibility side of things?). I still think the need to continually train each new piece of voice recognition software is a waste of time, and they should all be required to adhere to a voice-profile standard. Said standard would, in theory, let you train once and export the result in an open format, then import it into a fresh copy of the same software (got to love the lack of portability of licenses!) or into any similar product. At a minimum, it should bring the user halfway to training the program to work well. The big risk, again, is someone stealing this profile to break voice-based biometrics. Surely that can be mitigated with some nifty encrypted token of some sort with decent key recovery.
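As a sketch of what that portable format might look like: plain JSON for the profile itself, plus a keyed hash so a tampered or forged file gets rejected on import. Every field name here is made up, and a keyed hash only proves integrity; actually keeping a stolen profile unusable would need encryption on top of this.

```python
# Hypothetical open voice-profile format: JSON body + HMAC tag.
# Field names and the keyed-hash scheme are my own invention.
import hmac, hashlib, json

def export_profile(profile, secret):
    body = json.dumps(profile, sort_keys=True).encode("utf-8")
    tag = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return json.dumps({"profile": profile, "hmac": tag})

def import_profile(blob, secret):
    data = json.loads(blob)
    body = json.dumps(data["profile"], sort_keys=True).encode("utf-8")
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, data["hmac"]):
        raise ValueError("profile failed integrity check")
    return data["profile"]

# The secret would ideally live on that encrypted hardware token.
key = b"user-held secret"
blob = export_profile({"speaker": "me", "phoneme_models": {"ae": [0.1, 0.7]}}, key)
print(import_profile(blob, key)["speaker"])  # -> "me"
```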

There has been some interesting work done on voice-translation technology for soldiers in the streets of Iraq that could have commercial applications in the future. Think Babelfish meets voice recognition: speak with anyone regardless of their native language. Can you say "Universal Translator"?
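The plumbing is really just three stages chained together: recognize, translate, synthesize. Here's a toy version where every stage is a stub and the word-for-word lexicon is a gross simplification of real machine translation; the point is only the shape of the pipeline.

```python
# Toy "Universal Translator" pipeline: ASR -> MT -> TTS, all stubbed.
def recognize(audio):
    # Stand-in for speech recognition: pretend we decoded these words.
    return audio["spoken_words"]

def translate(words, lexicon):
    # Word-for-word lookup; real machine translation is far richer.
    return [lexicon.get(w, w) for w in words]

def synthesize(words):
    # Stand-in for text-to-speech output.
    return " ".join(words)

EN_TO_ES = {"hello": "hola", "friend": "amigo"}
utterance = {"spoken_words": ["hello", "friend"]}
print(synthesize(translate(recognize(utterance), EN_TO_ES)))  # -> "hola amigo"
```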

But back to the sci-fi. Combine subvocal recognition with augmented reality, GPS and gesture recognition (think Tom Cruise's manipulation of a multi-input virtual screen in "Minority Report") tied into a Net-connected PDA with a Head-Mounted Display (HMD). Hot damn.

The SVR would allow you to interact with the PDA silently. The GPS, combined with augmented reality as visualized through the PDA's HMD, would allow the user to view the entire physical world as annotated by Semantic Web metadata addicts, similar to some of the art installations touched upon in William Gibson's "Spook Country" (current reading). The gesture recognition, driven by sensors on clothing as well as the HMD, would let you reach into the augmented reality and manipulate objects that exist only in the geotagged cyberspace environment. Absolutely mind-blowing.
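The core query behind all of that is surprisingly mundane: given the user's GPS fix, which annotations are close enough to draw? A toy version (made-up annotations and all), using the haversine formula for great-circle distance:

```python
# Toy geotagged-annotation lookup: return every virtual object within
# view distance of the user's position. Annotation data is invented.
import math

ANNOTATIONS = [
    {"label": "ghost mural (art installation)", "lat": 41.8789, "lon": -87.6340},
    {"label": "historic marker overlay",        "lat": 41.8819, "lon": -87.6278},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(lat, lon, radius_m=500):
    return [a for a in ANNOTATIONS
            if haversine_m(lat, lon, a["lat"], a["lon"]) <= radius_m]

# Standing in downtown Chicago, both sample annotations are in range.
for obj in nearby(41.8800, -87.6300):
    print(obj["label"])
```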

Ye gods I love taking lunch to paddle down a brain-stream.

brainstream
