Tenth International Conference on Cognitive and Neural Systems:
http://cns-web.bu.edu/cns-meeting/2006conference.html
http://cns-web.bu.edu/cns-meeting/2006schedule.html
This review consists of 3 parts:
1) Plenary lecture:
Ken Ford (Florida
Institute for Human and Machine Cognition)
"Toward cognitive prostheses"
2) NSF Science of Learning Centers program and how its impact feels at the conference
3) Various results (mostly reviewed in my LJ already)
1) Eyeglasses are an example of a brilliant ocular prosthesis: human-centered, fitted to a particular person, and so on. Ford wants to build systems that can be regarded as cognitive or perceptual prostheses.
1.1) OZ cockpit display (the starting example):
we can have this aircraft instrument panel
http://radio.weblogs.com/0105910/2003/08/30.html
instead of traditional aircraft instrument panels:
http://en.wikipedia.org/wiki/Cockpit
"OZ decreases the time it takes for a pilot to understand what the aircraft is doing from several seconds to a fraction of a second". This is a huge difference for pilots in emergencies or combat, and for novice pilots. Ford discussed possible strategies for introducing this design into the real world. He wants to start with land-based controllers who need to simultaneously control multiple pilotless military aircraft, where the advantage of this design is obvious, there is no risk of human fatalities that could be blamed on the new interface, and there is no conservatism of the pilot community to overcome. For some reason, he did not mention the possibility of having the new display and the traditional display side by side. A paper mentioning some details of the OZ cockpit interface on its second and third pages is here:
http://www.computer.org/portal/cms_docs_computer/computer/homepage/misc/IHMC/r1061.pdf
1.2) Tongue - a "human USB port".
It turns out that earlier rumors about using electrodes on the tongue as an input channel are true, and that this method actually works with vision (full synaesthesia effects are claimed). They are thinking not so much of medical applications as of defense and space applications; e.g., an infrared-camera-to-tongue interface may be less burdensome than night vision goggles.
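The core signal path of such a sensory-substitution device can be sketched in a few lines: a camera frame is downsampled to the resolution of a small electrode array, and pixel intensity is mapped to stimulation amplitude. This is a hypothetical illustration, not the actual device design; the grid size and amplitude range below are made-up values.

```python
import numpy as np

def frame_to_electrode_pattern(frame, grid=(12, 12), amp_max=3.0):
    """Downsample a 2D grayscale frame (values in [0, 1]) to an
    electrode grid by block averaging, and scale each cell's mean
    intensity to a stimulation amplitude in [0, amp_max].
    Grid size and amplitude range are illustrative assumptions."""
    h, w = frame.shape
    gh, gw = grid
    # Trim the frame so it divides evenly into grid cells.
    trimmed = frame[: h - h % gh, : w - w % gw]
    # Reshape into (gh, cell_h, gw, cell_w) blocks and average each block.
    blocks = trimmed.reshape(gh, trimmed.shape[0] // gh,
                             gw, trimmed.shape[1] // gw)
    pooled = blocks.mean(axis=(1, 3))
    return pooled * amp_max  # one amplitude per electrode

# Example: a bright square on a dark background maps to a bright
# patch of electrodes in the middle of the grid.
frame = np.zeros((120, 120))
frame[30:90, 30:90] = 1.0
pattern = frame_to_electrode_pattern(frame)
print(pattern.shape)  # (12, 12)
```

The same scheme would apply to an infrared camera: only the source of the frame changes, which is presumably part of the appeal over head-mounted optics.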
1.3) 3D corset, e.g. for a helicopter pilot to sense "feelings on the skin of the helicopter".
1.4)
Personal Satellite Assistant -- a ball designed to move and operate independently in the microgravity environment of space-based vehicles. It has sensors for measuring gases, temperature, and air pressure, can perform video conferencing, and can communicate with electronic support devices. (Impressive videos.)
2) There is a lot of money in that new NSF program on the Science of Learning. I think it is prompted by the feeling that Americans are losing competitiveness in the global economy, and this is an attempt to find and use neuroscience-based methods to make learning in American schools more efficient.
Three centers have already been created under this program: CELEST (Boston), Learning in Informal and Formal Environments (University of Washington), and the LearnLab center (Pittsburgh). I think CELEST received $20 million (I am not sure what time period this amount covers), and an NSF program officer expressed the intention of spending up to $50 million per year. They want to view these centers as "national resources" and want to enable them to invite people.
Obviously, when that much money is involved, the impact has both positive and negative sides. For example, there was no tutorial day at the conference; instead there was a "CELEST Workshop on Auditory and Visual Attention". There was an "NSF Science of Learning Centers symposium", and in general there was the impression that the program is influenced by the presence of sponsors with big money. For example, no posters with "wild ideas" were present this time. There were also rumors of infighting between some scientists in connection with this new situation.
However, it's not all bad. Something is lost and something is gained here, in terms of new opportunities for research. For example, the Pittsburgh center's activity involves creating online tools and courses, and actually measuring what works and what does not in a real classroom. Testing what works and what does not in a real classroom is something people in this field never do, which is why American education is dominated by methods which suck big time, so this approach is quite revolutionary:
http://www.learnlab.org/
This is great, and the idea of trying to base future educational reforms on empirical data rather than on ideology should raise some hopes about the future of American education. But at the same time their project feels very industrial (one can feel a corporate spirit there, rather than the traditional academic spirit).
3) Various results:
3.1) Curvelets, a strange version of "sharp-looking, angular" wavelets:
http://anhinga-anhinga.livejournal.com/39755.html
3.2) Reinforcement learning with spikes:
http://anhinga-anhinga.livejournal.com/40266.html
3.3) What might eventually turn out to be the main event of the 10th ICCNS: Max Versace (a graduate student of Grossberg) finally did something to unify Grossberg's ART ("adaptive resonance theory") with spikes and synchronous oscillations:
http://anhinga-anhinga.livejournal.com/41032.html
3.4) Retrograde memory enhancement and stress:
http://anhinga-drafts.livejournal.com/3193.html
3.5) Norepinephrine and the tradeoff between exploitation and exploration:
http://anhinga-drafts.livejournal.com/3432.html