Dec 27, 2011 17:12
So I've been reading Inside Jokes the past few days, and one thing that I find both interesting and alarming in their overview of cognitive theories is just how much each theory relies on contemporary technology, not just as a framework but in a way that suggests the authors of each theory were mentally constrained by what was available. For example, the release theory of humour, the idea that humour is a release of nervous energy or tension, relies heavily on a gasoline model of cognition, in which cognitive energy can 'build up' over time in the hypothetical pipes of our brains. And the frame/script model of cognition, in which we carry around a stock of scripts pre-built from common features of our previous experiences, is considered an example of 'just-in-case' processing, a style of modelling that was widely practiced at the time. The alternative that Dennett et al. propose? Just-in-time spreading activation, a modelling process borrowed from current economics, where it's used extensively in inventory management at large companies. And of course our current models of cognition involve computation and neural nets and so forth.
All of this, of course, points to the idea that part of the reason we've been having so much trouble with computational modelling of cognition is that computation might be the wrong metaphor. Is it more wrong than the pipes-and-fuel model, or the gears-and-cogs models of the past? Probably not, considering how much more we can do with computation in general. But I definitely wouldn't rule out another paradigm shift or two before we hit on a model of cognition accurate enough that we can actually do stuff with it.
cogsci