Feb 06, 2009 17:03
Cox, "Perpetual Self-Aware Cognitive Agents."
A substantial disappointment after the title, this article discusses symbolic-logic-based planning systems that use "meta-cognition," thinking about their own thinking, to update their understanding of the world (or, rather, of the model worlds they inhabit) as they deal with their environment. Both are exemplary instances of GOFAI nonetheless. The first system discussed in the paper, "Meta-AQUA," merely interprets a simple story given to it, noticing and accounting for "unexpected" events: when some stage of the story doesn't agree with what the system would predict from its knowledge base, it updates that knowledge base so that it agrees with the story.

The second system discussed, "INTRO," seems slightly more interesting in that it actually acts within a model world, using essentially the same mechanism but with a second "planning" module that translates its conclusions into actions in its environment; thus it can notice "unexpected" events in its environment and generate appropriate responses to them. However, the author admits that INTRO currently has no means of deciding whether the unexpectedness of an event calls for a special response in the environment or for an update to the system's knowledge base; that decision is hard-coded. Furthermore, it is nowhere explained how these systems figure out which knowledge explains which anomaly, or which action is appropriate for which situation, and I suspect that this too may simply be hard-coded. Thus my continuing distrust of GOFAI.
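For what it's worth, the loop I take the paper to be describing is something like the sketch below; the names (KnowledgeBase, intro_step, requires_action) and the toy logic are mine, not Cox's, and the real systems use much richer case-based explanation machinery. The point is only to show where the hard-coded decision the author admits to actually sits.

# Minimal sketch of the expectation-failure loop, under my own assumptions;
# KnowledgeBase, intro_step, and requires_action are hypothetical names,
# not anything from the paper.

from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Toy stand-in for the agent's model of its (model) world."""
    facts: set = field(default_factory=set)

    def predict(self, situation):
        # Naive prediction: expect the situation to unfold as already believed.
        return situation if situation in self.facts else None

    def update(self, observed):
        # Meta-AQUA-style repair: revise the model so it agrees with the story.
        self.facts.add(observed)


def requires_action(event):
    # Hard-coded stand-in for the judgment the paper never automates:
    # which anomalies deserve a response in the environment.
    return isinstance(event, str) and event.startswith("threat:")


def intro_step(kb, situation, observed, act):
    """One INTRO-like cycle: detect an expectation failure, then decide
    (by the hard-coded rule above) whether to act or to revise beliefs."""
    expected = kb.predict(situation)
    if observed == expected:
        return "no anomaly"
    if requires_action(observed):
        act(observed)              # respond in the environment
        return "acted"
    kb.update(observed)            # update the knowledge base instead
    return "updated knowledge base"


if __name__ == "__main__":
    kb = KnowledgeBase(facts={"dog barks at mailman"})
    print(intro_step(kb, "dog barks at mailman", "dog barks at mailman", act=print))
    print(intro_step(kb, "dog barks at mailman", "dog bites mailman", act=print))

Run as written, the first call reports no anomaly and the second quietly rewrites the knowledge base, which is exactly the move that makes me suspicious: nothing in the loop itself says why belief revision rather than action was the right response.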