Apr 28, 2007 18:37
Yesterday, after the advice-seeking, I presented David with my ideas for automatic model induction.
He drew a line on the whiteboard. At the left end, he drew a bag of different cognitive model types (neural networks, ACT-R, Bayesian networks); at the right end, fully instantiated models.
His question was: how far to the left are you willing to push this idea?
This was a good question, to which I did not have an answer.
He thinks my idea is very exciting, because human expert modelers prune the search space too much (often to a single model, i.e. a single hypothesis) and stick with it as long as the data are not sufficient to reject it. Given the huge underdetermination in psychology, this methodology is pretty far from optimal. Also, scientists tend to stick to their paradigm, always using the same kinds of model. By searching through many kinds of models, my idea has the potential to improve methodology.
However, this model search is a computationally very hard problem unless I specify more constraints. My usual answer to this would be: let's do some task analysis and copy what humans do. But in this case, he would say that we gain nothing by having computers do the work. I think a middle ground is possible: yes, by using heuristics we lose some generality, but since computers can crunch more data than humans, they can look through a wider range of models and thereby improve the quality of the models being proposed today.
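To make this concrete for myself, here is a minimal sketch of what such a search loop might look like, under heavy assumptions: the candidate families are toy polynomial models standing in for genuinely different cognitive model classes, and BIC stands in for whatever complexity-penalized fit measure one would actually use. All names in it (fit_polynomial, search_models) are hypothetical.

    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        # Bayesian Information Criterion: penalizes fit by complexity; lower is better.
        return n_params * np.log(n_obs) - 2.0 * log_likelihood

    def fit_polynomial(x, y, degree):
        # Toy model family: response as a polynomial in trial number.
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        sigma2 = max(residuals.var(), 1e-12)
        # Gaussian log-likelihood at the maximum-likelihood variance estimate.
        log_lik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)
        return log_lik, degree + 2  # polynomial coefficients plus the noise variance

    def search_models(x, y, max_degree=5):
        # Fit every candidate family and rank the survivors by BIC.
        scored = []
        for degree in range(1, max_degree + 1):
            log_lik, n_params = fit_polynomial(x, y, degree)
            scored.append((bic(log_lik, n_params, len(y)), "poly(degree=%d)" % degree))
        return sorted(scored)  # best (lowest BIC) first

    if __name__ == "__main__":
        np.random.seed(0)
        x = np.arange(1.0, 101.0)
        y = 500 * x ** -0.3 + np.random.normal(0, 5, size=x.size)  # power-law practice curve
        for score, name in search_models(x, y):
            print("%s: BIC = %.1f" % (name, score))

The interesting (and hard) part, of course, is replacing the toy polynomial generator with generators for genuinely different model classes, which is exactly where the computational blow-up David pointed out comes in.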
Another issue was how experts select among multiple well-fitting models. A lot of tacit knowledge goes into this (sometimes you need to have read an obscure paper in order to prefer or disprefer a certain model). We had no proposal for automating this.
cogsci,
automated_science