Back in Providence

Jan 18, 2006 01:26


At her suggestion, I'm keeping my mind on my work, which is interesting: preparing this semester's Introduction to Artificial Intelligence course. It's all east coast artificial intelligence--meaning that it deals with explicit symbolic representations and rules, often with discrete inputs and results, as opposed to west coast AI, which is more about statistical methods and connectionist (artificial neural net) systems, in which representation is often encoded indirectly across smaller units. In the latter systems, the "intelligence" is an emergent property of the system. In the former--the subject matter of this class--the intelligence is something you can track just by looking at what's in front of you.

I have a more West Coast mentality, as far as AI goes. Especially after last semester's Computational Cognitive Science class, I'm all primed for anything Bayesian--which is a little more flexible than this other stuff--and even before the class I was biased towards emergence and skeptical towards explicit representation.

But I'm learning to love this material fast. Only a small part of it carries over from last year's course--search algorithms and, to some extent, constraint satisfaction problems, which are both key. But this semester we're getting deep into propositional and first-order logic, and good algorithms for theorem proving in those logics, which are fascinating and make you feel like if you could just dedicate twenty hours to setting up the problem right, you could get a computer to save the world. These are methods that actually have been used to solve all sorts of real-world, practical problems effectively, as well as totally abstract mathematical proofs. It's just sound thinking done automatically. Offloaded cognition. And what the world really needs, I think, is more and better cognition.
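To give a taste of what "sound thinking done automatically" can look like, here's a toy propositional resolution prover--my own illustration, not the course's code, and the clause encoding and names are all invented. It proves a query by adding the query's negation to the knowledge base and searching for a contradiction (the empty clause):

```python
# Toy propositional resolution (hypothetical sketch).
# A clause is a frozenset of literals; a literal is "P" or "~P".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (cancel one complementary pair)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def entails(kb, query):
    """Resolution refutation: does the KB entail a single-literal query?"""
    clauses = set(kb) | {frozenset([negate(query)])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # fixed point, no contradiction
            return False
        clauses |= new

# Example: from P, and P -> Q (written as ~P | Q), conclude Q but not R.
kb = {frozenset(["P"]), frozenset(["~P", "Q"])}
print(entails(kb, "Q"))  # True
print(entails(kb, "R"))  # False
```

The whole thing is just the cancellation rule applied to saturation--which is part of why these systems feel so transparent compared to a trained net.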

Seriously. Knowledge is power.

Another neat subject of the course: decision trees, which are just a cheap and easy way to represent knowledge about the world, but which can be learned by an easy algorithm (although one with, I think, an uncomfortable number of assumptions about the hypothesis space built in?--I'm still trying to work through exactly how powerful they are as a representation). Two interesting things: first, the learning algorithm is explicitly designed to account for parsimony, or Ockham's razor--the point of decision trees as a system of representation is that they allow for the quick compression of relevant knowledge about a problem.
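To make that parsimony point concrete, here's the information-gain calculation that the standard greedy tree-growing algorithm uses to pick splits. The dataset and attribute names are made up for illustration; the idea is that always splitting on the most informative attribute first is exactly what keeps the learned trees short:

```python
# Information gain for choosing decision-tree splits (toy sketch).
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(examples, attr):
    """Expected reduction in label entropy from splitting on attr."""
    labels = [label for _, label in examples]
    remainder = 0.0
    for v in {ex[attr] for ex, _ in examples}:
        subset = [label for ex, label in examples if ex[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

# Tiny invented dataset: will a student show up to lecture?
data = [
    ({"raining": True,  "early": True},  "no"),
    ({"raining": True,  "early": False}, "no"),
    ({"raining": False, "early": True},  "yes"),
    ({"raining": False, "early": False}, "yes"),
]
print(info_gain(data, "raining"))  # 1.0 bit: perfectly predictive
print(info_gain(data, "early"))    # 0.0 bits: useless, never split here
```

The greedy learner just computes this for every attribute, splits on the winner, and recurses on each branch.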

The other neat thing is that they don't have to be perfect because of something called ensemble learning: taking one just OK learning algorithm and packing many copies of it together, looking at different variations on the training data--seeing the same world in different ways--and getting a sort of aggregate opinion from all of them. The names for these methods are things like boosting and "bagging" (short for bootstrap aggregating). This allows a crunchy symbolic system to perform much more like a hippy connectionist or statistical system--
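A bare-bones sketch of bagging, with the weak learner and data invented purely for illustration: train many copies of a mediocre learner, each on its own bootstrap resample of the data, and predict by majority vote.

```python
# Bagging (bootstrap aggregating), minimal sketch with a toy weak learner.
import random
from collections import Counter

def bootstrap(data, rng):
    """Sample len(data) examples with replacement."""
    return [rng.choice(data) for _ in data]

def bagged_fit(fit, data, n_models=25, seed=0):
    """Train n_models copies of a weak learner, each on its own resample."""
    rng = random.Random(seed)
    return [fit(bootstrap(data, rng)) for _ in range(n_models)]

def bagged_predict(models, x):
    """Majority vote across the ensemble."""
    return Counter(m(x) for m in models).most_common(1)[0][0]

def stump_fit(data):
    """A deliberately weak learner: threshold x at the midpoint of the two
    class means (falling back to the majority class if a resample happens
    to contain only one class)."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    if not xs0 or not xs1:
        majority = 0 if len(xs0) >= len(xs1) else 1
        return lambda x: majority
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 0 if x < t else 1

# Made-up 1-D data: class 0 clusters low, class 1 clusters high.
data = [(x, 0) for x in (1.0, 1.5, 2.0, 2.5)] + \
       [(x, 1) for x in (4.0, 4.5, 5.0, 5.5)]
models = bagged_fit(stump_fit, data)
print(bagged_predict(models, 1.2))  # 0
print(bagged_predict(models, 5.2))  # 1
```

Each stump sees a slightly different world, and the vote smooths out any one stump's bad luck--that's the whole trick.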

Meinolf (the professor) says that essentially, bagged decision tree learners perform just about as well as neural nets--meaning they do about second or third best, on virtually every problem. Insert rant about bias/variance tradeoff borrowed from Griffiths here.

I think pretty much all of the above was lost on most of you. But this is stuff that really interests me--I'm not sure I could do justice to why. But maybe if you think about it this way: if you consider a single human learner, who has some capacity for learning, but is much more effective when combined with the long and broad tide of learners that have come before to pass around knowledge and evaluate it and reevaluate it--there's an analogue in ensemble learning. And it bridges a gap between individualist and social attitudes toward knowledge (and truth?).

It's a great group of TAs--I'm psyched. As I've mentioned, the Aristides Connection is there. Also, perhaps the only other undergrad who is as interested in AI and machine learning as I am is TAing with me--this is the first day I've had any real conversations with him, but I think we'll hit it off.

My only worry is that I have done today what I was hoping to do when I was feeling more and more pressured and uncomfortable at the end of break--found solace in my work as an escape from the doubts and conflicts I have in my personal life. Those are ominous and classically condemned priorities.

artificial intelligence, east coast, logic, west coast, cs141
