I have this long list of things I've been meaning to blog about over the past few months. I recall making a rather drunken post from BundyFest at the time, but I thought I might try a more sober round-up of the event. In brief, Alan Bundy, my PhD supervisor, is 61, so we had a symposium to celebrate his 60th birthday and the opening of the "Informatics Forum" - the rather fancy title of the new departmental building created for Informatics in Edinburgh, after their previous department burned down in mysterious circumstances.
Alan has worked primarily in Automated Reasoning, a field which, on the whole, I would say really took off in 1965 when Alan Robinson published "A Machine-Oriented Logic Based on the Resolution Principle". Alan Bundy started working in what was then the Metamathematics Unit at the University of Edinburgh in 1971, and this became a part of the Department of Artificial Intelligence in 1974. It's worth remembering how young the field was at this time. Automated Reasoning was only 10 years on from its first major result (Robinson's Resolution paper mentioned above) and it was only 20 years since
John McCarthy coined the term "Artificial Intelligence". So it's not surprising that Alan Bundy's early work also involved what are now considered separate subfields such as Machine Learning, Automated Software Engineering and Natural Language. At the symposium a number of his colleagues and students were invited to talk and they were spread across all these fields.
I would identify two major themes from the talks and panel sessions held at the symposium:
The question of what happened to Strong General Artificial Intelligence was kicked off by Fausto Giunchiglia in the first panel session. I'm adapting the terms somewhat, but the dream in the 1950s and 60s was, more or less, to produce an artificial human-like intelligence. "General Intelligence" refers to the desire to produce a flexible intelligent program which can adapt to a wide range of circumstances. Strong Artificial Intelligence is, broadly speaking, the belief that if a program appears intelligent (passes the Turing Test, for instance) then it actually would be intelligent. In 1973 came the Lighthill report (still a sufficiently open wound in 1993 that I had lectures on it as part of my MSc course), which essentially stated that the dream of General Intelligence was unachievable and that all the money so far invested in AI had been wasted, and which plunged (at least British) AI into what has become known as the "AI Winter". Since then the field has tended to focus on specific intelligence - producing smart techniques appropriate to solving particular problems - and has steered, almost pathologically, clear of claims that might be interpreted as any sort of ambition to create General Intelligence.
Kevin Warwick is, of course, the exception to this rule, but the culture is sufficiently ingrained that not only did I cringe when I recently had to post an item to the AISB website in which Kevin, bless, hinted that a program might pass the Turing Test in Reading this October, but I sat there murmuring "idiot" and "AI winter" under my breath.
Anyway, I think it became reasonably clear that most of us, secretly, would like to think that the work we do represents steps towards "General Intelligence", and a lot of arguments, both old and new, surfaced. For instance, there is the argument, originally put forward by Rodney Brooks in the 80s (particularly in his classic paper "Intelligence without Representation"), that what is essentially the classic Automated Reasoning approach to Artificial Intelligence is fundamentally flawed. I could rehash the argument but suffice it to say Automated Reasoning treats reasoning as the manipulation of symbols at some level (if "A" is true and "A implies B" then "B" is true, that sort of thing). Brooks argued that it is only an illusion that human reasoning works like that and that in fact intelligence arises at the level of neurons firing in response to stimuli. An oft-used analogy is that we didn't make aeroplanes by copying the way birds fly, and that you couldn't build your own flying machine simply by examining the external features of an aeroplane. A depressing fact in this argument for us GOFAI (Good Old-Fashioned AI) people is that all the recent major advances in Natural Language Processing have relied, essentially, on statistical methods rather than symbolic manipulation.
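Just to pin down what "manipulation of symbols" means here, the following is a toy forward-chaining loop in Python - purely illustrative, with made-up facts and rules, and nothing like a real theorem prover:

    # Toy illustration of reasoning as symbol manipulation: modus ponens
    # applied repeatedly (forward chaining) over made-up facts and rules.
    facts = {"A"}                      # things we take to be true
    rules = [("A", "B"), ("B", "C")]   # each pair means 'X implies Y'

    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)  # derive the consequent
                changed = True

    print(sorted(facts))  # ['A', 'B', 'C'] - "B" and "C" follow by pure symbol shuffling

Brooks' point, of course, is that nothing like that loop is going on in our heads.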
Simon Colton, who I love dearly, took this opportunity to tell us we are all tackling the wrong problem anyway. Only logicians are concerned with the problem of "How do I prove this statement is true" - which is essentially the problem that Automated Reasoning tackles. Everyone else wants to be told "something interesting that follows from these other things". Incidentally Simon is currently recovering from the shock of discovering that not only am I a Live Roleplayer (his shock of last year) but also a Doctor Who fan.
Of course, the other reason ungeneral intelligence has become so prevalent is that it is successful and has widespread industrial application. Techniques such as Expert Systems, Genetic Algorithms and Neural Networks, not to mention programming languages like LISP, Prolog and ML, have proved to have widespread applications and have assumed existences beyond the field of Artificial Intelligence. An interesting point, raised by Fausto, was that a lot of the problems we had thought were about making objects more intelligent have shifted to making the environment more intelligent. For instance, getting a car to detect that the road is bending to the right is quite complex, but if the road tells it that it's bending to the right the problem of automated car control becomes a lot simpler...
The other big theme to emerge was the Grand Challenge that Ontology reasoning presents to Artificial Intelligence. Ontologies are big hierarchical databases of knowledge (all birds can fly, penguins are a type of bird - that sort of thing). They have been proposed as the enabling technology for the semantic web (so instead of web searching on keywords alone, if you search for "bird" you'll also get results about "penguins" because the system knows they are a sort of bird). Some extremely large ontologies have been built, particularly in the field of medicine, where they have very real and immediate data organisation problems. The problem is that most of these large ontologies are inconsistent (the "penguins can fly" problem, as some of you may have noticed above, among other things) and automated reasoning is basically useless in the face of inconsistency. Once your facts are inconsistent you can prove pretty much anything you care to.
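To make the "penguins can fly" problem a bit more concrete, here is a toy sketch in plain Python (nothing like a real ontology language - the hierarchy and properties are made up) of how the clash arises when a fact asserted high up the hierarchy contradicts one asserted lower down:

    # Made-up mini-ontology: properties are inherited down the is_a hierarchy.
    is_a = {"penguin": "bird", "sparrow": "bird"}
    asserted = {"bird": {"can_fly": True}, "penguin": {"can_fly": False}}

    def properties(thing):
        """Gather inherited properties, noting any that contradict each other."""
        chain = [thing]
        while chain[-1] in is_a:
            chain.append(is_a[chain[-1]])
        props, clashes = {}, {}
        for t in reversed(chain):                        # walk from general to specific
            for key, value in asserted.get(t, {}).items():
                if key in props and props[key] != value:
                    clashes[key] = (props[key], value)   # inconsistency found
                props[key] = value
        return props, clashes

    print(properties("sparrow"))   # ({'can_fly': True}, {})
    print(properties("penguin"))   # ({'can_fly': False}, {'can_fly': (True, False)})

Letting the more specific fact override, as this toy does, is one way of dodging the contradiction, but a classical reasoner handed both facts unfiltered would happily prove anything at all.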
Other interesting factlets:
Ewen MacLean wrote a Jazz piece based on rippling (probably Alan Bundy's most famous automated reasoning technique), which was performed at the reception by what I gather are some of Scotland's best Jazz musicians. I don't pretend to understand how the music related to rippling though.
Everything was being filmed for posterity but that meant everyone taking the microphone to ask a question was supposed to identify themselves. Much amusement was thereby had every time Alan Bundy asked a question and started it by saying "I'm Alan Bundy".
Alan's PhD supervisor Reuben Goodstein was, in turn, supervised by Wittgenstein. I've always been rather pleased by the fact that, in PhD terms, I am a descendant of Wittgenstein and, of course, via Wittgenstein, of Bertrand Russell.
The most important thing Alan ever told me was that really intelligent people worry more about understanding things than looking stupid. Therefore they ask questions whenever they don't understand something. I'm not as good at this as I should be.