100 Current Papers in Artificial Intelligence, Automated Reasoning and Agent Programming. Number 5
Adam Sadilek and Henry Kautz. Location-Based Reasoning about Complex Multi-Agent Behaviour. Journal of Artificial Intelligence Research 43 (2012) 87-133
DOI: 10.1613/jair.3421
Open Access?: Yes!
The problem this paper addresses is that of inferring what is going on in some kind of group activity from noisy location data.
I'm incredibly shallow, so I'll confess that I was largely attracted to reading it because it involved playing capture the flag. The authors use the game as a case study, and it quite neatly shows the problem they are addressing. Suppose you have a lot of real-time GPS data showing the players in a game of capture the flag. Can you infer from it when a player has been captured and/or freed? Or even when a player is trying to capture or free another? And can you do it when the GPS data can be inaccurate by up to 10 metres and is sometimes simply missing?
The researchers went about this by getting a whole load of people to play capture the flag FOR SCIENCE (wired up with GPS units) and then analysing the resulting data.
They modelled the game using something called Markov Logic which (to keep things simple) is a logic that assumes you don't necessarily have all the information: some things may happen in a hidden fashion. Markov Logic involves assuming each agent (in this case, each player of the game) has some state, and that there are events (some of them hidden) that change an agent's state. So in terms of capture the flag, the two states of interest were free and captured, and the events capture and free could change them.
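Just to pin down what I mean by states and events, here's my own toy illustration (nothing to do with the paper's actual formalism): each player is a little state machine whose hidden state only changes when an event fires.

```python
# Toy illustration: a player's hidden state ("free" or "captured")
# only changes when an applicable event fires.
TRANSITIONS = {
    ("free", "capture"): "captured",
    ("captured", "free_event"): "free",
}

def step(state, event):
    """Return the player's new state after an event, or the old state
    if the event doesn't apply in that state."""
    return TRANSITIONS.get((state, event), state)

# A player starts free, gets captured, then a teammate frees them.
state = "free"
for event in ["capture", "free_event"]:
    state = step(state, event)
    print(event, "->", state)
```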
They encoded the game in this fashion with rules like the following (there's a rough sketch of how such rules look after the list):
- A player transitions from a free state to a captured state only via a capture event.
- If players a and b are enemies, a is on enemy territory and b is not, b is not already captured, and they are close to each other, then b probably captures a.
- If a player is captured then he or she must remain in the same location.
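In Markov Logic each such rule becomes a first-order formula with a weight attached. Purely to give the flavour (these are my own made-up predicates and weights, not the paper's actual rule set), the rules above might be written down roughly like this:

```python
# Sketch only: hard rules are treated as (in effect) infinite weight,
# soft rules get finite weights that are learned from labelled games.
# Convention here: capture(x, y, t) means "x captures y at time t".
from math import inf

rules = [
    # A player only moves from free to captured via a capture event.
    (inf, "free(a, t) ^ captured(a, t+1) => capture(b, a, t)"),
    # A nearby, uncaptured enemy on home ground probably captures the intruder.
    (1.8, "enemies(a, b) ^ onEnemyTerritory(a, t) ^ ~onEnemyTerritory(b, t)"
          " ^ ~captured(b, t) ^ close(a, b, t) => capture(b, a, t)"),
    # Captured players stay put.
    (inf, "captured(a, t) => samePlace(a, t, t+1)"),
]

for weight, formula in rules:
    print(f"{weight:>5}  {formula}")
```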
The idea of probably is modelled by weights: numbers attached to the rules that capture how strongly each one is expected to hold. One of the objectives of the procedure in the paper is to work out what those weights should be for the capture the flag rules. This was done by having a record of the actual captures and frees that happened in the games: the system was trained on three of the games to learn the weights, and then run on the fourth game to see if it could accurately spot the captures and frees. I'll frantically hand wave here and say that the weights, together with a check for consistency (e.g. if a player keeps moving they can't have been captured), allow the logic to make deductions about what is going on.
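To unpack the hand wave a little, here is my own back-of-the-envelope sketch of the standard Markov Logic idea (not the paper's actual inference procedure): a candidate "story" about what happened gets a score from the weights of the soft rules it satisfies, a violated hard rule rules it out entirely, and inference prefers the highest-scoring story.

```python
# Back-of-the-envelope sketch: score a candidate story by summing the
# weights of the soft rules it satisfies. Probability is proportional
# to exp(score), so higher score = more probable story.
from math import inf

def score(story, rules):
    total = 0.0
    for weight, holds in rules:
        if not holds(story):
            if weight == inf:
                return -inf      # violated hard constraint: impossible story
            continue             # unsatisfied soft rule contributes nothing
        if weight != inf:        # satisfied hard rules are simply required
            total += weight
    return total

# Two toy stories about one player: captured at t=3, or never captured.
story_a = {"captured_at": 3, "keeps_moving_after": False}
story_b = {"captured_at": None, "keeps_moving_after": True}

rules = [
    # Hard rule: a captured player can't keep moving.
    (inf, lambda s: not (s["captured_at"] is not None and s["keeps_moving_after"])),
    # Soft rule (made-up weight): the GPS evidence makes a capture at t=3 likely.
    (1.8, lambda s: s["captured_at"] == 3),
]

for name, story in [("capture at t=3", story_a), ("no capture", story_b)]:
    print(name, "score:", score(story, rules))
```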
The logical rules are also extended with the notion of attempted but failed captures and frees. This allows the system to deduce that some player was attempting a capture, as well as simply deducing that a capture had taken place.
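In the same spirit as the earlier sketch (and again this is my own invention, not the paper's formulation), an attempted capture might get its own predicate with the same sort of preconditions, with success left as a separate, soft matter:

```python
# Toy extension: attemptCapture(x, y, t) means "x tries to capture y at t".
# Weights here are made up; an attempt need not lead to an actual capture.
attempt_rules = [
    (1.2, "enemies(a, b) ^ onEnemyTerritory(b, t) ^ close(a, b, t)"
          " => attemptCapture(a, b, t)"),
    (0.9, "attemptCapture(a, b, t) => capture(a, b, t)"),
]

for weight, formula in attempt_rules:
    print(f"{weight:>4}  {formula}")
```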
The authors then evaluate their approach against several others to show that it is significantly more accurate at analysing the GPS data.
This entry was originally posted at http://purplecat.dreamwidth.org/72092.html.