One of the key problems in AI is how an agent can best learn in a largely unsupervised manner via interactions with its environment. Games provide an excellent way to test approaches to this problem. They provide ready-made environments of variable complexity, offering dynamic and unpredictable challenges. They enable the emergence of open-ended intelligent behaviour and provide natural metrics for measuring the success of that behaviour.
Two main ways to train agents given no prior expert knowledge are temporal difference learning (TDL) and evolution (or co-evolution). We'll study ways in which these methods can train agents for games such as Othello and Ms Pac-Man. The results show that each method has important strengths and weaknesses, and understanding these leads to the development of new hybrid algorithms such as EvoTDL, where evolution is used to evolve a population of TD learners. Examples will also be given of where seemingly innocuous changes to the learning environment have profound effects on the performance of each algorithm. The choice of architecture (e.g. the type of neural network) is also critical.
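For readers unfamiliar with temporal difference learning, here is a minimal sketch of TD(0) value learning on a classic toy problem (a five-state random walk). This is a generic illustration of the technique, not the specific agents, games, or EvoTDL algorithm from the talk; the state space, rewards, and parameters are chosen purely for the example.

```python
import random

# TD(0) value learning on a 5-state random walk: states 0..4,
# terminating off the left edge (reward 0) or the right edge (reward 1).
# The agent learns V(s), the expected return from each state, purely
# from its own interactions -- no expert knowledge supplied.

N_STATES = 5
ALPHA = 0.1   # learning rate
GAMMA = 1.0   # undiscounted episodic task

def run_episode(values):
    state = N_STATES // 2              # start in the middle state
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state < 0:             # terminated left: reward 0
            values[state] += ALPHA * (0 - values[state])
            return
        if next_state >= N_STATES:     # terminated right: reward 1
            values[state] += ALPHA * (1 - values[state])
            return
        # Core TD(0) update: V(s) <- V(s) + alpha*(r + gamma*V(s') - V(s)),
        # with r = 0 on all non-terminal transitions here.
        values[state] += ALPHA * (GAMMA * values[next_state] - values[state])
        state = next_state

random.seed(0)
values = [0.5] * N_STATES
for _ in range(5000):
    run_episode(values)
# values now approximate the true win probabilities 1/6, 2/6, ..., 5/6
```

The same bootstrapping update, applied to an evaluation function over board positions rather than a small table, is the essence of how TDL trains game-playing agents such as those for Othello.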
The main conclusion is that these are powerful methods capable of learning interesting agent behaviours, but there is still something of a black art in how best to apply them, and there is a great deal of scope for designing new learning algorithms. The talk will also include live demonstrations.
Official Website: http://www.goldsmiths.ac.uk/cccc/whitehead/index.php
Added by Kevan on November 16, 2007