AAMAS 2011 - May 2-6, 2011 - Taipei, Taiwan


Full-Day Tutorials

CoopMAS


Cooperative Games in MAS Chalkiadakis, Elkind, Wooldridge Cooperative (or coalitional) games provide an expressive and flexible framework for modeling collaboration in multi-agent systems. However, from a computational perspective, cooperative games present a number of challenges, chief among them being how they can be succinctly represented and how to reason efficiently with such representations. In this tutorial, we survey work on several aspects of cooperative games and their applications to multi-agent systems. We assume a basic knowledge of AI principles (e.g., rule-based knowledge representation, very basic logic), but no knowledge of game theory or cooperative games. We introduce the basic models used in cooperative game theory, and the relevant solution concepts. We then describe the key computational issues surrounding such models, and survey the main approaches developed over the past decade for representing and reasoning about cooperative games in AI and computer science generally. We then discuss the aspects of cooperative games that are particularly important in multi-agent settings, such as uncertainty and decentralized coalition formation algorithms. We conclude by presenting recent applications of these ideas in multi-agent scenarios.
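As a flavor of the basic models the tutorial covers, the sketch below (an invented three-player game, not tutorial material) defines a characteristic-function game and computes the Shapley value, one standard solution concept, by averaging each player's marginal contribution over all join orders:

```python
from itertools import permutations

# Illustrative 3-player characteristic-function game (values are hypothetical).
players = ("A", "B", "C")
v = {
    frozenset(): 0,
    frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
    frozenset("AB"): 60, frozenset("AC"): 60, frozenset("BC"): 60,
    frozenset("ABC"): 120,
}

def shapley_value(players, v):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

print(shapley_value(players, v))  # symmetric game: each player gets 40.0
```

Enumerating all orderings (and storing all 2^n coalition values) is feasible only for tiny games, which is exactly the succinct-representation and efficient-reasoning bottleneck the tutorial addresses.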

Dec. Making


Decision Making in MAS Doshi, Rabinovich, Amato, Spaan Choosing optimally among different lines of action is a key aspect of autonomy in agents. The process by which an agent arrives at this choice is complex, particularly in environments shared with other agents. Drawing motivation, in part, from search and rescue applications in disaster management, the tutorial will span a range of multiagent interactions of increasing generality, and study a set of optimal and approximate solution techniques for time-extended decision making in both noncooperative and cooperative multiagent contexts. This self-contained tutorial will begin with the relevant portions of game theory and culminate with several advanced decision-theoretic models of agent interactions.
The tutorial is aimed at graduate students and researchers who want to enter this emerging field or to better understand recent results in this area and their implications for the design of multi-agent systems. Participants should have a basic knowledge of probability theory and, preferably, utility theory.
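As a point of reference for the single-agent starting point of such decision-theoretic models, here is a minimal value-iteration sketch for a tiny Markov decision process (the MDP itself is hypothetical, not an example from the tutorial):

```python
# Illustrative two-state MDP (all numbers hypothetical), solved by value
# iteration, the basic single-agent model that multiagent frameworks generalize.
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
     1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]}}
R = {0: {"stay": 0.0, "go": 1.0},
     1: {"stay": 2.0, "go": 0.0}}
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        newV = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                       for a in P[s])
                for s in P}
        if max(abs(newV[s] - V[s]) for s in P) < tol:
            return newV
        V = newV

V = value_iteration(P, R, gamma)
# State 1 can earn reward 2 forever, so V[1] = 2 / (1 - gamma) = 20.
```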

Sec. Games


Security Games Kiekintveld, Gatti, Jain Game theory is an increasingly important paradigm for modeling and decision-making in security domains, including homeland security resource allocation decisions, robot patrolling strategies, and computer network security. Several deployed real-world systems use game theory to randomize critical security decisions to prevent terrorist adversaries from exploiting a predictable security schedule. The ARMOR system deployed at the LAX airport and the IRIS system deployed by the Federal Air Marshals Service were first presented at the AAMAS conference.
This tutorial will introduce a wide variety of game-theoretic modeling techniques and algorithms that have been developed in recent years for security problems. Introductory material on game theory and mathematical programming (optimization) will be included in the tutorial, so there is no prerequisite knowledge for participants. After introducing the basic security game framework, we will describe algorithms for scaling to very large games, methods for modeling uncertainty and attacker observation capabilities in security games, and applications of these techniques for randomized resource allocation and patrolling problems. At the end we will highlight the many opportunities for future work in this area, including exciting new domains and fundamental theoretical and algorithmic challenges.
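The basic security game framework can be illustrated with a deliberately tiny sketch (the payoffs and the brute-force grid-search solver are invented for illustration; deployed systems such as ARMOR solve much larger games with mathematical programming): a defender with one resource commits to randomized coverage of two targets, and the attacker observes the mixed strategy and best-responds.

```python
# Hypothetical 2-target security game: the defender has one resource and
# commits to covering target 0 with probability c (target 1 with 1 - c).
# Each target lists payoffs for the (defender, attacker) depending on
# whether the attacked target is covered or uncovered.
targets = [
    dict(d_cov=0, d_unc=-10, a_cov=-1, a_unc=5),   # target 0 (high value)
    dict(d_cov=0, d_unc=-4,  a_cov=-1, a_unc=2),   # target 1 (low value)
]

def best_commitment(targets, steps=10_000):
    """Grid-search the defender's optimal coverage, assuming the attacker
    best-responds and breaks ties in the defender's favor (as in a strong
    Stackelberg equilibrium)."""
    best = None
    for i in range(steps + 1):
        c = i / steps
        cov = [c, 1 - c]
        # Attacker's expected utility for attacking each target.
        a_util = [cov[t] * targets[t]["a_cov"] + (1 - cov[t]) * targets[t]["a_unc"]
                  for t in range(2)]
        m = max(a_util)
        # Among attacker best responses, take the defender-favorable one.
        d_util = max(cov[t] * targets[t]["d_cov"] + (1 - cov[t]) * targets[t]["d_unc"]
                     for t in range(2) if abs(a_util[t] - m) < 1e-9)
        if best is None or d_util > best[1]:
            best = (c, d_util)
    return best

c, u = best_commitment(targets)
# For these payoffs the optimal coverage c is about 2/3: just enough to make
# the attacker indifferent between the two targets.
```

The key structural point the sketch shows is that the defender benefits from committing to a *randomized* strategy, even against an attacker who observes it perfectly.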

Social Laws


Social Laws for MAS Ågotnes, Van der Hoek, Wooldridge The tutorial gives an overview of the state of the art in the use of social laws for coordinating multi-agent systems. It discusses questions such as: how can a social law that ensures some particular global behaviour be automatically constructed? If two social laws achieve the same objective, which one should we use? How can we construct a social law that works even if some agents do not comply? Which agents are most important for a social law to achieve its objective? It turns out that to answer questions like these, we can apply a suite of tools from the interdisciplinary tool chest of multi-agent systems. The tutorial also gives instruction in research practices and methodology in multi-agent systems: what are the key research questions of interest, and what are some of the most important methods employed in this interdisciplinary field?

Half-Day Tutorials

El. Negotiation


Agent-Mediated Electronic Negotiation La Poutre, Robu, Fatima, Ito This tutorial aims to give a broad overview of the state of the art in agent-mediated negotiation. The tutorial will focus on the game-theoretic foundations of electronic negotiations. We review the main concepts from both cooperative and competitive bargaining theory, such as Pareto optimality and the Pareto-efficient frontier, as well as the utilitarian, Nash, and Kalai-Smorodinsky (egalitarian) solution concepts. We discuss and compare games with complete and with incomplete information. Next, we exemplify these concepts through some well-known sequential bargaining games, such as the ultimatum game.
A particular emphasis will be placed on multi-issue (or multi-attribute) negotiation - a research area that has received significant attention in recent years from the multi-agent community. We discuss some of the challenges that arise in modeling negotiations over multiple issues, especially when no information (or only incomplete information) is available about the preferences of the negotiation partner(s), as well as some of the heuristics employed in AI and machine learning research to solve this problem. The second part of the tutorial focuses on multi-issue negotiations subject to realistic limitations such as time constraints, computational tractability, private information, and online settings.
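For illustration, the sketch below (over a hypothetical discrete outcome set, not tutorial material) computes the Pareto frontier and the Nash bargaining solution, which maximizes the product of the parties' gains over the disagreement point:

```python
# Hypothetical discrete bargaining problem: each outcome gives utilities
# (u1, u2); d is the disagreement point reached if negotiation fails.
outcomes = [(9, 1), (7, 4), (5, 6), (3, 8), (1, 9)]
d = (2, 2)  # disagreement utilities

def pareto_frontier(points):
    """Outcomes not weakly dominated by any other outcome."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

def nash_solution(points, d):
    """Among outcomes that improve on d for both parties, maximize the
    product of gains (the Nash bargaining product)."""
    feasible = [p for p in points if p[0] >= d[0] and p[1] >= d[1]]
    return max(feasible, key=lambda p: (p[0] - d[0]) * (p[1] - d[1]))

print(pareto_frontier(outcomes))
print(nash_solution(outcomes, d))  # (5, 6): product of gains 3 * 4 = 12
```

Changing `d` or replacing the product with, e.g., the Kalai-Smorodinsky criterion selects different points on the same frontier, which is the kind of comparison between solution concepts the tutorial develops.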



Multi-Agent Reinforcement Learning I: Algorithms and Analysis Methods De Jong, Kaisers, Melo, Nowe, Tuyls Participants will be taught the basics of single-agent reinforcement learning (RL) and the associated theoretical convergence guarantees for Markov Decision Processes (MDPs). We will then outline how these guarantees are lost in a setting where multiple agents learn, and introduce a framework, based on game theory and evolutionary game theory (EGT), that allows thorough analysis and prediction of the dynamics of multi-agent learning. We also discuss a fundamental question that designers of multi-agent learning algorithms are confronted with: what is it we want the agents to learn?
Fairness is shown to be an important consideration here, especially when systems are designed to collaborate with human agents. Finally, the last part of the tutorial will focus on reward-free multi-agent scenarios, in which the agents learn a task by observing other agents perform it. We introduce several social learning mechanisms that have been gathering increasing attention and that may lead to different outcomes than individual RL.
The tutorial is offered in two half-day parts. Participants can register for each separate part (at the cost of a half-day tutorial), or for both parts (at the cost of a full-day tutorial).
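As a flavor of the EGT-based analysis in Part I, the following minimal sketch (standard prisoner's dilemma payoffs; the Euler discretization is our own simplification) integrates single-population replicator dynamics, the kind of model used to predict where multi-agent learning converges:

```python
# Single-population replicator dynamics for the prisoner's dilemma.
# Strategies: 0 = cooperate, 1 = defect. A[i][j] = payoff of i against j.
A = [[3, 0],
     [5, 1]]

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i(x) - average fitness)."""
    f = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * f[i] for i in range(2))
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(2)]

x = [0.9, 0.1]  # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x)
# Defection strictly dominates, so x converges toward [0, 1]:
# the dynamics predict that naive independent learners end up defecting.
```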



Multi-Agent Reinforcement Learning II: Learning with and from Other Agents De Jong, Kaisers, Melo, Nowe, Tuyls See Multi-Agent Reinforcement Learning I above.


Copyright © 2011 IFAAMAS