Heterogeneous Agent Systems

Two-week introductory course at the
European Summer School in Logic, Language and Information 2002
(Aug. 5-16, 2002, Trento, Italy)

 

 

Lecturers: J. Dix, Th. Eiter


 



 

Here is a detailed plan of the various chapters:
     
Chapter 1: Introduction, Terminology, Fundamental Architectures (1 Lecture)
     
We illustrate the difference between Multi-Agent Systems and classical distributed AI, give a first definition, and introduce the terminology.
1.1 General: Distributed AI vs. Multi-Agent Systems.
1.2 Intelligent Agents: Intelligent interactive Agents, Definition of an Agent, Properties of the environment, reactive/proactive/social, Agent vs. Object-Orientation, PAGE-Description.
1.3 Formal Definitions: Abstract view of an agent as mathematical functions: action: S* --> A, env: S x A --> Pot(S), hist, see: S --> P, next: I x P --> I (a small Python sketch of these signatures follows this list).

1.4 Reactive Agents: Formal Model, Behaviour, Inhibition Relation, Complexity, Pro/Contra, Example: Polar Lander, Cooperative Programming.
1.5 BDI-Agents: Stubborn vs. hesitant Agents (Factor gamma), BDI-Structure.
1.6 Layered Architectures: Fig. 2.1 [Fig. 1.6 on p. 62], Touring Machine Fig. 2.2 [Fig. 1.7 on p. 63], Autonomous Vehicle, Planning/Modelling/Control Layers, InteRRaP: Fig. 2.3 [Fig. 1.8 on p. 65].
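To make the signatures of 1.3 concrete, here is a minimal Python sketch; the thermostat-style states, percepts, and actions are invented for illustration and are not part of the course material.

# A minimal sketch of the abstract agent view of 1.3 (illustrative names;
# states, actions and percepts are plain strings standing in for the
# abstract sets S, A and P).
from typing import Set

State, Action, Percept, Internal = str, str, str, str

# see: S --> P       the agent's perception function
def see(state: State) -> Percept:
    return "warm" if state == "heater_on" else "cold"

# next: I x P --> I  update of the agent's internal state
def next_internal(i: Internal, p: Percept) -> Internal:
    return p  # this toy agent just remembers its last percept

# action: I --> A    (state-based variant of action: S* --> A)
def action(i: Internal) -> Action:
    return "switch_off" if i == "warm" else "switch_on"

# env: S x A --> Pot(S)  a (possibly non-deterministic) environment
def env(state: State, a: Action) -> Set[State]:
    return {"heater_on"} if a == "switch_on" else {"heater_off"}

# one step of the agent/environment loop
s: State = "heater_off"
i: Internal = ""
p = see(s); i = next_internal(i, p); a = action(i)
print(a, env(s, a))   # switch_on {'heater_on'}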
     
Chapter 2: Distributed Decision Making (2 Lectures)
     
We consider the most important techniques for decision making: voting, auctions, and bargaining.
2.1 Evaluation Criteria: Social Welfare, Pareto Efficiency, Individual Rationality, Stability (Nash Equilibrium).
2.2 Voting: Arrow's Theorem, ways out, Binary Protocol: Fig. 2.1 [Fig. 5.1 on p. 206], Borda Protocol: Fig. 2.2 [Table 5.2 on p. 206] (a small Borda-count sketch follows this list).
2.3 Auctions: 4 types (from first-price open-cry to second-price sealed-bid), private/common/correlated value, Dominant Strategies, Profit for the Auctioneer, Non-optimal Allocations, Lying and Counterspeculation in Vickrey Auctions, Lookahead.
2.4 Bargaining: axiomatic/strategic, Discount Factor: Table 2.1 [Table 5.4 on p. 222], Table 2.2 [Table 5.5 on p. 222], bargaining costs.
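As an illustration of the Borda protocol mentioned in 2.2, a minimal Python sketch; the three voters and their preference profiles are invented for illustration.

# Minimal Borda count sketch (illustrative profiles, not from the course).
from collections import defaultdict

# Each voter ranks all candidates, best first.
profiles = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["b", "a", "c"],
]

def borda(profiles):
    scores = defaultdict(int)
    n = len(profiles[0])
    for ranking in profiles:
        for pos, cand in enumerate(ranking):
            scores[cand] += n - 1 - pos   # n-1 points for first place, 0 for last
    return max(scores, key=scores.get), dict(scores)

winner, scores = borda(profiles)
print(winner, scores)   # b wins with 5 points (a: 3, c: 1)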

     
Chapter 3: Contract Nets and Coalition Formation (1 Lecture)
     
We introduce two of the most important principles developed for Agent systems: Contract Nets (how to determine which contracts should be taken) and Coalition Formation (how to determine if and with whom agents should work together).
3.1 Contract Nets: Task Allocation Model, IR Contracts, Decision making: MC_add, MC_remove, Gradient Descent, Maximising social welfare, anytime algorithm. 4 types of contracts: O, C, S, M. Problem with local maxima. OCSM-Contracts: Gradient Descent without Backtracking.
3.2 Coalition Formation: Nash vs. Strong Nash equilibrium.
3.2.1 Coalition Formation for CFGs: Def. CFG, superadditive games, search through the CS graph: Fig. 3.1 [Fig. 5.3 on p. 244], Approximations, last two layers: CS-Search 1: Fig. 3.2 [Fig. 5.4 on p. 246], improving the bound by breadth-first search from the top.
3.2.2 Parallelization of the search.
3.2.3 Payoff Division: 3 axioms, Shapley Value (a small worked sketch follows this list).
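The Shapley value of 3.2.3 averages each agent's marginal contribution over all orders in which agents can join the coalition. A minimal worked sketch in Python, with an invented three-agent characteristic function:

# Minimal Shapley value sketch for a 3-agent characteristic function game
# (the value function v is invented for illustration).
from itertools import permutations

agents = ["a", "b", "c"]
v = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1, frozenset("c"): 1,
     frozenset("ab"): 4, frozenset("ac"): 4, frozenset("bc"): 4,
     frozenset("abc"): 9}

def shapley(agents, v):
    phi = {i: 0.0 for i in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = frozenset()
        for i in order:
            # marginal contribution of i when joining in this order
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in agents}

print(shapley(agents, v))   # symmetric game: each agent gets 3.0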

     
Chapter 4: Agent Programming Languages (1 Lecture)
     
Starting with Yoav Shoham's seminal work on agent-oriented programming, we will briefly present several high-level agent programming languages that have been developed since, in particular languages which employ logic as a knowledge representation and reasoning tool.
4.1 Agent Oriented Programming: The AOP paradigm; AOP versus object-oriented programming; AOP framework; generic interpreter (a sketch of the interpreter loop follows this list).
4.2 AGENT-0: Agent-0 language design; example.
4.3 PLACA: Agent-0 limitations; PLACA refinement; mental change rules.
4.4 AGENT-K: KQML; enhancing Agent-0 with KQML pragmatics.
4.5 Concurrent MetateM: Imperative future paradigm; temporal logic; MetateM programs; Concurrent MetateM; communication and interfaces.
4.6 AgentSpeak: agent architecture; agent operation cycle.
4.7 3-APL: agent architecture; practical reasoning rules; agent operation cycle; comparison of different agent interpreters.
4.8 ConGolog: situation calculus; ConGolog agents.
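The generic AOP interpreter of 4.1 repeatedly (i) reads incoming messages and updates the mental state and (ii) executes the commitments that are due. A minimal Python sketch of this loop; the data structures and message formats are illustrative, not AGENT-0 syntax.

# Minimal sketch of the generic AOP interpreter loop (illustrative data
# structures; this is not AGENT-0 syntax).
from dataclasses import dataclass, field

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)
    commitments: list = field(default_factory=list)   # (time, action) pairs

def update(state: MentalState, messages: list) -> None:
    """Phase 1: incorporate incoming messages into beliefs/commitments."""
    for kind, content in messages:
        if kind == "inform":
            state.beliefs.add(content)
        elif kind == "request":
            state.commitments.append(content)   # naive: accept every request

def execute(state: MentalState, now: int) -> None:
    """Phase 2: execute the commitments whose time has come."""
    due = [(t, a) for (t, a) in state.commitments if t <= now]
    for t, a in due:
        print(f"t={now}: executing {a}")
        state.commitments.remove((t, a))

state = MentalState()
inbox = [("inform", "door_open"), ("request", (2, "close_door"))]
for clock in range(4):          # the agent's control loop
    update(state, inbox)
    inbox = []                  # messages are consumed once
    execute(state, clock)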
     
Chapter 5: IMPACT Architecture
     
We will introduce two running examples, discuss the underlying agent and server architecture, and introduce the language in which services offered by agents can be expressed.
5.1 Scenarios: CFIT, STORE.
5.2 Agent/Server Architecture: Transducers, Wrappers, Mediators, Verb and Noun-Term Hierarchies, Service Names, Distances, Metrics, Matchmaking, Composite Distances.
5.3 The Code Call Mechanism: As one of our main motivations is to "agentize" legacy code, we discuss how we abstract from the software (the state of an agent) and how we encapsulate real data in a format that is amenable to logic (code calls); a small sketch follows this list. We also introduce integrity constraints and the message box (which is part of each agent).
5.4 Message Box: Definition of the msgbox, associated functions, formats.
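A minimal sketch of the code call idea from 5.3: legacy code is wrapped so that each call pkg:f(args) returns a set of objects, which rule bodies can then query via membership atoms such as in(X, pkg:f(args)). The wrapper and the toy data below are purely illustrative, not the IMPACT implementation.

# Minimal sketch of agentizing legacy code via code calls (illustrative only;
# not the IMPACT implementation).

# Some "legacy" data source: a plain in-memory product table.
LEGACY_DB = [
    {"id": 1, "item": "printer", "price": 120},
    {"id": 2, "item": "scanner", "price": 80},
]

def code_call(package: str, func: str, *args):
    """Evaluate pkg:f(args) and return a *set* of answer objects."""
    if (package, func) == ("store", "cheaper_than"):
        (limit,) = args
        return [row for row in LEGACY_DB if row["price"] <= limit]
    raise ValueError(f"unknown code call {package}:{func}")

# A code call atom in(X, store:cheaper_than(100)) binds X to each answer.
for x in code_call("store", "cheaper_than", 100):
    print("in(X, store:cheaper_than(100)) with X =", x)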

     
Chapter 6: Actions and Agent Programs
     
After introducing actions and three notions of concurrently executing actions, we introduce action status atoms and define what an agent program is. The rest of the lecture is devoted to defining various semantics for agent programs: feasible, rational, and reasonable status sets.
6.1 Action Base: Actions, action atoms, precondition, add/delete lists.
6.2 Execution and Concurrency: Concurrency notion conc, executability, weakly, sequentially, and fully concurrent execution.
6.3 Action Constraints: Satisfaction of action constraints.
6.4 Agent Programs: Syntax: action status atoms, agent rules, safety of rules, agent decision cycle (a small sketch of one rule follows this list).
6.5 Status Sets: deontic/action consistency, deontic/action closure, Operator App.
6.6 Feasible Status Sets: definition of feasibility.
6.7 Rational Status Sets: definition of rationality.
6.8 Reasonable Status Sets: definition of reasonable.
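The rules of 6.4 attach a deontic status (P, F, O, W, Do) to actions and have code call conditions and status literals in their bodies. A minimal Python stand-in for evaluating one such rule; the scenario and names are invented, and this is not the actual agent-program syntax.

# Minimal Python stand-in for one agent rule with action status atoms
# (illustrative; not the actual agent-program syntax).
# Rule: Do(adjust_course(Z)) <- in(Z, autopilot:no_fly_zones()), not F(adjust_course(Z))

def cc_no_fly_zones():
    """Code call autopilot:no_fly_zones() -> set of zone identifiers."""
    return {"Z1", "Z7"}

forbidden = {"adjust_course(Z1)"}   # F-atoms derived elsewhere in the program

def apply_rule():
    """Derive the Do-atoms licensed by the single rule above."""
    derived = set()
    for z in cc_no_fly_zones():
        atom = f"adjust_course({z})"
        if atom not in forbidden:        # the negative status literal: not F(...)
            derived.add(f"Do({atom})")
    return derived

print(apply_rule())   # {'Do(adjust_course(Z7))'} -- Z1 is forbidden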

     
Chapter 7: Implementing Agents: An Application
     
In this lecture, we determine a syntactically defined class of agents that can be efficiently implemented (in polynomial time): regular agents. They satisfy four properties: strong safety (to ensure that code call conditions can be evaluated in finite time), conflict-freeness (to ensure that agent programs are consistent), deontic stratification (to ensure there are no loops through negation), and boundedness (to ensure a program can be unfolded); a small sketch of the strong-safety idea follows the list below. We also give an application.
7.1 Weakly Regular Agents: Strong Safety, Binding Patterns, Finiteness Table, Conflict Freedom, Deontic Stratification.
7.2 Regular Agents: Unfolding.
7.3 IADE: GUI.
7.4 Gofish Post Office.
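In the spirit of the strong-safety condition of 7.1: a code call condition is only admitted if, in the chosen evaluation order, every argument the call's binding pattern requires is already bound and the finiteness table guarantees a finite answer set. A minimal Python sketch of such a check; the table entries, conditions, and encoding are invented for illustration and are not the actual IMPACT definitions.

# Minimal sketch of a binding-pattern / finiteness check in the spirit of
# strong safety (illustrative; not the actual IMPACT definitions).

# Which argument positions must be bound, and is the answer set then finite?
FINITENESS_TABLE = {
    ("store", "cheaper_than"): {"must_be_bound": [0], "finite": True},
}

def strongly_safe(condition, bound_vars):
    """Check a sequence of code call atoms left to right."""
    for (pkg, func, args, out_var) in condition:
        entry = FINITENESS_TABLE.get((pkg, func))
        if entry is None or not entry["finite"]:
            return False                      # no finiteness guarantee
        for pos in entry["must_be_bound"]:
            arg = args[pos]
            # variables are written as upper-case strings in this toy encoding
            if isinstance(arg, str) and arg.isupper() and arg not in bound_vars:
                return False                  # required argument still unbound
        bound_vars = bound_vars | {out_var}   # the output variable becomes bound
    return True

# in(X, store:cheaper_than(100)) is safe; in(X, store:cheaper_than(Y)) is not
print(strongly_safe([("store", "cheaper_than", [100], "X")], set()))   # True
print(strongly_safe([("store", "cheaper_than", ["Y"], "X")], set()))   # False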

     
Chapter 8: Planning in Agent Systems
     
In this lecture, we consider how planners can be incorporated into agent systems for their mutual benefit.
8.1 Planners vs. Agents: Assumptions underlying planners.
8.2 HTN Planning: Methods and operators (a small decomposition sketch follows this list).
8.3 Agentising SHOP: Agentised methods/operators. ASHOP algorithm. Soundness/completeness.
8.4 NEO Domain: Data sources involved, phases of the evacuation process.
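To make 8.2 concrete: an HTN method decomposes a compound task into subtasks when its precondition holds, while an operator changes the state directly. A minimal Python sketch of one decomposition step; the travel domain is invented for illustration, and this is not SHOP/ASHOP syntax.

# Minimal HTN sketch: one method and two operators (illustrative travel
# domain; not SHOP/ASHOP syntax).

state = {"at": "home", "has_ticket": False}

# Operators: primitive tasks that update the state directly.
def op_buy_ticket(s):  s["has_ticket"] = True
def op_take_train(s):  s["at"] = "trento"
operators = {"buy_ticket": op_buy_ticket, "take_train": op_take_train}

# Method: decompose the compound task "travel" if its precondition holds.
def method_travel(s):
    if s["at"] == "home":                    # precondition
        return ["buy_ticket", "take_train"]  # ordered subtasks
    return None

def decompose(task_list, s):
    """Depth-first task decomposition, as in HTN planning."""
    plan = []
    for task in task_list:
        if task in operators:
            operators[task](s)
            plan.append(task)
        elif task == "travel":
            subtasks = method_travel(s)
            if subtasks is None:
                raise RuntimeError("no applicable method")
            plan += decompose(subtasks, s)
    return plan

print(decompose(["travel"], state), state)   # ['buy_ticket', 'take_train'] ...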

     
Chapter 9: Extensions of the Basic Framework
     
This chapter describes extensions of our approach, including beliefs, uncertainty, and time. They are described only at a high level, without going too much into technical details.


9.1 Meta Agent Reasoning
We extend agent programs by beliefs: agents are allowed to have beliefs about others and to use them in their programs: meta agent programs. We illustrate the notion of a belief status set and show that we can implement meta agent programs by ordinary programs using extended code calls.
9.1.1 Belief Language and Data Structures: Belief atoms, belief language, belief (semantics) table.
9.1.2 Meta Agent Programs and Status Sets: We introduce meta agent programs and illustrate what the semantics looks like.
9.1.3 Reduction to Ordinary Programs: We show how meta agent programs can be transformed into ordinary agent programs and thus implemented.

9.2 Probabilistic Agent Reasoning
We extend agent programs so that they can deal with uncertainty: code calls now return random variables.
9.2.1 Probabilistic Code Calls: The machinery with random variables (a small sketch follows below).
9.2.2 Probabilistic Agent Programs: We introduce probabilistic programs and illustrate their semantics.
9.2.3 Kripke-Style Semantics: We show how to overcome some limitations of the semantics introduced in 9.2.2. The price to pay is an exponential Kripke semantics.
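As a high-level illustration of 9.2.1: a probabilistic code call can be thought of as returning random variables, i.e. sets of possible values with attached probabilities, instead of plain values. A minimal Python sketch with an invented distribution:

# Minimal sketch: a probabilistic code call returns a random variable,
# i.e. possible values with probabilities (illustrative only).

def pcc_position(plane_id):
    """Probabilistic code call: where might the plane be?"""
    return [("sector_A", 0.7), ("sector_B", 0.3)]

# A rule body such as in(P, radar:position(p1)) now ranges over weighted answers.
for value, prob in pcc_position("p1"):
    print(f"P = {value} with probability {prob}")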

9.3 Temporal Agent Reasoning
We extend agent programs by incorporating time: agents are allowed to make commitments into the future.
9.3.1 Timed Actions: Actions have a duration, and the state needs to be updated at prespecified points (checkpoints).
9.3.2 Temporal Agent Programs: We introduce temporal annotations and temporal programs.
9.3.3 Semantics: The semantics of temporal programs is described.

Get slides for the first week in PostScript
Get slides for the first week in PDF
Get slides for the second week in PostScript
Get slides for the second week in PDF
Get the PowerPoint presentation of the last lecture
     
     
     
        Juergen Dix