Invited Lecture, Oct. 2000, Ushuaia, Argentina

This course is given at the Argentinean Conference on AI.

Dates (tentative)


Research in "multi-agent systems" is a still-growing area that addresses the need to move from the development of massive programs containing millions of lines of code to smaller, modular pieces of code, where each module performs a well-defined, focused task (rather than thousands of them). "Software agents" constitute the latest innovation in this trend towards splitting complex software systems into components.

Although a huge variety of agent-related techniques and methods has been developed in recent years, a well-defined theoretical foundation unifying the different facets under one umbrella is still missing.

Consequently, there is no real textbook on multi-agent programming available yet. While preparing a graduate course on multi-agent systems, I decided to use the books Multi-Agent Systems by G. Weiss, MIT Press 1999, and Heterogeneous Agent Systems (draft) by V.S. Subrahmanian et al., MIT Press 2000. While the first book is a collection of individual articles devoted to several important general techniques (planning, searching, decision making, learning, distributed AI), the second describes a particular, yet general approach to multi-agency, with unified notation and detailed theoretical foundations.

In my course, I will try to give a rough picture of what the IMPACT (Interactive Maryland Platform for Agents Collaborating Together) approach consists of. In particular, we will give detailed and precise answers to the following important questions:

Q1: What is an agent?
Q2: If program $P$ is not considered to be an agent, how can it be "agentized"?
Q3: What kind of software infrastructure is required for multiple agents to interact with one another?

The IMPACT approach is particularly concerned with the following points:

1. A theory of agents must take into account that data is stored in a wide variety of data structures and is manipulated by an existing corpus of algorithms.
2. A theory of agents must not depend upon the set of actions that the agent performs. Rather, the set of actions must be a parameter that is taken into account in the semantics.
3. Every agent should execute actions based on some clearly articulated decision policy.
4. Agents must be efficient, relative to an oracle that handles API calls to the underlying code base on top of which the agent is built.
5. Agents must be able to reason with beliefs, time, uncertainty, and security.

We will discuss these issues while illustrating the IMPACT approach.

PREREQUISITES: Participants should have some knowledge of formal methods: first-order predicate logic, least fixpoints, and a little universal algebra. Logic programming concepts are not required, but they help.

Overview of the Lecture

1. Lecture: IMPACT Architecture
We will introduce three running examples, discuss the underlying agent and server architecture and introduce the language in which services offered by agents can be expressed.
1.1 Three Scenarios: CFIT, CHAIN, STORE.
1.2 Agent Architecture: Transducers, Wrappers, Mediators.
1.3 Server Architecture: Verb and Noun Terms, Hierarchies, Service Names, Distances.
1.4 Service Description Language: Metrics, Matchmaking, Composite Distances.
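The matchmaking idea above can be sketched in a few lines. This is an illustrative toy, not the IMPACT implementation: the distance tables, service names, and the choice of summing verb and noun distances are all made-up examples of how a composite distance over verb/noun hierarchies could drive nearest-service lookup.

```python
# Toy matchmaking sketch (hypothetical data, not the IMPACT server):
# service names are verb-noun pairs; a request is matched to the
# services whose names are closest under a composite distance built
# from separate verb and noun distance tables.

VERB_DIST = {("sell", "sell"): 0, ("sell", "offer"): 1, ("offer", "offer"): 0}
NOUN_DIST = {("car", "car"): 0, ("car", "vehicle"): 1, ("vehicle", "vehicle"): 0}

def d(table, a, b):
    # Symmetric lookup; unrelated words are infinitely far apart.
    return table.get((a, b), table.get((b, a), float("inf")))

def composite_distance(request, service):
    # One simple composite distance: sum of verb and noun distances.
    (rv, rn), (sv, sn) = request, service
    return d(VERB_DIST, rv, sv) + d(NOUN_DIST, rn, sn)

def match(request, services, k=1):
    """Return the k services nearest to the requested name."""
    return sorted(services, key=lambda s: composite_distance(request, s))[:k]

best = match(("sell", "car"), [("offer", "vehicle"), ("sell", "vehicle")])
# best is the single closest service name to the request ("sell", "car")
```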

Get Slides
2. Lecture: The Code Call Mechanism
As one of our main motivations is to "agentize" legacy code, we discuss how we abstract from software (the state of an agent) and how we encapsulate real data in a format amenable to logic (code calls). We also introduce the message box (which is part of each agent) and integrity constraints, and illustrate how services are implemented.
2.1 Software Code Abstraction: Types, Functions, Composition Operators, State of an Agent.
2.2 Code Calls: Code Calls, Variables, Code Call Atoms, Code Call Conditions, Safety.
2.3 Message Box: Definition of Msgbox, Associated Functions, Formats.
2.4 Integrity Constraints: Definition.
2.5 Service Description Language and Code Calls: Service Rules, Service Definition Program.
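The encapsulation step can be sketched as follows. All names here (SoftwarePackage, the "store" package, products_below) are invented for illustration; the point is only the shape of the mechanism: a code call d:f(args) asks package d to execute function f and always returns a set of objects, and an atom in(X, d:f(args)) binds a logical variable to each member of that set.

```python
# Minimal sketch of the code call mechanism (hypothetical names, not
# the actual IMPACT API).

class SoftwarePackage:
    """Wraps legacy code behind a uniform, set-valued interface."""
    def __init__(self, name, functions):
        self.name = name
        self.functions = functions  # maps function name -> callable

    def code_call(self, func, *args):
        # Every code call returns a SET of objects, regardless of what
        # the underlying legacy function returns internally.
        return set(self.functions[func](*args))

# A toy "store" package exposing part of a product database.
store = SoftwarePackage("store", {
    "products_below": lambda limit: [p for p, price in
        [("pen", 2), ("book", 15), ("lamp", 40)] if price <= limit],
})

def in_atom(cc_result):
    """in(X, d:f(args)) succeeds once per element, binding X to it."""
    for x in sorted(cc_result):
        yield x

# Evaluate the code call condition  in(X, store:products_below(20))
bindings = list(in_atom(store.code_call("products_below", 20)))
```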

Get Slides
3. Lecture: Actions and Agent Programs
After introducing actions and three notions of concurrently executing actions, we introduce action status atoms and define what an agent program is. The rest of the lecture is devoted to defining various semantics for agent programs: feasible, rational, and reasonable status sets.
3.1 Action Base: Actions, action atoms, precondition, add/delete lists.
3.2 Execution and Concurrency: Concurrency notion conc, executability; weakly, sequentially, and fully concurrent execution.
3.3 Action Constraints: Satisfaction of action constraints.
3.4 Agent Programs: Syntax: action status atoms, agent rules, safety of rules, agent decision cycle.
3.5 Status Sets: deontic/action consistency, deontic/action closure, Operator App.
3.6 Feasible Status Sets: definition of feasibility.
3.7 Rational Status Sets: definition of rationality.
3.8 Reasonable Status Sets: definition of reasonable.
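Two of the ingredients of feasibility listed above, deontic consistency and deontic closure, can be sketched concretely. This is a simplified illustration with invented encodings, not the full definitions from the lecture: a status set is modeled as a set of (modality, action) pairs over the modalities O (obliged), P (permitted), F (forbidden), Do (done), and W (waived).

```python
# Hedged sketch of deontic consistency and closure for a status set
# (illustrative only; the formal definitions are richer).

def deontically_consistent(S):
    """No action may be both permitted and forbidden, nor both
    obligatory and waived."""
    actions = {a for _, a in S}
    for a in actions:
        if ("P", a) in S and ("F", a) in S:
            return False
        if ("O", a) in S and ("W", a) in S:
            return False
    return True

def deontic_closure(S):
    """Close S under  O a => P a  and  O a => Do a  (one common choice
    of closure rules in this style of semantics)."""
    S = set(S)
    changed = True
    while changed:
        changed = False
        for mod, a in list(S):
            if mod == "O":
                for derived in (("P", a), ("Do", a)):
                    if derived not in S:
                        S.add(derived)
                        changed = True
    return S

S = deontic_closure({("O", "send_report")})
# S now also contains ("P", "send_report") and ("Do", "send_report")
```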

Get Slides
4. Lecture: Regular Agents
In this lecture, we determine a syntactically defined class of agents that can be efficiently implemented (in polynomial time): regular agents. They satisfy four properties: strong safety (to ensure that code call conditions can be evaluated in finite time), conflict-freeness (to ensure that agent programs are consistent), deontic stratification (to ensure that there are no loops through negation), and boundedness (to ensure that a program can be unfolded). We also give experimental results on checking these properties.
4.1 Weakly Regular Agents: Strong Safety, Binding Patterns, Finiteness Table, Conflict Freedom, Deontic Stratification.
4.2 Properties of Weakly Regular Agents: Canonical Layering, computation procedure.
4.3 Regular Agents: Unfolding.
4.4 Compile-Time Algorithms: Check\_WRAP, Check\_Regular, Reasonable-SS.
4.5 IADE: GUI.
4.6 Experimental Results: Performance of the algorithms.
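The "no loops through negation" condition behind deontic stratification is the same intuition as stratification in logic programming, and a crude syntactic check can be sketched as follows. The rule representation here is invented for illustration: each rule is a pair (head, body), where every body atom carries a flag saying whether it occurs negated.

```python
# Illustrative sketch (hypothetical representation): detect whether the
# dependency graph of a program contains a cycle passing through a
# negated body atom -- the situation deontic stratification rules out.

def has_negative_cycle(rules):
    """rules: list of (head, [(body_atom, is_negated), ...])."""
    # Edge from each body atom to the head it helps derive.
    edges = {}
    for head, body in rules:
        for atom, neg in body:
            edges.setdefault(atom, []).append((head, neg))

    def dfs(node, start, seen_neg, visited):
        for succ, neg in edges.get(node, []):
            n = seen_neg or neg
            if succ == start:
                if n:            # closed a cycle through a negation
                    return True
                continue
            if succ not in visited:
                visited.add(succ)
                if dfs(succ, start, n, visited):
                    return True
        return False

    nodes = {h for h, _ in rules} | {a for _, b in rules for a, _ in b}
    return any(dfs(n, n, False, {n}) for n in nodes)
```

For example, the two-rule program "a if not b; b if a" has a cycle through negation, while "a if b; b if a" does not.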

Get Slides
5. Lecture: The next three chapters describe extensions of our approach, including beliefs, uncertainty, and time. They will be described only at a high level, without going too deeply into technical details.

Chapter 5. Meta Agent Reasoning
We extend agent programs with beliefs: agents are allowed to have beliefs about other agents and to use them in their programs (meta agent programs). We illustrate the notion of a belief status set and show that meta agent programs can be implemented by ordinary programs using extended code calls.
5.1 Belief Language and Data Structures: Belief Atoms, Belief Language, Belief (Semantics) Table.
5.2 Meta Agent Programs and Status Sets: We introduce meta agent programs and illustrate what their semantics looks like.
5.3 Reduction to Ordinary Programs: We show how meta agent programs can be transformed into ordinary agent programs and thus implemented.
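The flavor of a belief atom can be conveyed with a tiny sketch. The agent names, formulas, and table layout below are all hypothetical: a belief atom B(a, phi), read as "this agent believes that agent a holds phi", is evaluated by lookup in a belief table maintained as part of the agent's state.

```python
# Toy belief table (invented contents): maps an agent name to the set
# of formulas this agent believes that agent holds.
belief_table = {
    "tank1": {"enemy_at(sector_A)"},
    "heli1": {"low_fuel"},
}

def holds_belief(agent, phi):
    """Evaluate the belief atom B(agent, phi) against the belief table."""
    return phi in belief_table.get(agent, set())
```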
Get Slides

Chapter 6. Probabilistic Agent Reasoning
We extend agent programs so that they can deal with uncertainty: code calls now return random variables.
6.1 Probabilistic Code Calls: The machinery with random variables.
6.2 Probabilistic Agent Programs: We introduce probabilistic agent programs and illustrate their semantics.
6.3 Kripke-Style Semantics: We show how to overcome some limitations of the semantics introduced in 6.2; the price to pay is an exponential Kripke-style semantics.
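The shift from ordinary to probabilistic code calls can be sketched as follows. The names and the scenario (a sensor guessing an aircraft's sector) are made up; the point is only that the call now returns a random variable, i.e. objects annotated with probabilities, and that variable bindings carry those probabilities into the rules.

```python
# Hedged sketch (invented names): a probabilistic code call returns a
# random variable -- here modeled as a dict {object: probability} --
# instead of a plain set of objects.

def prob_code_call(distribution):
    """A toy probabilistic code call; probabilities must sum to 1."""
    assert abs(sum(distribution.values()) - 1.0) < 1e-9
    return distribution

# E.g. a sensor agent estimating an aircraft's position.
pcc = prob_code_call({"sector_A": 0.7, "sector_B": 0.3})

def in_with_prob(pcc, threshold):
    """Bindings whose probability meets a given annotation threshold."""
    return {x: p for x, p in pcc.items() if p >= threshold}
```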
Get Slides

Chapter 7. Temporal Agent Reasoning
We extend agent programs by incorporating time: agents are allowed to make commitments into the future.
7.1 Timed Actions: Actions have a duration, and the state needs to be updated at prespecified time points (checkpoints).
7.2 Temporal Agent Programs: We introduce temporal annotations and temporal agent programs.
7.3 Semantics: The semantics of temporal agent programs is described.
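The checkpoint idea in 7.1 can be made concrete with a minimal sketch; the representation (integer time, a fixed checkpoint interval) is an assumption for illustration, not the temporal machinery of the lecture.

```python
# Minimal sketch (invented representation): a timed action runs for a
# given duration, and the agent state is only re-evaluated at the
# prespecified checkpoints inside that interval.

def checkpoints(start, duration, interval):
    """Time points at which the state is updated during the action."""
    t, pts = start, []
    while t <= start + duration:
        pts.append(t)
        t += interval
    return pts
```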
Get Slides


This page was created by Juergen Dix.