Monitoring Agents Using Planning
 




Project Overview



Multi-agent systems have been recognized as a promising paradigm for distributed problem solving, and numerous multi-agent platforms and frameworks have been proposed, which allow programming agents in languages ranging from imperative and object-oriented to logic-based ones. A major problem agent developers face with many platforms is verifying that a suite of implemented agents collaborates well to reach a certain goal (e.g., in supply chain management). Tools for automatic verification are rare; common practice is therefore geared towards extensive agent testing, employing tracing and simulation tools (if available).

We present a monitoring approach which aids in automatically detecting that agents do not collaborate properly. In the spirit of Popper's principle of falsification, it aims at refuting, from (possibly incomplete) information at hand, that an agent system works properly, rather than at proving its correctness. In our approach, agent collaboration is described at an abstract level, and the single steps in runs of the system are examined to see whether the agents behave "reasonably," i.e., "compatibly" with a sequence of steps for reaching a goal.

Even if the internal structure of some agents is unknown, we may get hold of the messages exchanged among them. A given message protocol allows us to draw conclusions about the correctness of the agent collaboration. Our monitoring approach hinges on this fact and involves the following steps:
  1. The intended collaborative behavior of the agents is modeled as a planning problem. More precisely, knowledge about the agent actions (specifically, messaging) and their effects is formalized in an action theory, which can be reasoned about to automatically construct plans, i.e., sequences of actions that reach a given goal.
  2. From the planning problem and the collaborative goal, a set of intended plans for reaching the goal is generated via a planner.
  3. The observed agent behavior, i.e., the message actions from a message log, is then compared against the set of intended plans.
  4. In case an incompatibility is detected, an error is flagged to the developer or user, pinpointing the last action causing the failure so that further steps can be taken.

Steps 2-4 can be carried out by a special monitoring agent, which is added to the agent system and provides support both in testing and in the operational phase of the system.
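Steps 3 and 4 amount to a prefix check of the observed message log against the set of intended plans. The following Python sketch illustrates the idea; the function, the action names, and the plan set are all hypothetical and not part of the implemented system:

```python
def check_log(observed, intended_plans):
    """Return (ok, offending_action).

    The observed message log is compatible if it is a prefix of at
    least one intended plan; otherwise, the last observed action that
    ruled out every remaining plan is reported (step 4).
    """
    candidates = list(intended_plans)
    for i, action in enumerate(observed):
        # Keep only plans that agree with the log up to position i.
        candidates = [p for p in candidates if i < len(p) and p[i] == action]
        if not candidates:
            return False, action  # this action caused the incompatibility
    return True, None

# Hypothetical set of intended plans for a small postal scenario.
intended = [
    ["notify(sue)", "store(pkg)", "handout(pkg, sue)"],
    ["notify(sue)", "load(pkg, truck)", "deliver(pkg, home)"],
]

assert check_log(["notify(sue)", "store(pkg)"], intended) == (True, None)
assert check_log(["notify(sue)", "load(pkg, truck)", "handout(pkg, sue)"],
                 intended) == (False, "handout(pkg, sue)")
```

In a deployed monitor, `observed` would be read incrementally from the message log, so an error is flagged as soon as the last compatible plan is eliminated.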

Among the benefits of this approach are the following:

  • It can deal with collaboration behavior regardless of the implementation language(s) used for the single agents.
  • Depending on the planner used in step 2, different kinds of plans (optimal, conformant, ...) may be considered, reflecting different agent attitudes and collaboration objectives.
  • Changes to the agent messaging by the system designer can be transparently incorporated into the action theory, without further need to adjust the monitoring process.
  • Furthermore, the planning domain adds to a formal system specification, which may be reasoned about and used in other contexts.
  • As a by-product, the method may also be used for automatic protocol generation, i.e., to determine the messages needed and their order, in a (simple) collaboration.
We detail the approach and illustrate it on an example derived from an implemented agent system.




Gofish Domain


We consider an example MAS called Gofish Post Office, which models postal services. Its goal is to improve postal product areas by mail tracking, customer notifications, and advanced quality control.


Agent monitor and Running Example


Agent monitor

We add a monitoring agent (monitor) to aid in debugging a given MAS. From the planning problem it is given, monitor generates all possible plans to reach the goal, then continually compares the messages sent between agents with these plans. Once it detects an incompatibility, monitor generates an error file and reports to the designer. Two types of errors can be distinguished: design errors and implementation (coding) errors.
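To make the plan-generation step concrete, here is a minimal sketch of exhaustive plan enumeration over a STRIPS-style action theory, loosely inspired by the Gofish setting. All fluents, action names, and the goal are invented for illustration; the actual system formalizes the domain in a declarative planning language and uses an off-the-shelf planner.

```python
# Each action maps to (preconditions, add effects, delete effects),
# all given as sets of fluents. Names are illustrative only.
ACTIONS = {
    "accept(pkg)":  ({"at_counter"}, {"stored"},    {"at_counter"}),
    "load(pkg)":    ({"stored"},     {"on_truck"},  {"stored"}),
    "deliver(pkg)": ({"on_truck"},   {"delivered"}, {"on_truck"}),
}

def plans(state, goal, depth):
    """Enumerate all action sequences of length <= depth that reach the goal."""
    if goal <= state:          # goal fluents all hold: empty plan suffices
        yield []
        return
    if depth == 0:
        return
    for name, (pre, add, delete) in ACTIONS.items():
        if pre <= state:       # action applicable in current state
            successor = (state - delete) | add
            for rest in plans(successor, goal, depth - 1):
                yield [name] + rest

all_plans = list(plans({"at_counter"}, {"delivered"}, 4))
assert all_plans == [["accept(pkg)", "load(pkg)", "deliver(pkg)"]]
```

The set `all_plans` plays the role of the intended plans that monitor checks the message log against; any observed message sequence that is not a prefix of some enumerated plan is flagged as an error.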
 
Example Scenario

Pat drops off a package for a friend, Sue, at the post office. In the evening, Sue gets a phone call that a package has been sent to her. The next day, Sue decides to pick up the package herself at the post office on her way to work. Unfortunately, the clerk has to tell her that the package is already on a truck on its way to her home. Since Sue did not get back home in time, the package was not delivered on time.

Two demonstrations show how to use monitor to detect errors in the Gofish MAS.
 


Documentation

  • IMPACT Manual ( GZIP_PS, GZIP_PDF )
    implementation overview, agent instantiation life cycle and agent definition syntax
     


Publications

  • Monitoring Agents using Declarative Planning.
    Jürgen Dix, Thomas Eiter, Michael Fink, Axel Polleres, and Yingqian Zhang.
    In: A. Günther, R. Kruse, B. Neumann (Eds.), KI 2003: Advances in AI, Hamburg, 15-18 September 2003, pages 490--504, Springer LNAI 2821, 2003. (.pdf, .ps)
  • Monitoring Agents using Declarative Planning.
    Jürgen Dix, Thomas Eiter, Michael Fink, Axel Polleres, and Yingqian Zhang.
    Fundamenta Informaticae, 57(2-4):345--370, 2003. (.pdf, .ps)


References

  • DLV^K Homepage
  • IMPACT Homepage
  • J. Dix, U. Kuter, and D. Nau. HTN planning in answer set programming. Technical Report CS-TR-4332 (UMIACS-TR-2002-14), Dept. of CS, UMD, MD 20752, Feb. 2002. Submitted to Theory and Practice of Logic Programming.
  • J. Dix, H. Munoz-Avila, D. Nau, and L. Zhang. Theoretical and Empirical Aspects of a Planner in a Multi-Agent Environment. In G. Ianni and S. Flesca, editors, JELIA '02, LNCS 2424, pages 173--185. Springer, 2002.
  • J. Dix, H. Munoz-Avila, D. Nau, and L. Zhang. IMPACTing SHOP: Putting an AI Planner into a Multi-Agent Environment. Annals of Mathematics and AI, 2003. To appear.
  • T. Eiter, W. Faber, N. Leone, G. Pfeifer, and A. Polleres. A Logic Programming Approach to Knowledge-State Planning: Semantics and Complexity. 2002. To appear in ACM Transactions on Computational Logic.
  • T. Eiter, W. Faber, N. Leone, G. Pfeifer, and A. Polleres. A Logic Programming Approach to Knowledge-State Planning, II: The DLV^K System. 2002. To appear in Artificial Intelligence.
  • K. Erol, J. A. Hendler, and D. S. Nau. UMCP: A sound and complete procedure for hierarchical task-network planning. In K. J. Hammond, editor, Proceedings of AIPS-94, pages 249--254. AAAI Press, June 1994.
  • R. E. Fikes and N. J. Nilsson. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence, 2(3-4):189--208, 1971.
  • M. Ghallab, A. Howe, C. Knoblock, D. McDermott, A. Ram, M. Veloso, D. Weld, and D. Wilkins. PDDL --- The Planning Domain Definition Language. Technical report, Yale Center for Computational Vision and Control, October 1998. Available at http://www.cs.yale.edu/pub/mcdermott/software/pddl.tar.gz.
  • M. Luck, P. McBurney, C. Preist, and C. Guilfoyle. The AgentLink Agent Technology Roadmap Draft. AgentLink, 2002.
  • V. Subrahmanian, P. Bonatti, J. Dix, T. Eiter, S. Kraus, F. Ozcan, and R. Ross. Heterogeneous Active Agents. MIT Press, 2000.


Last modified by Yingqian Zhang