Intrinsic theories: a methodology for reasoning about functional programs and their computational complexity
Daniel Leivant (Indiana University)

Abstract: We present a general methodology for reasoning about functional programs, which uses as axioms only the properties most intrinsic to the data type in hand, and treats programs as auxiliary axioms. Its advantages include expressive flexibility, minimal axiomatics, dispensing with coding for reasoning about partial functions, data genericity, and a clear conceptual framework for machine-independent characterizations of a wide spectrum of computational complexity classes.

Abstracting Properties in Concurrent Constraint Programming
Rene Moreno (University of Pisa)

Abstract: We present a denotational semantics for concurrent constraint programming based on derivations containing sequences of interactions of a process with the environment. Our semantics is then used as a collecting semantics for abstracting properties of computations by applying techniques of abstract interpretation.

Correctness of Logic Programs Constructed by a Schema-Based Method
Emmanouil Marakakis (Technological Educational Institute of Heraklion)

Abstract: This paper presents the correctness of logic programs constructed by a schema-based method. This schema-based method constructs typed, moded logic programs by stepwise top-down design using five program schemata, data types and modes. A proof scheme is proposed for each program schema. It is claimed that the structure of a logic program constructed by this schema-based method is reflected in the structure of its correctness proof. In other words, there are ``proof schemes'', corresponding to the design schemata, which are followed in the correctness proofs of logic programs. Correctness proofs in this approach are guided by the constructed logic programs.

Transforming Conditional Rewrite Systems with Extra Variables into Unconditional Systems
Enno Ohlebusch (University of Bielefeld)

Abstract: Deterministic conditional rewrite systems are interesting because they permit extra variables on the right-hand sides of the rules. If such a system is quasi-reductive, then it is terminating and has a computable rewrite relation. It will be shown that every deterministic CTRS R can be transformed into an unconditional TRS U(R) such that termination of U(R) implies quasi-reductivity of R. The main theorem states that quasi-reductivity of R implies innermost termination of U(R). These results have interesting applications in two different areas: modularity in term rewriting and termination proofs of well-moded logic programs.
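
For illustration only (this example and its notation are ours, not the paper's): the usual way to turn a deterministic conditional rule into unconditional rules is an unraveling that introduces a fresh auxiliary symbol to carry the variables bound so far. For a hypothetical Fibonacci-pair rule:

    % A hypothetical deterministic conditional rule and its unraveling
    % (the fresh auxiliary symbol U carries the left-hand-side variable x):
    \mathit{fib}(s(x)) \to \langle z,\, y{+}z\rangle \;\Leftarrow\; \mathit{fib}(x) \to^{*} \langle y, z\rangle
    \qquad\leadsto\qquad
    \mathit{fib}(s(x)) \to U(\mathit{fib}(x),\, x), \quad
    U(\langle y, z\rangle,\, x) \to \langle z,\, y{+}z\rangle

The transformation U(R) studied in the paper is of this kind; its exact definition, and the precise relation between termination of U(R) and quasi-reductivity of R, are given there.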

A fixpoint semantics for reasoning about finite failure
Roberta Gori (University of Pisa)

Abstract: Our aim is to define a new fixpoint semantics which correctly models finite failure. In order to achieve this goal, a new fixpoint operator is derived from a ``suitable'' concrete semantics by defining a Galois insertion modeling finite failure. The corresponding abstract fixpoint semantics correctly models finite failure and is and-compositional.

Keywords: Abstract interpretation, Logic programming, Finite failure.

Proving Failure of Queries for Definite Logic Programs Using XSB-Prolog
Nikolay Pelov,
Maurice Bruynooghe (Katholieke Universiteit Leuven)

Abstract: Proving failure of queries for definite logic programs can be done by constructing a finite model of the program in which the query is false. A general-purpose model generator for first order logic can be used for this. A recent paper presented at PLILP'98 shows how the peculiarities of definite programs can be exploited to obtain a better solution. There, a procedure is described which combines abduction with tabulation and uses a meta-interpreter for heuristic control of the search. The current paper shows how similar results can be obtained by direct execution under the standard tabulation of the XSB-Prolog system. The loss of control is compensated for by better intelligent backtracking and more accurate failure analysis.

Practical Reasoning for Expressive Description Logics
Ian Horrocks (University of Manchester),
Ulrike Sattler (RWTH Aachen),
Stephan Tobies (RWTH Aachen)

Abstract: Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematic. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles, role hierarchies, and qualifying number restrictions. Early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSPACE. Finally, we investigate the limits of decidability for this family of DLs.
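
As a purely illustrative example (the identifiers and concrete syntax below are ours, not the paper's), the added constructors allow concepts and role axioms such as

    % An ALC concept using an existential restriction, an inverse role, and a
    % qualifying number restriction, together with a role hierarchy axiom and a
    % transitivity declaration (all names are made up):
    \mathit{Device} \sqcap \exists\,\mathit{hasPart}.\mathit{Battery} \sqcap
      (\leq 2\; \mathit{connectedTo}^{-}.\mathit{Hub}),
    \qquad
    \mathit{hasDirectPart} \sqsubseteq \mathit{hasPart},
    \qquad
    \mathsf{Trans}(\mathit{hasPart}).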

First Order Linear Temporal Logic over Finite Time Structures
Serenella Cerrito (University Paris 11),
Marta Cialdea Mayer (University Roma Tre),
Sebastien Praud (University Paris 11)

Abstract: In this work, the notion of provability for first-order linear temporal logic over finite time structures is studied. We show that the validity problem for such a logic is not recursively enumerable, hence the logic is not recursively axiomatizable.

This negative result however does not hold in the case of bounded validity, that is, truth in all temporal models where the object domain is possibly infinite, but the underlying sequence of time points does not exceed a given size. A formula is defined to be k-valid if it is true in all temporal models whose underlying time structure has size at most k, where k is any fixed positive integer. In this work a tableau calculus is defined that is sound and complete with respect to k-validity when given as input the initial formula and the bound k on the size of the temporal models. The main feature of the system, which extends the propositional calculus defined by two of the authors in a previous work, is that it explicitly denotes time points and labels tableau nodes either by expressions intuitively stating that a formula holds in a given temporal interval, or by ``temporal constraints'', i.e. linear disequations on pairs of time points. Branch closure is reduced to unsatisfiability over the integers of the set of temporal constraints in the branch.

Complexity of Terminological Reasoning Revisited
Carsten Lutz (RWTH Aachen)

Abstract: TBoxes are an important component of knowledge representation systems based on description logics (DLs) since they allow for a natural representation of terminological knowledge. Largely due to a classical result given by Nebel, complexity analyses for DLs have, until now, mostly focused on reasoning without (acyclic) TBoxes. In this paper, we concentrate on DLs for which reasoning without TBoxes is PSpace-complete, and show that there exist logics for which the complexity of reasoning remains in PSpace if acyclic TBoxes are added, as well as logics for which the complexity increases. This demonstrates that it is necessary to take acyclic TBoxes into account in complexity analyses.

Regular Sets of Descendants for Constructor-based Rewrite Systems
Pierre Rety (University of Orleans, France)

Abstract: Starting from the regular tree language E of ground constructor-instances of any linear term, we build a finite tree automaton that recognizes the set of descendants R*(E) of E for a constructor-based term rewrite system whose right-hand sides fulfill the following three restrictions: linearity, no nested functions, and function arguments that are variables or ground terms. Note that left-linearity is not assumed. We next present several applications.

Extensions to the Estimation Calculus
Jeremy Gow,
Alan Bundy,
Ian Green (University of Edinburgh)

Abstract: Walther's estimation calculus was designed to prove the termination of functional programs, and can also be used to solve the similar problem of proving the well-foundedness of induction rules. However, there are certain features of the goal formulae which are more common to the problem of induction rule well-foundedness than to the problem of termination, and which the calculus cannot handle. We present a sound extension of the calculus that is capable of dealing with these features. The extension develops Walther's concept of an argument bounded function in two ways: firstly, so that the function may be bounded \emph{below} by its argument, and secondly, so that a bound may exist between two arguments of a predicate. Our calculus enables automatic proofs of the well-foundedness of a large class of induction rules not captured by the original calculus.

Simplification of Horn Clauses that are Clausal Forms of Guarded Formulas
Michael Dierkes (LEIBNIZ - IMAG)

Abstract: The guarded fragment of first order logic, defined in [ANB96], has attracted much attention recently due to the fact that it is decidable and several interesting modal logics can be translated into it. Guarded clauses, defined by de Nivelle in [Niv98a], result from transformation of guarded formulas into clause form. In [Niv98a], it is shown that the class of guarded clause sets is decidable by saturation under ordered resolution.

In this work, we deal with guarded clauses that are Horn clauses. We introduce the notion of primitive guarded Horn clause: A guarded Horn clause is primitive iff it is either ground and its body is empty, or it contains exactly one body literal, which is flat and linear, and its head literal contains a non-ground functional term. Then, we show that every satisfiable and finite set of guarded Horn clauses S can be transformed into a finite set of primitive guarded Horn clauses S' such that the least Herbrand models of S and S' coincide on predicate symbols that occur in S.

This transformation is done in the following way: first, de Nivelle's saturation procedure is applied to the given set S, and certain clauses are extracted from the resulting set. Then, a resolution-based technique that introduces new predicate symbols is used in order to obtain the set S'.

KEYWORDS: guarded fragment, transformation by resolution, Horn clauses

On the Complexity of Counting the Hilbert Basis of a Linear Diophantine System
Miki Hermann (LORIA, Nancy),
Laurent Juban (LORIA, Nancy),
Phokion G. Kolaitis (University of California at Santa Cruz)

Abstract: We investigate the computational complexity of counting the Hilbert basis of a homogeneous system of linear Diophantine equations. We establish lower and upper bounds on the complexity of this problem by showing that counting the Hilbert basis is #P-hard and belongs to the class #NP. Moreover, we investigate the complexity of variants obtained by restricting the number of occurrences of the variables in the system.
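
For intuition only (the instance and the code below are ours, not the paper's): the Hilbert basis of a homogeneous linear Diophantine system is the set of its componentwise-minimal non-zero solutions over the non-negative integers, so for tiny instances its cardinality can be found by brute force. A minimal Python sketch, correct only when the whole basis lies within the search bound:

    from itertools import product

    def hilbert_basis_bruteforce(coeffs, bound):
        """Minimal non-zero solutions of sum(c_i * x_i) == 0 with 0 <= x_i <= bound."""
        n = len(coeffs)
        sols = [v for v in product(range(bound + 1), repeat=n)
                if any(v) and sum(c * x for c, x in zip(coeffs, v)) == 0]
        def strictly_below(u, v):  # u <= v componentwise and u != v
            return u != v and all(a <= b for a, b in zip(u, v))
        return [v for v in sols if not any(strictly_below(u, v) for u in sols)]

    # x1 + x2 - 2*x3 = 0 has Hilbert basis {(1,1,1), (2,0,1), (0,2,1)}, so the count is 3.
    print(len(hilbert_basis_bruteforce([1, 1, -2], bound=2)))

The counting problem studied in the paper asks for this cardinality in general, where no such small bound is available.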

Efficiently executable TLA
Stephan Merz (TU Muenchen),
Yassine Mokhtari (LORIA)

Abstract: TLA (the Temporal Logic of Actions) is a linear temporal logic for specifying and reasoning about reactive systems. We define a subset of TLA whose formulas are amenable to validation by animation, with the intent of facilitating the communication between domain and solution experts in the design of reactive systems.

Proofs About Lists Using Ellipsis
Alan Bundy,
Julian Richardson (University of Edinburgh)

Abstract: In this paper we explore the use of ellipsis in proofs about lists. We present a higher-order formulation of elliptic formulae, and describe its implementation in the LambdaClam proof planner. We use an unambiguous higher-order formulation of lists which is amenable to formal proofs without using induction, and which can be displayed using the familiar "..." notation.

Resource Management in Linear Logic Proof Search Revisited
Pablo Lopez,
Ernesto Pimentel (Universidad de Malaga)

Abstract: Linear logic provides a logical framework to express fundamental computational concepts in a declarative style. As a consequence, it has been used as a sound foundation for the design of expressive programming and specification languages. Unfortunately, linearity is as convenient for specification as it is difficult to implement. In particular, the successful implementation of linear logic languages and provers involving context splitting strongly depends on the efficiency of the method that computes a suitable split. A number of solutions have been proposed, referred to as lazy splitting or resource management systems. In this paper, we present a new resource management system for the Lolli linear logic language. We show that the choice of the structure employed to represent the contexts has a strong influence on the overall performance of the resource management system. We also estimate the performance of previous proposals, and compare them with our new system.

KEYWORDS: Linear Logic, Logic Programming, Lolli, Implementation, Lazy Splitting
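
To convey the idea of lazy splitting in general terms (the sketch below is ours and is not Lolli's actual resource manager; formulas and contexts are represented in an ad-hoc way): instead of guessing how to split the linear context between the two premises of a multiplicative goal, the whole context is passed to the first premise, and whatever it leaves unconsumed is handed to the second.

    def prove(formula, ctx):
        """Return the leftover resources if formula is provable from a sub-multiset
        of ctx, or None on failure.  Formulas: ('one',), ('atom', a), ('tensor', f, g)."""
        kind = formula[0]
        if kind == 'one':
            return ctx
        if kind == 'atom':
            if formula[1] in ctx:
                leftover = list(ctx)
                leftover.remove(formula[1])  # consume exactly one copy of the resource
                return leftover
            return None
        if kind == 'tensor':
            left = prove(formula[1], ctx)    # the first conjunct gets the whole context
            return None if left is None else prove(formula[2], left)  # the second gets the rest
        raise ValueError(kind)

    # p (x) q from the linear context {p, q} succeeds and consumes both resources.
    print(prove(('tensor', ('atom', 'p'), ('atom', 'q')), ['p', 'q']))  # prints []

The structure chosen to represent ctx (here a plain list) is exactly the kind of choice whose impact on performance is studied in the paper.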

Beth definability for the guarded fragment
Eva Hoogland (University of Amsterdam),
Maarten Marx (University of Amsterdam),
Martin Otto (RWTH Aachen)

Abstract: The guarded fragment (GF) was introduced in [1] as a fragment of first order logic which combines great expressive power with nice modal behavior. It consists of relational first order formulas whose quantifiers are relativized by atoms in a certain way. While GF has been established as a particularly well-behaved fragment of first order logic in many respects, interpolation fails when restricted to GF [2]. In this paper we consider the Beth property of first-order logic and show that, despite the failure of interpolation, it is retained when restricting to GF. Being a closure property w.r.t. definability, the Beth property is of independent interest, both theoretically and for typical potential applications of GF, e.g. in the context of description logics. The Beth property for GF is here established on the basis of a limited form of interpolation, which more closely resembles the interpolation property that is usually studied in modal logics. From this we obtain that, more specifically, even every n-variable guarded fragment with up to n-ary relations has the Beth property.
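
For reference, a standard formulation of the property in question (not the paper's exact statement): the Beth property says that implicit definability entails explicit definability, and having it in restriction to GF means that for guarded theories the explicit definition can itself be taken in GF.

    % If Sigma(P), together with a copy Sigma(P') in a fresh predicate P',
    % pins P down implicitly,
    %     \Sigma(P) \cup \Sigma(P') \models \forall \bar{x}\,(P\bar{x} \leftrightarrow P'\bar{x}),
    % then P is explicitly definable by a P-free formula \theta:
    %     \Sigma(P) \models \forall \bar{x}\,(P\bar{x} \leftrightarrow \theta(\bar{x})).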

Partial Order Reductions Preserving Simulations
Wojciech Penczek (Institute of Computer Science, Warsaw),
Rob Gerth (INTEL),
Ruurd Kuiper (Eindhoven University of Technology)

Abstract: The "state explosion problem" can be alleviated by using partial order reduction techniques. These methods rely on expanding only a fragment of the full state space of a program, which is sufficient for verifying the formulas of temporal logics LTL-x or CTL*-x (i.e., LTL or CTL* without the next state operator). This is guaranteed by preserving either a stuttering maximal trace equivalence or a stuttering bisimulation between the full and the reduced state space. Since a stuttering bisimulation is much more restrictive than a stuttering maximal trace equivalence, resulting in less powerful reductions for CTL*-x, we study here partial order reductions that preserve equivalences "in-between", in particular a stuttering simulation which is induced by the universal fragment of CTL*-x, called ACTL*-x. The reductions generated by our method preserve also branching simulation and weak simulation, but surprisingly, they do not appear to be included into the reductions obtained by Peled's method for verifying LTL-x properties. Therefore, in addition to ACTL*-x reduction method we suggest also an improvement of the LTL-x reduction method. Moreover, we consider concurrency fair version of ACTL*-x and prove that reduction for fair-ACTL*-x is much more efficient than for ACTL*-x.

Focusing and Proof-Nets in Linear and Non-Commutative Logic
Jean-Marc Andreoli (Xerox Research Centre Europe),
Roberto Maieli (Università di Roma)

Abstract: Linear Logic (Girard 1987) has raised a lot of interest in computer science research, especially because of its resource-sensitive nature. One line of research studies proof construction procedures and their interpretation as computational models, in the ``Logic Programming'' tradition. An efficient proof search procedure, based on a proof normalization result called ``Focusing'', was described in a paper published in the Journal of Logic and Computation, 1992. Focusing is described in terms of the sequent system of commutative Linear Logic, which it refines in two steps. It is shown here that Focusing can also be interpreted in the Proof Net formalism, where it appears, at least in the multiplicative fragment, to be a simple refinement of the ``Splitting lemma'' for Proof Nets. This change of perspective allows us to generalize the Focusing result to (the multiplicative fragment of) any logic where the ``Splitting lemma'' holds. This is, in particular, the case of the Non-Commutative logic of Abrusci-Ruet, and all the computational exploitation of Focusing which has been performed in the commutative case can thus be revised and adapted to the non-commutative case.

Evidence Algorithm and sequent-style logical inference search
A.I. Degtyarev (Uppsala University),
A.V. Lyaletski (Inst. of Cybernetics, Kiev),
M.K. Morokhovets (Inst. of Cybernetics, Kiev)

Abstract: In this paper, we continue to develop the approach to automated search for theorem proofs started in Kyiv in the 1960s and 1970s. This approach presupposes the development of deductive techniques incorporated into the integrated processing of mathematical texts written in a formal first-order language close to the natural language used in mathematical papers. Within this approach we construct logical calculi, gS and mS, designed to meet the following requirements: the syntactical form of an initial problem should be preserved; proof search should be goal-oriented; preliminary skolemization is not obligatory; and equality handling should be separated from the deduction process. gS is a machine-oriented sequent-type calculus with "large-block" inference rules for first-order classical logic. mS is a further development of gS, enriched with the formal analogs of such natural proof search techniques as definition handling and auxiliary proposition application, and reflecting the peculiarities of the high-level input data language. The results on soundness and completeness of gS and mS are reported.

Solving Combinatorial Problems with Regular Local Search Algorithms
Ramon Bejar,
Felip Manya (Universitat de Lleida, Spain)

Abstract: In this paper we describe new local search algorithms for regular CNF formulas and investigate their suitability for solving problems from the domains of graph coloring and sports scheduling. First, we define suitable encodings for such problems in the logic of regular CNF formulas. Second, we describe Regular-GSAT and Regular-WSAT, as well as some variants, which are a natural generalization of two prominent local search algorithms, GSAT and WSAT, used to solve the propositional satisfiability (SAT) problem in classical logic. Third, we report on experimental results demonstrating that encoding graph coloring and sports scheduling problems as instances of the SAT problem in regular CNF formulas, and then solving these instances with local search algorithms, can outperform or compete with state-of-the-art approaches to solving hard combinatorial problems.
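
To fix intuitions about the classical starting point (the code below is ordinary boolean GSAT, written by us; the paper's Regular-GSAT generalizes the flip step to regular, many-valued literals):

    import random

    def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
        """Bare-bones GSAT over boolean CNF; clauses are lists of signed variable
        indices, DIMACS-style (e.g. -2 means "variable 2 is false")."""
        def num_sat(assign):
            return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
        for _ in range(max_tries):
            assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
            for _ in range(max_flips):
                if num_sat(assign) == len(clauses):
                    return assign                  # satisfying assignment found
                def score(v):                      # clauses satisfied after flipping v
                    assign[v] = not assign[v]
                    s = num_sat(assign)
                    assign[v] = not assign[v]
                    return s
                best = max(range(1, n_vars + 1), key=score)
                assign[best] = not assign[best]    # greedy flip, ties broken arbitrarily
        return None

    print(gsat([[1, -2], [-1, 2], [2, 3]], 3))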

Folding Quantifiers into Map Composition
Domenico Cantone (University of Catania),
Alberto G. Zarba (University of Catania),
Andrea Formisano (University "La Sapienza"),
Eugenio G. Omodeo (University of L'Aquila)

Abstract: Two techniques are designed for eliminating quantifiers from an existentially quantified conjunction of dyadic literals, in terms of the composition, intersection, and inversion operators of the Tarski-Chin-Givant formalism of relations. The use of such techniques is illustrated through increasingly challenging examples, and their algorithmic complexity is assessed.
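
A small instance of such a folding (a standard identity of the relational setting, not one of the paper's examples): an existential quantifier over a chain of dyadic atoms collapses into relative product, possibly after inverting a relation,

    % ; is relative product (left-to-right relational composition), ^{-1} is conversion:
    \exists y\,\bigl(R(x,y) \wedge S(y,z)\bigr) \;\equiv\; (R \mathbin{;} S)(x,z),
    \qquad
    \exists y\,\bigl(R(y,x) \wedge S(y,z)\bigr) \;\equiv\; (R^{-1} \mathbin{;} S)(x,z).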

On the Complexity of Single-Rule Datalog Queries
Georg Gottlob (Vienna University of Technology),
Christos Papadimitriou (Berkeley)

Abstract: Datalog is a well-known database query language based on the Logic Programming paradigm. A general datalog program consists of a number of rules and facts. Programs containing a single rule and possibly some facts are called single-rule programs (sirups). We study the combined and program complexity of sirups, i.e., the complexity of evaluating sirups over variable and fixed databases, respectively. Moreover, we study the descriptive complexity of sirups, i.e., their expressive power. In all cases it turns out that even very restricted classes of sirups have the same complexity and essentially the same expressive power as general datalog programs.
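
As a purely illustrative sirup (ours, not one of the paper's constructions): the single linear recursive rule $p(x,y) \leftarrow e(x,z) \wedge p(z,y)$ together with the fact $p(b,b)$ derives, over a database of $e$-facts, exactly the atoms $p(a,b)$ such that $b$ is reachable from $a$ along $e$-edges.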

Our main result is that the evaluation of sirups consisting of only a single clause is {\rm EXPTIME}-complete (combined complexity), and {\rm PSPACE}-complete if restricted to linear recursive rules. Moreover, sirups with one recursive rule and one additional fact essentially capture {\rm PTIME} on ordered structures, if a certain data format is assumed.

Our results are obtained by a uniform cross-product construction which maps a datalog program into a single rule while essentially maintaining its semantics. We also prove that deciding whether a datalog clause implies another one is {\rm EXPTIME}-complete.

A Partial Evaluation Framework for Curry Programs
Elvira Albert (Universidad Politecnica de Valencia),
Maria Alpuente (Universidad Politecnica de Valencia),
Michael Hanus (Informatik II, RWTH Aachen),
German Vidal (Universidad Politecnica de Valencia)

Abstract: In this work, we develop a partial evaluation technique for residuating functional logic programs, which generalize the concurrent computation models for logic programs with delays to functional logic programs. We show how to lift the nondeterministic choices from run time to specialization time. We ascertain the conditions under which the original and the transformed program have the same answer expressions for the considered class of queries, as well as the same floundering behavior. All these results are relevant for program optimization in Curry, a functional logic language which is intended to become a standard in this area. A preliminary empirical evaluation of the specialized Curry programs demonstrates that our technique also works well in practice and leads to substantial performance improvements. This work is the first attempt, to our knowledge, at formally defining and proving correct a general scheme for the partial evaluation of functional logic programs with delays.

Cancellative Superposition Decides the Theory of Divisible Torsion-Free Abelian Groups
Uwe Waldmann (Max-Planck-Institut für Informatik, Saarbrücken)

Abstract: In divisible torsion-free abelian groups, the efficiency of the cancellative superposition calculus can be greatly increased by combining it with a variable elimination algorithm that transforms every clause into an equivalent clause without unshielded variables. We show that the resulting calculus is not only refutationally complete (even in the presence of arbitrary free function symbols), but that it is also a decision procedure for the theory of divisible torsion-free abelian groups.

CHAT is $\Theta$(SLG-WAM)
Bart Demoen (KU Leuven),
Konstantinos Sagonas (Uppsala University)

Abstract: CHAT offers an alternative to SLG-WAM for implementing the suspension and resumption of consumers that tabling needs: unlike SLG-WAM, it uses neither freeze registers nor a complicated trail to preserve their execution environments. CHAT also limits the amount of copying of CAT, which was previously put forward as another alternative to SLG-WAM. Although experimental results show that in practice CHAT is competitive with, if not better than, SLG-WAM, there remains the annoying fact that on contrived programs the original CHAT can be made arbitrarily worse than SLG-WAM, i.e. the original CHAT has an intrinsically higher complexity. In this paper we show how to overcome this problem; in particular, we deal with the two sources of CHAT's higher complexity: the repeated traversal of the choice point stack, and the lack of sufficient sharing of the trail. This is achieved without fundamentally changing the underlying principle of CHAT, by a technique that manipulates a Prolog choice point so that it temporarily assumes a different functionality, in a way that is transparent to the underlying WAM. This technique has further potential uses besides lowering the worst-case complexity of CHAT: it leads to considering scheduling strategies that were not feasible before in either CHAT or SLG-WAM. We also discuss extensively issues related to the implementation of the trail in a tabled logic programming system.

Model Checking Games for the Alternation-Free $\mu$-Calculus and Alternating Automata
Martin Leucker (Aachen University of Technology, Germany)

Abstract: We relate game-based model checking for the {\em alternation-free $\mu$-calculus} and {\em 1-letter simple weak alternating B\"uchi automata} (1SWABA). Game-based algorithms have the advantage that, in addition to checking whether a formula is valid or not, they determine a winning strategy which can be employed for explaining to the user why the formula is or is not valid. 1SWABA are a restricted class of alternating B\"uchi automata and were defined in \cite{BVW94}. They admit efficient automata-based model checking for CTL and the alternation-free $\mu$-calculus. We give an interpretation of these automata in terms of game theory and show that this interpretation coincides with the notion of model checking games for CTL and the $\mu$-calculus. Then we explain that the efficient non-emptiness procedure for 1SWABA presented in \cite{BVW94} can also be understood as a game-based model checking procedure. Furthermore, we show that this algorithm is not only useful for checking the validity of a formula but also for determining a {\em winning strategy} for the winner of the underlying model checking game. In this way we obtain a linear-time algorithm for model checking games.