Prolog Implementation
Literature references and annotations by Dick Grune, dick@dickgrune.com.
Last update: Fri Sep 06 15:53:23 2024.
These references and annotations were originally intended
for personal use and are presented here only in the hope
that they may be useful to others.
There is no claim to completeness or even correctness.
Each annotation represents my understanding of the text
at the moment I wrote the annotation.
No guarantees given; comments and content criticism welcome.
Neng-Fa Zhou,
Parameter passing and control stack management in Prolog implementation revisited,
ACM Trans. Program. Lang. Syst.,
vol. 18,
#6,
Nov. 1996,
pp. 752-779.
Provides the WAM with more advanced instructions for parameter passing
and control stack management, based on identifying guards for clauses.
Bogumił Hausman,
Turbo Erlang: Approaching the speed of C,
ed. by E. Tick and G. Succi,
in Implementations of Logic Programming Systems,
Kluwer Academic Publishers,
1994,
pp. 119-135.
Erlang is a concurrent logic language for programming real-time systems,
more particularly telephone exchanges.
It is based on message passing and features an advanced receive
statement which includes error control, time-out, etc.
(Erlang = Ericsson Language, but also named in honor of Agner Krarup Erlang,
who pioneered telephone traffic statistics.)
Erlang code is first translated into code for TEAM, the Erlang Abstract
Machine, which is then translated into C.
The system features dynamic loading and reconfiguring; a running Erlang
program can evolve and be updated without ever having to stop.
It does have start-and-stop garbage collection, but this is
per-process.
Test programs (factorial, qsort, the Takeuchi test set) run a factor of
2 to 4 slower than the C version compiled with gcc -O2.
The author attributes this result to the superiority of the TEAM over
the WAM, but supplies no details.
Saumya K. Debray,
Implementing logic programming systems -- The quiche-eating approach,
ed. by E. Tick and G. Succi,
in Implementations of Logic Programming Systems,
Kluwer Academic Publishers,
1994,
pp. 65-88.
The author compares QD-Janus, a logic language system that is translated to
Prolog and run on SICStus Prolog, with another system, FCP(:), which is
translated to a low-level byte code that is then interpreted.
The translator to Prolog is much simpler (unsurprising) and the speed of
the resulting code is higher (surprising).
Zoltan Somogyi,
Fergus James Henderson,
Thomas Charles Conway,
The implementation of Mercury, an efficient purely declarative logic programming language,
ed. by Koen De Bosschere, Bart Demoen, and Paul Tarau,
in ILPS'94 Post-Conference Workshop on Implementation Techniques for Logic Programming Languages,
1994,
pp. 31-58.
See also its companion paper, Code generation for Mercury.
Peter Van Roy,
1983-1993: The wonder years of sequential Prolog implementation,
J. Logic Programming,
vol. 19-20,
1994,
pp. 385-441.
The definitive survey of sequential Prolog implementation, both in
historical and in technical respect.
Extensive literature list.
Patrice Boizumault,
The Implementation of Prolog,
Princeton Series in Computer Science,
Princeton University Press,
Princeton,
1993,
oz@nexus.yorku.ca (ozan s. yigit) writes:
This book is based on Boizumault's thesis, and was originally published
in French as Prolog: L'Implantation, 1988. Good to finally have it in
English. Prolog is a language filled with interesting implementation
problems, and this book is an enlightening synthesis of solutions offered
by various implementations. It covers every important aspect of Prolog
anatomy, and concludes with three small implementations (mini-cprolog,
Mini-WAM, Mini-Prolog-II) in common lisp, and detailed discussion.
(I wish I could fuse this book with Meier/Warren's "Computing With
Logic" somehow.) Translation is stiff in places, but never mind.
Recommended reading for serious language types.
URL : ftp://ftp.cnam.fr/
Files : Boizumault (sources for the interpretation of Prolog).
David S. Warren,
Memoing for logic programs,
Commun. ACM,
vol. 35,
#3,
March 1992,
pp. 93-111.
The paper treats three different subjects: a new resolution technique
(OLDT), abstract (=symbolic) interpretation, and partial evaluation.
As far as I can see, memoization is involved only in the first one.
This summary concerns OLDT only.
The Prolog inference technique is a top-down depth-first search called
SLD, Selective Linear resolution for Definite clauses.
Its disadvantage is that the search tree is often infinite and the
search process gets lost in infinite-length branches, even if there are
solutions left to be found.
Breadth-first search has the advantage that it finds all solutions
eventually, but it still loops on infinite trees that contain a finite
number of solutions.
To address this situation, the fact that a clause C has been started is
registered in a memoization table, and C is not started again
recursively until a result for C is available.
This digs up a "ground" result for C, and provides a bottom-up
component to the search.
All this is complicated by the presence of logic variables.
This technique results in a top-down, bottom-up-augmented search, in
which the search tree reduces to a search graph which is often finite
when the search tree is not.
The technique is called Ordered Linear resolution for Definite clauses
with Tabulation (OLDT).
A second explanation of the technique is given, starting from the
bottom-up resolution that is usual in Datalog, a database logic language
that is similar to Prolog, except that it has no structure values.
This resolution technique always terminates in the finite domain of
databases, and already uses memoization to prevent the repeated
recomputation of facts.
It is, however, inefficient in that it produces many facts that are
irrelevant to the query.
This is then remedied by introducing a top-down component ("pushing the
selects in"), arriving at essentially the same algorithm as above.
All this is explained extensively, from several different viewpoints
including relational algebra.
A transformation is given that adds memoization to Prolog programs.
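As a minimal sketch of the tabulation idea (my own illustration, not the
paper's algorithm), consider ground reachability over a cyclic edge relation:
plain depth-first SLD search would disappear into the cycle, but recording
answers in a table and iterating until the tables stop growing terminates.
The relation and the table layout below are made up for the example.

    # Hypothetical edge relation; the cycle b -> a would make SLD search loop.
    edges = {"a": ["b"], "b": ["c", "a"], "c": []}
    table = {x: set() for x in edges}       # goal -> answers found so far

    def step(x):
        """One round of evaluating reachable(x), reusing already tabled answers."""
        for y in edges[x]:
            table[x].add(y)
            table[x] |= table[y]            # consume answers instead of recursing

    changed = True
    while changed:                          # iterate until no table grows further
        old = {x: set(s) for x, s in table.items()}
        for x in edges:
            step(x)
        changed = table != old

    print(sorted(table["a"]))               # ['a', 'b', 'c']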
Hassan Aït-Kaci,
Warren's Abstract Machine -- A Tutorial Reconstruction,
MIT Press,
Cambridge, Mass.,
1991,
pp. 114.
Although generally considered the best explanation of the Warren Abstract
Machine, it does not have the rigorous clear approach that one would have
hoped for.
A sequence of four machines is defined, with their machine instructions:
the first only does unification of two terms, introducing read-write mode;
the second does the same extended with argument registers;
the third allows one definition for each term ("flat resolution");
and the fourth introduces backtracking.
The relation of all this to Prolog is not made clear, however.
Next, three optimization principles are formulated:
1. avoid creating space on the heap;
2. avoid data movement;
3. handle frequent cases by special code.
Based on this, a number of optimizations present in the WAM are explained
in order of increasing complexity.
This chapter closes with the treatment of the cut operator.
The book ends with a very clear summary of the WAM instructions, with their
implementation in a more low-level language.
No word about how to handle `assert' and `retract'.
Timothy Hickey,
Shyam Mudambi,
Global compilation of Prolog,
J. Logic Program.,
vol. 7,
#3,
1989,
pp. 193-230.
Details many techniques. Who refers to this?
David Maier,
David S. Warren,
Computing with Logic -- Logic Programming with Prolog,
Benjamin/Cummings Publ. Co.,
Menlo Park, Cal.,
1988,
pp. 535.
As the two-part title already suggests, the book has three goals:-) to
show how problems can be cast in logic programs, to show the
mathematical apparatus describing the semantics of these programs and to
show how these programs are implemented.
Unfortunately, none of these can be understood without understanding the
other two, which poses an obstacle in reading the book.
A first check on accuracy: the index entry "occurs check" (rather than
"occur check") refers us to page 284, which contains an example of an
occur-check violation without mentioning the term; to page 371, on which
I cannot find a reference; and to page 415, which contains a discussion of
the impact of the occur check (named correctly this time).
Note that David S. Warren is not David H.D. Warren, the author of the
Warren Abstract Machine.
Andreas Krall,
Implementation of a high-speed Prolog interpreter,
in ACM SIGPLAN '87 Symposium on Interpreters and Interpretive Techniques,
ACM SIGPLAN Notices,
vol. 22,
#7,
July 1987,
pp. 125-131.
Shows that a clever interpreter in assembler can beat compilation to C.
Optimizations used are tail recursion optimization, clause indexing and
threaded code, but the explanations are cryptic.
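Since the paper's explanation of clause indexing is cryptic, here is a minimal
sketch (mine, not the paper's) of first-argument indexing: clauses are bucketed
by the principal functor of their first head argument, so a call with a bound
first argument only visits one bucket.  Clauses whose first head argument is a
variable (omitted here) would have to appear in every bucket; the clause
representation and the append/3 example are my own.

    from collections import defaultdict

    clauses = [                             # append/3 as (index key, clause text)
        (("[]", 0), "append([], L, L)."),
        ((".", 2),  "append([H|T], L, [H|R]) :- append(T, L, R)."),
    ]

    index = defaultdict(list)
    for key, clause in clauses:
        index[key].append(clause)

    def candidates(functor, arity):
        """Clauses worth trying when the first call argument has this functor/arity."""
        return index.get((functor, arity), [])

    print(candidates(".", 2))               # only the non-empty-list clause is tried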
Peter Kursawe,
How to invent a Prolog machine,
New Gen. Comp.,
vol. 5,
1987,
pp. 97-114.
Is actually concerned with unification only.
Starting from a high-level description of unification, four levels are
defined.
Each level makes explicit some implicit properties of the previous
level, by transforming them into Prolog or built-in predicates.
This makes it easy to see that the Prolog semantics is preserved.
At each level, possibilities for partial evaluation are examined and
specialised predicates are introduced.
The final level corresponds more or less to some WAM instructions,
including read/write mode optimization.
Emphasis is more on the design process than on the resulting
instructions.
The paper is in regrettable English.
C.S. Mellish,
Some global optimizations for a Prolog compiler,
J. Logic Programming,
vol. 2,
1985,
pp. 43-66.
The author emphasizes that although logic programs are in principle
bidirectional, Prolog programs very often loop when used in unusual
directions, or get stuck in built-in one-directional operators. Also,
perhaps 3/4 of all Prolog clauses are deterministic, which means that
they can be implemented as imperative routines, with input and output
parameters.
To exploit this imperative nature of large parts of Prolog programs, a
two-stage approach is used. First, techniques are described to find
deterministic Prolog rules, or rather rules that are always used
deterministically.
This is done by symbolic (abstract) interpretation: a lattice of states
for each parameter is defined, equations are derived from the program,
and final knowledge about the states is obtained by taking the
transitive closure.
Second, optimized translations are made towards an adapted Prolog
virtual machine, POPLOG VM.
Occasionally, deep knowledge of Prolog implementation techniques is
needed to understand the paper.
For example, the operators on the lattice are mentioned
(combine, in, sim, seq, or) but not explained; the
reader is referred to technical reports.
Many elaborate examples are given, though.
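Since the lattice and its operators are not explained, here is a minimal
sketch (my own, with a hypothetical four-element lattice and hypothetical
equations) of the general mechanism: one equation per variable, joined
repeatedly until a fixed point is reached.

    BOT, GROUND, FREE, ANY = "bot", "ground", "free", "any"

    def join(a, b):
        """Least upper bound in the lattice bot < ground, free < any."""
        if a == b or b == BOT:
            return a
        if a == BOT:
            return b
        return ANY

    # Hypothetical equations: X is ground on entry, Y copies X, Z merges X
    # with a free variable.
    equations = {
        "X": lambda s: GROUND,
        "Y": lambda s: s["X"],
        "Z": lambda s: join(s["X"], FREE),
    }

    state = {v: BOT for v in equations}
    changed = True
    while changed:                          # iterate to the least fixed point
        changed = False
        for v, eq in equations.items():
            new = join(state[v], eq(state))
            if new != state[v]:
                state[v], changed = new, True

    print(state)    # {'X': 'ground', 'Y': 'ground', 'Z': 'any'}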
Jacques Cohen,
Describing Prolog by its interpretation and compilation,
Commun. ACM,
vol. 28,
#12,
Dec. 1985,
pp. 1311-1324.
Explains Prolog in terms of Pascal with continuations; the latter are
not explained extensively.
This leads naturally to compilation.
Many other aspects of Prolog are discussed, for example negation by
failure, the cut operator, infinite trees, goal freezing, inequalities
(called disequalities here), parallelism, multi-head Prolog clauses,
inverse computation, etc.
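As a minimal sketch of the continuation style (in Python rather than the
paper's Pascal, and for propositional clauses only, so unification is absent),
here is a resolver in which a conjunction is threaded through a success
continuation and backtracking is simply the clause loop returning; the program
below is made up.

    program = {                             # a :- b, c.   a :- d.   b.   c.
        "a": [["b", "c"], ["d"]],
        "b": [[]],
        "c": [[]],
    }

    def solve(goals, success):
        """Prove the goal list; call the success continuation on each solution."""
        if not goals:
            success()
            return
        first, rest = goals[0], goals[1:]
        for body in program.get(first, []): # try each clause; returning = backtracking
            solve(body + rest, success)

    solve(["a"], lambda: print("yes"))      # prints "yes" once (via b and c)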
John Gabriel,
Tim Lindholm,
E.L. Lusk,
R.A. Overbeek,
A Tutorial on the Warren Abstract Machine for Computational Logic,
ANL-84-84,
Argonne National Laboratory,
Argonne, Ill.,
June 1985,
pp. 53.
The emphasis is on explaining the WAM as an abstract machine for doing
computational logic, rather than on implementing Prolog.
The authors consider extensions for other kinds of logic languages
(theorem provers, etc.).
The text is very terse, but many examples are provided.
Something is said about assert and retract, but not much:
"We realize that the above comments are extremely cursory, and feel that
the reader should form his own judgment on the matter" (p. 23).
David A. Plaisted,
The occur-check problem in Prolog,
New Generation Computing,
vol. 2,
1984,
pp. 309-322.
Describes a Prolog preprocessor that will find situations in which a
term loop may be generated and insert correcting code.
J.A. Campbell,
Implementations of Prolog,
Ellis Horwood Limited,
New York,
1984,
Not available at the VU library.
David H.D. Warren,
An abstract Prolog instruction set,
Technical Note 309,
Artificial Intelligence Center, SRI,
Menlo Park,
Oct. 1983,
pp. 30.
This is the Warren Abstract Machine.
It was designed to be implemented in VAX machine code or VAX microcode, and
it differs from the PLM (here called the "Old Engine"; see Warren, May
1977) in the following respects.
1. It uses copying rather than structure sharing, although it is not
explained why.
2. There are instructions for tail recursion.
3. There are means for reduced choice point creation.
4. The unification mechanism has separate read and write modes.
Also, the global stack is now (inappropriately) called the heap.
The paper is a dry-as-dust description of the data structures and the
instruction set, with hardly any motivation.
It is up to the reader to find out how to apply the instructions.
For one thing, one wonders why the read/write mode is not stacked.
The answer probably lies in the fact that the get_structure and
get_list instructions can only be used for structures and lists that do
not contain other structures or lists: no get_structure or get_list can
be active when another get_structure or get_list is active.
This implies that structures or lists that do nest have to be decomposed
by the compiler.
None of this is explained, though.
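A minimal sketch (my reading of the restriction, not the report's compiler) of
that decomposition: a nested head argument such as f(g(X), Y) is flattened into
one get_structure per functor, with a fresh register standing in for each inner
structure, so that no two get_structure instructions are ever active at the
same time.  Instruction names follow the WAM; the register allocation is
simplified and the variable/value distinction is ignored.

    def flatten(term, reg, code, next_reg):
        """Emit flat instructions for matching structure `term` against register `reg`."""
        functor, args = term
        code.append(("get_structure", "%s/%d" % (functor, len(args)), reg))
        deferred = []                       # nested subterms, matched afterwards
        for a in args:
            if isinstance(a, str):          # a plain variable argument
                code.append(("unify_variable", a))
            else:                           # nested structure: use a fresh register
                r = "X%d" % next_reg
                next_reg += 1
                code.append(("unify_variable", r))
                deferred.append((a, r))
        for a, r in deferred:
            next_reg = flatten(a, r, code, next_reg)
        return next_reg

    code = []
    flatten(("f", [("g", ["X"]), "Y"]), "A1", code, 2)
    for instr in code:
        print(instr)
    # ('get_structure', 'f/2', 'A1')
    # ('unify_variable', 'X2')
    # ('unify_variable', 'Y')
    # ('get_structure', 'g/1', 'X2')
    # ('unify_variable', 'X')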
One appendix defines a byte-code instruction format, much like EM.
No word about how to handle `assert' and `retract'.
Alberto Martelli,
Ugo Montanari,
An efficient unification algorithm,
ACM TOPLAS,
vol. 4,
#2,
April 1982,
pp. 258-282.
An equation is formed with the terms to be unified as its operands.
This equation is then solved by applying two simple transformations,
term reduction and variable elimination.
These solve or fail to solve the equation in linear time.
The equation has no solution if there is a variable that cannot be
eliminated, which means that a term loop exists; this gives you the
occur check free of charge.
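A minimal sketch (mine, and deliberately naive: none of the machinery that
makes the paper's algorithm efficient) of the two transformations on a set of
equations, with the occur check falling out of variable elimination.  Variables
are strings, other terms are (functor, argument list) pairs; the example terms
are made up.

    def is_var(t):
        return isinstance(t, str)

    def occurs(v, t):
        return t == v if is_var(t) else any(occurs(v, a) for a in t[1])

    def substitute(t, v, s):
        if is_var(t):
            return s if t == v else t
        return (t[0], [substitute(a, v, s) for a in t[1]])

    def solve(equations):
        """Return a solved list of (variable, term) pairs, or None on failure."""
        solved = []
        while equations:
            l, r = equations.pop()
            if is_var(r) and not is_var(l):
                l, r = r, l                 # keep variables on the left
            if is_var(l):
                if l == r:
                    continue                # trivial equation: drop it
                if occurs(l, r):
                    return None             # occur check: no finite solution
                # variable elimination: substitute r for l everywhere
                equations = [(substitute(a, l, r), substitute(b, l, r))
                             for a, b in equations]
                solved = [(v, substitute(t, l, r)) for v, t in solved] + [(l, r)]
            else:
                if l[0] != r[0] or len(l[1]) != len(r[1]):
                    return None             # functor or arity clash
                equations += list(zip(l[1], r[1]))   # term reduction
        return solved

    # Unify f(X, g(a)) with f(g(Y), X): yields X = g(a), Y = a.
    print(solve([(("f", ["X", ("g", [("a", [])])]),
                  ("f", [("g", ["Y"]), "X"]))]))
    # solve([("X", ("f", ["X"]))]) returns None: the occur check rejects X = f(X).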
David H.D. Warren,
Implementing Prolog -- Compiling Predicate Logic Programs,
Research Report No. 39/40,
Dep. Artificial Intelligence, Univ. Edinburgh,
Edinburgh, Scotland,
May 1977,
pp. 102/???.
This report (#39) describes a predecessor to the WAM, called the ProLog
Machine (PLM).
It is tuned considerably to the hardware instructions of the DEC10,
making heavy use of that machine's indirection mechanism.
It includes the cut operator right from the start.
One important kind of data structure is used to represent terms:
a term is represented as a pair of pointers, one pointing to the
skeleton of the term and the second pointing to an array of
representations (pointers) of the logical variables in it.
When there are more instances of the term, they share the skeleton
pointer and have different data array pointers; this saves memory.
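A minimal sketch (my illustration, not the report's data structures) of
structure sharing: one skeleton, in which variables are slot numbers, is shared
by all instances of a term, and each instance carries only its own frame of
bindings.

    skeleton = ("foo", [0, ("bar", [1])])   # foo(X, bar(Y)); X is slot 0, Y slot 1

    def instance(skel, frame):
        """Build the concrete term denoted by a (skeleton, frame) pair."""
        if isinstance(skel, int):           # a variable: look it up in the frame
            return frame[skel]
        functor, args = skel
        return (functor, [instance(a, frame) for a in args])

    # Two instances share the skeleton and differ only in their (small) frames.
    print(instance(skeleton, ["a", "b"]))   # ('foo', ['a', ('bar', ['b'])])
    print(instance(skeleton, ["c", "d"]))   # ('foo', ['c', ('bar', ['d'])])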
The four stacks require software for reallocation and relinking, but a
garbage collector is needed only if one wants to compact the global
stack; the system will also work without one.
The machine does not do the occur check.
A few optimizations are present --special provisions are made for
variables that occur only once-- but there is no read/write mode yet.
The author complains that most of the work in implementing Prolog goes
into "non-logic" features, like built-in arithmetic, I/O, file handling,
internal database and meta-logical features.
No word about how to handle `assert' and `retract'.
The instructions and their implementations are not listed explicitly.