Principles of Programming Languages
and Programming Paradigms
Literature references and annotations by Dick Grune, dick@dickgrune.com.
Last update: Sun Dec 26 14:10:57 2021.
These references and annotations were originally intended
for personal use and are presented here only in the hope
that they may be useful to others.
There is no claim to completeness or even correctness.
Each annotation represents my understanding of the text
at the moment I wrote the annotation.
No guarantees given; comments and content criticism welcome.
John C. Mitchell,
Concepts in Programming Languages,
Cambridge University Press,
Cambridge, UK,
2003,
pp. 529.
Unlike most books on principles of programming languages, this one is
not paradigm-based: lambda calculus, Lisp and ML are seen as the
foundation of computation, and much is explained in this context,
including typically imperative entities like blocks, scopes, lifetimes
and activation records.
Some of the implementation of functional languages is treated.
The book is reminiscent of Abelson & Sussman's
`Structure and Interpretation of Computer Programs'.
An insightful chapter on logic programming by Krzysztof Apt concludes the
book.
Robert W. Sebesta,
Concepts of Programming Languages,
(5th Edn.),
Addison-Wesley,
Reading, Mass.,
2002,
pp. 698.
Extensive, possibly long-winded, but always very clear and readable
treatment of the subject, showing the maturity of a 5th(!) edition.
Treats enough programming language implementation to allow the student
to acquire a very concrete model of what is happening behind the screens.
Very student- and teacher-friendly.
After 200 pages of concepts (history, syntax and semantics, names,
bindings, scopes, etc.) the imperative and object-oriented paradigms are
covered in 7 chapters (300 pages) in a mainly bottom-up fashion.
Somewhat incongruously, the next two chapters treat concurrency and
exception handling.
Chapters about the functional and logic paradigms conclude the book.
Allen Tucker,
Robert Noonan,
Programming Languages -- Principles and Paradigms,
McGraw-Hill,
New York,
2002,
pp. 411.
Strong, paradigm-oriented treatment of the subject in a fairly
language-independent way.
Many concepts are explained effectively, using the mini-language Jay, a
stripped version of C, in contradiction to the claim on the cover that
it "uses Java as the language of illustration throughout the book".
In addition to the usual paradigms, the book treats event-driven
programming, using java.applet.* and java.awt.*, in 32 pages.
There is a chapter on semantics, explaining operational, axiomatic and
denotational semantics.
Most language constructs in the book are accompanied by denotational
semantics.
There is a chapter on memory management, covering arrays, run-time
stacks, memory leaks, garbage collection and more.
Another chapter treats exception handling mechanisms, this time without
denotational semantics...
Appendices on notation, Jay, and Java conclude the book.
Elliott Hughes,
How many trivial getter methods does Java have?,
ACM SIGPLAN Notices,
vol. 37,
#8,
Aug. 2002,
pp. 19-24.
In order to obtain statistics about the rate of occurrence of "trivial
getter methods" (methods that just return the value of a private field,
probably an objectionable entity), the author uses javap to disassemble
sample files and uses a simple one-page Perl script to analyse it and tally
trivial getter methods.
It is also shown that earlier methods using regular expressions on the
source code are faulty.
(The conclusion is that far fewer trivial getter methods are used than
has been reported previously).
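The detection idea can be sketched as follows; this is my own Python sketch, not Hughes's Perl script. A trivial getter such as `int getX() { return x; }` compiles to the three-instruction bytecode sequence aload_0, getfield, *return, which a disassembly-based tally can match reliably where source-level regular expressions cannot.

```python
# Sketch (not the paper's script): recognize a trivial getter from the
# opcode names javap would show for a method body.
def is_trivial_getter(opcodes):
    """True if the opcode list is a bare 'load this, read field, return'."""
    return (
        len(opcodes) == 3
        and opcodes[0] == "aload_0"          # push 'this'
        and opcodes[1] == "getfield"         # read the private field
        and opcodes[2].endswith("return")    # ireturn, areturn, ...
        and opcodes[2] != "return"           # bare 'return' is void
    )

print(is_trivial_getter(["aload_0", "getfield", "ireturn"]))            # True
print(is_trivial_getter(["aload_0", "getfield", "iadd", "ireturn"]))    # False
```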
Terrence W. Pratt,
Marvin V. Zelkowitz,
Programming Languages -- Design and Implementation,
(4th Edn.),
Prentice-Hall,
Upper Saddle River, N.J.,
2001,
pp. 649.
Like its predecessors, the book is based on the thesis that programming
languages are best understood with half an eye to their implementation.
The book explains the full array of programming language constructs,
using examples from 11 different languages, each of which is presented
in abstract in the book.
The main story of programming languages is interrupted regularly by
excursions into compiler construction and theoretical computer science,
which is somewhat, and occasionally quite, detrimental to its structure.
There are lots of goodies here, but the problem is that it is neither a
full-fledged programming language book (no clear division in paradigms;
logic programming is in 8.4 under "Sequencing with nonarithmetic
expressions") nor a compiler construction book (little LL parsing, no LR
parsing, no code generation, etc.).
Evgeny Dantsin,
Thomas Eiter,
Georg Gottlob,
Andrei Voronkov,
Complexity and expressive power of logic languages,
ACM Computing Surveys,
vol. 33,
#3,
Sept. 2001,
pp. 374-425.
Complexity theory account, mainly of "datalog", a logic-programming
description of a database.
Ulisses Ferreira,
$uu$ for programming languages,
ACM SIGPLAN Notices,
vol. 35,
#8,
Aug. 2000,
pp. 20-30.
A comprehensive account of the consequences of introducing the value
$ omega $, "undefined value", written here $uu$.
These include an if/ifnot/otherwise clause, handling the trivalent
logic, different paradigms for exception handling and object-oriented
programming, and an integration of classes and frames (from prototypes).
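The trivalent branch can be sketched in a few lines of Python; the names UU and branch3 are mine, not Ferreira's notation, and this only models the control structure, not the full paper.

```python
# A condition may now be true, false, or undefined ($uu$), so a branch
# needs three arms: if / ifnot / otherwise.
UU = object()  # sentinel standing in for the "undefined value"

def branch3(cond, if_, ifnot_, otherwise_):
    """Three-way dispatch over a possibly-undefined condition."""
    if cond is UU:
        return otherwise_()
    return if_() if cond else ifnot_()

result = branch3(UU, lambda: "then", lambda: "else", lambda: "undefined")
print(result)  # undefined
```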
H. Darwen,
C.J. Date,
Foundation for Object/Relational Databases -- The Third Manifesto,
Addison-Wesley,
Reading, Mass.,
1998,
pp. ???.
New theory of type inheritance?
Timothy A. Budd,
Functional programming and the fragile base class problem,
ACM SIGPLAN Notices,
vol. 33,
#12,
Dec. 1998,
pp. 66-71.
[[ multi-paradigm programming ]]
Carlo Ghezzi,
Mehdi Jazayeri,
Programming Language Concepts,
(3rd Edn.),
John Wiley & Sons,
New York,
1998,
pp. 427.
Although much has changed since the second edition, the look-and-feel
has stayed the same.
Text in a vigorous, sometimes harsh style, supplying much detail and
very close to the real world of programming languages.
It requires considerable sophistication from the student, but the
rewards are proportionately high.
The material, both explanatory and examples, is chosen from a broad
range of languages.
After 100 pages of introduction to languages, syntax and semantics,
there are three chapters on structuring: of the data, the computation
and the program,
followed by three chapters on object-oriented, functional and
logic languages.
Parallel and distributed programming is covered under "Structuring the
computation" and programming in the large under "Structuring the
program".
No word on lazy evaluation, though.
Java has made its way into the book, but not overwhelmingly so; it is
just one of the languages, and given about 10 pages.
The lack of white margins around the pages bothers me, I'm sorry to say.
Nicolas Pippenger,
Pure versus impure Lisp,
ACM TOPLAS,
vol. 19,
#2,
March 1997,
pp. 223-238.
Proves that some (on-line) applications that are $O(n)$ in impure Lisp
require $O(n ~ log ~ n)$ in pure Lisp.
(See also: Ponder et al., Are applicative languages inefficient?, ACM
SIGPLAN Notices, Vol. 23, No. 6, p. 135-139, June 1988).
Raphael A. Finkel,
Advanced Programming Language Design,
Addison-Wesley,
Menlo Park, Ca.,
1996,
pp. 480.
The title says "advanced" and indeed the book picks up where most (all?)
other books on principles of programming languages stop.
For example, Chapter 2, Control Structures, contains the subjects
Exceptions, Coroutines, Continuations, and Power Loops, and Chapter 8,
Logic Programming, pays much attention to the language Gödel.
Chapter 9, on advanced built-in data types, covers among others
associative arrays, substrings as first-class values and Icon data
handling.
Another intriguing subject is types as first-class values, in Chapter 3.
Data-flow languages get a chapter of their own (6).
This is the only book I know that is a substantial addition to the usual
books.
Excellent material for an advanced course on principles of programming
languages.
M. Ben-Ari,
Understanding Programming Languages,
John Wiley,
Chichester,
1996,
pp. 360.
Introductory course in principles of programming languages in a direct
style.
The author often speaks from considerable personal experience.
Regularly, but not systematically, language features are explained
through their implementation.
The book is 65 percent imperative principles using C and Ada, 15 percent
object orientation using C++ and Eiffel, 10 percent functional languages
using ML, and 10 percent logic languages.
Although ML is a strict language, lazy evaluation is explained.
Concurrency is correctly treated as an add-on to the other paradigms.
Although the book is paradigm-oriented, the word paradigm is not
mentioned.
Nothing on uninitialized variables.
Minimal bibliography.
Daniel J. Salomon,
Using partial evaluation in support of portability, reusability, and maintainability,
in Compiler Construction: 6th International Conference, CC'96,
Lecture Notes in Computer Science #1060,
ed. by Tibor Gyimóthy,
New York,
Springer-Verlag,
1996,
pp. 208-222.
Describes Safer_C, in which the features of the preprocessor are
replaced by a simple level of partial evaluation.
A simple example is conditional code inclusion by using an if statement
with a condition that can be determined at compile time.
More advanced examples include a compile-time for-loop and compile-time
procedures that create declarations.
All this is reminiscent of PL/I.
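The folding idea can be illustrated with a toy partial evaluator; this sketch and its names are mine, not Safer_C syntax. An if-statement whose condition is a compile-time constant is replaced by the selected branch, giving the effect of conditional code inclusion without a preprocessor.

```python
# Toy partial evaluation of an if-statement over source-text fragments.
def fold_if(cond, then_code, else_code, known):
    """Fold the branch away when cond is a known compile-time constant."""
    if cond in known:                  # condition decidable at compile time
        return then_code if known[cond] else else_code
    # otherwise the runtime branch must remain
    return f"if ({cond}) {{ {then_code} }} else {{ {else_code} }}"

known = {"DEBUG": False}
print(repr(fold_if("DEBUG", "log();", "", known)))   # '' – code excluded
print(fold_if("x > 0", "f();", "g();", known))       # runtime if remains
```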
Roberto Barbuti,
Paolo Mancarella,
A multiple-valued logical semantics for Prolog,
ed. by Hanne Riis Nielson,
in Programming Languages and Systems -- ESOP '96,
Lecture Notes in Computer Science #1058,
Springer Verlag,
Berlin,
1996,
pp. 62-76.
A consistent and complete semantical model of (pure, negation-free) Prolog
is obtained by describing the logical result of every rule by one of
four logical values:
no solutions and terminate ("false"),
at least one solution and terminate ("true"),
no solutions and loop ("undefined"),
at least one solution and loop (not given a name in the paper).
Thomas J. Bergin,
Richard G. Gibson,
History of Programming Languages,
Addison-Wesley,
Reading, Mass.,
1996,
pp. 864.
Published as an ACM Press book, it is a report on and the proceedings of the
HOPL II conference of 1993.
Much material for history buffs.
Printing quality is meagre (for financial reasons, no doubt), which does
nothing for the many photographs in the book.
Robert W. Sebesta,
Concepts of Programming Languages,
(3rd Edn.),
Addison-Wesley,
Reading, Mass.,
1996,
pp. 634.
Third edition, updated.
As with the previous editions the first impression is: words, words,
words.
Slow-paced, reasonably wide, conventional coverage of the principles of
programming languages in considerable depth, in a usually very controlled
style.
The book is very much bottom-up, with almost 200 pages of basics
(history, formalisms, binding, type checking, scopes, etc.), 300 pages
of imperative concepts, and then 100 pages for functional, logic and
object-oriented languages.
There are ample sets of exercises.
Axiomatic semantics is introduced already in chapter 3 (but not used
afterwards, as far as I can see).
Some remarks.
No word about macro processors.
Call-by-value, etc. are called pass-by-value, etc., which is probably
more appropriate but non-standard.
Logic programming is explained with almost no reference to
unification!?!
Ravi Sethi,
Programming Languages, Concepts & Constructs,
(2nd Edn.),
Addison-Wesley,
Reading, Mass.,
1996,
pp. 640.
Structured according to paradigms, unlike the first edition.
Broad coverage of a large number of issues, often in considerable depth
without getting breathless, written in a direct and personal style.
After 50 pages of introduction, concepts of the imperative,
object-oriented, functional, logic, and concurrent paradigms are covered
in chapters of 50 to 150 pages each.
The book closes with chapters on: semantic methods including attribute
grammars; lambda calculus; and a look at some (very common) languages.
Many exercises, some quite thought-provoking.
There is very little material outside the usual paradigms and the usual
languages; this adds to the "cosy" and "protected" atmosphere of the book.
However, the Teddy bear on the cover now lives in a VR world.
A considerable rewrite of the 1st edition and even more intent on being
accessible: "inadvertently" has been replaced by "in error", the noun
"loom" is explained, etc.
Ruknet Cezzar,
A Guide to Programming Languages -- Overview and Comparison,
Artech House,
Boston,
1995,
pp. 496.
Thorough examination of a limited number of languages, with the explicit
aim of assisting the reader in choosing a language for a given purpose.
To this end, the author defines a set of six programming exercises,
ranging from arithmetic through commercial to symbolic.
Next, the author sets up a checklist for desired features; examples are
simplicity, uniformness (called unanimity here), expressiveness,
pedagogy, external support, etc.
The first 80 pages contain a good, down-to-earth treatment of concepts
in programming languages, including such varied items as grammars,
parsing, compiling, binding, parameter passing, recursion, lists and
visual programming.
Next, the following languages are examined, their performance on the
programming test set is analysed, and they are graded according to the
wish list: Ada, Basic, C, COBOL, FORTRAN, Pascal, LISP, Prolog,
Smalltalk, Eiffel, C++, Visual Basic, Visual C++.
It is clear that the author is a seasoned programmer who will gladly
share his opinions on this, that and th'other.
I missed Icon, but Logo is mentioned. No Algol 68, though.
Worth reading.
Max Copperman,
Jeff Thomas,
Poor man's watchpoints,
ACM SIGPLAN Notices,
vol. 30,
#1,
Jan. 1995,
pp. 37-44.
Break points by code patching by dis- and re-assembly.
Branch delay slots are bad news, especially if they are not no-ops.
Another problem is the address of relative jumps getting out of range as the
result of code expansion.
Some trickery to obtain a free register is explained.
Erkan Tın,
Varol Akman,
Murat Ersan,
Towards situation-oriented programming languages,
ACM SIGPLAN Notices,
vol. 30,
#1,
Jan. 1995,
pp. 27-36.
In its simplest form situation theory extends the Prolog facts and rules
with "situations", sets of rules and facts, which form locales in which
these rules and facts are valid.
The situations are linked in a DAG, so most situations have super- and
sub-situations.
Situations can occur as items in rules and facts.
So a situation can contain a rule which says:
"If a sub-situation contains a such-and-such rule, it should also contain
a so-and-so rule."
Also, rules and facts that are known to be valid within a given situation
can be known to be beliefs or hypotheses in a super-situation.
Three Prolog-like languages supporting situational logic are compared.
Not surprisingly, the authors' BABY-SIT is superior.
Dianxiang Xu,
Guoliang Zheng,
Logical objects with constraints,
ACM SIGPLAN Notices,
vol. 30,
#1,
Jan. 1995,
pp. 5-10.
Mixed paradigm.
Ryan Stansifer,
The Study of Programming Languages,
Prentice-Hall,
Englewood Cliffs, NJ.,
1995,
pp. 334.
Difficult to assess, since I cannot find an underlying principle.
Contains interesting pieces on history, grammars, data types, PROLOG, lambda
calculus, denotational semantics and Hoare axiomatics.
Quite theoretical.
James F. Smith,
Thomas S. Frank,
Introduction to Programming Concepts and Methods with Ada,
McGraw-Hill,
New York,
1994,
pp. 545.
The book is intended to meet the American CS1 course requirements, and
should be readable even by people who have no previous experience with
computers.
I doubt it.
E.g., what a variable is, is explained in two complicated sentences on p. 13;
since variables are a notoriously difficult concept for beginners, this seems
inadequate.
Although the authors repeatedly claim that they teach programming, it is
hard to avoid the impression that they actually teach Ada (or a subset of it).
The first Ada program they show, on page 9, is 27 lines long and includes a
package instantiation and an obscure call to Text_IO.Put; explaining these
27 lines gets the authors in deep trouble, for which they apologize twice.
It is hard to tell whether all this is the result of the CS1 course
requirements or if it shows that Ada as a first language is a bad idea.
Also, the chapter structure is puzzling:
1. Introduction to Computers and Programming;
2. Developing a Program;
3. A First Look at Packages;
4. Floating Point Types;
etc.
James F. Smith,
Thomas S. Frank,
Instructor's Manual to Accompany Introduction to Programming Concepts and Methods with Ada,
McGraw-Hill,
New York,
1994,
pp. 156.
Rationale behind the book + guidelines for the teacher.
For example, Floating Point Types are treated early to provide more
interesting numerical examples than integers would allow.
Joseph Bergin,
Data Abstraction -- The Object-Oriented Approach Using C++,
McGraw-Hill,
New York,
1994,
pp. 666.
Solid introduction to object-oriented programming.
The first 190 pages out of 660 are about data abstraction and object-oriented
programming, and definitely are worth reading.
The rest is examples and case studies, and goes on and on.
The use of object-oriented programming is motivated well and design rules
are made explicit.
The `is-a' and `has-a' relations are mentioned but do not occupy a central
position.
Henri E. Bal,
Dick Grune,
Programming Language Essentials,
International Computer Science,
Addison-Wesley,
Wokingham, England,
1994,
pp. 271.
Paradigm-oriented, concise book of wide coverage, systematic structure
and reasonable depth.
For each paradigm, principles and concepts are explained and one or two
representative languages are discussed briefly.
Paradigms covered include
imperative, object-oriented, functional, logic, parallel and distributed,
constraint, access-oriented, single-datastructure, data-flow, database,
and real-time programming, and little languages, each to a varying extent.
Karl J. Lieberherr,
Ignacio Silva-Lepe,
Cun Xiao,
Adaptive object-oriented programming using graph-based customization,
Commun. ACM,
vol. 37,
#5,
May 1994,
pp. 94-101.
It is recognized that many real-world applications consist of traversals
of graphs of objects, during which certain actions must be performed on
certain objects.
It is also recognized that such traversals are often formulated long
after the graph of objects has been defined, which makes it undesirable
to incorporate the traversal in the objects; conversely the object graph
may quite well be modified long after the traversal has been defined,
which makes it undesirable to incorporate too much knowledge of the
object graph into the traversal.
The proposed system (a variant of Demeter) allows the traversal to be
programmed more abstractly.
Basically, a program consists of a traversal specification telling
for classes with which names to look, and to call which methods if it
has found one.
The position in the search is expressed as a path from the root of the
search to the object found, naming all intermediate objects.
The search can be restricted by specifying restrictions on these paths,
using "bypassing" path expressions.
Note that the program can be written and compiled before any actual
classes are defined.
Once the classes have been defined, the program can be sent off
searching among the objects, to perform its work.
If a method specified in the program is never found, it will just never
be called.
Given both the full program and the full set of class definitions, a C++
program can be generated that will perform the traversal and the actions
and that will be type-correct in the C++ sense.
Of course, when either the program changes or the class definitions
change, a new C++ program must be generated.
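The traversal idea can be sketched in a few lines; this is my own hedged Python sketch of the principle, not Demeter or the generated C++. A named method is invoked on every object in the graph that has it, and objects lacking the method are simply passed over, so the traversal can be written before the classes exist.

```python
# Walk an object graph and call a named method wherever it is found;
# 'children' is the (assumed) link slot, not Demeter terminology.
def traverse(obj, method_name, visited=None):
    if visited is None:
        visited = set()
    if id(obj) in visited:           # the graph may contain cycles
        return
    visited.add(id(obj))
    m = getattr(obj, method_name, None)
    if callable(m):
        m()                          # method found: perform the action
    for child in getattr(obj, "children", []):
        traverse(child, method_name, visited)

visits = []

class Node:
    def __init__(self, *children): self.children = list(children)

class Leaf(Node):
    def report(self): visits.append("leaf")

traverse(Node(Leaf(), Node(Leaf())), "report")
print(visits)  # ['leaf', 'leaf']
```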
Brian L. Meek,
Programming languages: towards greater commonality,
ACM SIGPLAN Notices,
vol. 29,
#4,
April 1994,
pp. 49-57.
To design a language-independent (LI) framework, one distinguishes:
I. base standards, which define basic building blocks;
functional standards, which define functionality;
general standards, which define mechanisms (data typing, procedure calls,
etc.).
II. abstract, linguistic, operational and representational levels.
E.g., reals are an approximation on the computational level of abstract real
numbers.
Several commonality areas are covered: LI Data Types, LI Arithmetics, LI
Procedure Calling.
E. Tick,
M. Korsloot,
Determinacy testing for nondeterminate logic programming languages,
ACM TOPLAS,
vol. 16,
#1,
Jan. 1994,
pp. 3-34.
In the Andorra / Pandora model of nondeterministic logic programming, the
program consists of two kinds of guarded Horn clauses: don't care and don't
know.
The don't care goals are treated as in committed choice: if one head
succeeds, it is committed.
The don't know clauses make sure all their goals are tested, in the following
way.
If only one clause can succeed, it is committed, but if more than one can
succeed, the goal suspends.
Then progress elsewhere either causes all clauses but one to fail (in
which case it is committed) or results in deadlock.
If the system deadlocks, one suspended goal is taken, a checkpoint is made,
one of its clauses is forced to commit, and computation continues.
If it was the wrong choice, backtrack will come back here and force another
clause to commit.
The paper describes algorithms for analysing such programs.
Peter Henderson,
Object-Oriented Specification and Design with C++,
McGraw-Hill,
London,
1993,
pp. 263.
Object-oriented programming, design methodology, C++ and a diagram
system are introduced and explained side by side, in a very clear and
meaningful fashion.
A fifth thread running through the book is the object-oriented
prototyping language Enact, designed and provided by the author.
First the principles of object-oriented programming are explained using
Enact and then code for the same purpose is constructed in C++; this
makes C++ look very hairy!
Next, increasingly complicated case studies are provided in a very clear
and well-argued way.
Some general observations and an annotated (!) bibliography conclude the
book.
The author is a master at simple explanations; e.g. `abstract classes'
are explained simply as classes for which no constructor has been
provided since none will be needed.
I've seen much worse definitions.
Excellent text book.
Kenneth C. Louden,
Programming Languages: Principles and Practice,
PWS-Kent,
Boston,
1993,
pp. 641.
Another good book on programming language principles.
Solid, traditional, with a conventional structure:
a large number of components grouped in a number of chapters,
followed by chapters on the three non-imperative paradigms,
and finally a chapter on parallel programming.
Chapters on history, language design and formal semantics are also
included.
The conventional structure is no surprise since the book aims at
fulfilling two American Course Requirements.
Much material.
Leslie B. Wilson,
Robert G. Clark,
Comparative Programming Languages -- 2nd Ed.,
Addison-Wesley,
Reading, Mass.,
1993,
pp. 374.
Second edition of Wilson and Clark [1988].
Two chapters have been added: Object-oriented Programming Languages and
Syntax and Semantics, both to the point.
Syntax is covered in some detail, denotational and axiomatic semantics
are exhibited in one and a half page each.
Alice E. Fischer,
Frances S. Grodzinsky,
The Anatomy of Programming Languages,
Prentice-Hall,
Englewood Cliffs,
1993,
pp. 557.
Although the book is easy to read throughout, I fail to understand its
structure; nor is that structure explained in the book.
It consists of three parts:
"About language", which covers subjects like communication,
abstraction and natural and formal languages;
"Describing Computation", which is mainly about elementary
constructs in programming languages, and
"Application Modeling", which seems to be about functional and logical
programming, semantics, modules, generics and inheritance (?!?).
Each single section in itself is worked out very well, though, and some of
the innuendos betray a lot of field experience.
The material is covered in rich detail; well-annotated examples abound.
A large number of languages is used in examples, and is explained adequately.
There are at least 20 or 30 exercises for any subject.
The authors believe in footnotes.
Some minor and not so minor gripes:
--
Comes dangerously close to equating natural language with English.
--
Nothing on concurrency.
--
The bibliography is adequate but not well accessible: there are no
bibliography sections and there seems to be no reasonable way to find the
literature references for a given language.
--
Divide by zero is a hardware error, like a disk-read error ?!? (p. 303)
All in all, interesting material for the already moderately sophisticated
student.
Jim Waldo,
The Evolution of C++,
MIT Press,
Cambridge, Mass.,
1993,
pp. 297.
Collection of papers, mostly from USENIX conferences.
Broad subjects are:
The early years;
Multiple inheritance;
Exception handling;
Runtime typing;
Distributed computing.
Barbara Liskov,
Jeannette L. Wing,
A new definition of the subtype relation,
in ECOOP'93 -- Object-Oriented Programming,
Lecture Notes in Computer Science #707,
1993,
pp. 119-141.
We want a subtype T to have at least the behavior of its supertype
S.
We usually achieve this by requiring all values of type T also to be
values of S, and for T to have all the methods S has, with the
same signature.
But semantically that is not enough.
Suppose an object V of type T is modified by an (overridden) method
$ M sub T $.
Then these modifications should be within the range of possible
modifications that the original $ M sub S $ could have made to V when
V were viewed as an object of type S.
Otherwise, a part of the program that handles V as a type S object
could find out that V changes in a way incompatible with what
$ M sub S $ could have done.
This means that the additional modifications by $ M sub T $ should be
invisible when viewing V as of type S, and that the visible
modifications should be identical to those made by $ M sub S $.
This requires restrictions on the pre- and post-conditions of
$ M sub S $ and $ M sub T $.
The authors propose explicit mappings between the values of and methods
of S and T, and then formulate the above restrictions in terms of
mappings.
Examples are given, all with a minimum of formalism.
Very readable.
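An illustration of the rule (mine, not the paper's formalism): an overriding method must stay within the range of effects the supertype's method could have had, or a client holding the object at the supertype will observe an impossible change.

```python
class Counter:
    def __init__(self): self.n = 0
    def bump(self): self.n += 1        # visible effect: n increases by 1

class OkCounter(Counter):
    def bump(self):
        super().bump()                 # same visible effect: still +1

class BadCounter(Counter):
    def bump(self): self.n += 2        # an effect Counter.bump could never
                                       # produce: not a behavioral subtype

c = BadCounter()
c.bump()
print(c.n)  # 2 – a client treating c as a Counter would expect 1
```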
Rainer H. Liffers,
Inheritance versus containment,
ACM SIGPLAN Notices,
vol. 28,
#9,
Sept. 1993,
pp. 36-38.
The author suggests that one difference between is-a and has-a is that
if X is a Y the Y cannot be replaced by something else without X losing
its identity, whereas if X has a Y, it can.
(Stroustrup recommends asking "Can it have two?")
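The replacement test can be shown in a tiny example (class names mine): the engine inside a car can be swapped without the car losing its identity, while being a vehicle cannot be replaced at all.

```python
class Vehicle: pass

class Engine: pass

class Car(Vehicle):              # is-a: being a Vehicle is part of Car's identity
    def __init__(self):
        self.engine = Engine()   # has-a: replaceable (and a car could have two)

car = Car()
car.engine = Engine()            # still the same car after replacement
print(isinstance(car, Vehicle))  # True – this cannot be swapped out
```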
Doug Bell,
Mike Parr,
Spreadsheets -- a research agenda,
ACM SIGPLAN Notices,
vol. 28,
#9,
Sept. 1993,
pp. 26-28.
Exploratory thoughts about spreadsheets as a paradigm.
Dick Grune,
Two-level grammars are more expressive than Type 0 grammars; or are they?,
ACM SIGPLAN Notices,
vol. 28,
#8,
Aug. 1993,
pp. 43-45.
Shows that two-level grammars can handle infinite sets of terminal symbols,
which makes them more expressive than Type 0 grammars.
There was heavy opposition from philosophers.
Andrew Malton,
The denotational semantics of a functional tree-manipulation language,
Computer Languages,
vol. 19,
#3,
July 1993,
pp. 157-168.
Denotational semantics of TXL.
M. Badii,
F. Abdollahzadeh,
Dynamic semantic specification by two-level grammars for a block-structured language with subroutine parameters,
ACM SIGPLAN Notices,
vol. 28,
#5,
May 1993,
pp. 9-18.
Seven-page example of same.
Paul W. Abrahams,
Typographical extensions for programming languages: Breaking out of the ASCII straightjacket,
ACM SIGPLAN Notices,
vol. 28,
#2,
Feb. 1993,
pp. 61-68.
Actually a rather tame plea for the use of different fonts and character
sets to improve program readability.
Not enough examples to be convincing.
Jean-Pierre Banâtre,
Daniel le Métayer,
Programming by multi-set transformation,
Commun. ACM,
vol. 36,
#1,
Jan. 1993,
pp. 98-111.
All the world is a multi-set.
Carl A. Gunter,
Semantics of Programming Languages -- Structures and Techniques,
The MIT Press,
Cambridge, Mass.,
1992,
pp. 419.
Probably as simple as the subject can be made, but still heavy reading.
Daniel P. Friedman,
Mitchell Wand,
Christopher T. Haynes,
Essentials of Programming Languages,
MIT Press,
Cambridge, Mass.,
1992,
pp. 536.
The title is misleading.
It should have been: `Essentials of Symbolic Computation and its
Interpretation in Scheme'.
The book can be seen as a technical companion to Abelson and Sussman's book
`Structure and Interpretation of Computer Programs';
like the latter, it is based on Scheme.
The semantics of constructs in imperative and object-oriented languages is
described operationally by giving an interpreter in Scheme.
The treatment of the functional paradigm is implicit in the use of Scheme;
logic programming is not mentioned, and neither is Prolog.
Specific languages are only mentioned very much in passing.
E.g. Modula-2 is referred to only once, on page 190, except that the actual
reference is on page 191.
It is not a general book on programming languages.
Wiley has a book of the same title; is that the same book?
W. Havens,
et al.,
Echidna: A constraint logic programming shell,
Simon Fraser Univ., Centre for Sys. Sci.,
CSS-IS TR 92-15,
1992,
[Not in libraries in the Netherlands (930423).]
Gio Wiederhold,
Peter Wegner,
Stefano Ceri,
Toward megaprogramming,
Commun. ACM,
vol. 35,
#11,
Nov. 1992,
pp. 89-99.
The megaprogramming language MPL describes an interface for communication
with and between very large modules/services.
Each service responds to a small number of universal primitives
(unfortunately with unintuitive names):
INSPECT causes the service to return a description of itself;
IMPORT SUPPLY yields a description of the data structures in which it can be
addressed (in a self-describing form);
IMPORT EXTRACT yields a description of the data structures it will use in
answering;
SUPPLY supplies data (and requests) to the service;
INVOKE confirms the request;
EXTRACT gets back the result.
The actual request is broken up into the following three parts.
First the user does a SUPPLY to send a data structure (in SUPPLY format) to
the service for inspection.
This data structure need not be complete in that details may be left open;
the service may or may not supply defaults.
The user can then ask questions about the request, e.g. how much it will cost
or how much time it will take.
To set the service working on the request, the INVOKE statement is used.
INVOKE returns immediately, allowing the user to continue.
At any time (even before the INVOKE) the user can do EXTRACT, which will
return some, possibly incomplete, answer.
The answer is complete only when it says so explicitly in the answer.
The paper describes a small application (managing the transportation of
goods between two U.S. Navy installations) and a small shell-like extension
with pipes for communication between services on behalf of the user.
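The request cycle described above can be modelled in miniature; this is my own toy model, not MPL syntax, and the field names are invented. SUPPLY hands over a possibly incomplete request, INVOKE starts the work and returns immediately, and EXTRACT may be called at any time, yielding answers that are complete only when they say so.

```python
class Service:
    """Toy stand-in for a very large module addressed via the primitives."""
    def __init__(self):
        self.request, self.progress = None, 0

    def supply(self, request):
        self.request = dict(request)   # details may be left open

    def invoke(self):                  # returns immediately; work is async
        self.progress = 0

    def step(self):                    # stand-in for work done elsewhere
        self.progress += 50

    def extract(self):                 # possibly incomplete answer
        return {"complete": self.progress >= 100, "done": self.progress}

svc = Service()
svc.supply({"goods": "spare parts", "route": None})  # 'route' left open
svc.invoke()
svc.step()
print(svc.extract())  # {'complete': False, 'done': 50}
svc.step()
print(svc.extract())  # {'complete': True, 'done': 100}
```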
Wilfred J. Hansen,
Subsequence references -- First class values for substrings,
ACM TOPLAS,
vol. 14,
#4,
1992,
pp. 471-489.
The difference between a sequence and a subsequence is that a
subsequence S has a preceding and a following subsequence, which may
be examined from S.
Subsequences as first class values means that all strings, substrings
and characters are unified into one data type: subsequence reference.
The subsequence reference is a record with a pointer to the actual
string, an index to the first character of the subsequence and an index
to the first character no longer in the subsequence.
Several operators on subsequence references are proposed and their
usefulness is demonstrated.
Feels like an alternative Icon.
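The record described above can be sketched directly; the field and method names are mine, not Hansen's. A subsequence reference holds the base string plus two indices, and the preceding and following subsequences are derivable from it.

```python
class Subseq:
    """A subsequence reference: base string, first index, first index past."""
    def __init__(self, base, start, end):
        self.base, self.start, self.end = base, start, end

    def text(self):
        return self.base[self.start:self.end]

    def preceding(self):
        """The subsequence before this one, examinable from it."""
        return Subseq(self.base, 0, self.start)

    def following(self):
        return Subseq(self.base, self.end, len(self.base))

s = Subseq("hello world", 6, 11)
print(s.text())                    # world
print(repr(s.preceding().text()))  # 'hello '
```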
Christophe Dony,
Jacques Malenfant,
Pierre Cointe,
Prototype-based languages: From a new taxonomy to constructive proposals and their validation,
in OOPSLA'92,
ACM SIGPLAN Notices,
vol. 27,
#10,
Oct. 1992,
pp. 201-217.
Based on analysis of a number of proposed prototype-based languages, 6
basic properties of such languages are determined:
1. Representation: classically as variables and methods, or as slots
(attributes).
2. Dynamic modification of structure: once the object is created, can
its structure be modified?
3. Creation ex nihilo: can objects be created out of nothing, or do they
always have to be clones from pre-existing objects?
4. Do we have to create empty objects or can we create objects with
initial structure?
5. Is delegation implicit or explicit?
6. A "split object" is an object part of whose properties reside in
another object. Can the language handle split objects?
A taxonomy is proposed, based on asking the above questions in the above
order, and feasibility, usefulness, and other properties of the language
groups at the nodes of the taxonomy (given explicitly on page 204) are
examined.
It turns out that creation ex nihilo is virtually essential, and so is
creating objects with initial structure.
(Actually there is a 0th property: is the copying involved in cloning
deep or shallow? But deep cloning is totally impractical, so the answer
is always "shallow").
A platform, implemented in Smalltalk, is presented for experimenting
with the various possibilities.
Connections with LaLonde's exemplars and with Ungar's Self are explored.
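Most of the six properties can be shown in a few lines of Python (a sketch with hypothetical names, not the Smalltalk platform of the paper):

```python
class Proto:
    """Minimal prototype object: slots only, shallow cloning,
    delegation along a parent chain."""
    def __init__(self, parent=None, **slots):
        self.parent = parent          # delegation target (None = none)
        self.slots = dict(slots)      # creation ex nihilo, with initial structure

    def get(self, name):
        obj = self
        while obj is not None:        # implicit delegation up the parent chain
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.parent
        raise AttributeError(name)

    def set(self, name, value):       # dynamic modification of structure
        self.slots[name] = value

    def clone(self):                  # shallow copy, same delegation target
        return Proto(self.parent, **self.slots)

point = Proto(x=0, y=0)               # created ex nihilo, with initial slots
p = point.clone()
p.set("x", 3)                         # modifying the clone leaves point alone
```

A split object is then simply one whose parent holds part of its properties.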
William R. Cook,
Interfaces and specifications for the Smalltalk-80 collection classes,
ACM SIGPLAN Notices (OOPSLA '92),
vol. 27,
#10,
pp. 1-15.
Oct. 1992,
The Smalltalk-80 collection class library is analysed on the basis of
simplified interface descriptions, here called `protocols'.
A lattice of these protocols is set up using the inclusion relation,
creating new abstract protocols where necessary.
This very clear analysis uncovers a number of inconsistencies and gaps.
After correcting these, the author proposes an improved form of the
collection class library, which looks indeed convincingly better.
Allen L. Ambler,
Margaret M. Burnett,
Betsy Zimmerman,
Operational versus definitional: a perspective on programming paradigms,
IEEE Computer,
vol. 25,
#9,
pp. 28-43.
1992,
For the purpose of explanation, existing paradigms are grouped into two
classes, operational and definitional, depending on whether or not
flow-of-control is specified, although there is actually a
continuum rather than two distinct classes.
Operational paradigms include
imperative,
object-oriented,
functional-operational,
asynchronous and
synchronous,
while the definitional paradigms include
functional-definitional,
transformational,
logic,
form-based,
dataflow and
constraint programming.
The two demonstrational paradigms,
inference and
demonstration of constraints,
are tentatively classified as operational.
(Functional-operational differs from functional-definitional in that the
specified order of alternatives is significant in functional-operational and
is not in functional-definitional.)
The principles of the above paradigms are described clearly, and exactly one
example of each is given (quicksort -- as far as applicable).
Has few but adequate literature references.
Raymond T. Boute,
The Euclidean definition of the functions div and mod,
ACM TOPLAS,
vol. 14,
#2,
pp. 127-144.
April 1992,
This long overdue paper clears up the still festering problems with div and
mod, by first firmly establishing three axioms for them and then examining
the two options for div: truncate to zero (T for truncate) and truncate to
−infinity (F for Floor).
T is bad since it is not periodic.
This leaves two possibilities for mod: sign of divisor (also called F) and
always non-negative (E for Euclidean).
Using a number of practical examples, the author shows that E is to be
preferred.
Mathematical foundations are given.
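The three definitions are easy to compare in Python, whose built-in // and % happen to implement F (an illustrative sketch, not from the paper):

```python
def divmod_t(a, b):
    """T: quotient truncated toward zero; remainder takes the sign
    of the dividend."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return q, a - b * q

def divmod_f(a, b):
    """F: quotient floored toward -infinity; remainder takes the sign
    of the divisor.  This is Python's native // and %."""
    return a // b, a % b

def divmod_e(a, b):
    """E (Euclidean): remainder always non-negative, 0 <= r < |b|."""
    r = a % abs(b)                # % with a positive modulus is non-negative
    return (a - r) // b, r
```

For -7 div 2, T gives (-3, -1) while F and E give (-4, 1); T-mod is not periodic in the dividend, which is Boute's objection.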
Henry G. Baker,
A `Linear Logic' Quicksort,
ACM SIGPLAN Notices,
vol. 29,
#2,
pp. 13-18.
Feb. 1992,
Linear Logic is a super-pure style of functional programming in which each
value that is created has to be consumed by some operation in a statically
verifiable fashion.
As a stopgap, the built-in operator kill-value is provided, but its use is
(of course) discouraged; a built-in function duplicate-value is provided and
carries no odium.
The resulting programs are very clean in their data usage and can be run
without the use of a garbage collector.
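Python cannot verify linearity statically, but the discipline itself can be sketched: every list is consumed exactly once, so nothing is left for a garbage collector (an illustrative sketch, not Baker's Lisp code):

```python
def qsort(lst):
    """Quicksort in a linear style: the input list is consumed (emptied),
    and every intermediate list is used exactly once."""
    if not lst:
        return lst                 # the empty list is returned, not discarded
    pivot = lst.pop(0)             # consume the head of the input
    small, large = [], []
    while lst:                     # consume the rest, one cell at a time
        x = lst.pop(0)
        (small if x < pivot else large).append(x)
    result = qsort(small)          # small is consumed by the recursive call
    result.append(pivot)           # pivot is consumed here
    result.extend(qsort(large))    # large is consumed as well
    return result
```

Note that quicksort needs neither kill-value nor duplicate-value: every value created is consumed exactly once.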
Linda Weiser Friedman,
Comparative Programming Languages: Generalizing the Programming Function,
Prentice Hall,
Englewood Cliffs, NJ,
1991,
pp. 578.
This densely written book eschews no detail; pity the student who has to
study from it.
Consists of three parts: Elements of programming languages, which contains
chapters on language processors, data structures, program structures and
control structures, in that order;
Programming languages for information processing, which covers COBOL (46
pages), Pascal (35), Modula-2 (46), C (43), Prolog (38) and Smalltalk (36
pages);
and Other programming tools (program design, 4th generation tools and expert
advisors).
The occasional sarcastic wit is somewhat redeeming, though.
Jean-Luc Gaudiot,
Lubomir Bic,
Advanced topics in data-flow computing,
Prentice Hall,
Englewood Cliffs, N.J.,
1991,
pp. 626.
Twenty-two papers on same, and hundreds of literature references.
Colin Atkinson,
Object-oriented Reuse, Concurrence and Distribution: An Ada-Based Approach,
ACM Press,
Addison-Wesley,
Wokingham, England,
1991,
pp. 270.
Describes DRAGOON, an object-oriented derivative of Ada.
The main point is that many low-level objects, e.g. the integers, are not
objects at all, but are values.
DRAGOON supports the "mixed" paradigm of combined object-oriented and
value-oriented programming.
The author claims, and illustrates with examples, that this simplifies
writing abstract classes, thus fostering reuse.
Reuse is explained as "conceptual editing of existing code to fit a new
application" (p. 71) by using the inheritance mechanism.
This leads to "programming by difference": program only the
differences between what you need and what you can pick off the shelf.
Doris Appleby,
Programming Languages -- Paradigm and Practice,
McGraw-Hill,
New York,
1991,
pp. 458.
The author has a PhD in "College Teaching of Math" and it shows.
The author takes great pains never to be difficult.
The book is constructed along Wegner's division of programming languages:
1. imperative: 1a. block-structured, 1b. object-based, 1c. distributed;
2. declarative: 2a. functional, 2b. relational (logic), 2c. database.
The book is rich on examples and has a nice chapter on Bentley's "little
languages".
There are brief theoretical excursions on subjects like:
formal languages, logic and proofs, lambda calculus, relational databases
and set theory.
Pedagogical background notwithstanding, the explanation of call-by-name is
naive and Jensen's device is shown without a single word of explanation.
Type 0 grammars and Type 0 languages are confused.
No word about iterators and generators; although dozens of languages are used
in examples, no word about Icon.
Gordon Blair,
John Gallagher,
David Hutchison,
Doug Shephard,
Object-oriented Languages, Systems and Applications,
Pitman,
London,
1991,
pp. 378.
Convincing and thorough introduction to object-oriented programming,
covering basics, programming languages, databases, design methodology
(HOOD, Hierarchical Object Oriented Design),
interfaces, hardware and kernel application.
Worth reading.
Grady Booch,
Object Oriented Design with Applications,
Benjamin/Cummings,
Redwood City, Ca.,
1991,
pp. 580.
Philosophically-oriented, entertaining treatment of object-oriented
programming.
Consists of three parts: Concepts, The Method and Application, getting more
detailed along the way.
The bibliography is extensive, but the system requires getting used to.
Note: remarkably inexpensive. Note 2: there are no cartoons beyond page 150!
David A. Watt,
Programming Language Syntax and Semantics,
Prentice Hall,
New York,
1991,
pp. 389.
Systematic, detailed and well-argued treatment of theory and applications of
syntax, contextual constraints, denotational semantics, algebraic semantics
and action semantics, all exemplified by the tutorial Pascal-derived
language $ DELTA $ (pronounced "triangle", after Pascal's triangle).
David Cordes,
Marcus Brown,
The literate-programming paradigm,
IEEE Computer,
vol. 24,
#6,
pp. 52-61.
1991,
The authors observe that the literate programming paradigm (as advocated by
Knuth) is not widely used and they point out deficiencies in the technique
as defined by Knuth.
These are:
lack of interface into the program development cycle;
too much overhead;
awkwardness of the use of the combined languages for programming,
documenting and formatting;
and the restricted applicability to the construction and maintenance phase
only.
The authors propose a number of modifications:
multi-level tables of contents;
a graphical user interface;
two extra functions that show nesting and static calling tree;
and an enhanced index.
No indication is given why any of these should be an improvement.
John Placer,
Multiparadigm research: a new direction in language design,
ACM SIGPLAN Notices,
vol. 26,
#3,
pp. 9-17.
March 1991,
There are three reasons for doing multi-paradigm research:
1.
Single-paradigm languages are OK for relatively small, uniform problems, but
for large projects involving many sizeable subproblems of diverse nature,
a single paradigm will hardly ever allow all the subproblems to be solved
in a natural way.
Using a different language for each subproblem would create a big interface
problem.
2.
Students often get stuck on the first paradigm they are taught.
To keep their minds open, they should be taught many paradigms at once.
3.
Some paradigms have a very restricted problem domain; an example is the
relational paradigm.
By incorporating it into a multi-paradigm language, its range can be extended.
There are three ways to do multi-paradigm research:
1.
Take one major paradigm and add some successful features of other paradigms.
Especially object-oriented is a successful add-on.
2.
Take two paradigms and combine them on equal footing.
3.
Integrate several major paradigms.
Examples are Nial and G (by the author).
(The paper may derive from a chapter from the author's 1988 Ph.D. thesis.)
Wilf LaLonde,
John Pugh,
Subclassing ≠ Subtyping ≠ Is-a,
J. Object-Oriented Programming,
vol. 3,
#5,
pp. 57-63.
Jan. 1991,
The paper is based on Smalltalk, a typeless language; this may have
influenced the ideas.
Subclassing allows code and representation sharing; it supports
reusability for the implementer.
Subtyping allows substitutability; it supports reusability for the user.
Is-a is conceptual; it supports understanding.
Example:
A set of integers is-a set, but it is not a subtype of set: a set of
integers cannot be used in all places in which a set is acceptable
because a set operation might try to add a non-integer (this is
Smalltalk, and ignores covariant/contravariant).
Neither is a set of integers a subclass of set: it is implemented as a
bit vector or a linked list, whereas the set is implemented as a hash
table, so they share no code or representation.
Another example:
A bag is a subclass of set (in Smalltalk) since it is implemented as a
set of items with counters; but it is not a subtype, since it cannot be used
everywhere where a set can be used, because then items without counters
could be inserted.
The authors propose that the code/representation sharing and the
substitutability be separated explicitly.
They see no place for the is-a relation, except during initial design.
I have no idea of how this translates for a language with strong typing.
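The set-of-integers example can be made concrete in Python (hypothetical classes; Python subclassing, like Smalltalk's, gives code sharing without guaranteeing substitutability):

```python
class Set:
    """A general set, here backed by a list (a stand-in for a hash table)."""
    def __init__(self):
        self.items = []
    def add(self, x):
        if x not in self.items:
            self.items.append(x)
    def __contains__(self, x):
        return x in self.items

class IntegerSet(Set):
    """Is-a set conceptually, and a subclass of Set, but not a subtype:
    add() rejects non-integers, so it cannot be used everywhere a Set can."""
    def add(self, x):
        if not isinstance(x, int):
            raise TypeError("IntegerSet holds integers only")
        super().add(x)

def insert_label(s):
    """Client code written against Set: breaks when handed an IntegerSet."""
    s.add("label")
```

Here subclassing (code sharing) is present while subtyping (substitutability) fails, which is exactly the separation the authors argue for.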
David A. Watt,
Programming Language Concepts and Paradigms,
Prentice Hall,
New York,
1990,
pp. 332.
Thorough, though somewhat too bottom-up (3.5 Commands under 3 Storage??)
treatment, supported by in-depth analyses (case studies) of Pascal, Ada,
Smalltalk and Prolog.
Samuel N. Kamin,
Programming Languages -- An Interpreter-Based Approach,
Addison-Wesley,
Reading, Mass.,
1990,
pp. 640.
With a minimum of introduction, the author defines a syntactically very
simple nameless language, which is in fact fully parenthesized prefix.
LISP-like function application, (f e1 ... en), is defined for it,
and eleven built-in functions are defined; examples are
(if e1 e2 e3) and (begin e1 ... en).
There are global and formal variables but no local variables.
An interpreter for this language is given in Pascal.
Next, the following languages are "treated":
LISP and APL, as languages with dynamic data structures;
Scheme and SASL, as functional languages;
CLU and Smalltalk, as object-oriented languages; and
Prolog, as a logic language.
"Treated" means that the principles of the language are explained and
expressed in the nameless notation, using new functions specific to the
language; an example is the function (infer e2 from e1), which denotes
e2 :- e1.
The new functions are then integrated in the interpreter mentioned above,
and demo runs are shown.
Finally the real syntax of the language is discussed.
Operational semantics has become a teaching paradigm in this book; it is a
solid piece of work, if this is what you want.
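The flavor of such an interpreter is easy to convey in Python (a toy sketch with S-expressions as nested tuples, not Kamin's Pascal code; the set and + operators shown are a small subset):

```python
def evaluate(expr, env):
    """Evaluate a fully parenthesized prefix expression, represented
    as nested tuples, in an environment mapping names to values."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]                      # variable lookup
    op, *args = expr
    if op == "if":                            # (if e1 e2 e3)
        e1, e2, e3 = args
        return evaluate(e2, env) if evaluate(e1, env) else evaluate(e3, env)
    if op == "begin":                         # (begin e1 ... en): value of en
        result = None
        for e in args:
            result = evaluate(e, env)
        return result
    if op == "set":                           # (set x e): assignment
        name, e = args
        env[name] = evaluate(e, env)
        return env[name]
    if op == "+":
        return evaluate(args[0], env) + evaluate(args[1], env)
    raise ValueError("unknown operator: %r" % (op,))

env = {}
program = ("begin", ("set", "x", 3), ("+", "x", 4))
```

Treating a language then amounts to adding clauses like the one for "set" to this dispatcher.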
Bruce J. MacLennan,
Functional Programming -- Theory and Practice,
Addison-Wesley,
Reading, Mass.,
1990,
pp. 596.
Starting from the overly simple description of FP as "assignmentless
programming", the author steadily increases the depths of his arguments,
adding what seems to be just the right amount of math.
Gives full proof of the Church-Rosser theorem.
Setrag Khoshafian,
Razmik Abnous,
Object Orientation -- Concepts, Languages, Databases, User Interfaces,
John Wiley & Sons, Inc.,
New York,
1990,
pp. 433.
Verbose text.
David Gelernter,
Suresh Jagannathan,
Programming Linguistics,
MIT Press,
Cambridge, Ma.,
1990,
pp. 411.
The authors define an "Idealized Software Machine", the ISM, and then
describe many, many aspects of programming languages in this model.
Elements in the ISM are:
space maps, which mainly model value mappings, i.e. expression evaluation;
time maps, which mainly model execution progress;
templates, which mainly model substitution;
and some building blocks like values, nil, etc.
There is no difference between program and data: a fully evaluated data
structure is a program segment that evaluates to itself.
Formal semantics of the ISM are given using Petri nets.
Very readable; the novel way to describe what is essentially operational
semantics leads to a fresh view of things.
Not the average programming language text.
Herbert L. Dershem,
Michael J. Jipping,
Programming Languages: Structures and Models,
Wadsworth Publ. Comp.,
Belmont, Ca.,
1990,
pp. 413.
Average and sometimes awkward treatment of the subject.
Sometimes non-standard: e.g., trees grow upwards.
Regularly shows annoying half-errors: e.g.,
"A language is a systematic set of rules(!) for communicating ideas" (p. 1),
"... PL/I, which included a large number of independent(!) constructs" (p. 9),
alternation is explained as a feature of EBNF, not present in BNF (p. 17-19).
Kelvin Nilsen,
High-level goal-directed concurrent processing in Icon,
Software -- Practice and Experience,
vol. 20,
#12,
Dec. 1990,
pp. 1273-1290.
The concurrent Icon extension Conicon capitalizes on Icon's
co-expressions, by allowing them to be created as processes.
So rather than writing every write(sequence(1,6)), one can create a
concurrent process e := create sequence(1,6) and then use the process
in every write(@e).
The results of concurrent processes can be merged non-deterministically
with the ! operator.
In fact, co-expressions are used as streams.
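Python generators give a rough feel for the idea, though they are coroutines rather than true concurrent processes, so the non-deterministic merge is approximated here by a deterministic round-robin:

```python
def sequence(lo, hi):
    """Icon-style generator: yields lo, lo+1, ..., hi."""
    yield from range(lo, hi + 1)

def merge(*streams):
    """Deterministic round-robin stand-in for Conicon's
    non-deterministic merge of concurrent streams."""
    streams = list(streams)
    while streams:
        for s in streams[:]:
            try:
                yield next(s)
            except StopIteration:
                streams.remove(s)

e = sequence(1, 6)          # like: e := create sequence(1, 6)
```

Each call of next(e) then plays the role of @e in the Icon code above.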
F. Warren Burton,
Hsi-Kai Yang,
Manipulating multilinked data structures in a pure functional language,
Software -- Practice and Experience,
vol. 20,
#11,
Nov. 1990,
pp. 1167-1185.
Provides an implementation of an ADT array implemented as a balanced
tree, uses this ADT to implement 2 more ADTs heap and pointer,
which act in the imperative manner (more or less), and then uses these
to implement traditional algorithms on multi-linked data structures, for
example depth-first search in a graph.
Part of the motivation of the research is educational: teaching "data
structures" in a curriculum based on functional languages.
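The core trick, functional update of a search tree with structure sharing, can be sketched in Python (an unbalanced tree for brevity; the paper balances it to guarantee the logarithmic bound):

```python
def empty():
    """An empty functional array; nodes are (index, value, left, right)."""
    return None

def update(tree, i, v):
    """Functional array update: returns a new tree sharing most nodes
    with the old one; only the path to index i is copied."""
    if tree is None:
        return (i, v, None, None)
    j, w, left, right = tree
    if i == j:
        return (i, v, left, right)
    if i < j:
        return (j, w, update(left, i, v), right)
    return (j, w, left, update(right, i, v))

def lookup(tree, i):
    """Functional array indexing."""
    if tree is None:
        raise KeyError(i)
    j, w, left, right = tree
    if i == j:
        return w
    return lookup(left if i < j else right, i)
```

Because update never destroys the old tree, earlier versions of the "array" remain valid, which is what makes imperative-looking heaps and pointers expressible in a pure language.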
Mikael Eriksson,
A correct example of multiple inheritance,
ACM SIGPLAN Notices,
vol. 25,
#7,
pp. 7-10.
July 1990,
Shoots the example of Wiener and Pinson (Sept 1989) to pieces on the
grounds that though an IntegerArray is an Array, it is not an Integer.
The Integer only comes in this way because C++ has no generic classes.
Code sharing is not inheritance.
The author's own example is even less convincing.
Tom Axford,
An elementary language construct for parallel programming,
ACM SIGPLAN Notices,
vol. 25,
#7,
pp. 72-80.
July 1990,
The author identifies a language construct for the divide-and-conquer
paradigm, but does not supply suitable syntax.
The components are
1. a test if the data are basic
2. what to do if the data are basic
3. how to split
4. how to combine.
Many examples of its use are given.
The construct can be useful both in sequential and parallel programming,
thus bridging the gap between them.
S.E. Zenith,
Linda Coordination Language; Subsystem Kernel Architecture (on Transputers),
Yale University,
New Haven, CT,
RR-794,
May 1990,
Proposes a Linda implementation for a transputer grid. It is similar to
Bjornson's hypercube system, except that the TS processors are separated from
the computing nodes, and thus form a separate subsystem.
P.L. Treadway,
The use of sets as an application programming technique,
ACM SIGPLAN Notices,
vol. 25,
#5,
May 1990,
pp. 103-116.
Actually a demo interrupt handler programmed using sets in Pascal, C and
Ada.
The result is lots of disappointingly repetitive code, but it is not
clear that any other paradigm would have done better.
Inconclusive.
Willett Kempton,
Brian A. Wichmann,
Run-time detection of undefined variables considered essential,
Software -- Practice and Experience,
vol. 20,
#4,
April 1990,
pp. 391-402.
Many small notes on the subject of undefined variables and one authors'
conclusion.
1. Undefined is bad, but undefined in subtypes is worse: an undefined
subtype integer correctly used as an array index and unchecked since it is
type-correct, can be out of bounds and write to an arbitrary place.
2. Usually an undefined value is transported much earlier than it is
used, so test at transport.
3. There are three ways to trap an undefined value: give undefined a
special value (not always possible); keep a shadow byte for each data
byte (high memory cost); and keep a shadow bit for each data byte
(slow).
4. Automatic initialization tends to mask program errors.
5. It should not be an error to transport structures some of whose
fields / components are undefined.
6. Authors' conclusion: forbid only the transport of totally undefined
objects.
B. Meek,
The static semantics file,
ACM SIGPLAN Notices,
vol. 25,
#4,
April 1990,
pp. 33-42.
Ruminations on the border between syntactic correctness and contextual
correctness.
If you replace "static semantics" (indeed a non-concept) with "context
conditions" much of the problem goes away.
R. Sethi,
Programming Languages, Concepts & Constructs,
Addison-Wesley,
Reading, Mass.,
1989,
pp. 478.
Each (broad) concept is illustrated using (in principle) two languages:
assignments: Modula-2, C;
procedures: C, Modula-2;
data encapsulation: Modula-2, C++;
inheritance: Smalltalk, C++;
functional programming: Scheme, ML;
logic programming: Prolog;
concurrency: Ada.
Examples are drawn mainly from ML (Harper & Milner's) and from Modula-2.
Introduction to Modula-2.
Introduction to C, with a very good explanation of why character I/O requires
a character of type int.
No systematic treatment of type constructors.
Since ML, the main language of the book, is already largely a
functional language, functional programming comes naturally.
Separate chapters on: syntax; semantic definition; lambda calculus.
The delicate touch and velvet style of the author matches perfectly the
cuteness of the cover.
R.W. Sebesta,
Concepts of Programming Languages,
Benjamin/Cummings,
Redwood City, Calif 94065,
1989,
pp. 497.
First impression: words, words, words.
Very good tables of contents at the beginning of each chapter.
Early treatment of formal semantics.
Systematic approach to each subject.
The terminology is often innovative :-),
with "pass by value" for call-by-value,
"dangling object" for garbage, etc.
Very little on aliasing problems.
Pascal Van Hentenryck,
Constraint Satisfaction in Logic Programming,
Logic Programming,
MIT Press,
Cambridge, Mass.,
1989,
pp. 224.
Revision of author's thesis, Univ. de Namur.
Defines a Prolog-like language with domain definitions and constraints.
The language has facilities for a priori pruning, using special forward
and look-ahead predicate modifiers, which allow the specification of
alternating breadth-first and depth-first search.
There are additional predicates to steer the search.
The book covers many toy problems (Mastermind, crossword puzzle generation,
etc.) and several real-world problems (construction scheduling, warehouse
location, etc.) in detail.
There is extensive coverage of related work, and a thorough survey of
existing search techniques.
The language is part of the CHIP (Constraint Handling In Prolog) project of
ECRC.
Roger King,
My cat is object-oriented,
in Object-Oriented Concepts, Databases, and Applications,
ed. by Won Kim and Frederick H. Lochovsky,
Addison-Wesley,
Reading, Ma.,
pp. 23-30.
1989,
Compares the use of semantic models and object orientation for
expressing complex data structures (more specifically databases).
Semantic models provide structural abstraction (types);
object orientation provides behavioral abstraction (methods).
Semantic models are concerned with representation of data (types);
object orientation is concerned with manipulation of data (methods).
They complement each other rather than being in opposition.
Rationale of the title: if you want to sing the praises of something,
call it object-oriented.
Jonathan E. Shopiro,
An example of multiple inheritance in C++: the iostream library,
ACM SIGPLAN Notices,
vol. 24,
#12,
pp. 32-36.
Dec. 1989,
Provides code and minimal explanation of a simplified version of the
iostream class + its base classes in C++.
Basically, class iostream is just the sum of class istream and class
ostream, each of which is derived from a virtual base class ios.
Richard S. Wiener,
Lewis J. Pinson,
A practical example of multiple inheritance in C++,
ACM SIGPLAN Notices,
vol. 24,
#9,
pp. 112-115.
Sept. 1989,
A simple and somewhat hesitant example of multiple inheritance in C++
is given.
The example constructs a class IntegerArray from the classes Array and
Integer, but although the Array class is called generic, it is not used
that way.
In the end a class arises, objects of which are both integers and
arrays.
See Eriksson (July 1990).
Paul Hudak,
Conception, evaluation, and application of functional programming languages,
ACM Computing Surveys,
vol. 21,
#3,
pp. 359-411.
Sept. 1989,
Extensive survey of the history and principles of functional languages.
Sets out by covering the principles of $lambda$-calculus, Church-Rosser
theorems I and II and fixpoint, giving simple proofs wherever possible.
Treats Lisp as $lambda$-calculus plus constants (e.g., car and if).
Next come Iswim, APL, FP and ML, the latter with extensive formal treatment of
the type system.
Miranda and the dataflow languages (SASL, KRC) follow, to culminate in the
committee-designed cover-all language Haskell.
Concepts in functional programming are:
higher-order functions;
lazy evaluation, the desirability of which is made very clear;
concrete data types (data types defined explicitly by their algebraic
properties);
pattern matching;
and the irrelevancy of evaluation order, especially that of arguments.
The lazy stream model and the continuation model of I/O in a functional
language are described in glowing terms.
Ways to implement arrays and views are shown.
Possibilities for parallelism, caching, non-determinism and combination
with other paradigms are briefly touched upon.
The paper closes with a set of arguments to dispel some myths about
functional languages.
Michael R. Hansen,
Bo S. Hansen,
Peter Lucas,
Peter van Emde Boas,
Integrating relational databases and constraint languages,
Computer Languages,
vol. 14,
#2,
pp. 63-82.
Aug. 1989,
Constraints are a good way to store knowledge in an application-independent
form.
From a given constraint, each application (or each language) can generate
code to implement that constraint; such code could calculate the values of
some variables when the values of other variables are given, or could
implement a search strategy.
For instance, Ohm's law, formulated as a single constraint, could give rise
to three functions in a programming language:
$ V ( I, R ) $, $ I ( V, R ) $ and $ R ( V, I ) $.
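(A hand-written Python stand-in for the code generation the paper envisages; here the three directions are folded into one dispatcher:)

```python
def solve_ohm(V=None, I=None, R=None):
    """One constraint V = I * R, compiled by hand into its three
    directions: pass two of the three values, get the third back."""
    if V is None:
        return I * R          # the V(I, R) direction
    if I is None:
        return V / R          # the I(V, R) direction
    if R is None:
        return V / I          # the R(V, I) direction
    raise ValueError("leave exactly one variable unknown")
```

The point of the paper is that such code should be derived automatically from the single stored constraint.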
A relational algebra for these constraints is given and applied to
examples, for instance to the x-has-an-uncle-y relation.
The authors note that their constraint system is equivalent to Prolog
without built-in search.
Rather, each application has to supply its own resolution scheme.
Finding such resolution schemes will in the general case require human
assistance.
Dan Jonsson,
Next: The elimination of goto-patches?,
ACM SIGPLAN Notices,
vol. 24,
#3,
pp. 85-92.
March 1989,
Stage N in the ongoing goto debate.
Shows that all previous attempts fall into one of seven patch
categories.
The author's solution is `pancode', which is, remarkably, a dressed-down
version of his own patch category 1.
Markku Sakkinen,
How to best return the value of a function,
ACM SIGPLAN Notices,
vol. 24,
#3,
pp. 55-56.
March 1989,
Should actually be called
"How to not return the value of a function";
it is an analysis and critique of the two most usual function value
return mechanisms: assigning to the function name and using a return
statement.
Both leave much to be desired, but no better alternative is given.
P. Luker,
Never mind the language, what about the paradigm?,
ACM SIGCSE Bull.,
vol. 21,
#1,
pp. 252-256.
Feb. 1989,
Invoking Kuhn, the author predicts a (chaotic) revolution of paradigm in
programming.
Candidates are compared but all are found wanting.
Object-oriented programming is perhaps the prime candidate, with functional
programming a second best.
Since there is no clear favorite, the revolution may take a long time and CS
curricula should prepare the students for it, preferably by teaching more
theory.
Speaks highly of the object-oriented language BETA.
Herbert G. Mayer,
Programming Languages,
Macmillan Publ. Co.,
New York, NY,
1988,
pp. 681.
Covers imperative languages in an entertaining style.
Full of anecdotes and opinions, plus a number of personal pet subjects of
the author, for instance a separate 20-page chapter on the goto statement,
and a very informative 40-page chapter on recursion.
Concurrent programming and logic programming each get a 20-page chapter.
The index is embarrassing.
Worth reading, because of the extracurricular information.
Jayantha Herath,
Yoshinori Yamaguchi,
Nobuo Saito,
Toshitsugu Yuba,
Dataflow computing models, languages, and machines for intelligence computations,
IEEE -- Software Engineering,
vol. 14,
#12,
Dec. 1988,
pp. 1805-1828.
This very densely written paper consists of four chapters: computation
models, language models, machine models and performance evaluation.
All developments on each of these subjects are covered in numerous short
paragraphs.
Many of these are so terse as to be almost incomprehensible; examples are
few and are generally not annotated.
The authors seem more concerned with getting information on paper than with
conveying it to the reader.
The paper contains a bibliography of 65 entries.
Mark Stefik,
Daniel G. Bobrow,
Sanjay Mittal,
Lynn Conway,
Knowledge programming in LOOPS,
in Readings from the AI Magazine,
Amer. Assn. for Artificial Intelligence,
Menlo Park, Ca.,
pp. 493-503.
1988,
Describes the experience with a course on knowledge programming in
Loops.
The subject was an economic model realized as a board game in which each
player owns a truck, drives around and buys and sells goods.
There are hazards on the road and players have to buy gas, etc.
A rich animated environment for the independent trucker was available
for the course.
The students programmed trucks as objects, displayed results using
access-oriented programming and composition, etc.
Big fun and it worked.
R. Bird,
P. Wadler,
An Introduction to Functional Programming,
Prentice-Hall,
New York,
1988,
pp. 293.
The preface claims: "No knowledge of computers or experience in writing
programs is assumed", and then continues about suitability for first-year
undergraduate students.
But even the first pages require considerable mathematical sophistication,
i.e., the book addresses the infinitely intelligent total ignoramus.
L.B. Wilson,
R.G. Clark,
Comparative Programming Languages,
Addison-Wesley,
Reading, Mass.,
1988,
pp. 379.
The bibliographies are very short but very much to the point.
Very good description of parameter passing.
Comes down heavily on call-by-name.
Original.
Separate chapter on I/O.
Wm Leler,
Constraint Programming Languages -- Their Specification and Generation,
Addison-Wesley,
Reading, Mass.,
1988,
pp. 202.
After a short introduction to constraint programming, the author
explains the problems with existing constraint-satisfaction systems:
most are application-dependent, data types are neglected, many are not
computationally complete, and relaxation techniques cause numerical
instabilities.
This can be improved by requiring the user to supply rules telling how a
constraint should be satisfied: rule-based constraints.
An automatic solution to constraint satisfaction is local propagation,
which tries to solve all equations sequentially in some order.
This works if there are no cycles, which are, however, already present
in a simple constraint like $ x + x = 2 $.
An automatic solution is graph transformation (term rewriting); it
transforms $ x + x = 2 $ into $ 2 times x = 2 $, which can be solved;
this leads to equation solvers.
Sometimes there is no such transformation; then use relaxation for numeric
problems and a search technique for symbolic problems.
The book dedicates a chapter to each of these techniques.
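Local propagation is simple to sketch in Python (illustrative only; it stalls on cyclic constraints like $ x + x = 2 $, which is exactly the weakness described above):

```python
def propagate(known, rules):
    """Local propagation: repeatedly fire any rule all of whose inputs
    are known, until no rule can fire.
    rules is a list of (output, inputs, function) triples."""
    changed = True
    while changed:
        changed = False
        for out, ins, fn in rules:
            if out not in known and all(i in known for i in ins):
                known[out] = fn(*(known[i] for i in ins))
                changed = True
    return known

# a = 2, b = a + 1, c = a * b: no cycles, so propagation succeeds.
rules = [("b", ("a",), lambda a: a + 1),
         ("c", ("a", "b"), lambda a, b: a * b)]
```

A cycle leaves some variables unknown forever, and one of the other techniques (rewriting, relaxation, search) must take over.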
The main subject of the book is the author's language Bertrand, which is
based on an augmented form of term rewriting.
Other existing constraint programming languages are discussed, however:
--
Sketchpad (Sutherland, thesis 1963).
Domain: geometric figures.
Graphical input and output.
--
ThingLab (Borning, thesis 1979).
Definable domains.
--
TK!Solver (Konopasek, 1984).
Essentially an equation solver.
--
Juno (Nelson, 1984).
Geometric figures.
--
IDEAL (Van Wyk, thesis 1980).
Geometric figures.
Scott Danforth,
Consistent choice narrowing -- A new model for constraint-based programming,
in 21st Annual Hawaii International Conference,
ed. by B. Shriver,
IEEE,
Washington, DC,
pp. 595-604.
1988,
In the framework of constraint satisfaction by term rewriting, the author
proposes to rewrite nodes consistently; that is, each rewrite is remembered
and when the same node is found again, it is rewritten in the same way.
Backtracking can unrewrite the node, after which it can be rewritten
differently (and consistently) again.
Doing this for several nodes narrows down the constraint consistently,
whence the name.
This yields a cleaner semantics and provides a kind of Prolog-like
unification.
R.E. Griswold,
Programming with generators,
The Computer Journal,
vol. 31,
#3,
pp. 220-228.
June 1988,
All expressions in Icon yield 0 or more results, not necessarily 1
result.
Yielding 0 results is considered "failure".
This concept replaces conditional expressions.
The number of results consumed depends on the context: 1 in an
assignment, but an arbitrary number in an iteration:
every expr1 do expr2 gets all results of expr1 and evaluates
expr2 for each of them; and in goal-directed evaluation:
a subexpression is resumed automatically if its parent expression does
not produce a result.
As a result, the system will do a top-down search for all possible
solutions (= successful evaluations).
Actually, this is how iteration is implemented: every refuses to
produce a result, thereby forcing every possible evaluation of the
expression.
The backtracking can be broken by using semicolons:
(e1; e2) will not retry (continue) e1 when e2 fails.
Expressions that can explicitly yield multiple results are called
generators.
There are several built-in generators, for string searching and
decomposition of compound data structures.
Generators can be concatenated: e1|e2 yields all results of e1
followed by those of e2; and repeated: |e yields all results of
the repeated evaluation of e until it produces an empty result.
Generators can be limited: e \ n limits e to at most n results.
Of course generators like (1 to 10) \ (2 to 4) work and do the
expected (but perhaps not so obvious): 1..2, 1..3, 1..4.
The result sequence of an expression can be filtered:
1(e1, e2) will produce those results of e1 for which e2
succeeds, until e1 itself fails.
Generators can be programmer-defined in routines that suspend themselves
using suspend e.
The paper continues with 3 demonstrations of the use of generators.
Generator procedures are powerful, especially if they are recursive.
Procedural encapsulation turns generators into tools and increases
readability.
And generators support a limited form of logic programming, with fewer
features but more explicit control.
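Icon's alternation (e1|e2), limitation (e \ n) and goal-directed search map quite naturally onto Python generators; a rough sketch (the Python names are mine, and "failure" is a generator that yields nothing):

```python
from itertools import chain, islice, product

def to(a, b):                                    # Icon's  a to b
    return iter(range(a, b + 1))

alternation = list(chain(to(1, 3), to(10, 12)))  # e1 | e2
limited = list(islice(to(1, 10), 3))             # e \ 3

# Goal-directed evaluation resumes subexpressions until the whole
# expression succeeds; a comprehension performs the same top-down
# search here: find x, y with x * y = 12.
pairs = [(x, y) for x, y in product(range(1, 7), repeat=2) if x * y == 12]
```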
B. Dwyer,
Regular right part programming languages,
ACM SIGPLAN Notices,
vol. 23,
#6,
pp. 140-144.
June 1988,
The grammatical paradigm applied to programming! On the last page the
author almost invents ALEPH!
Carl G. Ponder,
Patrick C. McGeer,
Antony P-C. Ng,
Are applicative languages inefficient?,
ACM SIGPLAN Notices,
vol. 23,
#6,
pp. 135-139.
June 1988,
Many problems can be solved as efficiently in an applicative
language as with updatable memory, and any applicative algorithm can
simulate updatable memory in logarithmic time.
The authors specify seven problems for which they cannot get rid of the
logarithmic factor and wonder whether this is fundamental.
(See also: Nicolas Pippenger, Pure versus impure Lisp, ACM TOPLAS,
Vol. 19, No. 2, p. 223-238, March 1997.)
Pascal Van Hentenryck,
A constraint approach to Mastermind in logic programming,
ACM SIGART Newsletter,
#103,
pp. 31-34.
Jan. 1988,
Mastermind in Prolog with constraints; 40 times more efficient than in
normal Prolog.
Ed Anson,
A generalized iterative construct and its semantics,
ACM TOPLAS,
vol. 9,
#4,
Oct. 1987,
pp. 567-581.
Defines a combination of the Dijkstra guarded commands if-fi and do-od
into a single do-upon-od command:
do P1 → L1
[] P2 → L2
[] ...
upon Q1 → M1
[] Q2 → M2
[] ...
od
which loops repeating Pn → Ln until at least one Qm guard is true,
after which the command does Mm and terminates.
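The do-upon-od semantics can be rendered operationally in Python (a sketch of mine: true nondeterministic choice is approximated by taking the first true guard, and the behaviour when no guard at all is true is my guess):

```python
# Guards P are paired with bodies L, guards Q with bodies M.
# Loop on the P-arms until some Q-guard holds, then run its M-arm
# once and terminate.

def do_upon_od(p_arms, q_arms, state):
    while True:
        for q, m in q_arms:            # upon Q -> M: do M, terminate
            if q(state):
                m(state)
                return state
        for p, l in p_arms:            # do P -> L: do L, loop again
            if p(state):
                l(state)
                break
        else:                          # no guard true: abort (my guess)
            raise RuntimeError("no guard true")

s = {"n": 0, "done": False}
do_upon_od(
    p_arms=[(lambda s: s["n"] < 5,
             lambda s: s.__setitem__("n", s["n"] + 1))],
    q_arms=[(lambda s: s["n"] >= 5,
             lambda s: s.__setitem__("done", True))],
    state=s,
)
```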
Theorems are given, and the command is compared to Parnas'
"A generalized control structure and its formal definition", Commun. ACM,
Vol. 26, No. 8, Aug. 1983, pp. 572-581.
A sample application to window updating in a window system concludes the
paper.
Michael Marcotty,
Henry Ledgard,
Programming Language Landscape: Syntax, Semantics and Implementation,
(2nd ed.),
Science Research Associates,
Chicago,
1986,
pp. 569.
Original treatment of language features by having a special mini-language for
each feature.
H. Lieberman,
Using prototypical objects to implement shared behaviour in object-oriented languages,
in ACM Conference on Object-Oriented Systems Languages and Applications (OOPSLA '86),
ACM SIGPLAN Notices,
vol. 21,
#11,
pp. 214-223.
Nov. 1986,
[ast]
Sharing behaviour can be between abstract classes
using inheritance, or between prototypes using delegation. An object
refers to its class for shared knowledge; a prototype refers to
another prototype for shared knowledge.
K. Pingali,
Arvind,
Efficient demand-driven evaluation, Part I,
ACM TOPLAS,
vol. 7,
#2,
1985,
pp. 311-333.
Describes a language in which streams are a data-type, providing
operators `true-gate', `false-gate' and `merge'.
The authors show that using these operators evaluation can be efficient.
Good literature list for applicative programming.
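A possible reading of the three stream operators as Python generator functions (the exact behaviour is my interpretation of the names `true-gate`, `false-gate' and `merge', not taken from the paper): gates filter a data stream by a parallel Boolean stream, and merge picks the next element of one of two streams per Boolean.

```python
def true_gate(bools, xs):
    # pass x through when the matching control token is true
    return (x for b, x in zip(bools, xs) if b)

def false_gate(bools, xs):
    # pass x through when the matching control token is false
    return (x for b, x in zip(bools, xs) if not b)

def merge(bools, xs, ys):
    # per control token, take the next element of xs or of ys
    xs, ys = iter(xs), iter(ys)
    return (next(xs) if b else next(ys) for b in bools)

gated = list(true_gate([True, False, True], [1, 2, 3]))   # [1, 3]
```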
R.H. Halstead Jr.,
Multilisp: a language for concurrent symbolic computation,
ACM TOPLAS,
vol. 7,
#4,
Oct. 1985,
pp. 501-538.
[ Futures. ]
S. Yemini,
D. Berry,
A modular verifiable exception-handling mechanism,
ACM TOPLAS,
vol. 7,
#2,
pp. 214-243.
1985,
Based on Algol 68 and packages.
Basic idea: a call of an exception handler can do one of two things:
return a local answer, allowing the caller to proceed, or
return a more global answer, cancelling a number of levels.
The syntax used is shaky.
Greg Nelson,
Juno, a constraint-based graphics system,
in ACM SIGGRAPH '85, San Francisco, CA, July 22-26, 1985,
ed. by Brian A. Barsky,
Computer Graphics,
vol. 19,
#3,
July 1985,
pp. 235-243.
Juno is a constraint language based on Dijkstra's guarded commands.
Built-in data types are points, colours, widths and one anonymous image.
Running the program and editing it are the same: editing the image WYSIWYG
style edits the program text.
One can, e.g., write a prototype bar graph program, using a text editor;
adjusting the heights on screen would produce a new program.
The net says:
A WYSIWYG front-end to a constraint-based language (cf. Metafont) for
describing images.
Harvey Glass,
Threaded imperative systems and functional programming environments,
ACM SIGPLAN Notices,
vol. 20,
#4,
pp. 24-32.
April 1985,
Describes a Forth-like functional language, implemented as an extension of
Forth.
This technique allows easy prototyping of new functional language features.
Garbage collection is the main problem.
Graham Birtwistle,
The coroutines of Hanoi,
ACM SIGPLAN Notices,
vol. 20,
#1,
Jan. 1985,
pp. 9-10.
Full code of the Towers of Hanoi in Demos, a Simula extension for event
handling.
The disks are implemented as coroutines.
David M. Harland,
Polymorphic Languages -- Design and Implementation,
Ellis Horwood,
Chichester,
1984,
pp. 251.
The ontology diagram of a programming language is a diagram which shows
a number of spaces (sets), each populated with entities from the language.
Generally there is a name space which holds the names, a denotation space
which holds entities that can be named, and a value space which holds the
rest of the entities.
Furthermore the ontology diagram shows the relations that can hold between
the entities.
Examples of such relations are:
"denotes", from names (in N-space) to D-space;
"holds", from locations (in D- or V-space) to D- or V-space; etc.
Starting from very strong versions of Procedural Abstraction ("any piece
of code can be made into a procedure with any of its components as
parameters") and of Orthogonality ("all objects are first-class
objects"), the author shows convincingly that it is virtually necessary
to have only the locations (memory cells) in D-space and everything else in
V-space (except the names which live of course in the N-space).
More in particular, types are values; hence the need for polymorphism.
The rest of this thesis-like book is dedicated to exploring the consequences
of this approach, in the definition of the polymorphic language POLY.
Lists play a prominent role in POLY.
All values are pairs <T.type: T.value>.
The type "type" (which is a value of the type type) is of course
represented as <Type: Type>.
From this a lattice of types is constructed.
(Remarkably, although POLY has structure types, a structure must contain at
least one field; again no empty structures.)
Type checking is dynamic and types are equivalent if they have been generated
in the same call of a type generator, so to speak.
The type checking is safe but does not provide early warnings.
Locations in POLY can be type-constant, and if they are, they can also
be value-constant.
Checking is again dynamic.
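The combination of dynamic checking and generative type equivalence can be sketched in Python (the names and encoding are mine, not POLY syntax):

```python
import itertools

# Every value carries its type as a <type: value> pair, and a type
# generator yields a brand-new type on each call, so two structurally
# identical types are incompatible -- "generative" equivalence,
# checked dynamically at assignment time.

_fresh = itertools.count()

def type_generator(name):
    return (name, next(_fresh))          # each call: a distinct type

def tagged(t, v):
    return (t, v)                        # the <type: value> pair

def check_assign(location_type, value):
    t, v = value
    if t != location_type:               # dynamic check: safe, but late
        raise TypeError(f"expected {location_type}, got {t}")
    return v

Meters = type_generator("length")
Feet = type_generator("length")          # same shape, different type
```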
Control statements are fairly conventional, except that they are also
expressions; e.g. a for statement can be used to yield a list of values.
The dangling else problem is solved by having if ... do ...
versus if ... then ... else ....
Modules are introduced as procedures that yield a list of values
(procedures, types, values, etc.); the programmer has to name these values
to be able to use them.
A module instantiation is a procedure call to a module and yields a list of
anonymous exported items.
The same mechanism can be used to implement "own" variables.
The book closes with a grammar for POLY and the design of a machine code
instruction set, centered around a
"heap of tagged cells arranged into arbitrary-sized blocks" (p. 232).
Furthermore the author speculates about a language in which even keywords
can be the subject of procedural abstraction, by having values of the type
keyword.
The author makes a number of interesting points and dubious remarks.
1.
If a feature would become dangerous when granted full rights under the
Orthogonality Principle, the feature should go rather than the Orthogonality
Principle (p. 36).
This is used to show why labels are bad.
2.
Concrete syntax is immaterial (p. 40) and any form is equally valid (sic!).
3.
Modula-2 and Ada are blatantly documentation-oriented and contain hardly any
operational code (p. 177).
4.
The emphasis when writing generalized software should [] be on survivability
rather than on static verifiability (sic!).
Ellis Horowitz,
Fundamentals of Programming Languages,
(2nd Edn.),
Computer Science Press,
Rockville, Md. 20850,
1984,
pp. 446.
One of the first systematic approaches to programming language
principles, now in its N-th printing.
The principles are elucidated with often very clear and detailed
examples.
Mostly on algorithmic languages, but covers functional programming,
data-flow languages and object-oriented programming in the last 80
pages.
Robert I. Winner,
Unassigned objects,
TOPLAS,
vol. 6,
#4,
Oct. 1984,
pp. 449-467.
Balanced and thoughtful examination of what "unassigned" means in an
assignment-oriented language (Ada in particular) and how one could
approach the issue for scalar and composite objects, without leading to
simple conclusions.
John R. Pugh,
Actors -- The stage is set,
ACM SIGPLAN Notices,
vol. 19,
#3,
pp. 61-65.
March 1984,
The suitability of actors for areas outside AI is emphasized.
Actors and objects are compared, and some indication of an actor
paradigm is given.
Bruce J. MacLennan,
Principles of Programming Languages: Design, Evaluation and Implementation,
Holt, Rinehart and Winston,
New York,
1983,
pp. 554.
The book exudes an atmosphere that is different from other books on
programming languages, but the difference is hard to pinpoint.
Essentially it is a well-structured set of one- to two-page well-formulated
well-captioned chunks of knowledge rather than a book, as if teaching and
studying were to be done in chunks of 15 minutes each, to be interrupted
at any point.
Perhaps the fact that the author is from the Naval Postgraduate School has
something to do with it.
Zillions of language features are examined and criticized in a poignant
style, and the explanation is often supported by some hints at a
possible implementation.
The book explains both iterative interpreters (first for assembly code
and then for a higher-level pseudo-code)
and recursive interpreters (for LISP, etc.).
It features a list of 16 explicit principles in programming language
design.
Another unusual feature is the Concept Directory, a hierarchically
structured table of contents (in addition to the standard TOC), in the
author's words "a vertically organized outline of topics in language
design".
On the other hand, the index does not distinguish between Algol-60,
Algol-W and Algol-68.
And the book fails on the litmus test: it explains Algol-60 call by
name parameter passing as substitution; it explains Jensen's device
correctly, though.
Applicative programming is represented by Lisp.
A simple mark-and-scan garbage collector is shown.
Three ways are shown to interpret a Prolog program: top-down,
bottom-up(!) and procedural(!!).
Backtracking is shown by example, but not really explained.
Unification is mentioned but not explained, let alone backtracking over
unification.
The literature references are in-line, which is at the same time helpful
and a nuisance.
Most of them occur in exercises.
An unusual book; worth reading.
Paul Overaa,
Computing, a psychological approach?,
Personal Computer World,
vol. ??,
pp. 207-209.
1983,
The author distinguishes three levels in problem solution understanding:
enactive ("I can show you how to do it"),
iconic ("I can draw you a diagram") and
symbolic ("I have a formalism for it").
Often two levels are used, with the problem solver falling back on the lower
level when getting in trouble.
David Lorge Parnas,
A generalized control structure and its formal definition,
Commun. ACM,
vol. 26,
#8,
Aug. 1983,
pp. 572-581.
Combines the Dijkstra guarded commands if-fi and do-od
into a single it-ti command:
it P1 → L1 T1
V P2 → L2 T2
V ...
ti
where V is the logical OR, Pn is a predicate (guard), Ln is a statement
list, and Tn is a terminator: an up-arrow or a down-arrow.
The it-ti iterates (loops), non-deterministically taking statement lists
Ln whose guards Pn are true.
If Ln is followed by an up-arrow the iteration resumes, otherwise it
terminates.
Like Dijkstra's guarded commands, this often forces one to copy
predicate parts.
Techniques are given to make the programs deterministic, to achieve more
run-time efficiency.
This includes the introduction of an else (called elseor in the
text) in place of the V, which subsumes the negation of all previous
guards.
There is hidden flow of control in the use of cand (and_then),
cor (or_else), and the predicate init (which is true only the
first time through the loop, and which constitutes a hidden variable) in
the guards.
Formal properties and application examples are given; a short history of
control structures concludes the paper.
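The it-ti semantics can be sketched in Python, with a Boolean standing in for the up/down-arrow terminator (taking the first true guard in textual order is my deterministic simplification):

```python
# Each arm is (guard, body, terminator); terminator True stands for
# the up-arrow (iterate again), False for the down-arrow (terminate
# after the body has run).

def it_ti(arms, state):
    while True:
        for guard, body, again in arms:
            if guard(state):
                body(state)
                if again:
                    break              # up-arrow: restart the loop
                return state           # down-arrow: terminate
        else:                          # no guard true (my guess: abort)
            raise RuntimeError("no guard true")

s = {"n": 0}
it_ti(
    [(lambda s: s["n"] < 3, lambda s: s.__setitem__("n", s["n"] + 1), True),
     (lambda s: s["n"] >= 3, lambda s: None, False)],
    s,
)
```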
Bent Bruun Kristensen,
Ole Lehrmann Madsen,
Birger Møller-Pedersen,
Kristen Nygaard,
Abstraction mechanisms in the BETA programming language,
in Tenth ACM Symposium on Principles of Programming Languages,
ACM,
New York,
Jan. 1983,
pp. 285-298.
Author's abstract:
The BETA programming language is developed as part of the BETA project. The
purpose of this project is to develop concepts, constructs and tools in the
field of programming and programming languages. BETA has been developed from
1975 on and the various stages of the language are documented in [BETA a].
The application area of BETA is programming of embedded as well as distributed
computing systems. For this reason a major goal has been to develop constructs
that may be efficiently implemented. Furthermore the BETA language is intended
to have a few number of basic but general constructs. It is then necessary
that the abstraction mechanisms are powerful in order to define more
specialized constructs.
J. Bresnan ed.,
The Mental Representation of Grammatical Relations,
MIT Press,
Cambridge, Mass.,
1982,
pp. ???.
[ TA.04716 (11e) ]
[ LFG stands for Lexical-Functional Grammar. It's hard to sum up quickly. ]
Dick Grune,
On the Design of ALEPH,
PhD thesis,
Mathematisch Centrum,
Amsterdam,
pp. 198.
1982,
Is about the grammatical paradigm of programming, without knowing it.
S.J. Young,
Real Time Languages -- Design and Implementation,
Ellis Horwood,
Chichester,
1982,
pp. 352.
First five characteristics of real-time software are discussed: must be
extremely reliable, is large and complex, must give a guaranteed response
time, must interface to weird hardware, and must be efficient. Areas of
conflict are pointed out.
Next, the author gives an exceptionally good treatment of programming
language features and, if the occasion arises, assesses their importance for
real-time programming.
The last part of the book covers RTL/2, Modula (not -2!) and Ada and screens
each of them for real-time use.
If you leave the bits about real-time programming out, a very good book about
programming languages would remain.
Christopher J. Van Wyk,
A high-level language for specifying pictures,
ACM Trans. Graphics,
vol. 1,
#2,
pp. 163-182.
April 1982,
A picture is represented as a set of complex variables, constraints on
them and drawing commands using them.
Pictures can be constrained further, thus turning a rectangle into a
square or fixing the position of a corner, for instance.
William B. Ackerman,
Data flow languages,
IEEE Computer,
vol. 15,
#2,
pp. 15-25.
Feb. 1982,
Starting from the requirements imposed by the dataflow machine
(no side effects, flow of control = data dependency, single-assignment)
the author arrives at a characterization of dataflow languages as
functional languages with an iteration construct.
Alan L. Davis,
Robert M. Keller,
Data flow program graphs,
Univ. of Utah,
IEEE Computer,
Feb. 1982,
pp. 26-41.
Authors' abstract:
Data flow languages form a subclass of the languages which are based
primarily upon function application (i.e., applicative languages).
By data flow language we mean any applicative language based entirely upon
the notion of data flowing from one function entity to another or any language
that directly supports such flowing.
This flow concept gives data flow languages the advantage of allowing program
definitions to be represented exclusively by graphs.
Graphical representations and their applications are the subject of this
article.
Christoph M. Hoffmann,
Michael J. O'Donnell,
Programming with equations,
ACM TOPLAS,
vol. 4,
#1,
Jan. 1982,
pp. 83-112.
Exercise in the all-the-world-is-equations paradigm.
Actually, the equations are substitutions and work only one way: the
left-hand side is replaced by the right-hand side.
Theoretical foundation,
implementation by reduction,
limitations (two left-hand sides may not match overlapping stretches in
an expression to be reduced),
free (uninstantiated) variables.
The implementation is by tree pattern matching and is treated in some
detail.
The paradigm feels like a mix of (but not a unification of) logic,
grammatical and functional programming.
Most examples are LISP- or Prolog-like; unfortunately, but not
surprisingly, their execution times are exponential, as in any really
general problem solver.
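The one-way character of the equations can be shown with a toy reducer over ground terms (a sketch of mine; variables and real tree pattern matching, which the paper does treat, are omitted):

```python
# Rules map a ground left-hand side to a right-hand side; terms are
# reduced innermost-first, and each rule works left to right only --
# there is no solving in the other direction.

def reduce_term(term, rules):
    if isinstance(term, tuple):                  # reduce subterms first
        term = tuple(reduce_term(t, rules) for t in term)
    while term in rules:                         # then the whole term
        term = reduce_term(rules[term], rules)
    return term

# Toy Peano-style addition on 0 and ('s', n).
rules = {
    ("add", "0", "0"): "0",
    ("add", "0", ("s", "0")): ("s", "0"),
    ("add", ("s", "0"), ("s", "0")): ("s", ("s", "0")),
}
result = reduce_term(("add", ("add", "0", ("s", "0")), ("s", "0")), rules)
```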
Alan Borning,
The programming language aspects of ThingLab, a constraint-oriented simulation laboratory,
ACM TOPLAS,
vol. 3,
#4,
pp. 353-387.
Oct. 1981,
Describes a more or less general constraint system in Smalltalk.
A constraint consists of a rule and a set of methods.
ThingLab tries to be intelligent about which user-defined method to apply
when.
It uses local propagation and relaxation.
Martin Rem,
The closure statement: A programming language construct allowing ultraconcurrent execution,
J. ACM,
vol. 28,
#2,
pp. 393-410.
April 1981,
Mathematical foundations of the closure paradigm of programming.
An associon is a an N-tuple of names; by convention the existence of a
tuple (a, b, c, ...) in the store means that the relation a
exists between (b, c, ...); the store acts as a set, so duplicates
do not occur.
A closure statement contains one or more rules that generate new
associons from old; these rules can contain free variables.
An example could be: (R, a, b) AND (R, b, c) → (R, a, c) for any
transitive relation R.
Once the closure statement is started, these rules continue until no
more elements can be added.
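The closure statement is essentially a fixpoint computation over a set of tuples; a minimal Python sketch of the transitivity example above:

```python
# The store is a set of name-tuples; each rule generates new associons
# from old, and the closure runs until nothing more can be added
# (duplicates vanish automatically, since the store is a set).

def closure(store, rules):
    store = set(store)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(store) - store:
                store.add(new)
                changed = True
    return store

# The rule (R, a, b) AND (R, b, c) -> (R, a, c).
def transitivity(store):
    return {(r1, a, c)
            for (r1, a, b) in store
            for (r2, b2, c) in store
            if r1 == r2 and b == b2}

store = closure({("R", 1, 2), ("R", 2, 3), ("R", 3, 4)}, [transitivity])
```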
I don't think there has been any follow-up, which is a pity.
Seymour Papert,
Mindstorms -- Children, Computers and Powerful Ideas,
Basic Books, Inc.,
New York,
1980,
pp. 230.
About Logo.
Andrew D. McGettrick,
The Definition of Programming Languages,
Cambridge Univ. Press,
Cambridge, England,
1980,
pp. 268.
Explains the definition methods used in the definitions of FORTRAN, ALGOL 60
(BNF), LISP, ALGOL W (T-notation, a finite-choice VW grammar), COBOL, PL/I
(Vienna Definition Language, a functional programming language, the
statements of which are attached to grammar rules), PASCAL and ALGOL 68
(VW grammars), followed by an explanation of denotational semantics.
Gerald Jay Sussman,
Guy Lewis Steele Jr.,
Constraints -- A language for expressing almost-hierarchical descriptions,
Artificial Intelligence,
vol. 14,
#1,
pp. 1-39.
1980,
Constraint programming is introduced and explained by many examples.
The paper is entirely geared to electronic circuits.
Much emphasis is on alternative views of sections of the circuitry, called
slices, which yield additional constraints, and much effort is spent on
obtaining algebraically simple constraints.
All variable names have forms like R14, in engineering fashion.
In this sometimes tongue-in-cheek paper, the footnotes are often more
informative than the main text.
L.V. Atkinson,
Should if ... then ... else ... follow the dodo?,
SP&E,
vol. 9,
1979,
pp. 693-700.
Citing Sime, Green and Guest (1977), who find that
if A then B, if not A then C is better for program writing and much
better for program understanding than if A then B else C, and
invoking his own experience that the not operator is confusing to
program writing and understanding, the author proposes to improve this
approach by choosing natural language terms for A, not A, and
"A-ness".
For example, if the condition is "object is large", we choose the
following terms:
large for A,
small for not A, and
size_of_object for "A-ness".
Now the conditional can be expressed as
case size_of_object of
large: B;
small: C;
end {size_of_object}
which is indeed very easy to write and to understand.
Another advantage of this approach is that it can be extended in a
natural way to more than 2 choices, for example large, medium,
small.
The if-statement is seen as an over-specific form of the case statement.
Robert W. Floyd,
The paradigms of programming,
Commun. ACM,
vol. 22,
#8,
pp. 455-460.
Aug. 1979,
Uses the word paradigm in a much more restricted sense than is usual today.
His paradigms are today's methods and features; for instance, dynamic
programming and coroutines are paradigms.
Emphasizes the importance of paradigms and cites Kuhn;
insists that programming languages should support the "paradigms" people use
in programming (e.g. multiple assignment).
R.A. Nelson,
XL: A language structure,
ACM SIGPLAN Notices,
vol. 13,
#12,
Dec. 1978,
pp. 73-86.
Rather inaccessible text; this is a first try.
Both the program and the data are lists of lists, i.e., trees, and
operators work on both indiscriminately.
David R. Hanson,
Ralph E. Griswold,
The SL5 procedure mechanism,
Commun. ACM,
vol. 21,
#5,
pp. 392-400.
May 1978,
SL5 is a precursor to Icon.
An SL5 expression returns a composite result consisting of a value of
arbitrary type and an integer signal; the latter is used for control
purposes, and is usually 0 (failure) or 1 (success).
Procedures are just values and can be created at any time by supplying
a denotation.
Calling a procedure, say p, occurs in three stages.
--
First an environment e is created for the procedure by writing
e := create p.
This allocates an activation record and sets the required links.
More particularly, this action sets the lexical link (I think).
--
Next its arguments are supplied (all at once, no partial parametrization
(I think)) by writing, for example, e with (1, 2).
--
Finally the fully parametrized environment is called by writing
resume e.
The running procedure can be stopped and another resumed by giving
another resume.
All this means that a procedure call p(1, 2) is an abbreviation of
resume (create p with (1, 2)), but also that the components can be
used to construct coroutines and other unusual procedural flow of
control.
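The three stages map loosely onto explicit coroutine objects in Python (an analogy of mine, not an SL5 implementation; `with_` carries an underscore because `with` is a Python keyword):

```python
class Environment:
    def __init__(self, body):          # e := create p
        self.body, self.args, self.gen = body, (), None

    def with_(self, *args):            # e with (1, 2)
        self.args = args
        return self

    def resume(self):                  # resume e
        if self.gen is None:           # first resume starts the body
            self.gen = self.body(*self.args)
        return next(self.gen)

def up_from(start, step):              # a procedure that suspends itself
    while True:
        yield start                    # SL5's  suspend e
        start += step

e = Environment(up_from).with_(10, 5)
first = e.resume()                     # later resumes continue the body
```

Because creation, parametrization and resumption are separate steps, two such environments can resume each other, which is exactly the coroutine construction the paper describes.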
Examples are given.
Scoping is dynamic in principle, but users can control it by using
public or private identifiers.
Default parameter transmission is by name, as in Algol 60, but users
can supply their own "programmer-defined argument transmitters".
They can, for example, test properties of the arguments, evaluate them,
etc.; they are used more or less as type names, and are indeed the
closest thing to types SL5 has.
An interesting example is the generator for context-free sentences.
Also interesting is the comparison between Simula classes and SL5
procedures.
Tom Jacobsen,
Another view of coroutines,
ACM SIGPLAN Notices,
vol. 13,
#4,
April 1978,
pp. 68-75.
Improves the Simula 67 implementation of the double a to b program
(Grune, "A view of coroutines", ACM SIGPLAN Notices, Vol. 12, No. 7, pp.
75-81, July 1977) by packaging the buffer manipulation and coroutine
transfer in small routines; this reduces the damage done to the aa→b
and bb→c processes when they are coupled into one program.
Also shows how the Algol 68 semaphore manipulation used in Grune, July
1977, can be implemented in Simula 67.
The author views the coroutine as "a tool ensuring explicit interaction
among procedures at well defined states".
William Slater,
Howard Modell,
Structured Programming Considered Harmful,
ACM SIGPLAN Notices,
vol. 13,
#4,
pp. 76-79.
April 1978,
Tongue-in-cheek paper which proposes various new crazy language
structures that sometimes verge on the sane.
Some of the more sane proposals are the COMEFROM statement, which
emphasizes the fact that the state information from various points of
departure of jumps must be merged at the point of arrival of the jump
(R. Lawrence Clark, 1973);
renaming REAL to RATIONAL, which indeed it is;
and a comparison for almost equal, '~~', to be used to compare two reals
for equality (which are almost never really equal).
One of the trivially insane proposals is the 'UNLESS (Boolean) DONT' for
'IF (Boolean) DO', if the Boolean is unlikely to be TRUE.
Still, the example programs are almost readable, in a Cardassian kind of
way.
A view to another universe.
W.R. LaLonde,
Letter to the editor concerning ordered types,
ACM SIGPLAN Notices,
vol. 13,
#4,
April 1978,
pp. 24-27.
[ points out errors in Abramson, Ordered types and a generalized for-statement ]
J.L.W. Kessels,
An alternative to event queues for synchronization in monitors,
Commun. ACM,
vol. 20,
#7,
July 1977,
pp. 500-503.
Monitors solve the mutual exclusion problem well, but handling the
condition synchronization using wait and signal is still awkward.
The paper proposes conditions,
a special kind of Boolean routines consisting of a single Boolean expression
each and declared inside the monitor but outside the routines in the
monitor; consequently they can access variables with monitor scope only.
Conditions can be used any place inside a routine in a WAIT statement.
The routine is then suspended and the monitor released until the
condition becomes true.
No signal statement or so is required or even possible.
It is up to the programmer to use WAIT statement only when invariants
allow it.
The paper shows that this allows easy formulation of e.g. the
multi-reader multi-writer problem, and that these conditions are
relatively easy to implement.
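Kessels's conditions resemble Python's Condition.wait_for, where the waiter also states a Boolean expression over the monitor's variables; one difference is that Python still needs an explicit notify after each state change, whereas in Kessels's proposal the re-evaluation is implicit. A bounded-buffer sketch:

```python
import threading

class BoundedBuffer:
    def __init__(self, size):
        self.items, self.size = [], size
        self.monitor = threading.Condition()

    def put(self, x):
        with self.monitor:
            # wait until the condition over monitor variables holds
            self.monitor.wait_for(lambda: len(self.items) < self.size)
            self.items.append(x)
            self.monitor.notify_all()   # explicit in Python, not in Kessels

    def get(self):
        with self.monitor:
            self.monitor.wait_for(lambda: self.items)
            x = self.items.pop(0)
            self.monitor.notify_all()
            return x

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(
    target=lambda: out.extend(buf.get() for _ in range(3)))
consumer.start()
for i in range(3):
    buf.put(i)
consumer.join()
```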
D. Grune,
A view of coroutines,
ACM SIGPLAN Notices,
vol. 12,
#7,
pp. 75-81.
July 1977,
The coroutine mechanism is explained as a simplified implementation of a
special case in parallel processing.
M. Rem,
Associons and the closure statement,
Mathematical Centre Tract 76,
(PhD thesis, T.H. Eindhoven),
Mathematical Centre,
Amsterdam,
1976,
pp. 115.
The introduction of associons, for which see
M. Rem, The closure statement: A programming language construct allowing
ultraconcurrent execution, J. ACM, vol. 28, 2, pp. 393-410, April 1981.
Steve Wright,
Invocation -- The key to program structure,
Commun. ACM,
vol. 19,
June 1976,
pp. 361.
Letter.
Argues that gotos, for-loops, etc. are only important locally, but that
procedure calls and module structure have a much wider impact, and are
more important.
I. Greif,
C. Hewitt,
Actor semantics of PLANNER-73,
in Second ACM Symposium on Principles of Programming Languages,
Jan. 1975,
Palo Alto, California,
pp. 67-77.
Author's abstract:
Work on PLANNER-73 and actors has led to the development of a basis for
semantics of programming languages. Its value in describing programs with
side-effects, parallelism, and synchronization is discussed. Formal
definitions are written and explained for sequences, cells, and a simple
synchronization primitive. In addition there is discussion of the implications
of actor semantics for the controversy over elimination of side-effects.
R. Lawrence Clark,
A linguistic contribution to GOTO-less programming,
Datamation,
vol. 19,
#12,
pp. 62-63.
Dec. 1973,
If this two-page tongue-in-cheek article indeed expresses the
non-trivial insight that the goto problem is really a come-from problem
(as Appleby [1991, p. 98] suggests), it hides that fact remarkably well.
C. Hewitt,
P. Bishop,
I. Greif,
B. Smith,
T. Matson,
R. Steiger,
Actor induction and meta-evaluation,
in First ACM Symposium on Principles of Programming Languages,
Oct. 1973,
Boston, Massachusetts,
pp. 153-168.
Author's abstract:
"Programs should not only work, but they should appear to work as well." The
PLANNER project is continuing research in natural and effective means for
embedding knowledge in procedures. In the course of this work we have
succeeded in unifying the formalism around _one_ fundamental concept: the
ACTOR. Intuitively, an ACTOR is an active agent which plays a role on cue
according to a script. We use the ACTOR metaphor to emphasize the
_inseparability of control and data flow_ in our model. Data structures,
functions, semaphores, monitors, ports, descriptions, Quillian nets, logical
formulae, numbers, identifiers, demons, processes, contexts, and data bases
can all be shown to be special cases of actors. All of the above are objects
with certain useful modes of behavior. Our formalism shows how all of these
modes of behavior can be defined in terms of _one_ kind of behavior: _sending
messages to_ actors. An actor is always invoked uniformly in exactly the same
way regardless of whether it behaves as a recursive function, data structure,
or process. The unification and simplification of the formalisms for the
procedural embedding of knowledge has a great many benefits for us. In
particular it enables us to substantiate properties of procedures more easily.
Mark Elson,
Concepts of Programming Languages,
Science Research Associates,
Chicago,
1973,
pp. 333.
Remarkably modern book, emphasizes concepts and treats them in isolation.
The first 60 pages are on Fortran, Algol 60, PL/I and APL, the rest is
concepts, treating such items as named exceptions, execution monitoring,
shared data and language extensibility.
Eberhard Wegner,
Tree-structured programs,
ACM SIGPLAN Notices,
vol. 7,
#12,
Dec. 1972,
pp. 40-42.
Incomprehensible proposal for a control structure which combines
selection and repetition.
Is there a rewrite of this? No.
Edsger W. Dijkstra,
The humble programmer,
Commun. ACM,
vol. 15,
#10,
Oct. 1972,
pp. 859-866.
After a historical introduction (1952-1972) the author starts
identifying problems with the state of affairs in programming, and
interlaces the text with several quotables.
The power and complexity of third generation machines is the main
culprit.
The presence of multiple registers has set back the progress of computer
science by at least ten years, since they force early binding.
"Saying that 3rd generation machines are not bad because there are so
many of them is like saying that smoking is not bad because so many
people do it."
Hardware has many times more influence than is commonly assumed.
Next the author examines important software developments:
EDSAC, which gave us the notion of a subroutine;
FORTRAN, which was a successful coding technique, but should be soonest
forgotten since it has few aids to conceptualization;
LISP, which has allowed us to think previously unthinkable thoughts;
ALGOL 60, which came with BNF, which in turn allowed ALGOL to have a much too
baroque syntax;
and finally PL/I, a language of frightening complexity.
"Where FORTRAN is an infantile disorder, PL/I is a fatal disease."
Then the author turns to predicting the future: before 1980 we will be
able to create almost bug-free systems at a few percent of today's cost.
His arguments are: desirability, economic necessity, and technical
feasibility, the latter resting on a number of observations and
desiderata.
Proofs and programs should be built hand in hand.
Programmers need to be modest and to consider intellectually manageable
programs only, programs that allow a factored solution; the intellectual
effort to manage such programs grows linearly rather than quadratically
with their lengths.
This limitation is actually a source of strength, since it forces
programmers to consider the simplest of solutions only and to keep the
interfaces clean.
Very systematic and very modest programming languages are the future
(the author seems to refer to functional languages).
Gripes: high-energy physics departments dictate the computers to be used
by everybody; teaching good programmer practices might help specifically
those already able, thus magnifying the difference in intelligence,
which may not be politically acceptable.
Jean E. Sammet,
Programming Languages: History and Fundamentals,
Series in Automatic Computation,
Prentice-Hall,
Englewood Cliffs, N.J.,
1969,
pp. 785.
Extensive and still readable.
Structured by language goals, numerical and scientific, business data
processing, string manipulation, etc.
Covers principles with examples in zillions of languages.
Sample programs in more than 30 languages, most of them deader than Sanskrit.
Richard M. Karp,
Raymond E. Miller,
Properties of a model for parallel computations: Determinacy, termination and queueing,
SIAM J. Appl. Math.,
vol. 14,
#6,
Nov. 1966,
pp. 1390-1411.
Graph-theoretical model of parallel computations.
Discusses termination, efficiency and queue length in mathematical terms.
P.J. Landin,
The next 700 programming languages,
Commun. ACM,
vol. 9,
#3,
March 1966,
pp. 157-166.
Very modern paper about functional languages.
Includes discussion by Naur, Floyd, Strachey et al.
Robert W. Floyd,
The syntax of programming languages -- A survey,
IEEE Trans. Electronic Comput.,
vol. EC-13,
1964,
pp. 346-353.
Early analysis of the advantages of and problems with the use of grammars
for the specification of programming languages.
Contains a bibliography of almost 100 entries.
Donald E. Knuth,
L.L. Bumgarner,
D.E. Hamilton,
P.Z. Ingerman,
M.P. Lietzke,
J.N. Merner,
D.T. Ross,
A proposal for input-output conventions in ALGOL 60,
Commun. ACM,
vol. 7,
#5,
May 1964,
pp. 273-283.
Proposal for a standard for input-output conventions in ALGOL 60.
Based on formats to format single items, list procedures to handle
multiple items with a single format, and exception routines to handle all
kinds of overflows.
M. Klerer,
J. May,
An experiment in a user-oriented computer system,
Commun. ACM,
vol. 7,
#5,
May 1964,
pp. ??-??.
[experiment in two-dimensional languages]
A.J. Perlis,
A format language,
Commun. ACM,
vol. 7,
#2,
Feb. 1964,
pp. 89-97.
Detailed description of same, to be interspersed with ALGOL 60 text.
J. H. Katz,
W. C. McGee,
An Experiment in Non-Procedural Programming,
in 1963 Proceedings of the Fall Joint Computer Conference,
IEEE,
Nov. 1963,
pp. 14.
Motivated by the desire to produce machine-independent software, the authors
propose to use the "Information Algebra" from the 1962 CODASYL report as a
programming language.
The language accepts points in a Cartesian data space as input, where the
coordinates are defined as properties, with value ranges.
The language supplies a sophisticated formalism to specify mappings from
subsets of the data points onto other data spaces.
Basically it is a functional language in the modern sense of the word.
This language is then used to specify a symbolic assembler for a subset of the
IBM 704 family instructions.
This specification is about 100 lines long; it is quite unreadable, due to
lack of symbolic names: properties and fields are addressed by indexes.
It looks like a functional-language variant of Zuse's Plankalkül.
A compiler (called a "procedure writer") is described in outline; an
interpreter is considered but deemed too slow.
All in all, a very interesting paper that is quite ahead of its time.
J.W. Backus,
The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM Conference,
in International Conference on Information Processing,
UNESCO,
June 1959,
pp. 125-132.
Preliminary version of the Algol 60 report, with some discussion.
Jan V. Garwick proposes a for-all statement; L. Nolin proposes built-in
functions for numerical problems: finding the minimum, testing the
existence of a solution, and yielding any solution.
The language is still called IAL.
The abstract is in English, French, German, and Russian(!).