Plane Affine Algebra (III)

See also Part I and Part II.

Finally, a theorem, in the style of the Elements:

To trisect a segment. Let the endpoints of the segment be P, Q. Let R be any point not on PQ. Let M be the midpoint of PR, N the midpoint of QR, and O the midpoint of PQ. Then let S be the midpoint of MO, and T the midpoint of NO. Produce the line RS to meet PQ in X; similarly, produce the line RT to meet PQ in Y. Then PX, XY, YQ are each equal to one-third of PQ.

Proof: We begin by calculating S and T:

   S
= { definition with M, O := ray.P.R.½, ray.P.Q.½ }
   ray.(ray.P.R.½).(ray.P.Q.½).½
= { (18), twice }
   ray.(P → ½·‹RP›).(P → ½·‹QP›).½
= { (18) }
   P → ½·‹RP› → ½·‹(P → ½·‹QP›)−(P → ½·‹RP›)›
= { (3); linearity }
   P → ½·(‹RP› + ‹(P → ½·‹QP›)−(P → ½·‹RP›)›)
= { (9) }
   P → ½·(‹RP› + ‹PP› + ½·‹QP› − ½·‹RP›)
= { (6); linearity }
   P → ¼·(‹RP› + ‹QP›)

From considerations of symmetry, T = S[P, Q := Q, P]. We have:

(21)  S = P → ¼·(‹RP› + ‹QP›)
(22)  T = Q → ¼·(‹RQ› + ‹PQ›)

Finally, we calculate X and Y:

   X
= { definition }
   meet.P.Q.R.S
= { (20); (18) }
   P → (‹SR›⊥⋅‹RP›)/(‹SR›⊥⋅‹QP›)·‹QP›
= { (21) }
   P → (‹(P → ¼·(‹RP› + ‹QP›))−R›⊥⋅‹RP›)/(‹(P → ¼·(‹RP› + ‹QP›))−R›⊥⋅‹QP›)·‹QP›
= { (2), twice }
   P → ((‹PR› + ¼·(‹RP› + ‹QP›))⊥⋅‹RP›)/((‹PR› + ¼·(‹RP› + ‹QP›))⊥⋅‹QP›)·‹QP›
= { (5), twice; linearity throughout }
   P → ((¼·‹QP› − ¾·‹RP›)⊥⋅‹RP›)/((¼·‹QP› − ¾·‹RP›)⊥⋅‹QP›)·‹QP›
= { (10) and (11), twice }
   P → ((¼·‹QP›⊥ − ¾·‹RP›⊥)⋅‹RP›)/((¼·‹QP›⊥ − ¾·‹RP›⊥)⋅‹QP›)·‹QP›
= { linearity of inner product, twice }
   P → (¼·‹QP›⊥⋅‹RP› − ¾·‹RP›⊥⋅‹RP›)/(¼·‹QP›⊥⋅‹QP› − ¾·‹RP›⊥⋅‹QP›)·‹QP›
= { (14), twice }
   P → (¼·‹QP›⊥⋅‹RP›)/(−¾·‹RP›⊥⋅‹QP›)·‹QP›
= { (12) }
   P → (¼·‹QP›⊥⋅‹RP›)/(¾·‹QP›⊥⋅‹RP›)·‹QP›
= { algebra }
   P → ⅓·‹QP›

Finally, from considerations of symmetry, Y = meet.Q.P.R.T = (meet.P.Q.R.S)[P, Q := Q, P]. Hence:

(23)  X = P → ⅓·‹QP›
(24)  Y = Q → ⅓·‹PQ›
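The construction is easy to sanity-check with coordinates, which the algebra itself studiously avoids. Here is a quick OCaml script, with ad hoc helpers realizing (18), (20) and the perp numerically; the sample points are arbitrary:

```ocaml
(* Numeric sanity check of the trisection construction. The coordinate
   realizations below are ad hoc; the algebra never needs them. *)
let ( +: ) (x1, y1) (x2, y2) = (x1 +. x2, y1 +. y2)   (* vector addition *)
let ( -: ) (x1, y1) (x2, y2) = (x1 -. x2, y1 -. y2)   (* the vector ‹A − B› *)
let ( *: ) k (x, y) = (k *. x, k *. y)                (* scaling *)
let perp (x, y) = (-. y, x)                           (* CCW quarter turn *)
let dot (x1, y1) (x2, y2) = x1 *. x2 +. y1 *. y2      (* inner product *)

let ray p q k = p +: (k *: (q -: p))                  (* (18) *)
let meet p q r s =                                    (* (20) *)
  ray p q (dot (perp (s -: r)) (r -: p) /. dot (perp (s -: r)) (q -: p))

let () =
  let p, q, r = (0., 0.), (9., 3.), (2., 5.) in
  let m = ray p r 0.5 and n = ray q r 0.5 and o = ray p q 0.5 in
  let s = ray m o 0.5 and t = ray n o 0.5 in
  assert (meet p q r s = ray p q (1. /. 3.));         (* X = P → ⅓·‹QP› *)
  assert (meet p q r t = ray p q (2. /. 3.))          (* Y, two thirds along *)
```

The points were chosen so that every intermediate value is exactly representable in floating point; with arbitrary coordinates the asserts would need a tolerance.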

Plane Affine Algebra (II)

See also Part I and Part III.

As noted before, the algebra permits us to calculate proofs without resorting to coordinate manipulations. As a demonstration, I'll present a very simple calculation, and then a theorem that I didn't know (and for which I would be very hard-pressed to come up with a classical proof).

Given points P, Q, R, the altitude of the triangle PQR through the vertex Q is the segment QH perpendicular to the base PR. Denoting hgt.P.Q.R = |QH| (we write |•−•| = |‹•−•›| per abusum linguae), it is:

(17)  hgt.P.Q.R = (‹RP›⊥⋅‹QP›)/|RP|


Proof:

   hgt.P.Q.R
= { definition }
   |QH|
= { definition of the sine }
   |QP|·sin.(θ.‹RP›.‹QP›)
= { (16) with u, v := ‹RP›, ‹QP› }
   (‹RP›⊥⋅‹QP›)/|RP|
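Coordinates provide a quick spot-check of (17), reading its numerator as the perp dot product ‹RP›⊥⋅‹QP› (the helpers below are ad hoc): placing the base PR on the x-axis makes the expected altitude obvious.

```ocaml
let perp (x, y) = (-. y, x)                           (* CCW quarter turn *)
let dot (x1, y1) (x2, y2) = x1 *. x2 +. y1 *. y2
let norm v = sqrt (dot v v)
let ( -: ) (x1, y1) (x2, y2) = (x1 -. x2, y1 -. y2)   (* the vector ‹A − B› *)

(* hgt.P.Q.R = (‹RP›⊥ ⋅ ‹QP›) / |RP| *)
let hgt p q r = dot (perp (r -: p)) (q -: p) /. norm (r -: p)

(* with the base PR on the x-axis and Q at height 3, the altitude is 3 *)
let () = assert (hgt (0., 0.) (1., 3.) (4., 0.) = 3.)
```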

The line through point P with direction vector v is the set of points P⇒v that, in the words of Euclid, "lies equally on the points of itself":

(18.0)  Q ∈ P⇒v ≡ |v|·‹QP› = |QP|·v

The following are theorems:

(18.1)  P ∈ P⇒v
(18.2)  ⟨∀ k : k ≠ 0 : P⇒(k·v) = P⇒v⟩
(18.3)  P⇒v = ⟨∪ k : k ∈ R : P → k·v⟩

The parametric line through points P and Q is defined as the real function:

(18)  ray.P.Q.k = P → k·‹QP›

From (18.2), it is immediate that ray.P.Q maps R to the line P⇒‹QP›.

As a special case, the midpoint of the segment PQ is:

(19)  ray.P.Q.½ = P → ½·‹QP›

Given points P, Q, R and S, the intersection of the lines containing the segments PQ and RS is:

(20)  meet.P.Q.R.S = ray.P.Q.((‹SR›⊥⋅‹RP›)/(‹SR›⊥⋅‹QP›))

Proof: At the intersection point meet.P.Q.R.S, for some k, l ∈ R:

   ray.P.Q.k = ray.R.S.l
= { (18) }
   P → k·‹QP› = R → l·‹SR›
= { (6) }
   ‹(P → k·‹QP›) − (R → l·‹SR›)› = 0
= { (9) }
   ‹PR› + k·‹QP› − l·‹SR› = 0
= { heading towards eliminating l; inner product }
   ‹SR›⊥⋅(‹PR› + k·‹QP› − l·‹SR›) = 0
= { linearity of inner product }
   ‹SR›⊥⋅‹PR› + k·‹SR›⊥⋅‹QP› − l·‹SR›⊥⋅‹SR› = 0
= { (14) }
   ‹SR›⊥⋅‹PR› + k·‹SR›⊥⋅‹QP› = 0
= { (5); algebra }
   k = (‹SR›⊥⋅‹RP›)/(‹SR›⊥⋅‹QP›)
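Before moving on, (20) deserves a concrete spot-check; the algebraic proof above is the real argument, and the coordinate helpers below are ad hoc:

```ocaml
let perp (x, y) = (-. y, x)                           (* CCW quarter turn *)
let dot (x1, y1) (x2, y2) = x1 *. x2 +. y1 *. y2
let ( -: ) (x1, y1) (x2, y2) = (x1 -. x2, y1 -. y2)   (* the vector ‹A − B› *)
let ray (px, py) (qx, qy) k =                         (* (18) *)
  (px +. k *. (qx -. px), py +. k *. (qy -. py))
let meet p q r s =                                    (* (20) *)
  ray p q (dot (perp (s -: r)) (r -: p) /. dot (perp (s -: r)) (q -: p))

(* the diagonals of the unit square meet at its center *)
let () = assert (meet (0., 0.) (1., 1.) (1., 0.) (0., 1.) = (0.5, 0.5))
```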

In Part III, the theorem.

Plane Affine Algebra (I)

See also Part II and Part III.

The algebra's objects are the set of points P on the plane, together with the associated Euclidean space V (that is, a vector space together with an inner product and the usual Euclidean metric), where directed line segments between points in P are identified with free vectors (in other words, V = P/∗, for ∗ ∈ P distinguished). In what follows, I denote points by P, Q, R, vectors by u, v, w and scalars by k, l, m. Functional application (of which subscription is but a special case) is denoted by a dot, it is left-associative, and it is given the highest binding power. Labeled formulas are implicitly universally quantified over their free variables, whose type is usually understood if not declared beforehand.

The point constructor:

• → • : P × V → P

and the vector constructor:

‹• − •› : P × P → V

are the only operations on points. They are related by the following axioms:

(0)  P → 0 = P
(1)  P → ‹QP› = Q
(2)  ‹(P → u) − Q› = ‹PQ› + u
(3)  P → (u + v) = P → u → v

By (0) the null vector is the identity translation. By (1), the tail of an anchored vector translated by itself is its head. (2) is the parallelogram rule for adding vectors. (3) is the associative rule for vector addition.

From these, it follows as theorems that:

(4)  ⟨∑ i : 0 ≤ i < N : ‹P.((i+1) mod N) − P.i›⟩ = 0
(5)  ‹QP› = −‹PQ›
(6)  ‹PP› = 0
(7)  P → ‹QQ› = P
(8)  P → u → v = P → v → u
(9)  ‹(P → u) − (Q → v)› = ‹PQ› + u − v

Formula (4), the chain rule, means simply that following along a closed polygonal line returns to the starting point. By (5), segments, when considered as vectors, are oriented; and by (8) vector addition is commutative.

Proofs: (4) follows from proving:

P.0 → ⟨∑ i : 0 ≤ i < N : ‹P.((i+1) mod N) − P.i›⟩ = P.0

by induction on N with base case (0) and inductive step using (3) and (1). (5) follows from (4) with N = 2; then (6) follows from (5) by setting Q := P.

(7) follows from (6) and (0); (8) follows directly from (3); and (9) follows from (5) and (2), both twice.

The algebra is also equipped with a unary postfix operator, the perp:

•⊥ : V → V

satisfying the following axioms:

(10)  (u + v)⊥ = u⊥ + v⊥
(11)  (k·v)⊥ = k·(v⊥)
(12)  u⊥⋅v = −(v⊥⋅u)
(13)  |u⊥| = |u|

From the axioms, these theorems follow:

(14)  u⊥⋅u = 0
(15)  u⊥⊥ = −u
(16)  u⊥⋅v = |u|·|v|·sin.(θ.u.v)

where -π < θ.u.v ≤ π is the CCW angle from u to v.

By (10) and (11), perp is a linear operator; by (12) it is hemi-Hermitian, and by (13) it is magnitude-preserving. From (14), u⊥ is perpendicular to u (hence the name). There are then only two possibilities for u⊥; the definition of θ fixes it by convention as the CCW one.

Proofs: (14) follows from (12) by setting v := u. (15) follows from (14) by setting u := u⊥ and noting that u⊥⊥ ⊥ u⊥, hence u⊥⊥ ∥ u; (13) and the CCW convention then force u⊥⊥ = −u. To prove (16), we note that:

   u⊥⋅v
= { inner product }
   |u⊥|·|v|·cos.(θ.u⊥.v)
= { (13) }
   |u|·|v|·cos.(θ.u⊥.v)
= { (14) }
   |u|·|v|·cos.(π/2 − θ.u.v)
= { trigonometry }
   |u|·|v|·sin.(θ.u.v)
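The axioms and theorems are easy to spot-check numerically, taking perp to be the CCW quarter turn on coordinates (consistent with the convention just fixed; all names ad hoc). Only the identities that stay within exact float arithmetic are asserted:

```ocaml
let perp (x, y) = (-. y, x)                           (* CCW quarter turn *)
let dot (x1, y1) (x2, y2) = x1 *. x2 +. y1 *. y2
let ( +: ) (x1, y1) (x2, y2) = (x1 +. x2, y1 +. y2)
let ( *: ) k (x, y) = (k *. x, k *. y)

let () =
  let u = (3., 4.) and v = (-1., 2.) and k = 5. in
  assert (perp (u +: v) = perp u +: perp v);          (* (10) *)
  assert (perp (k *: v) = k *: perp v);               (* (11) *)
  assert (dot (perp u) v = -. (dot (perp v) u));      (* (12) *)
  assert (dot (perp u) u = 0.);                       (* (14) *)
  assert (perp (perp u) = (-1.) *: u)                 (* (15) *)
```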

These operations form a kind of typed domain-specific language (DSL) for doing simple plane geometry. The fact that the operators are typed greatly constrains the allowed expressions, paradoxically making proofs by uninterpreted formula manipulation much easier.


  1. F. S. Hill, Jr., "The Pleasures of Perp Dot Products". In Paul S. Heckbert, ed., Graphics Gems IV.


On Borges' "Spinoza"

Or rather, on just its fourth verse:

(Las tardes a las tardes son iguales.)

and the problem of translating it into English while trying to preserve some of its quality.

Why do I find this verse so striking? First, it's an example of a perfect Spanish hendecasyllable, the so-called endecasílabo propio (proper hendecasyllable), with the fewest possible stressed syllables: the second, the sixth and the tenth, the last two obligatory. Second, the anastrophic antimetabole is masterful. It carries by itself the bleak melancholy of the entire poem, using just two distinct words.

First problem: how to translate "tardes"? In Buenos Aires, tarde is the time of day that stretches from just after noon (1 PM local time) to dusk (5:30 PM to 8:00 PM, local time), with subdivisions like "tardecita" (before 3 PM), "siesta" (3 PM to 5 PM) and "tarde tarde" (6 to 7 PM in summer). Let me try with the generic English "afternoon". Of course, I strive for an iambic pentameter:

The afternoons are equal to themselves.

I dare say it is viable, given the constraints: it is a translation, and it is an iambic pentameter. However, I find it unacceptable: I preserved none of the qualities of the original. In particular, I miss its sense of "perpetual sameness", the circularity, the eternal return. Second try, a slight variation:

The afternoons are equal one another.

Well, some alliteration here, and to me the nasals are melancholic (No one, Nothing, Never is one of Juan José Saer's novels); however, there is an extra unstressed syllable now that mars the pentameter. If I drop the requirement of preserving the plural "tardes":

One afternoon is equal to another.

which has a more regular syntax but is definitely a step backwards, and doesn't solve the problem of the extra syllable.

And, besides, the distinct image I have of "tardes" is of dusks, a sun that is forever gone, or rather, the recurrent evidence of the disappearing sun. So how about:

The dusks are equal one another.

Now I'm one syllable short. Again:

The dusks to dusks themselves are equal.

Better, but still short of a syllable. On the other hand, it is not clear if the dusks are equal to themselves (that is, identical as individual dusks), or if they are equal one another, as in the original. Also, I don't like to mix English words of Germanic and Latin origin: I like better using "same" than "equal". So:

The dusks to every other dusk are same.

I badly mangled the syntax (I've found English usage for "is same", mainly journalistic, so I don't think it's unjustifiable, but still); this could be a dead end.

The dusks to dusks are equal and the same.

This attempt leaves me with a sense of accomplishment. Now I understand what Borges meant: it is the days, dusk to dusk, that are equal, not the afternoons themselves. The Jews count days as starting with the first star, that is, at dusk; this sense is nicely conveyed by the English "dusk to dusk". The antimetabole is preserved, and "equal and the same" makes clear that days are not only compared one to another, but confused and confounded into one, unchanging day.


(Un)Expected Symmetries

There is an obvious isomorphism between depth-first and breadth-first traversals: perform one, but keeping enough information to reconstruct the tree from the traversal; do so, and finally perform the other.

Of course this isomorphism is not constructive; and it doesn't sound like it would be the simplest one possible. One thing the usual presentations of the algorithms do is obscure the parallels between both traversal strategies. Let's recall the type of rose trees:

type 'a tree = L of 'a | N of 'a tree list

and at their DFS traversal:

let rec fold_dfs f e = function
| L x -> f e x
| N t -> List.fold_left (fold_dfs f) e t

The last case in fold_dfs is a left fold over the list t of children: first visit the first child, then proceed with the rest of the tree. This makes the function distinctly non-tail-recursive. We can use an explicit stack to manage the children yet to be visited:

let fold_dfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: s -> fold (f e x) s
  | N t :: s -> fold e (t @ s)
  in fold e [t]

Contrast this with the BFS traversal we've seen before:

let fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (q @ t)
  in fold e [t]

The isomorphism couldn't be clearer.
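To see the symmetry concretely, here are both folds run over the same tree (definitions restated so the snippet stands alone), collecting labels in visit order:

```ocaml
type 'a tree = L of 'a | N of 'a tree list

let fold_dfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: s -> fold (f e x) s
  | N t :: s -> fold e (t @ s)
  in fold e [t]

let fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (q @ t)
  in fold e [t]

let t = N [N [L 1; N [L 2; L 3]; N []]; L 4; N [L 5]]
let dfs = List.rev (fold_dfs (fun acc x -> x :: acc) [] t)  (* [1; 2; 3; 4; 5] *)
let bfs = List.rev (fold_bfs (fun acc x -> x :: acc) [] t)  (* [4; 1; 5; 2; 3] *)
```

The only textual difference, t @ s against q @ t, is exactly the stack/queue distinction.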

In-Place Reversal... of Trees

We've seen before that we can traverse a tree in constant stack space; that is, we've come up with a tail-recursive BFS traversal:

let fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (List.rev_append (List.rev q) t)
  in fold e [t]

Unfortunately, this version is rather inefficient, as it copies the "queue" of nodes already waiting to be visited just to reverse it, and then copies it again to prepend it to the current node's children, finally discarding both the original and its reverse. This generates two garbage nodes per tree node processed. This outer copy is absolutely local to the traversal, since the first argument to rev_append is fresh. We can thus replace it by an in-place reversal of q.

Suppose we had a function rplacd that, given two lists l ≠ [] and m, physically modified the tail of l to be m, while returning l's old tail:

val rplacd : 'a list -> 'a list -> 'a list

With it we can write an in-place reversing function as:

let rec ip_revappend l v =
   if l == [] then v else
   let t = rplacd l v in
   ip_revappend t l

(I use a let-binding to make explicit the sequencing between the replacement and the recursive invocation). Compare this with the purely functional version:

let rec revappend l v = match l with
| []     -> v
| h :: t -> revappend t (h :: v)
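A quick check of the purely functional version, restated so it runs on its own:

```ocaml
let rec revappend l v = match l with
| []     -> v
| h :: t -> revappend t (h :: v)

let () =
  assert (revappend [1; 2; 3] [4; 5] = [3; 2; 1; 4; 5]);
  assert (revappend [] [42] = [42])
```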

In OCaml, we can use the module Obj to (carefully!) write rplacd:

let rplacd : 'a list -> 'a list -> 'a list = fun l m ->
   let o = Obj.repr l in
   let t = Obj.field o 1 in
   Obj.set_field o 1 (Obj.repr m);
   Obj.obj t

(the type annotations are essential for OCaml to infer the correct types for Obj.obj). Putting it all together, and inlining rplacd in ip_revappend to avoid the call:

let rec ip_revappend (l : 'a list) v =
    if l == [] then v else
    let o = Obj.repr l in
    let t = Obj.obj (Obj.field o 1) in
    Obj.set_field o 1 (Obj.repr v);
    ip_revappend t l

(again, the type annotation is essential). In the absence of sharing, ip_revappend is observationally equivalent to revappend. Since, as explained above, the outer list is fresh, the following traversal is unconditionally observationally equivalent to the previous version:

let fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (ip_revappend (List.rev q) t)
  in fold e [t]

Even so, it differs from it operationally in that it produces just one garbage node per tree node traversed. This is optimal if we want to preserve the tree: concretely, q is the list of younger siblings, so, being part of the tree, it should be preserved. If the tree is ephemeral, however, we can also replace the inner rev q with an ip_revappend q [] to recruit the nodes of the tree to serve as the tail of the traversal queue.

So, by stepwise, mechanical refinement, we arrive at an in-place ip_fold_bfs that uses constant stack and heap space:

let ip_fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (ip_revappend (ip_revappend q []) t)
  in fold e [t]

This is the main point of Sobel and Friedman's paper.

Defunctionalized Lists

The last post finished with a list-based implementation of the BFS traversal. Now I am left with the problem of restoring tail-recursiveness by coming up with a properly tail-recursive version of append (as usual, its synonym is @). Let's recall the definition:

let rec append l r = match l with
| []      -> r
| x :: xs -> x :: (append xs r)

Transforming it to continuation-passing style (CPS):

let rec append_cps l r k = match l with
| []      -> k r
| x :: xs -> append_cps xs r (fun v -> k (x :: v))

so that the original append is:

let append l r = append_cps l r (fun k -> k)

Now the continuations used by append_cps fall into a few usage patterns:

  • if the left argument is empty, the continuation is invoked as is
  • otherwise, a new continuation is built from the old and the head, to prepend it to the value of the continuation
  • everything starts with the identity continuation

Hence, the type:

type 'a kont = KNil | Kons of 'a kont * 'a

defunctionalizes the continuations as type constructors instead of as closures (the general pattern is to abstract over the free variables of the continuation). Replacing continuations in the CPS version by these data:

let rec append_kont l r k = match l with
| []      -> kapply k r
| x :: xs -> append_kont xs r (Kons (k, x))

where we now have l @ r ≡ append_kont l r KNil. We have yet to apply the reified continuation to the closed-over values; that is, we need to specify kapply: we move the closure constructions in the CPS version to the corresponding branch of a match over the reified continuation constructors:

let rec kapply k v = match k with
| KNil        -> v
| Kons (k, x) -> kapply k (x :: v)
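As a sanity check that defunctionalization preserved meaning, the pieces can be assembled and compared against the built-in @ (restated so the snippet stands alone; kapply is moved before append_kont only so the compiler sees it first):

```ocaml
type 'a kont = KNil | Kons of 'a kont * 'a

let rec kapply k v = match k with
| KNil        -> v
| Kons (k, x) -> kapply k (x :: v)

let rec append_kont l r k = match l with
| []      -> kapply k r
| x :: xs -> append_kont xs r (Kons (k, x))

let () =
  assert (append_kont [1; 2; 3] [4; 5] KNil = [1; 2; 3] @ [4; 5]);
  assert (append_kont [] [42] KNil = [42])
```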

Now, the type 'a kont is isomorphic to the type of lists: F(KNil) = [], and F(Kons (k, x)) = x :: F(k). Exploiting this fact, let's rewrite:

let rec append_list l r t = match l with
| []      -> lapply t r
| x :: xs -> append_list xs r (x :: t)

let rec lapply l v = match l with
| []     -> v
| x :: l -> lapply l (x :: v)

so that now l @ r ≡ append_list l r []. And this is where things get weird: believe it or not, append_list and lapply are, deep down, the same function. To see this, abstract away r:

let rec append_list l t = match l with
| []      -> t
| x :: xs -> append_list xs (x :: t)

so that l @ r ≡ lapply (append_list l []) r. Compare lapply to this append_list, and write:

let append l r = lapply (lapply l []) r
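That two reversals make an append can be exercised directly (definitions restated):

```ocaml
let rec lapply l v = match l with
| []     -> v
| x :: l -> lapply l (x :: v)

let append l r = lapply (lapply l []) r

let () =
  assert (lapply [1; 2; 3] [] = [3; 2; 1]);
  assert (append [1; 2; 3] [4; 5] = [1; 2; 3; 4; 5])
```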

We're now done, as lapply is tail recursive. What's more, its real name is List.rev_append, which we don't have to write since it's built-in to OCaml. Also, rev l is a shorthand for rev_append l []. The traversal, in tail-recursive form is:

let fold_bfs f e t =
  let rec fold e = function
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (List.rev_append (List.rev q) t)
  in fold e [t]

I will show next that there is still room for improvement.

Iterative Tree Traversals

The point of this post is to illustrate with a concrete, step-by-step derivation, the thesis in Sobel and Friedman's "Recycling Continuations":

Link-inversion algorithms [...] are a standard means of traversing data structures

For that, I must begin at the beginning, with a concrete case. Consider the type of rose trees, those whose inner nodes have a list of children:

type 'a tree = L of 'a | N of 'a tree list

A breadth-first visit of a tree visits first all the children of the root before visiting their corresponding grandchildren. In order to do that, it maintains a queue of nodes waiting to be visited:

let fold_bfs f e t =
  let rec fold e q = match Queue.get q with
  | None          -> e
  | Some (L x, q) -> fold (f e x) q
  | Some (N t, q) -> fold e (List.fold_left Queue.put q t)
  in fold e (Queue.put Queue.empty t)

for a hypothetical ADT Queue, not the OCaml one. Two things to note are:

  1. the traversal is tail recursive: every call to fold is in tail position. This is analogous to the usual imperative BFS traversal, which is usually written while the queue is not empty do…
  2. the list of children is being destructured with a fold_left, which is tail-recursive

Aside: the fact that this is a rose tree makes the traversal easier, not harder. A binary tree with node constructor N of 'a tree * 'a tree leads to the temptation of forgoing tail recursion by visiting the right child with the result of visiting the left. The use of a queue would be slightly awkward, since the pattern of nested recursion would be replaced by a pattern of nested enqueueing. To me, this is a good example of simplification by generalization.

Hence this traversal can be done in constant stack space, provided that the Queue implementation is tail-recursive. Okasaki's functional queues are:

module Queue = struct
  type 'a t = 'a list * 'a list
  let empty = [], []
  let is_empty = function ([], _) -> true | _ -> false
  let check = function ([], r) -> (List.rev r, []) | q -> q
  let put (f, r) x = check (f, x :: r)
  let get = function ([], _) -> None | (x :: f, r) -> Some (x, check (f, r))
end

(rev is trivially tail-recursive). What happens here is that, by using the appropriate intermediate data structure, in this case a queue, we trade stack space for heap space.
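A quick exercise of the functional queue (renamed Queue' here so as not to shadow the standard library's Queue, and with is_empty elided): elements come out in the order they were put in.

```ocaml
module Queue' = struct
  type 'a t = 'a list * 'a list
  let empty = [], []
  let check = function ([], r) -> (List.rev r, []) | q -> q
  let put (f, r) x = check (f, x :: r)
  let get = function ([], _) -> None | (x :: f, r) -> Some (x, check (f, r))
end

(* drain the queue into a list, front first *)
let to_list q =
  let rec go acc q = match Queue'.get q with
  | None        -> List.rev acc
  | Some (x, q) -> go (x :: acc) q
  in go [] q

let () =
  let q = List.fold_left Queue'.put Queue'.empty [1; 2; 3; 4] in
  assert (to_list q = [1; 2; 3; 4])
```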

Can we do better than this? That is, can we avoid the intermediate memory? Before I answer that, let me step back a little.

There is a forgetful morphism between queues and lists; namely, that which maps the empty queue to the empty list, and a queue with head x and tail q to the list with precisely head x and tail the map of q, in that order. This is "forgetful" because a queue has more structure (i.e., properties or laws it must obey) than a list, which is in fact "free" (all this in a precise technical sense).

Consequently, under this mapping, appending a queue q to a queue p is equivalent (by induction) to enqueuing into p the list of elements of q one at a time:

Queue.append p q ≡ List.fold_left Queue.put p (Queue.to_list q)

for suitably defined Queue.append and Queue.to_list. This means that, in the original traversal, List.fold_left Queue.put q t appends to q the queue whose representation as a list is t. This can be written instead as Queue.append q (Queue.of_list t), for suitably defined Queue.of_list, inverse to Queue.to_list.

Now, by the definition of List.append (denoted as usual by @), we have

Queue.to_list (Queue.append p q) ≡ Queue.to_list p @ Queue.to_list q.

This is, no more and no less, the condition for Queue.to_list to be a morphism from queues to lists; a different one to the one we started with. This morphism maps whole queues under concatenation to whole lists also under concatenation. So, leaving aside for the time being the issue of tail-recursiveness, we can apply this morphism and use lists directly in the BFS traversal:

let fold_bfs f e t =
  let rec fold e q = match q with
  | []       -> e
  | L x :: q -> fold (f e x) q
  | N t :: q -> fold e (q @ t)
  in fold e [t]

For instance, fold_bfs (fun () -> print_int) () (N [N [L 1; N [L 2; L 3]; N []]; L 4; N [L 5]]) outputs 41523, as expected.

This function is no longer tail-recursive, since append isn't. This might seem a step backwards, but in fact the change from a queue to a list makes it considerably simpler to attempt further transformations. I'll show next how to deal with append.


Catullus' Fifth

The many suns may set and rise as usual.
But after this brief light of ours is snuffed,
We have a single night to sleep for ever.

What moves me in this poem is not the bleak rejection of an afterlife, but the sudden flash of insight that comes from feeling one day turning after another. This is the analeptic memory Graves spoke of: the future, as the past, compressed in a single blinding instant of awareness; the "many suns" frozen in a searing trace fixed in the sky, the short stub of a candle that will not be enough to see us through the night.

This is enlightenment, too.


The Ghost of Types Past

This is the second part of (Re)Visiting Expressions.

Is there a way to avoid having to wait until runtime to verify that our expressions are well-typed? Indeed, a simple application of phantom types can make the compiler work to our advantage. There is nothing in the application of phantom types that is specific to functional programming, except for the use of parametric polymorphism. We can use, as here, phantom types to make our Java programs type-safe, or to enforce statically (that is, at compile time) various desirable properties of our program.

The idea is to constrain the construction of values by encoding the properties in type parameters. These constraints can go from typing ADT trees to enforcing access restrictions and static bounds-checking. As a technique, it is simplicity itself: a module exports a parametric type —which is kept abstract— but whose implementation does not depend on the type parameter. Smart constructor functions (the functional counterpart of static factories) are restricted in the module interface to various instantiations of the type parameter; the type-checker ensures that all applications are correctly typed by means of unification.
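For contrast with the Java rendition that follows, here is a minimal OCaml sketch of the same recipe; the module and its operations are invented for illustration, and the untyped evaluator flattens booleans into ints to stay short:

```ocaml
(* The parameter 'a never occurs in the representation: a phantom. *)
module E : sig
  type 'a t
  val con  : int -> int t
  val add  : int t -> int t -> int t
  val leq  : int t -> int t -> bool t
  val cond : bool t -> 'a t -> 'a t -> 'a t
  val eval : int t -> int            (* evaluate an integer expression *)
end = struct
  type exp = Con of int | Add of exp * exp
           | Leq of exp * exp | Cond of exp * exp * exp
  type 'a t = exp                    (* the tag exists only in the interface *)
  let con n = Con n
  let add l r = Add (l, r)
  let leq l r = Leq (l, r)
  let cond t l r = Cond (t, l, r)
  let rec eval = function            (* untyped inside; booleans are 0/1 *)
  | Con n -> n
  | Add (l, r) -> eval l + eval r
  | Leq (l, r) -> if eval l <= eval r then 1 else 0
  | Cond (t, l, r) -> if eval t <> 0 then eval l else eval r
end

let e1 = E.(cond (leq (con 3) (con 5)) (add (con 1) (con 2)) (con 0))
let () = assert (E.eval e1 = 3)
(* E.(cond (con 1) (con 1) (con 1)) is rejected: int t is not bool t *)
```

The smart constructors play exactly the role of the static factories; unification does the tagging.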

Our phantom type will wrap expressions, assigning to each a type tag T:

final class E<T> {
  private final Exp e;
  private E(Exp e) { this.e = e; }

However, values of this class will only be created by the smart constructors. For instance, an integer constant is tagged with Integer:

  public static E<Integer> con(int value) {
    return new E<Integer>(Exp.con(value));
  }

Similarly for addition and comparison, with respective tags Integer and Boolean:

  public static E<Integer> add(E<Integer> left, E<Integer> right) {
    return new E<Integer>(Exp.add(left.e, right.e));
  }
  public static E<Boolean> le(E<Integer> left, E<Integer> right) {
    return new E<Boolean>(Exp.leq(left.e, right.e));
  }

Note that the arguments are themselves tagged to codify the requirement that addition operates on two integer values and returns an integer. In this light, the constructor for a conditional:

  public static <T> E<T> cond(E<Boolean> test, E<T> ifTrue, E<T> ifFalse) {
    return new E<T>(Exp.cond(test.e, ifTrue.e, ifFalse.e));
  }

makes clear that the condition must be a boolean expression, and both the true and the false expressions must evaluate to the same type T (which has nothing to do with the class parameter, as this is a static method), which is then the type of the result.

For convenience, instances of this class know how to evaluate and print themselves out:

  private static final EvalExp evaluator = new EvalExp();
  private static final ShowExp printer   = new ShowExp();

  public Val eval() {
    return e.accept(evaluator);
  }
  public void print() {
    e.accept(printer);
  }
}

This avoids the need to expose the contents of the wrapper class; an alternative is to define a method that projects the phantom type to the type of the wrapped expression (i.e., a simple accessor).

We can now test our expressions. Our test harness becomes much simpler now that the phantom type dispatches the Visitors:

public final class PhantomTest {
  public static void main(String[] args) {
    E<Integer> e1;
    e1 = E.cond(E.le(E.con(3), E.con(5)), E.add(E.con(1), E.con(2)), E.con(0));
    e1.print();
    System.out.print(" = ");
    e1.eval().accept(new ShowVal());

    // E<?> e2 = E.cond(E.con(1), E.con(1), E.con(1));
  }
}

Running it produces the same output as before:

Cond(Leq(ConI(3),ConI(5)),Add(ConI(1),ConI(2)),ConI(0)) = 3 : int

(This is exactly the same output as before, which is unsurprising, since the phantom type delegates blindly to the original, type-unsafe code.) If you try to compile the test with the second, ill-typed expression uncommented, the compiler will complain, and will refuse to proceed:

PhantomTest.java:233: <T>cond(E<java.lang.Boolean>,E<T>,E<T>) in E cannot be
  applied to (E<java.lang.Integer>,E<java.lang.Integer>,E<java.lang.Integer>)
    E<?> e2 = E.cond(E.con(1), E.con(1), E.con(1));
1 error

As you can see, the type-checking rules of Java are being applied to detect the invalid construction of a conditional with a non-Boolean test. What is also important to remark is that it is not necessary to manually tag constructors; expressions are built out of method calls much as we've seen in the untyped case.

(Re)Visiting Expressions

Functional languages in the ML family (including Haskell) lend themselves naturally to interpreting the abstract syntax tree of a domain-specific language, since the former is quite naturally represented with an algebraic data type. You will see this phenomenon everywhere; to the point that I think that Greenspun's Tenth is not really an axiom but a corollary to the rather plain fact that destructuring an ADT amounts to interpreting it.

The analogous way to do this in an object-oriented language is to use the Interpreter and Visitor patterns in conjunction. I'll develop here a mini-language in Java, and I'll abstract out the different traversal strategies over both the expression and value trees by using Visitors, which will be generic on the type of the result generated.

Every language is contingent. Instead of inventing one I'll simply take Lennart Augustsson's domain-specific language of expressions:

E ::= n ∈ Z | E + E | E ≤ E | E ? E : E

that is, an expression can be an integer constant, an addition, a comparison or a conditional. The result of evaluating such an expression is a value, either integer or Boolean:

V ::= n ∈ Z | b ∈ B

Visitors are used to parameterize traversals by decoupling the traversal logic from the structure of the tree. That is, instead of adding methods to each class in a composite hierarchy, we plug in different strategies to a single interface.

Note: in traditional Object-Oriented languages the Visitor pattern calls for the accumulation of intermediate results in an instance variable; here, we'll take advantage of parametric polymorphism in Java to apply stateless Visitors whenever possible.

In our little language, a value is an abstract type that accepts a value Visitor:

abstract class Val {
  public abstract <T> T accept(ValVisitor<T> visitor);

This Visitor will produce a result of a generic type, hence the method must be generic on the type parameterizing it. The possible concrete values are integer and Boolean values:

  public static final class Int extends Val {
    public final int value;
    private Int(int value) { this.value = value; }
    public <T> T accept(ValVisitor<T> visitor) {
      return visitor.visitInt(this);
    }
  }

  public static final class Bool extends Val {
    public final boolean value;
    private Bool(boolean value) { this.value = value; }
    public <T> T accept(ValVisitor<T> visitor) {
      return visitor.visitBool(this);
    }
  }

Both are inner to Val and neither can be extended or instantiated directly. They both accept the Visitor by dispatching to the appropriate method, and forcing it to recur into their structure.

These are all the possibilities for Val, so we seal it against extension; also, we have to provide the static factories to allow our code to create values:

  private Val() { }

  public static Val ofInt(int value) { return new Int(value); }
  public static Val ofBoolean(boolean value) { return new Bool(value); }
}

This finishes the encoding of values. Now, a value Visitor must in turn discriminate between value types:

interface ValVisitor<T> {
  T visitInt(Val.Int value);
  T visitBool(Val.Bool value);
}

As a concrete example, this is a simple value printer implemented as a Visitor:

final class ShowVal implements ValVisitor<Void> {
  public Void visitInt(Val.Int value) {
    System.out.printf("%d : int", value.value);
    return null;
  }
  public Void visitBool(Val.Bool value) {
    System.out.printf("%b : bool", value.value);
    return null;
  }
}

Since there isn't really a meaningful return value from the act of printing (other than the side-effect of showing the text to the output), we use the java.lang.Void type. We are forced to return null since Void has no instances; we could have created instead a singleton type Unit, paralleling what Haskell or ML do.

An expression in our DSL is, analogously to values, an abstract type that accepts an expression Visitor:

abstract class Exp {
  public abstract <T> T accept(ExpVisitor<T> visitor);

This Visitor is also generic on the return type. For each expression kind we have a concrete type. Constant expressions are simple:

  public static final class ConI extends Exp {
    public final int value;
    private ConI(int value) { this.value = value; }
    public <T> T accept(ExpVisitor<T> visitor) {
      return visitor.visitConI(this);
    }
  }

(Again, the classes corresponding to each type constructor are sealed.) Both additions and comparisons are binary operations on expressions:

  public static final class Add extends Exp {
    public final Exp left, right;
    private Add(Exp left, Exp right) { this.left = left; this.right = right; }
    public <T> T accept(ExpVisitor<T> visitor) {
      return visitor.visitAdd(this);
    }
  }

  public static final class Leq extends Exp {
    public final Exp left, right;
    private Leq(Exp left, Exp right) { this.left = left; this.right = right; }
    public <T> T accept(ExpVisitor<T> visitor) {
      return visitor.visitLeq(this);
    }
  }

A conditional has three sub-expressions: one for the expression being tested, one for the expression to be evaluated if the test is true, and another if it is false:

  public static final class Cond extends Exp {
    public final Exp test, ifTrue, ifFalse;
    private Cond(Exp test, Exp ifTrue, Exp ifFalse) {
      this.test = test;
      this.ifTrue = ifTrue;
      this.ifFalse = ifFalse;
    }
    public <T> T accept(ExpVisitor<T> visitor) {
      return visitor.visitCond(this);
    }
  }

The definitions mirror exactly the abstract syntax outlined before. Finally, we seal the abstract Exp class and provide static factories:

  private Exp() { }

  public static Exp con(int value) { return new ConI(value); }
  public static Exp add(Exp left, Exp right) { return new Add(left, right); }
  public static Exp leq(Exp left, Exp right) { return new Leq(left, right); }
  public static Exp cond(Exp test, Exp ifTrue, Exp ifFalse) { return new Cond(test, ifTrue, ifFalse); }
}

We now need to define the type of the expression Visitor:

interface ExpVisitor<T> {
  T visitConI(Exp.ConI value);
  T visitAdd(Exp.Add value);
  T visitLeq(Exp.Leq value);
  T visitCond(Exp.Cond value);
}

Again, a simple application of the Visitor is an expression printer:

final class ShowExp implements ExpVisitor<Void> {
  public Void visitConI(Exp.ConI value) {
    System.out.printf("ConI(%d)", value.value);
    return null;
  }
  public Void visitAdd(Exp.Add value) {
    System.out.print("Add("); value.left.accept(this); System.out.print(",");
    value.right.accept(this); System.out.print(")"); return null;
  }
  public Void visitLeq(Exp.Leq value) {
    System.out.print("Leq("); value.left.accept(this); System.out.print(",");
    value.right.accept(this); System.out.print(")"); return null;
  }
  public Void visitCond(Exp.Cond value) {
    System.out.print("Cond("); value.test.accept(this); System.out.print(",");
    value.ifTrue.accept(this); System.out.print(","); value.ifFalse.accept(this);
    System.out.print(")"); return null;
  }
}

It now remains to write the code that will evaluate expressions down to a value. I'll present the evaluation Visitor in parts. First, since it is obvious that it will return Vals, it will be parameterized by that interface:

final class EvalExp implements ExpVisitor<Val> {

Evaluating a constant is simply a matter of extracting the value of the expression and wrapping it into an integer value:

  public Val visitConI(Exp.ConI value) {
    return Val.ofInt(value.value);
  }

Evaluating compound expressions is complicated by the fact that not all combinations of types are meaningful. For instance, to evaluate an addition we must check that the addends evaluate to integer values before adding them. We begin by evaluating both sides of the sum; as noted in the comment, this corresponds to call-by-value. We then apply a value Visitor to the evaluated left-hand side, which will in turn apply another value Visitor to the right-hand side, which will finally return the sum:

  public Val visitAdd(Exp.Add value) {
    // Call-by-value semantics
    final Val left = value.left.accept(this);
    final Val right = value.right.accept(this);
    return left.accept(new ValVisitor<Val>() {
      public Val visitInt(final Val.Int left) {
        return right.accept(new ValVisitor<Val>() {
          public Val visitInt(final Val.Int right) {
            return Val.ofInt(left.value + right.value);
          }

This works if both operands evaluate to integers. If, however, any value does not, it is a type error:

          public Val visitBool(Val.Bool _) {
            throw new IllegalArgumentException("Bad right argument to Add");
          }
        });
      }
      public Val visitBool(Val.Bool _) {
        throw new IllegalArgumentException("Bad left argument to Add");
      }
    });
  }

(The names of the arguments are unimportant.) The evaluation of a comparison is exactly analogous to that for the sum, except that, in the end, the result of the comparison is a boolean value:

  public Val visitLeq(Exp.Leq value) {
    final Val left = value.left.accept(this);
    final Val right = value.right.accept(this);
    return left.accept(new ValVisitor<Val>() {
      public Val visitInt(final Val.Int left) {
        return right.accept(new ValVisitor<Val>() {
          public Val visitInt(final Val.Int right) {
            return Val.ofBoolean(left.value <= right.value);
          }
          public Val visitBool(Val.Bool _) {
            throw new IllegalArgumentException("Bad right argument to Leq");
          }
        });
      }
      public Val visitBool(Val.Bool _) {
        throw new IllegalArgumentException("Bad left argument to Leq");
      }
    });
  }

Again, we use call-by-value by evaluating first both sides and then chaining visitors on the resulting values. Evaluating a conditional expression is simpler, fortunately. In contrast to the previous calls, a condition must lazily evaluate the correct branch depending on the result of evaluating the test. After doing that, it must visit the value and type-check that it is Boolean:

  public Val visitCond(final Exp.Cond value) {
    final Val test = value.test.accept(this);
    return test.accept(new ValVisitor<Val>() {
      public Val visitInt(Val.Int _) {
        throw new IllegalArgumentException("Bad test argument to Cond");
      }
      public Val visitBool(Val.Bool test) {
        return test.value
          ? value.ifTrue.accept(EvalExp.this)
          : value.ifFalse.accept(EvalExp.this);
      }
    });
  }
}

Note: the evaluator has a type hole, in that a conditional can evaluate to different types depending on the condition. The interpreter which inspired this one had exactly the same type unsoundness. We will see how phantom types can help us to avoid these (and other) kinds of type errors.
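The hole can be seen in miniature with a plain Java conditional over Object values. The following stand-alone sketch (hypothetical; not the post's classes) evaluates a conditional whose branches have different types, and nothing complains until — or unless — the result is used at the wrong type:

```java
// Hypothetical miniature of the evaluator's type hole: an untyped cond
// happily returns an Integer or a Boolean depending on the test, and the
// mismatch between the branch types is never checked.
public class TypeHole {
  static Object cond(boolean test, Object ifTrue, Object ifFalse) {
    return test ? ifTrue : ifFalse;
  }
  public static void main(String[] args) {
    // Branches of different types: Integer vs Boolean.
    Object v = cond(3 <= 5, 1 + 2, false);
    System.out.println(v + " : " + v.getClass().getSimpleName()); // prints 3 : Integer
  }
}
```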

As it is, we can use this code to evaluate some simple expressions. We will use a test class to do this from the main entry point. But first, we show a helper method to print the expression, evaluate it and print the result. Since the evaluation can fail for an ill-typed expression, the method handles the exception and informs the programmer:

public final class PhantomTest {
  private static void eval(Exp e) {
    try {
      e.accept(new ShowExp());
      System.out.print(" = ");
      e.accept(new EvalExp()).accept(new ShowVal());
      System.out.println();
    } catch (IllegalArgumentException ex) {
      // inform the programmer of the type error
      System.out.println();
      System.out.println(ex.getMessage());
    }
  }

Now we create some expressions in the main entry point and evaluate them:

  public static void main(String[] args) {
    Exp e;
    e = Exp.cond(Exp.leq(Exp.con(3), Exp.con(5)), Exp.add(Exp.con(1), Exp.con(2)), Exp.con(0));
    eval(e);
    e = Exp.cond(Exp.con(1), Exp.con(1), Exp.con(1));
    eval(e);
  }
}

The first expression is 3 ≤ 5 → 1 + 2 : 0, which obviously evaluates to 3:

Cond(Leq(ConI(3),ConI(5)),Add(ConI(1),ConI(2)),ConI(0)) = 3 : int

The second expression is 1 → 1 : 1, which is obviously ill-typed as the condition is an integer:

Cond(ConI(1),ConI(1),ConI(1)) =
Bad test argument to Cond

Next we'll see how we can add safety to our interpreter using phantom types.


Algebraic Data Types and Java Generics (III)

See also Part I and Part II.

As remarked in many a monad introduction, monads are especially suited to containing and managing side-effects. This is because, in general, monads structure semantics: little packages of special functionality neatly tucked away and generalized under the unit/bind interface.

One typical gripe of imperative programming is resource management: how to structure the code to make sure that every resource consumed is accounted for. In the days of yore, we C programmers had to ensure that our code didn't leak memory: bytes allocated dynamically had to be freed exactly once no matter which code path was executed.

Of course, most modern languages incorporate a garbage collector that makes most if not all issues of dynamic memory allocation automatically solvable. There are other kinds of resources, however, that must still be manually managed. Typically, files and streams must be correctly opened and closed even in the face of exceptions, lest some data be corrupted and/or lost.

Also, in the absence of some sort of arbitration mechanism, concurrent file reading or writing by various actors is sure to lead to race conditions, where the data accessed depends on the scheduling order among the actors. If the operations they execute depend on the data read, the execution of the overall program is not deterministic.

Both problems could be solved if we had a way to ensure that:

  1. all files are closed after we're done with them
  2. nobody retains a reference to a closed file
  3. at most one user has access to a particular file

All three conditions can be met if we somehow make plain file handles (or streams) inaccessible. A monad is ideal for this: its type statically enforces (that is, at compile time) that these properties are upheld:

public class Input<X> implements Monad<X, Input<X>> {

The idea is to encapsulate operations on input files as a function from the stream to the result of type X of the processing:

  private final Function<BufferedReader, X> source;

  private Input(Function<BufferedReader, X> source) {
    this.source = source;
  }

The construction of Input monads will be managed by bind. It takes a function f which turns values read into another Input, and threads the input stream among them, in order:

  public <Y> Input<Y> bind(final Function<X, ? extends Monad<Y, ?>> f) {
    return new Input<Y>(new Function<BufferedReader, Y>() {
      public Y apply(BufferedReader in) {
        return ((Input<Y>) f.apply(source.apply(in))).source.apply(in);
      }
    });
  }

Unfortunately, covariance of the parameter type means that the bound function returns a type that is too general, and a down-cast is required to force the result into an Input. This code is not type-safe as it is. Note that the result of f is destructured in order to supply to it the same input stream; once this is done, it is repackaged as an Input monad of the correct (return) type.

The unit converts a delayed value into an Input monad that discards the stream and always returns this value instead of reading its contents:

  public static <X> Input<X> unit(final Function<Void, X> source) {
    return new Input<X>(new Function<BufferedReader, X>() {
      public X apply(BufferedReader _) { return source.apply(null); }
    });
  }

The reason why the encapsulated value is delayed is that it usually is the final result, or summary, of processing all the data read. Lastly, we need a way to actually make all this work on a file opened for input. This is as simple as opening the file, passing it to the processing function and returning the result, making sure that the code cleans up after itself:

  public X run(String name) throws IOException {
    BufferedReader in = null;
    try {
      in = new BufferedReader(new FileReader(name));
      return source.apply(in);
    } catch (UncheckedException e) {
      throw (IOException) e.getCause();
    } finally {
      if (in != null)
        in.close();
    }
  }

The problem with this is that performing input might throw an exception, but the Function interface is pure. We need to encapsulate the exception in an unchecked exception, then unwrap it and pass it along.
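In outline, the wrap/unwrap idiom looks like this. This is a stand-alone sketch with a hypothetical readChar, not the Input class itself:

```java
// Hypothetical minimal sketch of the wrap/unwrap idiom used by run: a
// checked IOException is smuggled out of an exception-free interface
// inside an unchecked wrapper, then recovered at the boundary.
import java.io.IOException;

public class WrapDemo {
  static final class UncheckedException extends RuntimeException {
    UncheckedException(Throwable cause) { super(cause); }
  }

  // Stands in for a Function body, which cannot declare IOException.
  static int readChar() {
    throw new UncheckedException(new IOException("disk on fire"));
  }

  // The boundary unwraps the cause and rethrows it as a checked exception.
  static int run() throws IOException {
    try {
      return readChar();
    } catch (UncheckedException e) {
      throw (IOException) e.getCause();
    }
  }

  public static void main(String[] args) {
    try {
      run();
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage()); // prints caught: disk on fire
    }
  }
}
```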

  private static final class UncheckedException extends RuntimeException {
    public UncheckedException(Throwable cause) { super(cause); }
  }

Unfortunately, there is not much that we can do with Input, as the constructor is private and the only way to bind to an input is by having one in the first place. The simplest Input we can think of is the one that reads characters from the file and returns them, one by one, unchanged:

  public static final Input<Integer> read =
    new Input<Integer>(new Function<BufferedReader, Integer>() {
      public Integer apply(BufferedReader in) {
        try { return in.read(); }
        catch (IOException e) { throw new UncheckedException(e); }
      }
    });

Another common way of reading text files is line by line. In this case, the result of the Input operation is a String with the text of the line:

  public static final Input<String> readLine =
    new Input<String>(new Function<BufferedReader, String>() {
      public String apply(BufferedReader in) {
        try { return in.readLine(); }
        catch (IOException e) { throw new UncheckedException(e); }
      }
    });
}

The simplest example that I can think of is a char counter. Or a line counter. Thanks to the type wizardry, I can use the same code to do both, or indeed, to count any number of items of type X:

public class InputCounter<X> {
  private int count = 0;

At the end of the stream of items, we need to return the count:

  private final Input<Integer> finalCount =
    Input.unit(new Function<Void, Integer>() {
      public Integer apply(Void _) { return count; }
    });

This Input processor delays looking at the count until the last moment. We also need to get the next item and update the running count:

  private Input<Integer> partialCount;

We still didn't say how we'll count these mythical items: we need a way to parameterize the Input reader, and we also need a sentinel for the last item read (in other words, a generic end-of-file predicate):

  public InputCounter(Input<X> reader, final Function<X, Boolean> sentinel) {

With this, we bind the reader to a function that checks if there are any more items or not:

    this.partialCount = reader.bind(new Function<X, Input<Integer>>() {
      public Input<Integer> apply(X item) {

If not (that is, if we're at end-of-file), it returns the final count:

        if (sentinel.apply(item))
          return finalCount;

Otherwise, we increment and return the partial count.

        else {
          ++count;
          return partialCount;
        }
      }
    });
  }

With this, we can run the Input on the supplied file:

  public int count(String file) throws IOException {
    count = 0;
    return partialCount.run(file);
  }
}

Let's use this in a test. We count both characters and lines by constructing the InputCounter with appropriate parameters:

public class InputTest {
  private static final Function<Integer, Boolean> isLastChar =
    new Function<Integer, Boolean>() {
      public Boolean apply(Integer ch) { return ch == -1; }
    };

  private static final Function<String, Boolean> isLastLine =
    new Function<String, Boolean>() {
      public Boolean apply(String l) { return l == null; }
    };

  public static void main(String[] args) {
    if (args.length != 1) {
      System.err.println("usage - java InputTest <file>");
      return;
    }

    try {
      final int nchars = new InputCounter<Integer>(Input.read, isLastChar).count(args[0]);
      final int nlines = new InputCounter<String>(Input.readLine, isLastLine).count(args[0]);
      System.out.printf("File \"%s\": %d chars, %d lines\n", args[0], nchars, nlines);
    } catch (IOException e) {
      System.err.println(e.getMessage());
    }
  }
}

The monad ensures that no file remains open, even in the event of an exception.


Monadic Buzz

Talk about synchronicity. James Iry wrote about implementing an option type in both Java and Scala. He went on with the expected progression, first explaining what monads are (and a nice explanation that is), following last week with the relation between monadic bind and Scala iterators.

I expect to write a bit more on the list monad, and general tomfoolery with Java generics.

Algebraic Data Types and Java Generics (II)

See also Part I.

Update 2007-10-05: I've reworked the code to make it simpler. The insight I had is that, since Java is rank-1 polymorphic, there is no way I can meaningfully abstract over the particular monad being used; however, the monadic class itself is the monadic type constructor. Hence, the type parameters are somewhat simpler.

There are many definitions of monads on the web: some very formal but providing little insight if you're not formally inclined; some very poetic and even somewhat evocative, but not very accurate, morally speaking. For our purposes, a monad takes values of type A and "wraps" them in, or marks them as, values of type Monad<A>. This wrapper is mostly opaque and inviolable, except for a number of operations that make using monads less than hopeless:

  • Given a value x of type A, unit(x) wraps it and turns it into a monadic value of type Monad<A>
  • Given a function f that takes values of type A and returns values of type Monad<B>, bind(f, x) "pipes" a value x of type Monad<A> into f, and returns the corresponding value of type Monad<B>. In effect, bind "lifts" f from taking plain values to accepting monadic values, keeping everything wrapped in the same paper, so to speak.

The intent of this rather puzzling machinery is one of representing "special" operations on values by way of their encapsulation in "tainted" procedures that cannot be escaped. The specialness of these operations might reside on their interacting with the environment, or on having "extra" structure on which they depend (state, privilege certificates, etcetera).

These operations must satisfy a number of properties, called the monad laws, with which we won't be concerned here. A monad could have just a little bit of extra structure: if there is a monadic value, call it zero, that satisfies bind(f, zero) = zero for all functions f, the monad is said to be a monad with zero. This is analogous to multiplication by zero for ordinary numbers. In this case, the monad usually has another operation that "adds" two monadic values into a third, such that adding any value to the zero leaves that value unchanged; this is the monadic addition, and the extended monad is called a monad with plus.

As I wrote in concluding Part I, the Option type gives rise naturally to a monad. This monad is usually conceptualized as the failure monad, which encapsulates computations that might fail to return a value.
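As an aside for readers on modern Java: java.util.Optional (Java 8 and later) is essentially this failure monad, with Optional.of playing unit, Optional.empty playing zero, and flatMap playing bind. A small stand-alone sketch, with a hypothetical parse helper:

```java
// java.util.Optional as the failure monad: unit = Optional.of,
// zero = Optional.empty, bind = flatMap.
import java.util.Optional;

public class OptionalDemo {
  // A computation that may fail: parse an integer from a string.
  static Optional<Integer> parse(String s) {
    try { return Optional.of(Integer.parseInt(s)); }
    catch (NumberFormatException e) { return Optional.empty(); }
  }

  public static void main(String[] args) {
    // bind pipes the wrapped value through; a failure short-circuits.
    System.out.println(parse("21").flatMap(n -> Optional.of(2 * n))); // Optional[42]
    System.out.println(parse("no").flatMap(n -> Optional.of(2 * n))); // Optional.empty
  }
}
```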

Before we start, we need some type-theoretic definitions. We've seen the type-safe functional object:

interface Function<X, Y> {
  Y apply(X value);
}

A functor is a parametric type (think of it as a "container type") with an operation map that, given a function f that transforms objects, applies it "inside itself":

interface Functor<X> {
  <Y> Functor<Y> map(Function<X, Y> f);
}

Another detail to take into account is that functions behave strangely with respect to specialization and subtyping. A function type is more specific when its return type is more specific; but it is also more specific when its argument type is more general, since it then accepts more. This reversal is called contravariance, and our functional parameters must take it into account.
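A stand-alone sketch of this rule (repeating the Function interface for self-containment): where a function consuming Strings and producing Numbers is expected, we may supply one with a more general argument type and a more specific result type.

```java
// Function contravariance/covariance with bounded wildcards.
interface Function<X, Y> { Y apply(X x); }

public class Variance {
  // Accepts anything that can consume a String and produce a Number.
  static Number call(Function<? super String, ? extends Number> f, String s) {
    return f.apply(s);
  }

  public static void main(String[] args) {
    // More general argument type (Object), more specific result type (Integer):
    Function<Object, Integer> length = new Function<Object, Integer>() {
      public Integer apply(Object x) { return x.toString().length(); }
    };
    System.out.println(call(length, "hello")); // prints 5
  }
}
```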

Now, to our monads. First, we must define what shape they will have in Java. As a container of values of type X, a monad can be viewed as a functor. Also, any monad should, at the least, provide a bind (since unit is a kind of "static factory", it cannot be included in the interface signature):

interface Monad<X> extends Functor<X> {
  <Y> Monad<Y> bind(Function<? super X, ? extends Monad<Y>> f);
}

The type of bind is complicated by the fact that it must express the covariance-contravariance of its functional argument. A MonadPlus extends a Monad with zero with an operation plus:

interface MonadPlus<X> extends Monad<X> {
  MonadPlus<X> add(MonadPlus<X> x);
}

(again, the zero is a static constructor, and cannot be included in the interface.) Now for casting an Option into a Monadic shape (it will actually be a MonadPlus):

abstract class Option<X> implements MonadPlus<X> {

The interface obligations for MonadPlus will be kept implicit; we will use the same trick as before to defer functionality to inner classes. The first case is that of the Some type constructor. It naturally subclasses Option:

  public static class Some<X> extends Option<X> {
    private final X value;
    private Some(X value) { this.value = value; }

The Functor interface requires implementing map. Naturally, it transforms the value and wraps it in a new Some:

    public <Y> Option<Y> map(Function<X, Y> f) {
      return new Some<Y>(f.apply(this.value));
    }

The Monad interface requires implementing bind. For a Some(x) option, it suffices to apply f to its value and return the result:

    public <Y> Monad<Y> bind(Function<? super X, ? extends Monad<Y>> f) {
      return f.apply(this.value);
    }

To comply with MonadPlus interface, the plus operation must return the first argument that is not zero. Since Some is not the zero of the monad, it returns itself:

    public MonadPlus<X> add(MonadPlus<X> _) {
      return this;
    }
  }

The other constructor is None:

  public static class None<X> extends Option<X> {
    private None() { }

The map on it returns None, as there is no value to supply to f:

    public <Y> Option<Y> map(Function<X, Y> _) {
      return new None<Y>();
    }

Note: I could have returned this here, as there is conceptually just one value None. However, the type of this is parameterized by X while the result must be parameterized by Y. The alternative is to use an unchecked cast, return (None<Y>) this;. This is type-safe but uglier.

Ditto for bind:

    public <Y> Monad<Y> bind(Function<? super X, ? extends Monad<Y>> _) {
      return new None<Y>();
    }

The plus returns its argument, as the left side is this of class None, which by definition is the zero of the monad:

    public MonadPlus<X> add(MonadPlus<X> x) {
      return x;
    }
  }

As before, we seal the Option class from extension:

  private Option() { }

And let code create optional values using static constructors:

  public static <X> Option<X> unit(X x) { return new Some<X>(x); }
  public static <X> Option<X> zero(   ) { return new None<X>( ); }
}

This completes a type-safe implementation of the Option monad (equivalent to Haskell's Maybe monad) in Java. We can test its usage:

public class OptionTest {
  public static void main(String[] args) {
    Function<Integer, Option<Integer>> f = new Function<Integer, Option<Integer>>() {
      public Option<Integer> apply(Integer x) {
        System.out.println("n = " + x);
        return Option.zero();
      }
    };
    Function<Integer, Option<Integer>> g = new Function<Integer, Option<Integer>>() {
      public Option<Integer> apply(Integer x) {
        System.out.println("n = " + x);
        return Option.unit(x + 10);
      }
    };
    final Option<Integer> zero = Option.zero();

Note that we have to help Java's type checker by assigning Option.zero() to a variable of the right type.


Algebraic Data Types and Java Generics (I)

See also Part II.

Andrew Kennedy reminds us that, yes, C# is a functional programming language. So is Java; or at least, Java can encode some of the most advanced idioms that are native to expressive functional languages like Haskell or OCaml. These examples should be straightforwardly translatable to C#.

An algebraic data type or ADT is the type of a choice of values, each of which is a collection, or tuple, of other ADT values. In other words, an ADT is a (possibly recursive) "sum of products of types" type. Each alternative is introduced by a constructor that permits determining to which alternative a value corresponds, and matching it as a pattern to gain access to the constructor's arguments. Important special cases of ADTs are the list:

type 'a list = Nil | Cons of 'a * 'a list

where Nil —the empty list— is usually called [] and Cons (h, t) —the list with head h and tail t— is denoted by h :: t; and the sum type:

type ('a,'b) sum = Left of 'a | Right of 'b

A special type of sum is the one parameterized on the left by unit, the one-point (uni-valued) type. This special case is called the option type:

type 'a option = None | Some of 'a

This is a good representation for values that "might not be there". Options are used against a destructor, in this case maybe: a higher-order function that discriminates and either applies a function to Some x, or returns a default value for None:

let maybe f e v = match v with
| Some x -> f x
| None   -> e

There is an object-oriented encoding of ADTs that is especially suited for Java: the type is an abstract class declaring visitor methods, and each (algebraic, not object-oriented) constructor or variant is an inner class extending the outer abstract base. With some care with the protection level for the class constructors, this can be tightly sealed so that no extra cases can be added in an external program. I'll illustrate with an encoding for 'a option.

First, a preliminary definition. Java (unlike C#) doesn't have a notion of first-class functions, that is, methods cannot be passed around as they are. The only constructible values are objects; hence we must "simulate" first-class functions with functional classes:

interface Function<X, Y> {
  Y apply(X x);
}

We begin by declaring an Option class, parameterized by the type X of the value it encapsulates:

abstract class Option<X> {

The functionality that this class declares, and that its subclasses must implement, is the maybe destructor:

  public abstract <Y> Y maybe(Function<X, Y> f, Y e);

The type of the argument is fixed by the class parameter, but the type of the result depends on the particular Function f being used. To enforce encapsulation, this class will not be open to subclassing:

  private Option() { }

Now each of Option's subclasses will encode a different variant. First, the inner class corresponding to Some:

  static final class Some<X> extends Option<X> {
    public final X value;

    private Some(X value) {
      this.value = value;
    }

Only the enclosing class will be able to construct objects of this type. The destructor for a Some object applies the function to the value it represents, and returns the result:

    public <Y> Y maybe(Function<X, Y> f, Y e) { return f.apply(this.value); }
  }

Now the class corresponding to the None variant. Again, it will be closed to external instantiation:

  static final class None<X> extends Option<X> {
    private None() { }

Destructing a None value entails simply returning the default value:

    public <Y> Y maybe(Function<X, Y> f, Y e) { return e; }
  }

Again, only the abstract Option class will be able to construct instances of its subclasses:

  public static <X> Option<X> some(X x) { return new Some<X>(x); }
  public static <X> Option<X> none()    { return new None<X>( ); }
}

This is not as general as a parametric visitor over the composite type, but is much simpler. With sufficiently rich functional objects, both approaches are interexpressible. We can test that everything works:

public class OptionTest {
  public static void main(String[] args) {
    Function<Integer, String> string_of_int = new Function<Integer, String>() {
      public String apply(Integer value) { return value.toString(); }
    };
    Option<Integer> p = Option.some(10);
    Option<Integer> q = Option.none();
    System.out.println("p = " + p.maybe(string_of_int, "no value"));
    System.out.println("q = " + q.maybe(string_of_int, "no value"));
  }
}

We'll see next how Option naturally gives rise to a monad, and how to implement a monad in Java.


Fibonacci's Rabbits

EWD797, "Fibonacci numbers and Leonardo numbers", begins with a disclaimer: "The following formal derivations and computations are absolutely elementary and without scientific interest." It uses, however, a simple method to solve the generating function (used again in EWD801) which is less general than the one explained in Concrete Mathematics or in generatingfunctionology (highly recommended, by the way) but very easy to use.

It then goes on to mention Leonardo numbers:

L.0 = L.1 = 1
L.n = L.(n-1) + L.(n-2) + 1

and explains that, since the recurrence is not homogeneous, said method cannot be applied.

It then proceeds to pull a rabbit out of the hat, by rewriting:

(0)   (L.n + 1) = (L.(n-1) + 1) + (L.(n-2) + 1)

This trick is quite nifty and not too subtle; what follows, however, is rather flashy: "and we immediately derive":

L.n = 2·F.n - 1

Maybe I'm dense and the immediacy of the derivation is staring me in the face; I cannot justify the identity, other than by saying that yes, by induction it is true. So here is what I think.

First, I can tell that (0) is implied by

(1)   L.n = F.n - 1

This, however, fails for L.0 and L.1. What, then, is the simplest adjustment I can perform on (1) that satisfies the base? Well, since F.n = F.(n - 1) + F.(n - 2), I can multiply through by a constant, call it k:

k·F.n = k·F.(n - 1) + k·F.(n - 2)

and equate with (0) to get L.n = k·F.n - 1. By solving the base case, we have that L.0 = k·F.0 - 1 forces k = 2.

This I don't find immediate.
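For what it's worth, the identity is easy to check numerically (a sanity check, of course, not a proof):

```java
// Numeric check of L.n = 2·F.n − 1, with the conventions used above:
// L.0 = L.1 = 1, L.n = L.(n−1) + L.(n−2) + 1, and F.0 = F.1 = 1.
public class LeonardoCheck {
  static long leonardo(int n)  { return n < 2 ? 1 : leonardo(n - 1) + leonardo(n - 2) + 1; }
  static long fibonacci(int n) { return n < 2 ? 1 : fibonacci(n - 1) + fibonacci(n - 2); }

  public static void main(String[] args) {
    for (int n = 0; n < 25; n++)
      if (leonardo(n) != 2 * fibonacci(n) - 1)
        throw new AssertionError("identity fails at n = " + n);
    System.out.println("L.n = 2·F.n − 1 holds for n < 25");
  }
}
```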

Under the spell of Dijkstra's dream

I find Edsger W. Dijkstra's talk "How Computing Science created a new mathematical style", recorded in the manuscript EWD1073, one of his most beautiful. Not only is it a clear exposition of his calculational approach to mathematics; to me it is especially poignant, since it carries with it the weight of thirty years of experience with the discipline. As such it is not so much a promotional piece as a survey of results.

Many things in this talk struck me as worthy of reflection; what stood out were the two examples he gave of how computing science informed (or ought to inform) mathematical discipline:

In order to prove that a compiler correctly parses the program texts it processes, one needs a formal grammar for the syntax of the programming language in question. To formal language theory we owe the distinction between grammars that are "context-free" and those that are not. Next we observe that the manipulating mathematician does not manipulate just strings of symbols but manipulates formulae parsed according to the underlying grammar. Since the parsing problem is simpler for context-free grammars, the moral is simple: choose for one's formalisms context-free grammars.

The other example is the notion of abstract data types —stacks, trees, permutations, bags, configurations, what have you—. Identifying an appropriate data type and some operations on it is often essential in preventing a program from becoming totally unwieldy. In exactly the same way calculational proofs of the desired brevity and clarity can often be obtained by the use of a specifically designed, special-purpose calculus. I said "exactly the same way" because I see no difference between the virtues of a data type in a program and the virtues of a calculus in a proof.

Much is written nowadays about how DSLs enhance the quality of code in complex systems; here is a forceful argument about DSLs enhancing mathematical reasoning.

As an example of this, and because I was absolutely in awe at the simplicity and transparency of the proof, which I think deserves to be repeated, here is Dijkstra's proof of the Induction Principle:

Theorem: (C, <) is well-founded if and only if mathematical induction over C with respect to < is a valid proof technique.

  (C, <) is well-founded
≡ { definition of well-foundedness }
  (∀ S :: (∃ x :: x ∈ S) ⇒ (∃ x :: x ∈ S ∧ (∀ y : y < x : y ∉ S)))
≡ { contrapositive and Laws of de Morgan }
  (∀ S :: (∀ x :: x ∉ S) ⇐ (∀ x :: x ∉ S ∨ (∃ y : y < x : y ∈ S)))
≡ { transforming the dummy: S = { x | ¬P.x }, i.e. P.x ≡ x ∉ S }
  (∀ P :: (∀ x :: P.x) ⇐ (∀ x :: P.x ∨ (∃ y : y < x : ¬ P.y)))
≡ { predicate calculus }
  (∀ P :: (∀ x :: (∀ y : y < x : P.y) ⇒ P.x) ⇒ (∀ x :: P.x))
≡ { definition of proof via mathematical induction }
  mathematical induction over (C, <) is valid