Reflections on Stacking Stacks
13 Mar 2018

I’ve been on both ends of several variations of the following conversation:
FRIEND:
Check out this algorithm. It checks that a string of parentheses, brackets, and braces is balanced properly. It goes like this:
You iterate through the string while keeping a stack. When you see an opening paren/bracket/brace, you push it onto the stack. When you see a closing one, if it matches the opener on top of the stack, you pop the stack; if it doesn’t, or if the stack is empty, you fail. You also fail if the stack isn’t empty at the end of the algorithm.
ME:
Oh neat! So something like this:
const pairs = {
  '(': ')',
  '{': '}',
  '[': ']',
};

function isOpener(c) {
  return pairs.hasOwnProperty(c);
}

function isBalanced(s) {
  let stack = [];
  for (let i = 0; i < s.length; i++) {
    if (isOpener(s[i])) {
      stack.push(s[i]);
    } else if (stack.length === 0 || pairs[stack.pop()] !== s[i]) {
      return false;
    }
  }
  return stack.length === 0;
}
FRIEND:
Yeah! I implemented it like this, though:
function isBalanced(s) {
  let characters = s.split('');
  // Leftover characters mean there was an unmatched closer,
  // so the string isn't balanced.
  return consume(characters) && characters.length === 0;
}

const pairs = {
  '(': ')',
  '{': '}',
  '[': ']',
};

function isOpener(c) {
  return pairs.hasOwnProperty(c);
}

// consume removes a sequence of complete parenthesized expressions
// from the front of `s`, returning true if it was successful and false
// otherwise.
function consume(s) {
  while (isOpener(s[0])) {
    // (shift() removes the first element of the array, returning it.)
    let opener = s.shift();
    if (!consume(s)) {
      return false;
    }
    if (s.shift() !== pairs[opener]) {
      return false;
    }
  }
  return true;
}
ME:
*squints* This doesn’t look the same at all to me. You said we have a stack, but there’s no mention of pushing or popping in your code.
FRIEND:
Sure it is: when we call consume recursively, we’re pushing onto the stack. When we return, we’re popping off the stack.
ME:
*eyes bulge in awe*
I don’t necessarily think the friend’s solution is better, by the way. It’s a way of expressing the algorithm that would not have occurred to me at the time.
There’s been a trick pulled here: though the algorithm, as described in English, calls attention to an explicit stack, the friend has replaced it with the call-stack, removing any mention of it from their code. The explicit concept of a “stack” has been almost completely elided by leaning on JavaScript’s built-in stack. One well-known situation in which this style of programming unassumingly rears its head is parsers; written in this style, they’re called recursive descent parsers.
I want to emphasize that this isn’t just recursion in the purest sense. It’s mapping a stack from the description of an algorithm onto the call-stack of a programming language. When introduced in this way, the presence of the stack is much quieter and less explicit.
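For what it’s worth, the two versions really do compute the same function; the stack just lives in a different place. A few quick checks (the test strings here are my own, and either implementation above should agree):

// Either isBalanced above gives these answers on these (my own) inputs:
console.log(isBalanced('([]{})'));  // true
console.log(isBalanced('([)]'));    // false
console.log(isBalanced('(('));      // false
console.log(isBalanced(')'));       // false
console.log(isBalanced(''));        // true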
This phenomenon isn’t unique to stacks and languages which provide them as control flow.
In the language Prolog, depth-first search is the fundamental way control moves through a program. Because of this, a depth-first search written in Prolog sometimes doesn’t appear to do any searching at all: the algorithm gets pushed into the language, and that particular aspect of it disappears from the program (this is also why a self-interpreter in Prolog can be even simpler than one in Lisp). In Haskell, because evaluation is lazy, the idea of delaying a computation until if or when it’s needed simply never appears in the implementation of some algorithms. And in the implementation of a garbage-collected programming language, if the implementation language itself has garbage collection, the programmer need not write a collector at all; they can piggyback on the one provided by the host. Whatever part of the algorithm is translated in this way just falls through to the runtime and is elided from the program (which can sometimes make it look like there’s a piece missing).
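To make the garbage collection case concrete, here’s a small sketch of my own (not taken from any particular implementation): a toy cons-cell “heap” for an imagined little language, represented with plain JavaScript objects. There’s no free list and no collector anywhere in it, because that part of the job falls through to the host’s garbage collector.

// A hypothetical sketch: cons cells for a toy language, represented as plain
// JavaScript objects. Nothing here ever frees memory; once we drop the last
// reference to a cell, the host's GC reclaims it.
function cons(head, tail) {
  return { head, tail };  // "allocation" is just object creation
}

function listFromArray(xs) {
  let list = null;
  for (let i = xs.length - 1; i >= 0; i--) {
    list = cons(xs[i], list);
  }
  return list;
}

function sum(list) {
  let total = 0;
  for (let cell = list; cell !== null; cell = cell.tail) {
    total += cell.head;
  }
  return total;
}

// Build and discard many lists; there is no corresponding "collect" step to write.
for (let i = 0; i < 1000; i++) {
  sum(listFromArray([1, 2, 3, i]));
}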
It’s also worth noting that sometimes we can flip the script on this whole idea by taking language features and reifying them as concrete data structures in our language. Continuations are a perfect example of this, as are generators in JavaScript and Python. Both continuations and generators encode the flow of execution in a program as a concrete value which can be manipulated.
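As a small example of the generator side of this (the sketch is mine): the state of a walk over a bracket string can be reified as a generator object, a plain value we can pause, resume, store, or hand around.

// My own sketch: a generator that walks a string of brackets and yields the
// nesting depth after each character. The suspended walk lives in `walk`,
// an ordinary value we can step whenever we like.
function* depths(s) {
  let depth = 0;
  for (const c of s) {
    depth += isOpener(c) ? 1 : -1;  // uses isOpener from the code above
    yield { c, depth };
  }
}

const walk = depths('([])');
console.log(walk.next().value);  // { c: '(', depth: 1 }
console.log(walk.next().value);  // { c: '[', depth: 2 }
// We could stop here, stash `walk` somewhere, and resume the walk later.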
While the case of implementing a programming language in terms of an existing language’s features is particularly well studied, I’m not aware of a more general term for this kind of “passing through” of functionality. In my experience, it’s a nice way to make programs look and feel as though they just happen to fall together and work, as if by chance. But whether that’s a desirable property or not is left up to the reader.
Thanks to everyone who read this post and provided feedback: Raphael, Arjun, Jordan, Swati, Theo, and Ilia.