April 29, 2012

A Go-lang Lisp

Back in 2010, Nurullah Akkaya published his implementation of John McCarthy’s “Micro-manual for Lisp” paper. I thought it might be interesting to port this implementation into Go as a learning exercise. This is my first non-hello-world Go program, so inevitably there’ll be a fair bit of non-idiomatic code here. Bear with me; I’ll update it at some point in the future when I’ve got more Go under my belt.

I won’t go into too much detail; the full source is here.


The interesting part of this for me was replacing the idiomatic but, to me, ungainly enum-tagged structs and object * references with Go's interfaces, taking advantage of Go's more powerful type system.

Here’s the common interface for all the moving parts of my lisp:

type Object interface {
	String() string
}

The contents of the interface aren't really important - I'm mostly using it to convince the type system that I know what I'm doing. I'm defining the String() function to make print() work, mainly so that I can just drop my Objects into fmt.Print* and have it Just Work. I did consider adding a function to print to a stream, but I don't need that yet.
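To make that concrete, here's a minimal, self-contained sketch of how an atom might satisfy the interface (the types are repeated here so the snippet runs on its own; the real source may differ in detail):

```go
package main

import "fmt"

// Object is the common interface for every value in the interpreter.
type Object interface {
	String() string
}

// AtomObject is a symbol; its printed form is just its name.
type AtomObject struct {
	name string
}

func (a *AtomObject) String() string { return a.name }

func main() {
	var o Object = &AtomObject{name: "foo"}
	// fmt.Println sees that o satisfies fmt.Stringer and calls String().
	fmt.Println(o) // foo
}
```

Because Object's String() method matches fmt.Stringer, every Object gets sensible output from the fmt package for free.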

The four basic object types - atoms, cons cells, functions and lambdas - are implemented in the obvious way:

type AtomObject struct {
	name string
}

type ConsObject struct {
	car, cdr Object
}

type FuncObject struct {
	fn ObjFunc
}

type LambdaObject struct {
	args, sexp Object
}

ObjFunc, needed by FuncObject, will be our basic “callable” function type, taking two Objects and returning an Object:

type ObjFunc (func (Object, Object) Object)
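As an illustration, a builtin could be written as a plain Go closure with the ObjFunc signature. This is a hedged sketch, not the interpreter's actual cons builtin - the name consFn, the dotted-pair printing, and the argument-list layout are my assumptions here (and in the real interpreter the arguments would be evaluated first):

```go
package main

import "fmt"

type Object interface {
	String() string
}

type AtomObject struct {
	name string
}

func (a *AtomObject) String() string { return a.name }

type ConsObject struct {
	car, cdr Object
}

// String prints a cons cell in dotted-pair notation (my choice,
// purely for this sketch).
func (c *ConsObject) String() string {
	return "(" + c.car.String() + " . " + c.cdr.String() + ")"
}

// ObjFunc is the shared signature for builtins: (args, env) -> result.
type ObjFunc func(Object, Object) Object

// consFn is a hypothetical builtin: it takes a two-element argument
// list and pairs up the first two cars.
var consFn ObjFunc = func(args, env Object) Object {
	pair := args.(*ConsObject) // assume args is a proper two-element list
	return &ConsObject{car: pair.car, cdr: pair.cdr.(*ConsObject).car}
}

func main() {
	args := &ConsObject{
		car: &AtomObject{name: "a"},
		cdr: &ConsObject{car: &AtomObject{name: "b"}, cdr: &AtomObject{name: "nil"}},
	}
	fmt.Println(consFn(args, nil)) // (a . b)
}
```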

LambdaObjects and FuncObjects both also implement the Evaler interface, which simplifies the implementation of eval() slightly:

type Evaler interface {
	Eval( sexp, env Object ) Object
}


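The payoff is that eval() can dispatch with a single type assertion instead of a C-style switch on a tag field. Here's a runnable sketch of that idea - the apply helper and its error handling are my invention for illustration, not the interpreter's actual code:

```go
package main

import "fmt"

type Object interface {
	String() string
}

type AtomObject struct {
	name string
}

func (a *AtomObject) String() string { return a.name }

// Evaler lets eval() treat anything callable uniformly.
type Evaler interface {
	Eval(sexp, env Object) Object
}

type ObjFunc func(Object, Object) Object

type FuncObject struct {
	fn ObjFunc
}

func (f *FuncObject) String() string { return "#<builtin>" }

// Eval on a FuncObject just calls the wrapped Go function.
func (f *FuncObject) Eval(sexp, env Object) Object { return f.fn(sexp, env) }

// apply is a hypothetical helper: one type assertion replaces an
// enum-tagged switch. LambdaObject would get its own Eval method
// and dispatch through the same assertion.
func apply(fn, sexp, env Object) Object {
	if e, ok := fn.(Evaler); ok {
		return e.Eval(sexp, env)
	}
	return &AtomObject{name: "error: not callable"}
}

func main() {
	// A quote-like builtin that returns its argument unevaluated.
	quote := &FuncObject{fn: func(sexp, env Object) Object { return sexp }}
	fmt.Println(apply(quote, &AtomObject{name: "hello"}, nil)) // hello
}
```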
One other major difference between Go and C is in string handling. Go’s creators have obviously learnt a lot about how to fix C’s shortcomings in that area. You can see this at work in my next_token() function:

func next_token( in io.RuneScanner ) Object {
	ch,_,err := in.ReadRune()

	// skip leading whitespace
	for unicode.IsSpace( ch ) && err == nil {
		ch, _, err = in.ReadRune()
	}

	if err == io.EOF {
		return nil
	}

	chstr := string( ch )

	if chstr == ")" {
		return atom(")")
	} else if chstr == "(" {
		return atom("(")
	}

	buffer := make( []rune, 128 )
	var i int

	for i = 0; !unicode.IsSpace(ch) && chstr != ")"; i++ {
		buffer[i] = ch
		ch,_,_ = in.ReadRune()
		chstr = string( ch )
	}

	// push the ")" back so the next call sees it
	if chstr == ")" {
		in.UnreadRune()
	}

	// The slice here is essential: Go strings aren't null
	// terminated, so the full 128-rune length would be returned
	// if we didn't slice it.  This breaks string comparison.
	return atom( string( buffer[0:i] ) )
}

Everything here is done in terms of the rune type, which is completely distinct from []byte, has its own reader and writer functions, and is generally geared up for Unicode input in a way that would have been more cumbersome - and, crucially, easier to get wrong - in the original. Also note the gotcha in the comment at the end: if you don't manage your string lengths explicitly, string comparisons break. I can understand why this was done, but it caught me completely off-guard, and I spent a good couple of hours trying to find a mistake I'd made elsewhere.
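The gotcha is easy to demonstrate in isolation. A Go string built from a rune slice keeps every rune, including the zero-valued ones left over in an oversized buffer - U+0000 is a perfectly legal rune in a Go string, so nothing gets truncated:

```go
package main

import "fmt"

func main() {
	buffer := make([]rune, 128)
	copy(buffer, []rune("car"))

	// string(buffer) keeps all 128 runes; the 125 unused entries
	// are U+0000, which Go happily includes in the string.
	fmt.Println(string(buffer) == "car")      // false
	fmt.Println(string(buffer[0:3]) == "car") // true
	fmt.Println(len(string(buffer)))          // 128: the NULs all count
}
```

Unlike C's strcmp, which stops at the first NUL, Go compares the full length of both strings, so the padded and sliced versions are genuinely different values.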


So, was this a waste of time? Obviously not. I think this was an excellent introduction to Go, and I'll certainly be keeping an eye on the ecosystem to see how it evolves. I can see Go becoming a tool in my daily toolbelt. My experience with the toolchain in getting this to work has been nothing but pleasant.

I’m going to keep working on this interpreter as well. There are many, many directions it could go in, but only so many hours in the day. Most importantly, if you’re reading this and you can see anything I can do to improve this code, have at it. Post a gist, and let me know.

Edited to add

I didn’t put a license on the code when I originally posted it. Oops. I’ve added one now; it’s MIT’d.