MPOV

my point of view (programming stuff mostly)

I Built a Lisp Compiler

I’m very proud to announce the completion of my first programming language compiler!

Malcc is an incremental and ahead-of-time lisp compiler written in C.

This is the story of my progress over the years and what I learned in the process. An alternate title for this post is: “How to Write a Compiler in Ten Years or Less”

(There’s a TL;DR at the bottom if you don’t care about the backstory.)

Successful Failures

I have dreamed of writing a compiler for nearly a decade. I’ve always been fascinated by how programming languages work, especially compilers. Still, I imagined a compiler as dark magic, with the knowledge of how to make one from scratch firmly out of reach for a mere mortal such as myself.

But that didn’t stop me from doing and learning!

First, an Interpreter

In 2011, I started work on a simple interpreter for a made-up language called “Airball.” You can tell from the name how much confidence I had in myself to make it work. It was a fairly simple program written in Ruby that parsed the code and walked the abstract syntax tree (AST). Once I realized it did kind of work, I renamed it Lydia and rewrote it in C to make it faster.

Lydia programming language syntax

I remember thinking the syntax for Lydia was quite clever! I do still enjoy the simplicity of it.

While Lydia was far from the compiler I wanted to make, it was a small taste that inspired me to keep going. Still, I was plagued by unanswered questions about how to make a compiler work: What do I compile to? Do I have to learn assembly language?

Second, a Bytecode Compiler and Interpreter

As a next step, in 2014, I started work on my scheme-vm – a virtual machine for Scheme written in Ruby. I thought using a VM with its own stack and bytecode would be a nice middle-ground between an AST-walking interpreter and a full compiler. And since Scheme is formally specified, I wouldn’t have to invent anything.

I tinkered off-and-on with my scheme-vm for over three years and learned a lot about how to think about compiling. But, in the end, I knew I couldn’t finish it. The code was becoming an unmaintainable mess and I still had a long way to go to completion. Without a guide or previous experience, I was mostly feeling my way around in the dark. It turns out that a language specification is not the same as a guide. Lesson learned!

By the end of 2017, I had shelved scheme-vm in search of something better.

Enter Mal

final step diagram from the Mal guide, courtesy Joel Martin

Some time in 2018 I happened across Mal, the Clojure-inspired Lisp interpreter.

Mal was invented by Joel Martin as a learning tool and has since gathered over 75 implementations in different host languages! I knew when I saw all those different implementations that I could learn a lot about the process – if I got stuck, I could go consult the Ruby or Python implementation for cheats. Finally, someone who speaks my language!

I also figured that if I could get through the steps writing an interpreter for Mal, I could probably repeat those same steps to make a compiler for Mal.

A Mal Interpreter in Rust

My first go was to follow the step-by-step guide and build an interpreter. At the time, I was also heavy into learning Rust (I’ll save that for another blog post), so I created my own implementation of Mal in Rust: mal-rust.

I wrote a bit about my time using Rust here.

This was an absolute joy! I cannot give enough praise or thanks to Joel for creating the excellent Mal guide. It has detailed written steps, flowcharts, pseudocode, and tests – everything a developer would need to make a programming language from start to finish.

By the end of the Mal guide, I was running the Mal implementation of Mal (written in Mal) on top of my Rust-hosted implementation of Mal. (2 levels deep, whew) I jumped on my chair in excitement when this worked the first time!

A Mal Compiler in C

Once I proved mal-rust to be a viable Mal implementation, I started researching how I could write a compiler. Do I compile to assembly? Do I dare compile directly to machine code?

I saw an x86 assembler written in Ruby which intrigued me, but the thought of working with assembly gave me pause.

At some point I happened across this comment on Hacker News, which mentioned using the Tiny C Compiler as a “compilation backend.” This seemed like a great idea!

TinyCC has a test file showing how to use libtcc to compile a string of C code from a C program. This gave the start I needed to build a “hello world” proof of concept.

Starting again with the Mal step-by-step guide, along with my stale C experience, I was able to build a Mal compiler in a couple months’ worth of spare evenings and weekends. The process was a joy.

mal test suite

If you’re used to test-driven development, you’ll recognize how valuable having a pre-done test suite is. The tests will guide you toward a working implementation.

I can’t say much about this process, other than, again, the Mal guide is a treasure. At each step, I knew exactly what I needed to do!

Tricky Bits

Thinking back, here are some tricky bits specific to writing a Mal compiler that I had to figure out:

  1. Macros must be compiled on-the-fly during compilation and ready to be executed during compilation of the program. This is a little mind-bendy.

  2. The “environment” (the tree of hashes/associative-arrays/dictionaries that holds variables and their values) needs to be present for both the compiler code and the resulting code for the compiled program. This is so that macros can be defined at compile time.

  3. Since the environment is available at compile time, I originally had Malcc catch undefined-variable errors (access of a variable that was not defined) at compile time, but this broke the Mal test suite’s expectations in a couple places. In the end, I disabled that feature so I could get the test suite passing. It would be cool to add it back as an optional compiler flag though, since it could catch a great deal of errors ahead of time.

  4. I compiled the C code by writing to three strings passed around in a struct:

    • top: top level code – functions are written here
    • decl: declarations – declaring and initializing variables used in the body
    • body: where the main work is done
  5. I spent a day thinking about writing my own garbage collector, but decided it could be an exercise for further learning at a later date. The Boehm-Demers-Weiser Garbage Collector is an easy drop-in library and is readily available on lots of platforms.

  6. It’s critical to be able to easily see the code your compiler is writing. Anytime my compiler saw the DEBUG environment variable, it would spit out the compiled C code so I could review the mistakes.
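
Item 4 above can be sketched in Ruby (Malcc itself does this in C, and all names here are hypothetical, but the shape is the same):

```ruby
# Generated C code accumulates in three buckets, then is stitched together.
Fragment = Struct.new(:top, :decl, :body)

def compile_toy_program
  f = Fragment.new(+"", +"", +"")
  f.top  << "int square(int x) { return x * x; }\n"  # functions go up top
  f.decl << "int result;\n"                          # declarations
  f.body << "result = square(7);\n"                  # the main work
  f.top + "int main() {\n" + f.decl + f.body + "return 0;\n}\n"
end
```

Each compiled form appends to whichever bucket it needs, and the final concatenation guarantees functions and declarations land before the code that uses them.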

Things I’d Do Differently

  1. Writing C code and trying to keep it indented was a bit of a pain, and I wish I had done something else. I believe some compilers write ugly code and then “pretty it up” with a library before writing it out. This is something to explore!

  2. Appending to strings when generating code is a bit messy. I might consider building an AST and then converting that into the final string of C code. This should tidy up the code and give the compiler a nice bit of symmetry too.

Now the Advice

I love that it’s taken me nearly a decade to learn how to make a compiler. No, really. Each step along the way is a fond memory in my process of becoming a better programmer.

That’s not to say I’m “done” though. There are still many hundreds of techniques and tools I need to learn to feel like a real compiler writer. But I can confidently say “I did it.”

Here is the process I’d recommend to make your own Lisp compiler:

  1. Pick a language you feel comfortable in. You don’t want to be learning both a new language and how to make a language at the same time.
  2. Follow the Mal guide and write an interpreter.
  3. Rejoice!
  4. Follow the guide again, but instead of executing the code, write code that executes the code. (Don’t just “refactor” your existing interpreter though. Start from scratch. Copy and paste is fine.)
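
The shift in step 4 is the whole trick. For a toy prefix-arithmetic language (not Mal, just an illustration), it looks like this in Ruby:

```ruby
# An expression is an Integer or [op, left, right].
def interpret(node)
  return node if node.is_a?(Integer)
  op, l, r = node
  interpret(l).public_send(op, interpret(r))
end

# The compiler walks the same tree, but emits code instead of computing values.
def compile(node)
  return node.to_s if node.is_a?(Integer)
  op, l, r = node
  "(#{compile(l)} #{op} #{compile(r)})"
end
```

`interpret([:+, 1, [:*, 2, 3]])` returns 7, while `compile` on the same tree returns the string `"(1 + (2 * 3))"` – code which, when run, produces the same 7.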

I believe this technique can be used with just about any programming language that compiles to an executable. For example, one could:

  1. Write a Mal interpreter in Go.
  2. Modify your Go code to:
    1. produce a string of Go code and write it to a file;
    2. compile that resultant file with go build (by shelling out).

Ideally, there’d be a way to control the Go compiler as a library rather than shelling out, but regardless, this is one way to make a compiler!
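
A Ruby-hosted sketch of the same shell-out trick (emitting Ruby source instead of Go, since that keeps the example self-contained):

```ruby
require "tempfile"

# Step 1: "compile" the program by emitting a string of host-language source.
generated = "puts (1..5).sum\n"

# Step 2: write it to a file and shell out to run it. (With Go this would
# be `go build` on the file, then running the resulting binary.)
output = nil
Tempfile.create(["generated", ".rb"]) do |f|
  f.write(generated)
  f.flush
  output = `ruby #{f.path}`
end
```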

With the Mal guide and your ingenuity, you can do it. If I can do it, so can you!

Thanks

Many thanks to Joel Martin for creating Mal and giving it to the world!

Clipboard History in Sway Window Manager

Recently I switched to the Sway window manager on my favorite laptop and realized that ClipIt does not work there. I was reminded of an old Ruby script I wrote way back in 2012 to serve this purpose.

Time to dust that thing off and make it work with Wayland! I installed the excellent wl-clipboard by Sergey Bugaev and started hacking.

Here is my script:

Here’s how to use it:

  1. Save the script above to a file at /usr/local/bin/clipd and make it executable.

  2. Install dmenu if you don’t already have it.

  3. Install wl-clipboard from source. At the time of this writing, the package in the Ubuntu repository is fairly old and causes some screen glitching. The latest version on GitHub fixed that for me.

  4. Add the following config to your Sway config at ~/.config/sway/config:

    bindsym $mod+v exec clipd menu
    exec --no-startup-id clipd
    
  5. Restart Sway.

Now, when you copy text, clipd will see the new text (within 5 seconds) and add it to your ~/.clipboard-history file. It keeps the last 100 entries there.
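
The bookkeeping clipd does can be sketched like this (the clipboard read is stubbed out here; the real script gets the text from `wl-paste`):

```ruby
HISTORY_LIMIT = 100

# Add one clipboard snapshot to the history: skip empty or unchanged text,
# move a re-copied entry to the end, and keep only the newest 100 entries.
def record(text, history)
  return history if text.empty? || history.last == text
  (history - [text] + [text]).last(HISTORY_LIMIT)
end

# The real daemon loops forever, calling roughly record(`wl-paste`, history)
# every 5 seconds and writing the result back to ~/.clipboard-history.
```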

When you press Mod + v, a menu will show the entries and allow you to make a selection. The item you select will be put back on the clipboard so you can paste it.

Enjoy!

Church.IO

Recently I made the decision to shut down the church.io website (it’s now a redirect), delete the corresponding Slack community, and move the GitHub repositories back to my own personal account. Church.IO failed to build the community of software creators I dreamed of.

Mistakes were made. This is a retrospective, and an explanation for anyone wondering where we went.

What Was it?

“Church.IO” was created in 2011, intended to be a lovely little community of developers and designers who code and craft open source software specifically for churches. The seed project was my personal labor of love, OneBody, which I’ve been hacking on for over a decade. There were a few of my other small projects like bible_api, too.

There has been off-and-on enthusiasm for Church.IO over the years. A few other projects joined a few years in (I can’t recall exactly when): apostello and Cedar.

At the very end, however, Church.IO wasn’t much more than a website and a very quiet set of Slack channels.

Slack Shifted the Audience

We initially used IRC for communicating. Our IRC group was talkative and supportive. We hashed out ideas and built some really cool features in OneBody there.

We moved to Slack when that became popular. The allure of animated gifs and emojis got the best of me, perhaps.

I thought, at the time of the move, that doing so would encourage new, possibly less experienced developers to join the community and contribute to the software. What it did, however, was alienate and leave behind some of the more serious and experienced hackers. It was less of a move and more of a see-ya.

I don’t think I noticed this right away. Looking back, I realized that of the people who were talkative and full-of-ideas in IRC, a few joined Slack, said maybe a few things, then rarely came back.

Of the new developers that joined Slack, very few contributed the way those original IRC hackers did. It was just never the same after the move.

What we lost in serious programmers, we gained in people saying they wanted to contribute, and that they had ideas for more features. As you can guess, ideas for more features are always in heavy supply–it’s the execution that a thriving open source project needs.

Now, don’t get me wrong: I’m grateful to anyone who helped make the software better, whether they spent man-years of their life on it, or only minutes in passing (and anything in between). And the users of the software have better software because of those contributors.

But, in the end, our move to Slack helped shift the audience from the developers of the software to the users.

Community Takes Leadership

I tend to work on OneBody in sprints–not as a marathon. I will work hard for several months, add new features, fix bugs, make lots of plans, then eventually get tired of it all and take a six-month or year-long break. I’ve been building OneBody this way for over 10 years. It works for me.

And while it works fine for a one-man show, it’s not ideal for leading a community of others. I wasn’t nearly consistent enough to inspire collaboration and craftsmanship.

There were many, many weeks where I didn’t feel like thinking about my open source projects–not to mention talking to someone else about them.

I’m not beating myself up about it–just pointing it out. Community takes a steady leader with vision, which I am not.

The Designers Never Came

This is a small note, but I wanted to mention it. I thought that by saying “Church.IO is a community of developers and designers building open source church software,” we would somehow entice the rare creature known as a “designer” to join us.

It didn’t work. I don’t have any good insight into that.

What Went Right

While I am sad to close this chapter in my life, I am happy about some of the things we did that actually worked. I’m talking mostly about OneBody below, because that was and still is the main project in my life. In addition to the standard open source bug fixes and feature work, we had some really cool achievements:

Marketing: Having a decent website with a memorable name and domain does amazing things on the marketing side. I was amazed at how quickly people started learning about our software once it was under a “brand” name of sorts–even though there was no commercial company backing the software.

Easier Installation: We put a lot of work into making OneBody easier to install. We provided a Debian package, an Amazon AMI, a VirtualBox image, and even a one-click installer for Digital Ocean (all of these are still available today). This put the software in the hands of many more people than otherwise.

Better Documentation and UI: Several individuals helped write documentation and improve our wiki pages. This could not have happened without the community. More people hammering on the UI of OneBody forced us to improve strange and difficult-to-use portions, too.

Translation: OneBody is translated into several languages thanks to bilingual volunteers–that amazes me. Churches all over the world are using OneBody, in their native language. Still blows my mind.

We Never Took Funding: Some (many?) people will say this is a mistake. But from the beginning, I wanted Church.IO to remain independent and free from investment interests. If Church.IO was to stand, it would stand because of volunteers and churches giving back their staff time in the form of code contributions–not because of someone’s or some company’s deep pockets.

Conclusion

Church.IO was an amazing part of my life, and if I could go back in time, I would still do it again. Though, I would change some things perhaps–like not moving to Slack–and I’d find a charismatic co-leader to help with consistent leadership. But hindsight is, well, you know…


Where Did the Projects Go?

The open source projects are still alive, some more alive than others, on GitHub. I moved the projects I started back to my personal GitHub account. Dean Montgomery maintains his apostello project there. And Isaac Smith maintains Cedar there too.

Leaving Twitter

Late in 2017, I politely said good-bye to Twitter and deleted my account. My Twitter account was 10 years old, and the anniversary, as anniversaries often do, prompted me to think about the value of a decade spent microblogging.

I remember when Twitter was a quiet site for geeks, and my first tweets were about HTTP servers and Git and SliceHost (remember them?). And my geek friends replied sometimes. And there were no politics.

I remember thinking, man, if more of my friends were on here, then this could be really cool.

Normies

I remember when Oprah joined Twitter just a few years later, and how I had a bad feeling about it. I couldn’t reconcile my desire for “more of my friends” to be on Twitter with the cringey tweets of a non-geek (who happened to be a celebrity). And then the masses followed. Twitter grew. And my wish for more normal people slowly but surely came true.

Only it wasn’t the happy hug-fest I imagined.

Twitter became a place for complaining and opining. Maybe it always was that place. Heck, I probably complained about this and that in every other tweet. There was just nobody to read it, nor anybody who cared. Or maybe my opinions were about tech, and tech opinions aren’t as divisive as other kinds (at least for me).

The complaints and opinions weren’t enough to ruin Twitter for me though. I learned to accept the bad with the good. The frog was starting to simmer though.

The “Algorithm”

The full-on frog boil seems to have happened many years later, when Twitter started tweaking its algorithm. Wow, even writing the word “algorithm” makes me mad. A news feed should be chronological. No algorithm needed.

Twitter started stoking the fires by making controversial tweets and discussion front-and-center. The stuff just won’t go away.

Twitter trained my most beloved friends and coworkers to “engage” with their audience. Read: be controversial. Uggg.

Political opponent hatred. Sports team hatred. Tech hatred. It is everywhere.

Maybe it’s not fully Twitter’s fault. They’re just adapting to the times, along with Facebook and Google News, etc.

Our society loves controversy, loves to be mad about something. I just want to write cool software and learn new things. I don’t need to know what my friends and colleagues are mad about today, after all.

Alternatives

So, I’ve been off Twitter for six months now, and I really don’t miss it. Sure, I follow a link to a Twitter thread occasionally to see what the new hubbub is all about. But in the end, I think leaving Twitter was a good call.

Lobste.rs and HN keep me informed of tech subjects, and I’m back to using email for talking with old friends. It’s a long-form (or short-form, whatever you need) medium and works well for keeping in touch.

I’m spending more time reading books now. I’m writing more content at seven1m.sdf.org. And I’m publishing all kinds of new experiments on my GitHub account.

Mastodon and the “fediverse” are promising, but I’m still on break from big like-driven and retweet-driven microblogging. I kind of like shouting into the void, thus I’m just posting to a plain text file for now.

Meta

The irony of me complaining here about people on Twitter complaining too much is not lost on me. But, it turns out, you don’t have to “engage” with this blog post, and I don’t need the likes or follows from it!

Extra Meta

I was going to end this post with a quip about how “You should follow me off Twitter”, in homage to Dustin Curtis’ famous A/B test of coaxing his readers to become Twitter followers. But as I was looking up the URL, I discovered that Dustin had removed the post and excluded it from the Wayback Machine as well. Maybe we all want to forget that time in our lives when we pretended that follower count meant something.

Dymo Printer Agent for Linux

Our church uses iPads and Dymo printers for children’s check-in. Parents check in their child, a label prints on the printer, and they stick the label on the child’s shirt. We use Planning Center Check-ins of course, and it’s great!

One downside to Dymo printers, however, is that their software only runs on Mac and Windows, necessitating a full computer connected to the printer. (There’s no way that I know of to hook the Dymo printer directly to the iPad.)

I had the idea to use a Raspberry Pi as the computer instead of a full desktop machine. The setup looks something like this:

iPad -[wireless network]-> Raspberry Pi -[usb]-> Dymo printer

Great idea! Unfortunately, the labels are sent as a string of XML to a small Dymo tray application which converts them to PDFs. This Dymo tray application does not run on Linux. Womp womp.

This can’t be too hard… (Many weekends have been ruined by that short phrase.) We’ll just write our own XML->PDF conversion program in Ruby!

The result is: dymo-printer-agent

This little application runs on Linux (in my case, a Raspberry Pi running Raspbian) and converts Dymo’s label XML format into PDF and sends it to the printer driver.
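
To give a flavor of the conversion, here is a sketch of its first half using Ruby’s bundled REXML: pulling the printable strings out of a Dymo-style label XML. The element names below are a simplified guess at the format, not the real schema, and the real agent goes on to render these strings into a PDF:

```ruby
require "rexml/document"

# Collect the text of every TextObject in the label XML.
def label_texts(xml)
  REXML::Document.new(xml).get_elements("//TextObject/Text").map(&:text)
end

label = <<~XML
  <DieCutLabel>
    <ObjectInfo><TextObject><Text>Liam S.</Text></TextObject></ObjectInfo>
    <ObjectInfo><TextObject><Text>Room 4</Text></TextObject></ObjectInfo>
  </DieCutLabel>
XML
```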

It works great!

So, if you’re in the odd situation of wanting to print to a Dymo label printer on Linux, this project might be useful to you! I’ve tried to include all the relevant gotchas and instructions in the readme, so check it out.

Meta note: I dusted off the ol' blog to post about this project, because I’m no longer on Twitter. I’ll talk about that decision in another blog post.

Authority on Rails (Gem for Declaring User Authorization)

CanCan is a wonderful plugin for Rails that allows you to define all your authorization logic in one place. For small apps, it works well. But as my app’s authorization needs grew more complex, I realized I needed a different approach to declaring and testing authorization.

So I went looking… It turns out that the Authority gem is exactly what I was looking for:

  • Authority splits out auth logic into individual “Authorizers”. Each one handles authorization for a single model (or multiple models that behave the same way), with individual methods for each action.
  • Authority doesn’t try to do too much – it gives you an organized way to check authorization in regular Ruby code, explicitly, without having to write implicit rules.

For our app, authorization got much simpler with regular “if” statements (compare to this):

AlbumAuthorizer
class AlbumAuthorizer < ApplicationAuthorizer
  def readable_by?(user)
    # belongs to me
    if resource.owner == user
      true
    # belongs to a friend
    elsif Person === resource.owner and user.friend_ids.include?(resource.owner.id)
      true
    # belongs to a group I'm in
    elsif Group === resource.owner and user.member_of?(resource.owner)
      true
    # is marked public
    elsif resource.is_public?
      true
    # I'm an admin
    elsif user.admin?(:manage_pictures)
      true
    end
  end
end

I like having my authorization logic split into separate classes – it seems to be a cleaner approach.

Also, you can have one authorizer depend on another, like so:

PictureAuthorizer
class PictureAuthorizer < ApplicationAuthorizer
  def readable_by?(user)
    # ask the resource's parent "album" if this user can read it
    resource.album.readable_by?(user)
  end
end

Now, there’s a lot that Authority does not do:

  • Authority doesn’t build SQL for you. Unlike CanCan’s accessible_by, Authority doesn’t give you a way to query all records accessible by the user. Our solution was to build that SQL ourselves, which isn’t difficult.
  • Authority doesn’t give you a way to load and authorize your resources in your controllers. For that, I built load_and_authorize_resource as a more generic solution. (More about that in the next blog post.)

With these things absent, Authority has a narrow job that it does very well.
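
For the first point, here is a sketch of the kind of query-building we did ourselves, loosely mirroring AlbumAuthorizer#readable_by? (the table and column names are hypothetical, and the id lists are assumed non-empty):

```ruby
# Build the SQL that answers "which albums can this user read?".
def readable_albums_sql(user_id, friend_ids, group_ids)
  <<~SQL
    SELECT albums.* FROM albums
    WHERE albums.owner_id = #{user_id.to_i}
       OR albums.owner_id IN (#{friend_ids.map(&:to_i).join(', ')})
       OR albums.group_id IN (#{group_ids.map(&:to_i).join(', ')})
       OR albums.is_public = true
  SQL
end
```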

CanCan is still a wonderful tool that I will likely use on a future project, but Authority takes a different approach and is a great tool to have in the toolbelt!

Find Photos on Your Hard Drive to Upload to Flickr

If you’re like me, you have thousands of photos on Flickr, and thousands more scattered across your laptop, external drive, and phone.

What’s been uploaded to Flickr and what hasn’t? Who knows!

I hacked up a quick solution in Ruby and called it Flickr Upload Set.

diagram

It’s built as a web app with Sinatra, but it’s not meant to run on a server – just on your local computer.

Here’s how it works:

  1. Launch the Ruby app and open your browser to localhost:3000.
  2. Navigate to a folder that contains photos.
  3. Flickr Upload Set makes some calls to the Flickr API to search for photos with the same name under your account.
  4. Files found on Flickr are grayed out on the screen.

screenshot

More details can be found on the project page on GitHub.

Check it out. Let me know if it’s useful to you!

What Is Quality?

A lightning talk (5-minute) presentation I gave at TulsaWebDevs:

What is Quality

Note: I used to have this presentation embedded here, but it was scrolling the page and interfering, so now it’s just a link.

JavaScript Gotchas

Here are some common time sinks encountered when building a JavaScript app, along with some tips to avoid them.

Note: these tips were originally shared as a part of my TulsaWebDevs presentation following 2012 Startup Weekend Tulsa.

Bind

In JavaScript, this is bound when a function is called – not when it is defined. When working with classes, you expect that this will point to the instance, but it often won’t.

Example:

var Todo = Backbone.View.extend({
  events: {
    'click input': 'check'
  },

  check: function() {
    console.log(this);
  }
});

Solution 1:

Use _.bindAll()

var Todo = Backbone.View.extend({
  events: {
    'click input': 'check'
  },

  initialize: function() {
    _.bindAll(this, 'check');
  },

  check: function() {
    console.log(this);
  }
});

Solution 2:

Use CoffeeScript =>

class Todo extends Backbone.View
  events:
    'click input': 'check'

  check: =>
    console.log this

Callback Spaghetti

As your app grows more complicated, your code will start to look like this (unless you work hard to avoid it):

before(function (done) {
  server.server(options, function (s, db, providers) {
    // clear db and add a test user - "testuser"
    db.user.remove({}, function () {
      db.notification.remove({}, function () {
        providers.provider1.insertBulk([item1, item2, item3],
          function (err, result) {
          providers.provider2.insert([item1, item2, item3],
            function (err, result) {
            providers.provider3.insert([item1, item2, item3],
              function (err, result) {
              providers.provider4.insert([item1, item2, item3],
                function (err, result) {
                s.listen();
                done();
              });
            });
          });
        });
      });
    });
  });
});

While I have no perfect solution, here are some tips:

1. Split callbacks out into separate methods on the class:

click: function() {
  $.get('/foo', function(data) {
    // do something
  });
}

…becomes:

click: function() {
  $.get('/foo', this.clickCallback);
}

clickCallback: function(data) {
  // do something
}

Rinse, repeat.

2. Use the async or Seq library

Waterfall:

async.waterfall([
  function(callback){
    callback(null, 'one', 'two');
  },
  function(arg1, arg2, callback){
    callback(null, 'three');
  },
  function(arg1, callback){
    callback(null, 'done');
  }
], function (err, result) {
 // done
});

forEach:

async.forEach(files, this.saveFile, this.complete);

Supervisor Pegging CPU

Supervisor monitors files for changes; if you watch many files, it will peg your CPU.

Solution: Ignore the node modules directory and any other directories not containing source code:

supervisor -i data,node_modules app.js

Supervisor doesn’t reload configs

Solution: um, be aware of this fact, and just ctrl-c and start supervisor again when you change a config. :-)

Object is not a function

This error is the bane of my existence. It happens in lots of places, for many different reasons, but here are a few that I always try to check first:

  • module.exports is not set
  • when using CoffeeScript, forgetting a comma, e.g. fn bar 2 instead of fn bar, 2
  • setting a property on your object with the same name as a method

Backbone - visibility into view(s)

If you’re building a Backbone.js app, do yourself a favor, and set the main app view as window.app_view or something similar. Set other views as subviews on the main view.

This will allow you to inspect the app from Firebug after everything is up and running.

Sometimes console.log() lies - object changes after being logged

In Firebug, doing a console.log on an object can be misleading if the object changes soon after it is logged: Firebug logs a live reference, so the nested properties it shows reflect the object’s later state.

Solution: to be sure, you should console.log(obj.foo.bar) to see the actual value of the property at the time it is logged.

Migrate Posterous Blog to Jekyll

Posterous is set to shut down on April 30th. For the past month, I’ve been dreading the move from Posterous to whatever.

Today I bit the bullet and decided to migrate my Posterous blog to Jekyll. There are many reasons I chose Jekyll – independence from free hosted services is probably the biggest reason.

I chose to use Octopress, which extends Jekyll with a set of sane defaults and a good structure for customizing layout, CSS, etc.

To migrate the Posterous blog posts, first I downloaded an archive from the Posterous website. Inside the downloaded zip file, there is a wordpress_export_1.xml file, which Jekyll knows how to import.

Lastly, I chose to download all Posterous-hosted images, because I expect that Twitter might not host Posterous assets on Amazon S3 forever. Better safe than sorry. Here is the script that I used to do that work:

Yay, now my blog isn’t at the mercy of big corporations deciding how to make more money. Never mind that I just embedded a Gist above :-)

Note: I later realized all the images were in the original zip file I downloaded from Posterous. But oddly, the filenames were a bit different and the wordpress xml file didn’t link to them.