MPOV

my point of view (programming stuff mostly)

Church.IO

Recently I made the decision to shut down the church.io website (it’s now a redirect), delete the corresponding Slack community, and move the GitHub repositories back to my own personal account. Church.IO failed to build the community of software creators I dreamed of.

Mistakes were made. This is a retrospective, and an explanation for anyone wondering where we went.

What Was It?

“Church.IO” was created in 2011, intended to be a lovely little community of developers and designers who code and craft open source software specifically for churches. The seed project was my personal labor of love, OneBody, which I’ve been hacking on for over a decade. There were a few of my other small projects like bible_api, too.

There has been off-and-on enthusiasm for Church.IO over the years. A few other projects joined a few years in (I can’t recall exactly when): apostello and Cedar.

At the very end, however, Church.IO wasn’t much more than a website and a very quiet set of Slack channels.

Slack Shifted the Audience

We initially used IRC for communicating. Our IRC group was talkative and supportive. We hashed out ideas and built some really cool features in OneBody there.

We moved to Slack when that became popular. The allure of animated gifs and emojis got the best of me, perhaps.

I thought, at the time of the move, that doing so would encourage new, possibly less experienced developers to join the community and contribute to the software. What it did, however, was alienate and leave behind some of the more serious and experienced hackers. It was less of a move and more of a see-ya.

I don’t think I noticed this right away. Looking back, I realized that of the people who were talkative and full-of-ideas in IRC, a few joined Slack, said maybe a few things, then rarely came back.

Of the new developers that joined Slack, very few contributed the way those original IRC hackers did. It was just never the same after the move.

What we lost in serious programmers, we gained in people saying they wanted to contribute, and that they had ideas for more features. As you can guess, ideas for more features are always in heavy supply–it’s the execution that a thriving open source project needs.

Now, don’t get me wrong: I’m grateful to anyone who helped make the software better, whether they spent man-years of their life on it, or only minutes in passing (and anything in between). And the users of the software have better software because of those contributors.

But, in the end, our move to Slack helped shift the audience from the developers of the software to the users.

Community Takes Leadership

I tend to work on OneBody in sprints–not as a marathon. I will work hard for several months, add new features, fix bugs, make lots of plans, then eventually get tired of it all and take a six-month or year-long break. I’ve been building OneBody this way for over 10 years. It works for me.

And while it works fine for a one-man show, it’s not ideal for leading a community of others. I wasn’t nearly consistent enough to inspire collaboration and craftsmanship.

There were many, many weeks when I didn’t feel like thinking about my open source projects–not to mention talking to someone else about them.

I’m not beating myself up about it–just pointing it out. Community takes a steady leader with vision, which I am not.

The Designers Never Came

This is a small note, but I wanted to mention it. I thought that by saying “Church.IO is a community of developers and designers building open source church software,” we would somehow entice the rare creature known as a “designer” to join us.

It didn’t work. I don’t have any good insight into that.

What Went Right

While I am sad to close this chapter in my life, I am happy about some of the things we did that actually worked. I’m talking mostly about OneBody below, because that was and still is the main project in my life. In addition to the standard open source bug fixes and feature work, we had some really cool achievements:

Marketing: Having a decent website with a memorable name and domain does amazing things on the marketing side. I was amazed how quickly people started learning about our software once it was under a “brand” name of sorts–even though there was no commercial company backing the software.

Easier Installation: We put a lot of work into making OneBody easier to install. We provided a Debian package, an Amazon AMI, a VirtualBox image, and even a one-click installer for Digital Ocean (all of these are still available today). This put the software in the hands of many more people than otherwise.

Better Documentation and UI: Several individuals helped write documentation and improve our wiki pages. This could not have happened without the community. More people hammering on the UI of OneBody forced us to improve strange and difficult-to-use portions, too.

Translation: OneBody is translated into several languages thanks to bilingual volunteers–that amazes me. Churches all over the world are using OneBody, in their native language. Still blows my mind.

We Never Took Funding: Some (many?) people will say this is a mistake. But from the beginning, I wanted Church.IO to remain independent and free from investment interests. If Church.IO were to stand, it would stand because of volunteers and churches giving back their staff time in the form of code contributions–not because of someone’s or some company’s deep pockets.

Conclusion

Church.IO was an amazing part of my life, and if I could go back in time, I would still do it again. Though, I would change some things perhaps–like not moving to Slack–and I’d find a charismatic co-leader to help with consistent leadership. But hindsight is, well, you know…


Where Did the Projects Go?

The open source projects are still alive, some more alive than others, on GitHub. I moved the projects I started back to my personal GitHub account. Dean Montgomery maintains his apostello project there. And Isaac Smith maintains Cedar there too.

Leaving Twitter

Late in 2017, I politely said good-bye to Twitter and deleted my account. My Twitter account was 10 years old, and the anniversary, as anniversaries often do, prompted me to think about the value of a decade spent microblogging.

I remember when Twitter was a quiet site for geeks, and my first tweets were about HTTP servers and Git and SliceHost (remember them?). And my geek friends replied sometimes. And there were no politics.

I remember thinking, man, if more of my friends were on here, then this could be really cool.

Normies

I remember when Oprah joined Twitter just a few years later, and how I had a bad feeling about it. I couldn’t reconcile my desire for “more of my friends” to be on Twitter with the cringey tweets of a non-geek (who happened to be a celebrity). And then the masses followed. Twitter grew. And my wish for more normal people slowly but surely came true.

Only it wasn’t the happy hug-fest I imagined.

Twitter became a place for complaining and opining. Maybe it always was that place. Heck, I probably complained about this and that in every other tweet. There was just nobody to read it and nobody who cared. Or maybe my opinions were about tech, and tech opinions aren’t as divisive as other kinds (at least for me).

The complaints and opinions weren’t enough to ruin Twitter for me though. I learned to accept the bad with the good. The frog was starting to simmer though.

The “Algorithm”

The full-on frog boil seems to have happened many years later, when Twitter started tweaking its algorithm. Wow, even writing the word “algorithm” makes me mad. A news feed should be chronological. No algorithm needed.

Twitter started stoking the fires by making controversial tweets and discussion front-and-center. The stuff just won’t go away.

Twitter trained my most beloved friends and coworkers to “engage” with their audience. Read: be controversial. Uggg.

Political opponent hatred. Sports team hatred. Tech hatred. It is everywhere.

Maybe it’s not fully Twitter’s fault. They’re just adapting to the times, along with Facebook and Google News, etc.

Our society loves controversy, loves to be mad about something. I just want to write cool software and learn new things. I don’t need to know what my friends and colleagues are mad about today, after all.

Alternatives

So, I’ve been off Twitter for six months now, and I really don’t miss it. Sure, I follow a link to a Twitter thread occasionally to see what the new hubbub is all about. But in the end, I think leaving Twitter was a good call.

Lobste.rs and HN keep me informed of tech subjects, and I’m back to using email for talking with old friends. It’s a long-form (or short-form, whatever you need) medium and works well for keeping in touch.

I’m spending more time reading books now. I’m writing more content at seven1m.sdf.org. And I’m publishing all kinds of new experiments on my GitHub account.

Mastodon and the “fediverse” are promising, but I’m still on break from big like-driven and retweet-driven microblogging. I kind of like shouting into the void, thus I’m just posting to a plain text file for now.

Meta

The irony of me complaining here about people on Twitter complaining too much is not lost on me. But, it turns out, you don’t have to “engage” with this blog post, and I don’t need the likes or follows from it!

Extra Meta

I was going to end this post with a quip about how “You should follow me off Twitter”, in homage to Dustin Curtis' famous A/B test coaxing his readers to become Twitter followers. But as I was looking up the URL, I discovered that Dustin had removed the post and excluded it from the Wayback Machine as well. Maybe we all want to forget that time in our lives when we pretended that follower count meant something.

Dymo Printer Agent for Linux

Our church uses iPads and Dymo printers for children’s check-in. Parents check in their child, a label prints on the printer, and they stick the label on the child’s shirt. We use Planning Center Check-ins of course, and it’s great!

One downside to Dymo printers, however, is that their software only runs on Mac and Windows, necessitating a full computer connected to the printer. (There’s no way that I know of to hook the Dymo printer directly to the iPad.)

I had the idea to use a Raspberry Pi as the computer instead of a full desktop machine. The setup looks something like this:

iPad -[wireless network]-> Raspberry Pi -[usb]-> Dymo printer

Great idea! Unfortunately, the labels are sent as a string of XML to a small Dymo tray application which converts them to PDFs. This Dymo tray application does not run on Linux. Womp womp.

This can’t be too hard… (Many weekends have been ruined by that short phrase.) We’ll just write our own XML->PDF conversion program in Ruby!

The result is: dymo-printer-agent

This little application runs on Linux (in my case, a Raspberry Pi running Raspbian) and converts Dymo’s label XML format into PDF and sends it to the printer driver.
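
To give a flavor of the approach, here is a minimal sketch (not the actual agent code – the element names and label size below are simplified stand-ins, and the real Dymo XML carries fonts, positioning, barcodes, and more):

require 'sinatra'   # tiny HTTP server that receives the label XML
require 'nokogiri'  # XML parsing
require 'prawn'     # PDF generation
require 'tempfile'

post '/print' do
  xml = Nokogiri::XML(request.body.read)
  texts = xml.xpath('//Text').map(&:text) # hypothetical element name

  # Render each text node onto a ~3.5in x 1.125in label (sized in points).
  pdf = Prawn::Document.new(page_size: [252, 81]) do |doc|
    texts.each { |t| doc.text(t) }
  end

  # Write the PDF to a temp file and hand it to the CUPS queue.
  Tempfile.create(['label', '.pdf']) do |file|
    file.binmode
    file.write(pdf.render)
    file.flush
    system('lp', '-d', 'dymo', file.path)
  end

  'ok'
end

The pipeline is the same either way: XML in, PDF out, lp to print.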

It works great!

So, if you’re in the odd situation of wanting to print to a Dymo label printer on Linux, this project might be useful to you! I’ve tried to include all the relevant gotchas and instructions in the readme, so check it out.

Meta note: I dusted off the ol' blog to post about this project, because I’m no longer on Twitter. I’ll talk about that decision in another blog post.

Authority on Rails (Gem for Declaring User Authorization)

CanCan is a wonderful plugin for Rails that allows you to define all your authorization logic in one place. For small apps, it works well. But as my app’s authorization needs grew more complex, I realized I needed a different approach to declaring and testing authorization.

So I went looking… It turns out that the Authority gem is exactly what I was looking for:

  • Authority splits out auth logic into individual “Authorizers”. Each one handles authorization for a single model (or multiple models that behave the same way), with individual methods for each action.
  • Authority doesn’t try to do too much – it gives you an organized way to check authorization in regular Ruby code, explicitly, without having to write implicit rules.

For our app, authorization got much simpler with regular “if” statements (compare to this):

AlbumAuthorizer
class AlbumAuthorizer < ApplicationAuthorizer
  def readable_by?(user)
    # belongs to me
    if resource.owner == user
      true
    # belongs to a friend
    elsif Person === resource.owner and user.friend_ids.include?(resource.owner.id)
      true
    # belongs to a group I'm in
    elsif Group === resource.owner and user.member_of?(resource.owner)
      true
    # is marked public
    elsif resource.is_public?
      true
    # I'm an admin
    elsif user.admin?(:manage_pictures)
      true
    end
  end
end

I like having my authorization logic split into separate classes – it seems to be a cleaner approach.

Also, you can have one authorizer depend on another, like so:

PictureAuthorizer
class PictureAuthorizer < ApplicationAuthorizer
  def readable_by?(user)
    # ask the resource's parent "album" if this user can read it
    resource.album.readable_by?(user)
  end
end

Now, there’s a lot that Authority does not do:

  • Authority doesn’t build SQL for you. Unlike CanCan’s accessible_by, Authority doesn’t give you a way to query all records accessible by the user. Our solution was to build that SQL ourselves, which isn’t difficult (see the sketch below).
  • Authority doesn’t give you a way to load and authorize your resources in your controllers. For that, I built load_and_authorize_resource as a more generic solution. (More about that in the next blog post.)

…with these things absent, Authority has a narrow job that it does very well.
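
For example, the hand-rolled query behind “albums this user can read” is little more than a scope. Here is a rough sketch (the column and association names are made up for illustration, not our real schema):

class Album < ActiveRecord::Base
  # SQL mirror of AlbumAuthorizer#readable_by?, for bulk queries.
  def self.readable_by(user)
    where(<<-SQL, user_id: user.id, friend_ids: user.friend_ids, group_ids: user.group_ids)
      is_public = true
      OR owner_id = :user_id
      OR (owner_type = 'Person' AND owner_id IN (:friend_ids))
      OR (owner_type = 'Group'  AND owner_id IN (:group_ids))
    SQL
  end
end

# usage: Album.readable_by(current_user).order(:created_at)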

CanCan is still a wonderful tool that I will likely use on a future project, but Authority takes a different approach and is a great tool to have in the toolbelt!

Find Photos on Your Hard Drive to Upload to Flickr

If you’re like me, you have thousands of photos on Flickr, and thousands more scattered across your laptop, external drive, and phone.

What’s been uploaded to Flickr and what hasn’t? Who knows!

I hacked up a quick solution in Ruby and called it Flickr Upload Set.

[diagram]

It’s built as a web app with Sinatra, but it’s not meant to run on a server – just on your local computer.

Here’s how it works:

  1. Launch the Ruby app and open your browser to localhost:3000.
  2. Navigate to a folder that contains photos.
  3. Flickr Upload Set makes some calls to the Flickr API to search for photos with the same name under your account (sketched below).
  4. Files found on Flickr are grayed out on the screen.
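
The Flickr lookup in step 3 boils down to something like this (a sketch using the flickraw gem – the real app also handles OAuth tokens and pagination):

require 'flickraw' # gem install flickraw

FlickRaw.api_key       = ENV['FLICKR_API_KEY']
FlickRaw.shared_secret = ENV['FLICKR_SHARED_SECRET']
flickr = FlickRaw::Flickr.new
# (searching with user_id: 'me' also requires an authenticated access token)

def uploaded?(flickr, path)
  title = File.basename(path, '.*')
  matches = flickr.photos.search(user_id: 'me', text: title)
  matches.any? { |photo| photo.title == title }
end

Dir.glob('photos/**/*.{jpg,jpeg,png}').each do |file|
  puts "#{file}: #{uploaded?(flickr, file) ? 'already on Flickr' : 'not uploaded yet'}"
end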

[screenshot]

More details can be found on the project page on GitHub.

Check it out. Let me know if it’s useful to you!

What Is Quality?

A lightning talk (5-minute) presentation I gave at TulsaWebDevs:

What is Quality

Note: I used to have this presentation embedded here, but it was scrolling the page and interfering, so now it’s just a link.

JavaScript Gotchas

Here are some common time sinks encountered when building a JavaScript app, along with some tips to avoid them.

Note: these tips were originally shared as a part of my TulsaWebDevs presentation following 2012 Startup Weekend Tulsa.

Bind

In JavaScript, this is bound when a function is called – not when it is defined. When working with classes, you expect this to point to the instance, but it often won’t.

Example:

var Todo = Backbone.View.extend({
  events: {
    'click input': 'check'
  },

  check: function() {
    console.log(this);
  }
});

Solution 1:

Use _.bindAll()

var Todo = Backbone.View.extend({
  events: {
    'click input': 'check'
  },

  initialize: function() {
    _.bindAll(this, 'check');
  },

  check: function() {
    console.log(this);
  }
});

Solution 2:

Use CoffeeScript =>

class Todo extends Backbone.View
  events:
    'click input': 'check'

  check: =>
    console.log this

Callback Spaghetti

As your app grows more complicated, your code will start to look like this (unless you work hard to avoid it):

before(function (done) {
  server.server(options, function (s, db, providers) {
    // clear db and add a test user - "testuser"
    db.user.remove({}, function () {
      db.notification.remove({}, function () {
        providers.provider1.insertBulk([item1, item2, item3],
          function (err, result) {
          providers.provider2.insert([item1, item2, item3],
            function (err, result) {
            providers.provider3.insert([item1, item2, item3],
              function (err, result) {
              providers.provider4.insert([item1, item2, item3],
                function (err, result) {
                s.listen();
                done();
              });
            });
          });
        });
      });
    });
  });
});

While I have no perfect solution, here are some tips:

1. Split callbacks out into separate methods on the class:

click: function() {
  $.get('/foo', function(data) {
    // do something
  });
}

…becomes:

click: function() {
  $.get('/foo', this.clickCallback);
}

clickCallback: function(data) {
  // do something
}

Rinse, repeat.

2. Use the async or Seq library

Waterfall:

async.waterfall([
  function(callback){
    callback(null, 'one', 'two');
  },
  function(arg1, arg2, callback){
    callback(null, 'three');
  },
  function(arg1, callback){
    callback(null, 'done');
  }
], function (err, result) {
 // done
});

forEach:

async.forEach(files, this.saveFile, this.complete);

Supervisor Pegging CPU

Supervisor monitors files for changes; if you have many files, your CPU gets pegged.

Solution: Ignore the node modules directory and any other directories not containing source code:

supervisor -i data,node_modules app.js

Supervisor doesn’t reload configs

Solution: um, be aware of this fact, and just ctrl-c and start supervisor again when you change a config. :-)

Object is not a function

This error is the bane of my existence. It happens in lots of places, for many different reasons, but here are a few that I always try to check first:

  • module.exports is not set
  • when using CoffeeScript, forgetting a comma, e.g. fn bar 2 instead of fn bar, 2
  • setting a property on your object with the same name as a method

Backbone - visibility into view(s)

If you’re building a Backbone.js app, do yourself a favor, and set the main app view as window.app_view or something similar. Set other views as subviews on the main view.

This will allow you to inspect the app from Firebug after everything is up and running.

Sometimes console.log() lies - object changes after being logged

In Firebug, doing a console.log on an object can be misleading if the object changes soon after it is logged: the console keeps a live reference, so the nested properties you expand reflect the object’s current state, not its state at the time it was logged.

Solution: to be sure, console.log(obj.foo.bar) directly, so the primitive value is captured at the moment it is logged.

Migrate Posterous Blog to Jekyll

Posterous is set to shut down on April 30th. For the past month, I’ve been dreading the move from Posterous to whatever.

Today I bit the bullet and decided to migrate my Posterous blog to Jekyll. There are many reasons I chose Jekyll – independence from free hosted services is probably the biggest reason.

I chose to use Octopress, which extends Jekyll with a set of sane defaults and a good structure for customizing layout, CSS, etc.

To migrate the Posterous blog posts, first I downloaded an archive from the Posterous website. Inside the downloaded zip file, there is a wordpress_export_1.xml file, which Jekyll knows how to import.
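
The import itself is roughly a one-liner (sketched here with the jekyll-import tooling – the exact module name has moved around between Jekyll versions):

ruby -r rubygems -e 'require "jekyll-import";
  JekyllImport::Importers::WordpressDotCom.run("source" => "wordpress_export_1.xml")'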

Lastly, I chose to download all Posterous-hosted images, because I expect that Twitter might not host Posterous assets on Amazon S3 forever. Better safe than sorry. Here is the script I used to do that work (shared as a Gist – what follows is a simplified sketch of it):
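
# Download every Posterous-hosted image referenced in the export XML,
# so the blog no longer depends on Posterous/S3 staying online.
# (Simplified sketch; the real script handled a few more edge cases.)
require 'open-uri'
require 'fileutils'

FileUtils.mkdir_p('images')

xml  = File.read('wordpress_export_1.xml')
urls = xml.scan(%r{https?://[^"'\s<]+posterous\.com[^"'\s<]*\.(?:jpe?g|png|gif)}i).uniq

urls.each do |url|
  filename = File.join('images', File.basename(url))
  next if File.exist?(filename)
  begin
    puts "Downloading #{url}"
    File.open(filename, 'wb') { |f| f.write(URI.open(url).read) }
  rescue StandardError => e
    warn "Failed: #{url} (#{e.message})"
  end
end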

Yay, now my blog isn’t at the mercy of big corporations deciding how to make more money. Never mind that I just embedded a Gist above :-)

Note: I later realized all the images were in the original zip file I downloaded from Posterous. But oddly, the filenames were a bit different and the wordpress xml file didn’t link to them.

Super Simple Clipboard History for Linux

I recently switched from Gnome to i3 on my main laptop. So far, I'm enjoying the simplicity of piecing together my own desktop environment from small, specialized tools.

One thing I missed from Gnome was my clipboard history tool, GPaste. I've come to expect that my previous dozen or more clipboard copies are a keystroke or two away and not lost forever.

While tweaking my i3 setup, I realized it would be possible to replicate some or even all of the functionality of GPaste using simple unix tools:

  • Ruby (could be another scripting language, or just Bash script, but I'm more familiar with Ruby)
  • xclip
  • dmenu

An hour or so later, I had this:

Upon startup, my .xinitrc file starts up clipd, which checks the clipboard every second for a new string. (Note, this only works for textual data at the moment.) It stores the last 100 unique items in the .clipboard-history file in my home directory.
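
The polling loop at the heart of clipd is about this simple (a sketch – the real script isn't much bigger):

#!/usr/bin/env ruby
# clipd sketch: poll the X clipboard via xclip, keep a history file.
HISTORY_FILE = File.expand_path('~/.clipboard-history')
MAX_ITEMS = 100

history = File.exist?(HISTORY_FILE) ? File.readlines(HISTORY_FILE, chomp: true) : []

loop do
  clip = `xclip -o -selection clipboard 2>/dev/null`.strip
  # Single-line text only; skip empties and repeats of the newest item.
  unless clip.empty? || clip.include?("\n") || history.first == clip
    history.delete(clip)               # keep entries unique
    history.unshift(clip)              # newest first
    history = history.first(MAX_ITEMS) # cap at 100 items
    File.write(HISTORY_FILE, history.join("\n") + "\n")
  end
  sleep 1
end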

The other piece to this is my .i3/config file, which has a keybinding for win+v.

This opens up dmenu with all the available clips. Once a selection is made via dmenu, the clip is stored in the clipboard for pasting the usual way.
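
The script behind that keybinding is tiny too (a sketch – dmenu prints whichever line you pick, and xclip stuffs it back into the clipboard):

#!/usr/bin/env ruby
# clip-select sketch: pick a history item in dmenu, copy it back to the clipboard.
require 'open3'

history = File.read(File.expand_path('~/.clipboard-history'))
choice, status = Open3.capture2('dmenu -l 10', stdin_data: history)
exit unless status.success? # user hit Escape

Open3.capture2('xclip -i -selection clipboard', stdin_data: choice.chomp)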

It's not overly clever or advanced, but it seems to work.

Happy hacking!

Tulsa Hackathon Kick-off

So far the Tulsa Hackathon, an all-night programming drive benefiting needy Tulsa non-profits, is underway and going well.

We had an excellent catered meal, project presentations were made, and teams have been formed. People seem to be figuring out team member responsibilities and getting started on the various projects.

I’ll write a couple more blog posts as the night progresses. Here are pics thus far: