Writing Go Applications with Reusable Logic

Series: Writing Go Applications

Writing libraries in Go is a relatively well-covered topic, I think… but I see a lot fewer posts about writing commands. When it comes down to it, all Go code ends up in a command. So let’s talk about it! This will be the first in a series, since I ended up having a lot more to say than I realized.

Today I’m going to focus on basic project layout, with the aims of optimizing for reusability and testability.

There are three unique bits about commands that influence how I structure my code when writing a command rather than a library:

Package main

This is the only package a Go program must have. However, aside from telling the go tool to produce a binary, there’s one other unique thing about package main - no one can import code from it. That means that any code you put in package main cannot be used directly by another project, and that makes the OSS gods sad. Since one of the main reasons I write open source code is so that other developers may use it, this goes directly against my desires.

There have been many times when I’ve thought “I’d love to use the logic behind X Go binary as a part of my code”. If that logic is in package main, you can’t.

os.Exit

If you care about producing a binary that does what users expect, then you should care about what exit code your binary exits with. The only way to do that is to call os.Exit (or call something that calls os.Exit, like log.Fatal).

However, you can’t test a function that calls os.Exit. Why? Because calling os.Exit during a test exits the test executable. This is quite hard to figure out if you end up doing it by accident (which I know from personal experience). When running tests, no tests actually fail, the tests just exit sooner than they should, and you’re left scratching your head.

The easiest thing to do is don’t call os.Exit. Most of your code shouldn’t be calling os.Exit anyway… someone’s going to get real mad if they import your library and it randomly causes their application to terminate under some conditions.

So, only call os.Exit in exactly one place, as near to the “exterior” of your application as you can get, with minimal entry points. Speaking of which…

func main()

It is the one function all Go commands must have. You’d think that everyone’s func main would be different, after all, everyone’s application is different, right? Well, it turns out, if you really want to make your code testable and reusable, there’s really only approximately one right answer to “what’s in your main function?”

In fact, I’ll go one step further, I think there’s only approximately one right answer to “what’s in your package main?” and that’s this:

// command main documentation here.
package main

import (
	"os"

	"github.com/you/proj/cli"
)

func main() {
	os.Exit(cli.Run())
}

That’s it. This is approximately the most minimal code you can have in a useful package main, thereby wasting no effort on code that others can’t reuse. We isolated os.Exit to a single line function that is the very exterior of our project, and effectively needs no testing.

Project Layout

Let’s get a look at the total package layout:

/home/you/src/github.com/you/proj $ tree
├── cli
│   ├── parse.go
│   ├── parse_test.go
│   └── run.go
├── LICENSE
├── main.go
├── README.md
└── run
    ├── command.go
    └── command_test.go

We know what’s in main.go… and in fact, main.go is the only go file in the main package. LICENSE and README.md should be self-explanatory. (Always use a license! Otherwise many people won’t be able to use your code.)

Now we come to the two subdirectories, run and cli.

CLI

The cli package contains the command line parsing logic. This is where you define the UI for your binary. It contains flag parsing, arg parsing, help text, etc.

It also contains the code that returns the exit code to func main (which gets sent to os.Exit). Thus, you can test the exit codes returned from those functions, instead of trying to test the exit codes your binary as a whole produces.
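To make that concrete, here’s a rough sketch of what a cli.Run function could look like. The -v flag and the run.Command function are made-up names for illustration, not anything prescribed:

package cli

import (
	"flag"
	"fmt"
	"os"

	"github.com/you/proj/run"
)

// Run parses the command line and returns the code that func main
// should hand to os.Exit.
func Run() int {
	fs := flag.NewFlagSet("proj", flag.ContinueOnError)
	verbose := fs.Bool("v", false, "verbose output")
	if err := fs.Parse(os.Args[1:]); err != nil {
		return 2
	}
	if err := run.Command(*verbose, fs.Args()); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return 1
	}
	return 0
}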

Run

The run package contains the meat of the logic of your binary. You should write this package as if it were a standalone library. It should be far removed from any thoughts of CLI, flags, etc. It should take in structured data and return errors. Pretend it might get called by some other library, or a web service, or someone else’s binary. Make as few assumptions as possible about how it’ll be used, just as you would a generic library.
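Here’s a matching sketch of that hypothetical run.Command, just to show the shape of it (structured inputs in, error out, no CLI concepts anywhere):

package run

import "errors"

// Command is the entry point for the real logic. It takes structured data
// and returns an error; it knows nothing about flags, output formatting,
// or exit codes.
func Command(verbose bool, targets []string) error {
	if len(targets) == 0 {
		return errors.New("no targets given")
	}
	// ... do the actual work here ...
	return nil
}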

Now, obviously, larger projects will require more than one directory. In fact, you may want to split out your logic into a separate repo. This kind of depends on how likely you think it’ll be that people want to reuse your logic. If you think it’s highly likely, I recommend making the logic a separate repo. In my mind, a separate repo for the logic shows a stronger commitment to quality and stability than some random directory nestled deep in a repo somewhere.

Putting it together

The cli package forms a command line frontend for the logic in the run package. If someone else comes along, sees your binary, and wants to use the logic behind it for a web API, they can just import the run package and use that logic directly. Likewise, if they don’t like your CLI options, they can easily write their own CLI parser and use it as a frontend to the run package.

This is what I mean about reusable code. I never want someone to have to hack apart my code to get more use out of it. And the best way to do that is to separate the UI from the logic. This is the key part. Don’t let your UI (CLI) concepts leak into your logic. This is the best way to keep your logic generic, and your UI manageable.

Larger Projects

This layout is good for small to medium projects. There’s a single binary that is in the root of the repo, so it’s easier to go-get than if it’s under multiple subdirectories. Larger projects pretty much throw everything out the window. They may have multiple binaries, in which case they can’t all be in the root of the repo. However, such projects usually also have custom build steps and require more than just go-get (which I’ll talk about later).

More to come soon.

Vanity Imports with Hugo

When working on Gorram, I decided I wanted to release it via a vanity import path. After all, that’s half the reason I got npf.io in the first place (an idea blatantly stolen from Russ Cox’s rsc.io).

What is a vanity import path? It is explained in the go get documentation. If you’re not hosted on one of the well-known hosting sites (github, bitbucket, etc), go get has to figure out how to get your code. How it does this is fairly ingenious - it performs an http GET of the import path (first https then http) and looks for specific meta elements in the page’s header. The meta elements tell go get what type of VCS is being used and what address to use to get the code.

The great thing about this is that it removes the dependency of your code on any one code hosting site. If you want to move your code from github to bitbucket, you can do that without breaking anyone.

So, the first thing you need to host your own vanity imports is something that will respond to those GET requests with the right response. You could do something complicated like a special web application running on a VM in the cloud, but that costs money and needs maintenance. Since I already had a Hugo website (running for free on github pages), I wanted to see if I could use that. It’s a slightly more manual process, but the barrier of entry is a lot lower and it works on any free static hosting (like github pages).

So what I want is to have go get npf.io/gorram actually download the code from https://github.com/natefinch/gorram. For that, I need https://npf.io/gorram to serve up this meta element:

<meta name="go-import" content="npf.io/gorram git https://github.com/natefinch/gorram">

or more generally:

<meta name="go-import" content="import-prefix vcs repo-root">

Where import-prefix is a string that matches a prefix of the import statement used in your code, vcs is the type of source control used, and repo-root is the root of the VCS repo where your code lives.

What’s important to note here is that these should be set this way for packages in subdirectories as well. So, for npf.io/gorram/run, the meta tag should still be as above, since it matches a prefix of the import path, and the root of the repo is still github.com/natefinch/gorram. (We’ll get to how to handle subdirectories later.)
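As a quick illustration, both of these imports match the npf.io/gorram prefix and so resolve through that single meta tag:

import (
	"npf.io/gorram/cli"
	"npf.io/gorram/run"
)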

You need a page serving that meta tag to live at the exact same place as the import path… that generally will mean it needs to be in the root of your domain (I know that I, personally, don’t want to see go get npf.io/code/gorram when I could have go get npf.io/gorram).

The easiest way to do this and keep your code organized is to put all your pages for code into a new directory under content called “code”. Then you just need to set the “permalink” for the code type in your site’s config file thusly:

[permalinks]
	code = "/:filename/"

Then your content’s filename (minus extension) will be used as its url relative to your site’s base URL. Following the same example as above, I have content/code/gorram.md which will make that page now appear at npf.io/gorram.

Now, for the content. I don’t actually want to have to populate this page with content… I’d rather people just get forwarded on to github, so that’s what we’ll do, by using a refresh header. So here’s our template, that’ll live under layouts/code/single.html:

<!DOCTYPE html>
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">
  <meta name="go-import" content="npf.io{{substr .RelPermalink 0 -1}} git {{.Params.vanity}}">
  <meta name="go-source" content="npf.io{{substr .RelPermalink 0 -1}} {{.Params.vanity}} {{.Params.vanity}}/tree/master{/dir} {{.Params.vanity}}/blob/master{/dir}/{file}#L{line}">
  <meta http-equiv="refresh" content="0; url={{.Params.vanity}}">
</head>

This will generate a page that will auto-forward anyone who hits it on to your github account. Now, there’s one more (optional but recommended) piece - the go-source meta header. This is only relevant to godoc.org, and tells godoc how to link to the sourcecode for your package (so links on godoc.org will go straight to github and not back to your vanity url, see more details here).

Now all you need is to put a value of vanity = https://github.com/you/yourrepo in the frontmatter of the correct page, and the template does the rest. If your repo has multiple directories, you’ll need a page for each directory (such as npf.io/gorram/run). This would be kind of a drag, making the whole directory structure with content docs in each, except there’s a trick you can do here to make that easier.

I recently landed a change in Hugo that lets you customize the rendering of alias pages. Alias pages are pages that are mainly used to redirect people from an old URL to the new URL of the same content. But in our case, they can serve up the go-import and go-source meta headers for subdirectories of the main code document. To do this, make an alias.html template in the root of your layouts directory, and make it look like this:

<!DOCTYPE html><html>
    <head>
        {{if .Page.Params.vanity -}}
        <meta name="go-import" content="npf.io{{substr .Page.RelPermalink 0 -1}} git {{.Page.Params.vanity}}">
        <meta name="go-source" content="npf.io{{substr .Page.RelPermalink 0 -1}} {{.Page.Params.vanity}} {{.Page.Params.vanity}}/tree/master{/dir} {{.Page.Params.vanity}}/blob/master{/dir}/{file}#L{line}">
        {{- end}}
        <title>{{ .Permalink }}</title>
        <link rel="canonical" href="{{ .Permalink }}"/>
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <meta http-equiv="refresh" content="0; url={{ .Permalink }}" />
    </head>
</html>

Other than the stuff in the if statement, the rest is the default alias page that Hugo creates anyway. The stuff in the if statement is basically the same as what’s in the code template, just with an extra indirection of specifying .Page first.

Note that this change to Hugo is in master but not in a release yet. It’ll be in 0.18, but for now you’ll have to build master to get it.

Now, to produce pages for subpackages, you can just specify aliases in the front matter of the original document with the alias being the import path under the domain name:

aliases = [ "gorram/run", "gorram/cli" ]

So your entire content only needs to look like this:

+++
date = 2016-10-02T23:00:00Z
title = "Gorram"
vanity = "https://github.com/natefinch/gorram"
aliases = [
	"gorram/run",
	"gorram/cli",
]
+++

Any time you add a new subdirectory to the package, you’ll need to add a new alias, and regenerate the site. This is unfortunately manual, but at least it’s a trivial amount of work.

That’s it. Now go get (and godoc.org) will know how to get your code.

To Enum or Not To Enum

Enum-like values have come up in my reviews of other people’s code a few times, and I’d like to nail down what we feel is best practice.

I’ve seen, in many places, what in other languages would be an enum, i.e. a bounded list of known values that encompasses every value that should ever exist.

The code I have been critical of simply calls these values strings, and creates a few well-known values, thusly:

package tool

// types of tools
const (
	ScrewdriverType = "screwdriver"
	HammerType      = "hammer"
	// ...
)

type Tool struct {
	typ string
}

func NewTool(tooltype string) (Tool, error) {
	switch tooltype {
	case ScrewdriverType, HammerType:
		return Tool{typ: tooltype}, nil
	default:
		return Tool{}, errors.New("invalid type")
	}
}

The problem with this is that there’s nothing stopping you from doing something totally wrong like this:

name := user.Name()

// ... some other stuff

a, err := NewTool(name)

That would fail only at runtime, which kind of defeats the purpose of having a compiler.

I’m not sure why we don’t at least define the tool type as a named type of string, i.e.

package tool

type ToolType string

const (
	Screwdriver ToolType = "screwdriver"
	Hammer               = "hammer"
	// ...
)

type Tool struct {
	typ ToolType
}

func NewTool(tooltype ToolType) Tool {
	return Tool{typ: tooltype}
}

Note that now we can drop the error checking in NewTool because the compiler does it for us. The ToolType still works in all ways like a string, so it’s trivial to convert for printing, serialization, etc.

However, this still lets you do something which is wrong but might not always look wrong:

a := NewTool("drill")

Because of how Go constants work, this will get converted to a ToolType, even though it’s not one of the ones we have defined.

The final revision, which is the one I’d propose, removes even this possibility by not using a string at all (it also uses a lot less memory and creates less garbage):

package tool

type ToolType int

const (
	Screwdriver ToolType = iota
	Hammer
	// ...
)

type Tool struct {
	typ ToolType
}

func NewTool(tooltype ToolType) Tool {
	return Tool{typ: tooltype}
}

This now prevents passing in a constant string that looks like it might be right. You can pass in a constant number, but NewTool(5) is a hell of a lot more obviously wrong than NewTool("drill"), IMO.

The pushback I’ve heard about this is that then you have to manually write the String() function to make human-readable strings… but there are code generators that already do this for you in extremely optimized ways (see https://github.com/golang/tools/blob/master/cmd/stringer/stringer.go).
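For example, with the integer version above, a single go:generate directive is all it takes (the generated file name may vary by stringer version):

//go:generate stringer -type=ToolType

type ToolType int

const (
	Screwdriver ToolType = iota
	Hammer
	// ...
)

Run go generate with stringer installed and you get a String() method for ToolType, so Screwdriver prints as "Screwdriver" instead of 0.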

Returning Errors

There are basically two ways to return errors in Go:

func (c Config) Save() error {
	if err := c.checkDefault(); err != nil {
		return err
	}
	// ...
}


func (c Config) Save() error {
	if err := c.checkDefault(); err != nil {
		return fmt.Errorf("can't find default config file: %v", err)
	}
	// ...
}

The former passes the original error up the stack, but adds no context to it. Thus, your Save method may end up printing “file not found: default.cfg” without telling the caller why it was trying to open default.cfg.

The latter allows you to add context to an error, so the above error could become “can’t find default config file: file not found: default.cfg”. This gives nice context to the error, but unfortunately, it creates an entirely new error that only maintains the error string from the original. This is fine for human-facing output, but is useless for error handling code.

If you use the former code, calling code can then use os.IsNotExist(), figure out that it was a not found error, and create the file. Using the latter code, the type of the error is now a different type than the one from os.Open, and thus will not return true from os.IsNotExist. Using fmt.Errorf effectively masks the original error from calling code (unless you do ugly string parsing - please don’t).
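Here’s a sketch of what that caller might look like; createDefaultConfig is a made-up helper, not part of any real API:

func ensureSaved(c Config) error {
	err := c.Save()
	if os.IsNotExist(err) {
		// Save passed the original *os.PathError through untouched, so we
		// can detect the missing file, create it, and retry.
		// createDefaultConfig is hypothetical.
		if err := createDefaultConfig(); err != nil {
			return err
		}
		return c.Save()
	}
	return err
}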

Sometimes it’s good to mask the original error, if you don’t want your callers depending on what should be an implementation detail (thus effectively making it part of your API contract). However, lots of times you may want to give your callers the ability to introspect your errors and act on them. This then loses the opportunity to add context to the error, and so people calling your code have to do some mental gymnastics (and/or look at the implementation) to understand what an error really means.

A further problem for both these cases is that when debugging, you lose all knowledge of where an error came from. There’s no stack trace, there’s not even a file and line number of where the error originated. This can make debugging errors fairly difficult, unless you’re careful to make your error messages easy to grep for. I can’t tell you how often I’ve searched for an error formatting string, and hoped I was guessing the format correctly.

This is just the way it is in Go, so what’s a developer to do? Why, write an errors library that does smarter things of course! And there are a ton of these things out there. Many add a stack trace at error creation time. Most wrap an original error in some way, so you can add some context while keeping the original error for checks like os.IsNotExist. At Canonical, the Juju team wrote just such a library (actually we wrote 3 and then had them fight until only one was standing), and the result is https://github.com/juju/errors.

Thus you might return an error this way:

func (c Config) Save() error {
	if err := c.checkDefault(); err != nil {
		return errors.Annotatef(err, "can't find default config file")
	}
	// ...
}

This returns a new error created by the errors package which adds the given string to the front of the original error’s error message (just like fmt.Errorf), but you can introspect it using errors.Cause(err) to access the original error returned by checkDefault. Thus you can use os.IsNotExist(errors.Cause(err)) and it’ll do the right thing.
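Roughly, the calling code then looks something like this (cfg and the recovery step are just for illustration):

if err := cfg.Save(); err != nil {
	if os.IsNotExist(errors.Cause(err)) {
		// err.Error() still carries the "can't find default config file"
		// annotation, but errors.Cause hands back the original error from
		// checkDefault, so os.IsNotExist still recognizes it.
		log.Println("creating default config:", err)
	}
}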

However, this and every other special error library suffer from the same problem - your library can only understand its own special errors. And no one else’s code can understand your errors (because they won’t know to use errors.Cause before checking the error). Now you’re back to square one - your errors are just as opaque to third party code as if they were created by fmt.Errorf.

I don’t really have an answer to this problem. It’s inherent in the functionality (or lack thereof) of the standard Go error type.

Obviously, if you’re writing a standalone package for many other people to use, don’t use a third party error wrapping library. Your callers are likely not going to be using the same library, so they won’t get use out of it, and it adds unnecessary dependencies to your code. To decide between returning the original error and an annotated error using fmt.Errorf is harder. It’s hard to know when the information in the original error might be useful to your caller. On the other hand, the additional context added by fmt.Errorf can often change an inscrutable error into an obvious one.

If you’re writing an application where you’ll be controlling most of the packages being written, then an errors package may make sense… but you still run the risk of giving your custom errors to third party code that can’t understand them. Plus, any errors library adds some complexity to the code (for example, you always have to remember to call os.IsNotExist(errors.Cause(err)) rather than just calling os.IsNotExist(err)).

You have to choose one of the three options every time you return an error. Choose carefully. Sometimes you’re going to make a choice that makes your life more difficult down the road.

Take control of your commands with Deputy


image: creative commons, © MatsuRD

As a part of my work on Juju, I have published a new package at http://github.com/juju/deputy. I think it’ll be of general use to a lot of people.

True story. The idea was this package would be a lieutenant commander (get it?)… but I also knew I didn’t want to have to try to spell lieutenant correctly every time I used the package. So that’s why it’s called deputy. He’s the guy who’s not in charge, but does all the work.


At Juju, we run a lot of external processes using os/exec. However, the default functionality of an exec.Cmd object is kind of lacking. The most obvious one is those error returns “exit status 1”. Fantastic. Have you ever wished you could just have the stderr from the command as the error text? Well, now you can, with deputy.

func main() {
    d := deputy.Deputy{
        Errors:    deputy.FromStderr,
    }
    cmd := exec.Command("foo", "bar", "baz")
    err := d.Run(cmd)
    if err != nil {
        log.Fatal(err)
    }
}

In the above code, if the command run by Deputy exits with a non-zero exit status, deputy will capture the text output to stderr and convert that into the error text. e.g. if the command returned exit status 1 and output “Error: No such image or container: bar” to stderr, then the error’s Error() text would look like “exit status 1: Error: No such image or container: bar”. Bam, the errors from commands you run are infinitely more useful.


Another idiom we use is to pipe some of the output from a command to our logs. This can be super useful for debugging purposes. With deputy, this is again easy:

func main() {
    d := deputy.Deputy{
        Errors:    deputy.FromStderr,
        StdoutLog: func(b []byte) { log.Print(string(b)) },
    }
    cmd := exec.Command("foo", "bar", "baz")
    err := d.Run(cmd)
    if err != nil {
        log.Fatal(err)
    }
}

That’s it. Now every line written to stdout by the process will be piped as a log message to your log.


Finally, an idiom we don’t use often enough, but should, is to add a timeout to command execution. What happens if you run a command as part of your pipeline and that command hangs for 30 seconds, or 30 minutes, or forever? Do you just assume it’ll always finish in a reasonable time? Adding a timeout to running commands requires some tricky coding with goroutines, channels, selects, and killing the process… and deputy wraps all that up for you in a simple API:

func main() {
    d := deputy.Deputy{
        Errors:    deputy.FromStderr,
        StdoutLog: func(b []byte) { log.Print(string(b)) },
        Timeout:   time.Second * 10,
    }
    cmd := exec.Command("foo", "bar", "baz")
    err := d.Run(cmd)
    if err != nil {
        log.Fatal(err)
    }
}

The above code adds a 10 second timeout. After that time, if the process has not finished, it will be killed and an error returned.

That’s it. Give deputy a spin and let me know what you think.

Testing os/exec.Command

In Juju, we often have code that needs to run external executables. Testing this code is a nightmare… because you really don’t want to run those files on the dev’s machine or the CI machine. But mocking out os/exec is really hard. There’s no interface to replace, there’s no function to mock out and replace. In the end, your code calls the Run method on the exec.Cmd struct.

There’s a bunch of bad ways you can mock this out - you can write out scripts to disk with the right name and structure their contents to write out the correct data to stdout, stderr and return the right return code… but then you’re writing platform-specific code in your tests, which means you need a Windows version and a Linux version… It also means you’re writing shell scripts or Windows batch files or whatever, instead of writing Go. And we all know that we want our tests to be in Go, not shell scripts.

So what’s the answer? Well, it turns out, if you want to mock out exec.Command, the best place to look is in the exec package’s tests themselves. Lo and behold, it’s right there in the first function of exec_test.go

func helperCommand(t *testing.T, s ...string) *exec.Cmd {
    cs := []string{"-test.run=TestHelperProcess", "--"}
    cs = append(cs, s...)
    cmd := exec.Command(os.Args[0], cs...)
    cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
    return cmd
}

(one line elided for clarity)

What the heck is that doing? It’s pretty slick, so I’ll explain it.

First off, you have to understand how tests in Go work. When running go test, the go tool compiles an executable from your code, runs it, and passes it the flags you passed to go test. It’s that executable which actually handles the flags and runs the tests. Thus, while your tests are running, os.Args[0] is the name of the test executable.

This function is making an exec.Command that runs the test executable, and passes it the flag to tell the executable just to run a single test. It then terminates the argument list with -- and appends the command and arguments that would have been given to exec.Command to run your command.

The end result is that when you run the exec.Cmd that is returned, it will run the single test from this package called “TestHelperProcess” and os.Args will contain (after the --) the command and arguments from the original call.

The environment variable is there so that the test can know to do nothing unless that environment variable is set.

This is awesome for a few reasons:

  • It’s all Go code. No more needing to write shell scripts.
  • The code run in the executable is compiled with the rest of your test code. No more needing to worry about typos in the strings you’re writing to disk.
  • No need to create new files on disk - the executable is already there and runnable, by definition.

So, let’s use this in a real example to make it more clear.

In your production code, you can do something like this:

var execCommand = exec.Command

func RunDocker(container string) ([]byte, error) {
    cmd := execCommand("docker", "run", "-d", container)
    out, err := cmd.CombinedOutput()
    return out, err
}

Mocking this out in test code is now super easy:

func fakeExecCommand(command string, args...string) *exec.Cmd {
    cs := []string{"-test.run=TestHelperProcess", "--", command}
    cs = append(cs, args...)
    cmd := exec.Command(os.Args[0], cs...)
    cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
    return cmd
}

const dockerRunResult = "foo!"

func TestRunDocker(t *testing.T) {
    execCommand = fakeExecCommand
    defer func(){ execCommand = exec.Command }()
    out, err := RunDocker("docker/whalesay")
    if err != nil {
        t.Errorf("Expected nil error, got %#v", err)
    }
    if string(out) != dockerRunResult {
        t.Errorf("Expected %q, got %q", dockerRunResult, out)
    }
}

func TestHelperProcess(t *testing.T){
    if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
        return
    }
    // Exit(0) keeps the test binary from printing "PASS" after our fake output.
    defer os.Exit(0)
    // some code here to check arguments perhaps?
    fmt.Fprintf(os.Stdout, dockerRunResult)
}

Of course, you can do a lot more interesting things. The environment variables on the command that fakeExecCommand returns make a nice side channel for telling the executable what you want it to do. I use one to tell the process to exit with a non-zero error code, which is great for testing your error handling code. You can see how the standard library uses its TestHelperProcess test here.
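For instance, a variant of fakeExecCommand might set a made-up GO_HELPER_EXIT_CODE variable (the name is arbitrary; nothing in the standard library uses it):

func fakeFailingExecCommand(command string, args ...string) *exec.Cmd {
    cs := []string{"-test.run=TestHelperProcess", "--", command}
    cs = append(cs, args...)
    cmd := exec.Command(os.Args[0], cs...)
    cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1", "GO_HELPER_EXIT_CODE=2"}
    return cmd
}

TestHelperProcess can then read that variable and call os.Exit with the requested code, which lets you exercise the error path in RunDocker.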

Hopefully this will help you avoid writing really gnarly testing code (or even worse, not testing your code at all).

Sharing Godoc of a WIP Branch

I had a problem yesterday - I wanted to use the excellent godoc.org to show coworkers the godoc for the feature I was working on. However, the feature was on a branch of the main code in Github, and go get Does Not Work That Way™. So, what to do? Well, I figured out a hack to make it work.

https://gopkg.in is a super handy service that lets you point go get at branches of your repo named vN (e.g. v0, v1, etc). It also happens to work on tags. So, we can leverage this to get godoc.org to render the godoc for our WIP branch.

From your WIP branch, simply do

git tag v0
git push myremote v0

This creates a lightweight tag that only affects your repo (not upstream from whence you forked).

You now can point godoc at your branch by way of gopkg.in: https://godoc.org/gopkg.in/GithubUser/repo.v0

This will tell godoc to ‘go get’ your code from gopkg.in, and gopkg.in will redirect the command to your v0 tag, which is currently on your branch. Bam, now you have godoc for your WIP branch on godoc.org.

Later, the tag can easily be removed (and reused if needed) thusly:

git tag -d v0
git push myremote :refs/tags/v0

So, there you go, go forth and share your godoc. I find it’s a great way to get feedback on architecture before I dive into the reeds of the implementation.

Go Plugins are as Easy as Pie

When people hear that Go only supports static linking, one of the things they eventually realize is that they can’t have traditional plugins via dlls/libs (in compiled languages) or scripts (in interpreted languages). However, that doesn’t mean that you can’t have plugins. Some people suggest doing “compiled-in” plugins - but to me, that’s not a plugin, that’s just code. Some people suggest just running subprocesses and sending messages via their CLI, but that runs into CLI parsing issues and requires running a new process for every request. The last option people think of is using RPC to an external process, which may also seem cumbersome, but it doesn’t have to be.

Serving up some pie

I’d like to introduce you to https://github.com/natefinch/pie - this is a Go package which contains a toolkit for writing plugins in Go. It uses processes external to the main program as the plugins, and communicates with them via RPC over the plugin’s stdin and stdout. Having the plugin as an external process actually has several benefits:

  • If the plugin crashes, it won’t crash your process.
  • The plugin is not in your process’ memory space, so it can’t do anything nasty.
  • The plugin can be written in any language, not just Go.

I think this last point is actually the most valuable. One of the nicest things about Go applications is that they’re just copy-and-run. No one even needs to know they were written in Go. With plugins as external processes, this remains true. People wanting to extend your application can do so in the language of their choice, so long as it supports the codec your application has chosen for RPC.

The fact that the communication occurs over stdin and stdout means that there is no need to worry about negotiating ports, it’s easily cross platform compatible, and it’s very secure.
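To be clear, the code below is not pie’s API; it’s just a bare-bones sketch of the underlying idea, a plugin process serving net/rpc over its own stdin and stdout:

package main

import (
	"io"
	"net/rpc"
	"os"
)

// Greeter is the API this plugin exposes to the host process.
type Greeter struct{}

// Greet is an RPC method: args in, pointer reply out, error return.
func (Greeter) Greet(name string, reply *string) error {
	*reply = "Hello, " + name
	return nil
}

// stdio glues stdin and stdout into the io.ReadWriteCloser that
// rpc.ServeConn expects.
type stdio struct {
	io.Reader
	io.Writer
}

func (stdio) Close() error { return nil }

func main() {
	rpc.Register(Greeter{})
	rpc.ServeConn(stdio{os.Stdin, os.Stdout})
}

The host side starts this binary with os/exec, hands its stdin and stdout pipes to rpc.NewClient, and calls Greeter.Greet like any other RPC; that plumbing (plus codec choices) is essentially what pie packages up for you.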


Pie is written to be a very simple set of functions that help you set up communication between your process and a plugin process. Once you make a couple calls to pie, you then need to work out your own way to use the RPC connection created. Pie does not attempt to be an all-in-one plugin framework, though you could certainly use it as the basis for one.

Why is it called pie?

Because if you pronounce API like “a pie”, then all this consuming and serving of APIs becomes a lot more palatable. Also, pies are the ultimate pluggable interface - depending on what’s inside, you can get dinner, dessert, a snack, or even breakfast. Plus, then I get to say that plugins in Go are as easy as… well, you know.


I plan to be using pie in one of my own side projects. Take it out for a spin in one of your projects and let me know what you think. Happy eating!

Go Nitpicks

I saw this tweet last night:

I figured I’d answer it here about Go. Luckily, Go is a very small language, so there’s not a lot of surface area to dislike. However, there’s definitely some things I wish were different. Most of these are nitpicks, thus the title.

#1 Bare Returns

func foo() (i int, err error) {
    i, err = strconv.Atoi("5")
    return // wha??
}

For all that Go promotes readable and immediately understandable code, this seems like a ridiculous outlier. The way it works is that if you don’t declare what the function is returning, it’ll return the values stored in the named return variables. Which seems logical and handy, until you see a 100 line function with multiple branches and a single bare return at the bottom, with no idea what is actually getting returned.

To all gophers out there: don’t use bare returns. Ever.

#2 New

a := new(MyStruct)

New means “Create a zero value of the given type and return a pointer to it”. It’s sorta like the C++ new, which is probably why it exists. The problem is that it’s nearly useless. It’s mostly redundant with simply returning the address of a value thusly:

a := &MyStruct{}

The above is a lot easier to read, it also gives you the ability to populate the value you’re constructing (if you wish). The only time new is “useful” is if you want to initialize a pointer to a builtin (like a string or an int), because you can’t do this:

a := &int

but you can do this:

a := new(int)

Of course, you could always just do it in (gasp) two lines:

a := 0
b := &a

To all the gophers out there: don’t use new. Always use &Foo{} with structs, maps, and slices. Use the two line version for numbers and strings.

#3 Close

The close built-in function closes a channel. If the channel is already closed, close will panic. This pisses me off, because most of the time when I call close, I don’t actually care if it’s already closed. I just want to ensure that it’s closed. I’d much prefer if close returned a boolean that said whether or not it did anything, and then if I choose to panic, I can. Or, you know, not.
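In the meantime, one workaround is a tiny helper like this (a sync.Once around the close is another common approach); it trades a recover for the boolean I wish close returned:

// closeIfOpen closes ch and reports whether it actually did anything;
// if ch was already closed, the panic from close is swallowed.
func closeIfOpen(ch chan struct{}) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false
		}
	}()
	close(ch)
	return true
}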

#4 There is no 4

That’s basically it. There’s some things I think are necessary evils, like goto and panic. There’s some things that are necessary ugliness, like the built-in functions append, make, delete, etc. I sorta wish for x := range foo returned the value in x and not the index, but I get that it’s to be consistent between maps and slices, and returning the value in maps would be odd, I think.

All these are even below the level of nitpicks, though. They don’t bug me, really. I understand that everything in programming is a tradeoff, and I think the decisions made for Go were the right ones in these cases. Sometimes you need goto. Sometimes you need to panic. Making those functions built-ins rather than methods on the types means you don’t need any methods on the types, which keeps them simpler, and means they’re “just data”. It also means you don’t lose any functionality if you make new named types based on them.

So that’s my list for Go.


Someone on the twitter discussion mentioned he couldn’t think of anything he disliked about C#, which just about made me spit my coffee across the room. I programmed in C# for ~9 years, starting out porting some 1.1 code to 2.0, and leaving as 5.0 came out. The list of features in C# as of 5.0 is gigantic. Even being a developer writing in it 40+ hours a week for 9 years, there was still stuff I had to look up to remember how it worked.

I feel like my mastery of Go after a year of side projects was about equivalent to my mastery of C# after 9 years of full time development. If we assume 1:1 correlation between time to master and size of the language, an order of magnitude sounds about right.

Why Everyone Hates Go

Obviously, not everyone hates Go. But there was a quora question recently about why everyone criticizes Go so much. (sorry, I don’t normally post links to Quora, but it was the motivator for this post) Even before I saw the answers to the question, I knew what they’d consist of:

  • Go is a language stuck in the 70’s.
  • Go ignores 40 years of programming language research.
  • Go is a language for blue collar (mediocre) developers.
  • Gophers are ok with working in Java 1.0.

Unfortunately, the answers to the questions were more concerned with explaining why Go is “bad”, rather than why this gets under so many people’s skin.

When reading the answers I had a eureka moment, and I realized why it is. So here’s my answer to the same question. This is why Go is so heavily criticized, not why Go is “bad”.

There’s two awesome posts that inform my answer: Paul Graham’s post about keeping your identity small, and Kathy Sierra’s post about the Koolaid point. I encourage you to read those two posts, as they’re both very informative. I hesitate to compare the horrific things that happen to women online with the pedantry of flamewars about programming languages, but the Koolaid Point is such a valid metaphor that I wanted to link to the article.

Paul says

people can never have a fruitful argument about something that’s part of their identity

i.e. the subject hits too close to home, and their response becomes emotional rather than logical.

Kathy says

the hate wasn’t so much about the product/brand but that other people were falling for it.

i.e. they’d drunk the kool-aid.

Go is the only recent language that takes the aforementioned 40 years of programming language research and tosses it out the window. Other new languages at least try to keep up with the Joneses - Clojure, Scala, Rust - all try to incorporate “modern programming theory” into their design. Go actively tries not to. There is no pattern matching, there’s no borrowing, there’s no pure functional programming, there’s no immutable variables, there’s no option types, there’s no exceptions, there’s no classes, there’s no generics… there’s a lot Go doesn’t have. And in the beginning this was enough to merely earn it scorn. Even I am guilty of this. When I first heard about Go, I thought “What? No exceptions? Pass.”

But then something happened - people started using it. And liking it. And building big projects with it. This is the Koolaid-point - where people have started to drink the Koolaid and get fooled into thinking Go is a good language. And this is where the scorn turns into derision and attacks on the character of the people using it.

The most vocal Go detractors are those developers who write in ML-derived languages (Haskell, Rust, Scala, et al) who have tied their preferred programming language into their identity. The mere existence of Go says “your views on what makes a good programming language are wrong”. And the more people that use and like Go, the more strongly they feel that they’re being told their choice of programming language - and therefore their identity - is wrong.

Note that basically no one in the Go community actually says this. But the Go philosophy of simplicity and pragmatism above all else is the polar opposite of what those languages espouse (in which complexity in the language is ok because it enforces correctness in the code). This is insulting to the people who tie their identity to that language. Whenever a post on Go makes it to the front page of Hacker News, it is an affront to everything they hold dear, and so you get comments like Go developers are stuck in the 70’s, or is only for blue-collar devs.

So, this is why I think people are so much more vocal about their dislike of Go: because it challenges their identity, and other people are falling for it. This is also why these posts so often mention Google and how the language would have died without them. Google is now the koolaid dispenser. The fact that they are otherwise generally thought of as a very talented pool of developers means that it is simultaneously more outrageous that they are fooling people and more insulting that their language flies in the face of ML-derived languages.

Update: I removed the “panties in a bunch” comment, since I was (correctly) scolded for being sexist, not to mention unprofessional. My apologies to anyone I offended.

Deploy Discourse with Juju in 8 minutes

Steve Francia asked me to help him get Discourse deployed as a place for people to discuss Hugo, his static site generator (which is what I use to build this blog). If you don’t know Discourse, it’s pretty amazing forum software with community-driven moderation, all the modern features you expect (@mentions, SSO integration, deep email integration, realtime async updates, and a whole lot more). What I ended up deploying is now at discuss.gohugo.io.

I’d already played around with deploying Discourse about six months ago, so I already had an idea of what was involved. Given that I work on Juju as my day job, of course I decided to use Juju to deploy Discourse for Steve. This involved writing a Juju charm which is sort of like an install script, but with hooks for updating configuration and hooks for interacting with other services. I’ll talk about the process of writing the charm in a later post, but for now, all you need to know is that it follows the official install guide for installing Discourse.

The install guide says that you can install Discourse in 30 minutes. Following it took me a lot longer than that, due to some confusion about what the install guide really wanted you to do, and what the install really required. But you don’t need to know any of that to use Juju to install Discourse, and you can get it done in 8 minutes, not 30. Here’s how:

First, install Juju:

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update && sudo apt-get install -y juju-core

Now, Juju does not yet have a provider for Digital Ocean, so we have to use a plugin to get the machine created. We’re in the process of writing a provider for Digital Ocean, so soon the plugin won’t be necessary. If you use another cloud provider, such as AWS, Azure, HP Cloud, Joyent, or run your own Openstack or MAAS, you can easily configure Juju to use that service, and a couple of these steps will not be necessary. I’ll post separate steps for that later. But for now, let’s assume you’re using Digital Ocean.

Install the juju Digital Ocean plugin:

sudo apt-get install -y python-pip
pip install -U juju-docean

Get your Digital Ocean access info and set the client id in an environment variable called DO_CLIENT_ID and the API key in an environment variable called DO_API_KEY.

Juju requires access with an SSH key to the machines, so make sure you have one set up in your Digital Ocean account.

Now, let’s create a simple configuration so juju knows where you want to deploy your new environment.

juju init

Running juju init will create a boilerplate configuration file at ~/.juju/environments.yaml. We’ll append our digital ocean config at the bottom:

echo "    digitalocean:
        type: manual
        bootstrap-host: null
        bootstrap-user: root
" >> ~/.juju/environments.yaml

Note that this is yaml, so the spaces at the beginning of each line are important. Copy and paste should do the right thing, though.

Now we can start the real fun, let’s switch to the digitalocean environment we just configured, and create the first Juju machine in Digital Ocean:

juju switch digitalocean
juju docean bootstrap --constraints="mem=2g, region=nyc2"

(obviously replace the region with whatever one you want)

Now, it’ll take about a minute for the machine to come up.

Discourse requires email to function, so you need an account at mandrill, mailgun, etc. They’re free, so don’t worry. From that account you need to get some information to properly set up Discourse. You can do this after installing discourse, but it’s faster if you do it before and give the configuration at deploy time. (changing settings later will take a couple minutes while discourse reconfigures itself)

When you deploy discourse, you’re going to give it a configuration file, which will look something like this:

discourse:
  DISCOURSE_HOSTNAME: discuss.example.com
  DISCOURSE_DEVELOPER_EMAILS: foo@example.com,bar@example.com
  DISCOURSE_SMTP_ADDRESS: smtp.mailservice.com
  DISCOURSE_SMTP_USER_NAME: postmaster@example.com
  DISCOURSE_SMTP_PASSWORD: supersecretpassword
  UNICORN_WORKERS: 3

The first line must be the same as the name of the service you’re deploying. By default it’s “discourse”, so you don’t need to change it unless you’re deploying multiple copies of discourse to the same Juju environment. And remember, this is yaml, so those spaces at the beginning of the rest of the lines are important.

The rest should be pretty obvious. Hostname is the domain name where your site will be hosted. This is important, because discourse will send account activation emails, and the links will use that hostname. Developer emails are the email addresses of accounts that should get automatically promoted to admin when created. The rest is email-related stuff from your mail service account. Finally, unicorn workers should just stay 3 unless you’re deploying to a machine with less than 2GB of RAM, in which case set it to 2.

Ok, so now that you have this file somewhere on disk, we can deploy discourse. Don’t worry, it’s really easy. Just do this:

juju deploy cs:~natefinch/trusty/discourse --config path/to/configfile --to 0
juju expose discourse

That’s it. If you’re deploying to a 2GB Digital Ocean droplet, it’ll take about 7 minutes.

To check on the status of the charm deployment, you can do juju status, which will show, among other things “agent-state: pending” while the charm is being deployed. Or, if you want to watch the logs roll by, you can do juju debug-log.

Eventually juju status will show agent-state: started. Now grab the ip address listed at public address: in the same output and drop that into your browser. Bam! Welcome to Discourse.

If you ever need to change the configuration you set in the config file above, you can do that by editing the file and doing

juju set discourse --config=/path/to/config

Or, if you just want to tweak a few values, you can do

juju set discourse foo=bar baz=bat ...

Note that every time you call juju set, it’ll take a couple minutes for Discourse to reconfigure itself, so you don’t want to be doing this over and over if you can help it.

Now you’re on your own, and will have to consult the gurus at discourse.org if you have any problems. But don’t worry, since you deployed using Juju, which uses their official install instructions, your discourse install is just like the ones people deploy manually (albeit with a lot less time and trouble).

Good Luck!

Please let me know if you find any errors in this page, and I will fix them immediately.

Intro to TOML

TOML stands for Tom’s Obvious, Minimal Language. It is a configuration language vaguely similar to YAML or property lists, but far, far better. But before we get into it in detail, let’s look back at what came before.

Long Ago, In A Galaxy Far, Far Away

Since the beginning of computing, people have needed a way to configure their software. On Linux, this generally is done in text files. For simple configurations, good old foo = bar works pretty well. One setting per line, name on the left, value on the right, separated by an equals. Great. But when your configuration gets more complicated, this quickly breaks down. What if you need a value that is more than one line? How do you indicate a value should be parsed as a number instead of a string? How do you namespace related configuration values so you don’t need ridiculously long names to prevent collisions?

The Dark Ages

In the 90’s, we used XML. And it sucked. XML is verbose, it’s hard for humans to read and write, and it still doesn’t solve a lot of the problems above (like how to specify the type of a value). In addition, the XML spec is huge, processing is very complicated, and all the extra features invite abuse and overcomplication.


In the mid-2000s, JSON came to popularity as a data exchange format, and it was so much better than XML. It had real types, it was easy for programs to process, and you didn’t have to write a spec on what values should get processed in what way (well, mostly). It was significantly less verbose than XML. But it is a format intended for computers to read and write, not humans. It is a pain to write by hand, and even pretty-printed, it can be hard to read and the compact data format turns into a nested mess of curly braces. Also, JSON is not without its problems… for example, there’s no date type, there’s no support for comments, and all numbers are floats.

A False Start

YAML came to popularity some time after JSON as a more human-readable format, and its key: value syntax and pretty indentation is definitely a lot easier on the eyes than JSON’s nested curly-braces. However, YAML trades ease of reading for difficulty in writing. Indentation as delimiters is fraught with error… figuring out how to get multiple lines of data into any random value is an exercise in googling and trial & error.

The YAML spec is also ridiculously long. 100% compatible parsers are very difficult to write. Writing YAML by hand is riddled with landmines of corner cases where your choice of names or values happens to hit a reserved word or special marker. It does support comments, though.

The Savior

On February 23, 2013, Tom Preston-Werner (former CEO of GitHub) made his first commit to https://github.com/toml-lang/toml. TOML stands for Tom’s Obvious, Minimal Language. It is a language designed for configuring software. Finally.

TOML takes inspiration from all of the above (well, except XML) and even gets some of its syntax from Microsoft’s INI files. It is easy to write by hand and easy to read. The spec is short and understandable by mere humans, and it’s fairly easy for computers to parse. It supports comments, has first class dates, and supports both integers and floats. It is generally insensitive to whitespace, without requiring a ton of delimiters.

Let’s dive in.

The Basics

The basic form is key = value

# Comments start with hash
foo = "strings are in quotes and are always UTF8 with escape codes: \n \u00E9"

bar = """multi-line strings
use three quotes"""

baz = 'literal\strings\use\single\quotes'

bat = '''multiline\literals\use
three quotes'''

int = 5 # integers are just numbers
float = 5.0 # floats have a decimal point with numbers on both sides

date = 2006-05-27T07:32:00Z # dates are ISO 8601 full zulu form

bool = true # good old true and false

One cool point: If the first line of a multiline string (either literal or not) is a line return, it will be trimmed. So you can make your big blocks of text start on the line after the name of the value and not need to worry about the extraneous newline at the beginning of your text:

preamble = """
We the people of the United States, in order to form a more perfect union,
establish justice, insure domestic tranquility, provide for the common defense,
promote the general welfare, and secure the blessings of liberty to ourselves
and our posterity, do ordain and establish this Constitution for the United
States of America."""


Lists (arrays) are signified with brackets and delimited with commas. Only primitives are allowed in this form, though you may have nested lists. The format is forgiving, ignoring whitespace and newlines, and yes, the last comma is optional (thank you!):

foo = [ "bar", "baz" ]

nums = [ 1, 2, ]

nested = [[ "a", "b"], [1, 2]]

I love that the format is forgiving of whitespace and that last comma. I like that the arrays are all of a single type, but allowing mixed types of sub-arrays bugs the heck out of me.

Now we get crazy

What’s left? In JSON there are objects, in YAML there are associative arrays… in common parlance they are maps or dictionaries or hash tables. Named collections of key/value pairs.

In TOML they are called tables and look like this:

# some config above

[table_name]
foo = 1
bar = 2

Foo and bar are keys in the table called table_name. Tables have to be at the end of the config file. Why? because there’s no end delimiter. All keys under a table declaration are associated with that table, until a new table is declared or the end of the file. So declaring two tables looks like this:

# some config above

[table1]
foo = 1
bar = 2

[table2]
	foo = 1
	baz = 2

The declaration of table2 defines where table1 ends. Note that you can indent the values if you want, or not. TOML doesn’t care.

If you want nested tables, you can do that, too. It looks like this:

[table1]
	foo = "bar"

[table1.nested_table]
	baz = "bat"

nested_table is defined as a value in table1 because its name starts with table1.. Again, the table goes until the next table definition, so baz="bat" is a value in table1.nested_table. You can indent the nested table to make it more obvious, but again, all whitespace is optional:

[table1]
	foo = "bar"

	[table1.nested_table]
		baz = "bat"

This is equivalent to the JSON:

{
	"table1" : {
		"foo" : "bar",
		"nested_table" : {
			"baz" : "bat"
		}
	}
}

Having to retype the parent table name for each sub-table is kind of annoying, but I do like that it is very explicit. It also means that ordering and indenting and delimiters don’t matter. You don’t have to declare parent tables if they’re empty, so you can do something like this:

[foo.bar.baz]
bat = "hi"

Which is the equivalent to this JSON:

{
	"foo" : {
		"bar" : {
			"baz" : {
				"bat" : "hi"
			}
		}
	}
}

Last but not least

The last thing is arrays of tables, which are declared with double brackets thusly:

[[comments]]
author = "Nate"
text = "Great Article!"

[[comments]]
author = "Anonymous"
text = "Love it!"

This is equivalent to the JSON:

{
	"comments" : [
		{
			"author" : "Nate",
			"text" : "Great Article!"
		},
		{
			"author" : "Anonymous",
			"text" : "Love it!"
		}
	]
}

Arrays of tables inside another table get combined in the way you’d expect, like [[table1.array]].

TOML is very permissive here. Because all tables have very explicitly defined parentage, the order they’re defined in doesn’t matter. You can have tables (and entries in an array of tables) in whatever order you want. This is totally acceptable:

[[comments]]
author = "Anonymous"
text = "Love it!"

[foo.bar.baz]
bat = "hi"

[foo.bar]
howdy = "neighbor"

[[comments]]
author = "Anonymous"
text = "Love it!"

Of course, it generally makes sense to actually order things in a more organized fashion, but it’s nice that you can’t shoot yourself in the foot if you reorder things “incorrectly”.


That’s TOML. It’s pretty awesome.

There’s a list of parsers on the TOML page on github for pretty much whatever language you want. I recommend BurntSushi’s for Go, since it works just like the built-in parsers.
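A quick sketch of what that looks like with BurntSushi’s package; the struct fields and file name are just examples:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Config mirrors a small TOML file like the examples above.
type Config struct {
	Title string
	Owner struct {
		Name string
	}
}

func main() {
	var cfg Config
	if _, err := toml.DecodeFile("config.toml", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Title, cfg.Owner.Name)
}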

It is now my default configuration language for all the applications I write.

The next time you write an application that needs some configuration, take a look at TOML. I think your users will thank you.

Making It a Series

Series: Hugo 101

I obviously have a lot to talk about with Hugo, so I decided I wanted to make this into a series of posts, and have links at the bottom of each post automatically populated with the other posts in the series. This turned out to be somewhat of a challenge, but doable with some effort… hopefully someone else can learn from my work.

This now brings us to Taxonomies. Taxonomies are basically just like tags, except that you can have any number of different types of tags. So you might have “Tags” as a taxonomy, and thus you can give a content tags with values of “go” and “programming”. You can also have a taxonomy of “series” and give content a series of “Hugo 101”.

Taxonomy is sort of like relatable metadata to gather multiple pieces of content together in a structured way… it’s almost like a minimal relational database. Taxonomies are listed in your site’s metadata, and consist of a list of keys. Each piece of content can specify one or more values for those keys (the Hugo documentation calls the values “Terms”). The values are completely ad-hoc, and don’t need to be pre-defined anywhere. Hugo automatically creates pages where you can view all content based on Taxonomies and see how the various values are cross-referenced against other content. This is a way to implement tags on posts, or series of posts.

So, for my example, we add a Taxonomy to my site config called “series”. Then in this post, the “Hugo: Beyond the Defaults” post, and the “Hugo is Friggin’ Awesome” post, I just add series = ["Hugo 101"] (note the brackets - the values for the taxonomy are actually a list, even if you only have one value). Now all these posts are magically related together under a taxonomy called “series”. And Hugo automatically generates a listing for this taxonomy value at /series/hugo-101 (the taxonomy value gets url-ized). Any other series I make will be under a similar directory.

This is fine and dandy and pretty awesome out of the box… but I really want to automatically generate a list of posts in the series at the bottom of each post in the series. This is where things get tricky, but that’s also where things get interesting.

The examples for displaying Taxonomies all “hard code” the taxonomy value in the template… this works great if you know ahead of time what value you want to display, like “all posts with tag = ‘featured’”. However, it doesn’t work if you don’t know ahead of time what the taxonomy value will be (like the series on the current post).

This is doable, but it’s a little more complicated.

I’ll give you a dump of the relevant portion of my post template and then talk about how I got there:

{{ if .Params.series }}
    {{ $name := index .Params.series 0 }}
    <p><a href="" id="series"></a>This is a post in the
    <b>{{$name}}</b> series.<br/>
    Other posts in this series:</p>

    {{ $name := $name | urlize }}
    {{ $series := index .Site.Taxonomies.series $name }}
    <ul class="series">
    {{ range $series.Pages }}
        <li>{{.Date.Format "Jan 02, 2006"}} -
        <a href="{{.Permalink}}">{{.LinkTitle}}</a></li>
    {{ end }}
    </ul>
{{ end }}

So we start off defining this part of the template to only be used if the post has a series. Right, sure, move on.

Now, the tricky part… the taxonomy values for the current page reside in the .Params values, just like any other custom metadata you assign to the page.

Taxonomy values are always a list (so you can give things multiple tags etc), but I know that I’ll never give something more than one series, so I can just grab the first item from the list. To do that, I use the index function, which is just like calling series[0] and assign it to the $name variable.

Now another tricky part… the series in the metadata is in the pretty form you put into the metadata, but the list of Taxonomies in .Site.Taxonomies is in the urlized form… How did I figure that out? Printf debugging. Hugo’s auto-reloading makes it really easy to use the template itself to figure out what’s going on with the template and the data.

When I started writing this template, I just put {{$name}} in my post template after the line where I got the name, and I could see on the rendered webpage of my post that the name was “Hugo 101”. Then I put {{.Site.Taxonomies.series}} and I saw something like map[hugo-101:[{0 0xc20823e000} {0 0xc208048580} {0 0xc208372000}]] which is ugly, but it showed me that the value in the map is “hugo-101”… and I realized it was using the urlized version, so I used the predefined Hugo function urlize to convert the pretty series name.

And from there it’s just a matter of using index again, this time to use $name as a key in the map of series… .Site.Taxonomies is a map (dictionary) of Taxonomy names (like “series”) to maps of Taxonomy values (like “hugo-101”) to lists of pages. So, .Site.Taxonomies.series returns a map of series names to lists of pages… index that by the current series name, and bam, list of pages.

And then it’s just a matter of iterating over the pages and displaying them nicely. And what’s great is that this is now all automatic… all old posts get updated with links to the new posts in the series, and any new series I make, regardless of the name, will get the nice list of posts at the bottom for that series.

Hugo: Beyond the Defaults

Series: Hugo 101

In my last post, I had deployed what is almost the most basic Hugo site possible. The only reason it took more than 10 minutes is because I wanted to tweak the theme. However, there were a few things that immediately annoyed me.

I didn’t like having to type hugo -t hyde all the time. Well, turns out that’s not necessary. You can just put theme = "hyde" in your site config, and never need to type it again. Sweet. Now to run the local server, I can just run hugo server -w, and for final generation, I can just run hugo.

Next is that my posts were under npf.io/post/postname … which is not the end of the world, but I really like seeing the date in post URLs, so that it’s easy to tell if I’m looking at something really, really old. So, I went about looking at how to do that. Turns out, it’s trivial. Hugo has a feature called permalinks, where you can define the format of the url for a section (a section is a top level division of your site, denoted by a top level folder under content/). So, all you have to do is, in your site’s config file, put some config that looks like this:

[permalinks]
    post = "/:year/:month/:filename/"
    code = "/:filename/"

While we’re at it, I had been putting my code in the top level content directory, because I wanted it available at npf.io/projectname …. however there’s no need to do that, I can put the code under the code directory and just give it a permalink to show at the top level of the site. Bam, awesome, done.

One note: Don’t forget the slash at the end of the permalink.

But wait, this will move my “Hugo is Friggin’ Awesome” post to a different URL, and Steve Francia already tweeted about it with the old URL. I don’t want that url to send people to a 404 page! Aliases to the rescue. Aliases are just a way to make redirects from old URLs to new ones. So I just put aliases = ["/post/hugo-is-awesome/"] in the metadata at the top of that post, and now links to there will redirect to the new location. Awesome.

Ok, so cool… except that I don’t really want the content for my blog posts under content/post/ … I’d prefer them under content/blog, but still be of type “post”. So let’s change that too. This is pretty easy, just rename the folder from post to blog, and then set up an archetype to default the metadata under /blog/ to type = "post". Archetypes are default metadata for a section, so in this case, I make a file archetypes/blog.md and add type = "post" to the archetype’s metadata, and now all my content created with hugo new blog/foo.md will be prepopulated as type “post”. (does it matter if the type is post vs. blog? no. But it matters to me ;)

@mlafeldt on Twitter pointed out my RSS feed was wonky…. wait, I have an RSS feed? Yes, Hugo has that too. There are feed XML files automatically output for most listing directories… and the base feed for the site is a list of recent content. So, I looked at what Hugo had made for me (index.xml in the root output directory)… this is not too bad, but I don’t really like the title, and it’s including my code content in the feed as well as posts, which I don’t really want. Luckily, this is trivial to fix. The RSS xml file is output using a Go template just like everything else in the output. It’s trivial to adjust the template so that it only lists content of type “post”, and tweak the feed name, etc.
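
For example, the filtering part of that tweak can be as small as wrapping each feed item in a check on the content type. Here’s a rough sketch only; the surrounding markup and the exact variable names depend on the template Hugo generated for you, so adapt it to your own index.xml:

{{ range .Data.Pages }}{{ if eq .Type "post" }}
<item>
    <title>{{ .Title }}</title>
    <link>{{ .Permalink }}</link>
    <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" }}</pubDate>
    <description>{{ .Summary }}</description>
</item>
{{ end }}{{ end }}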

I was going to write about how I got the series stuff at the bottom of this page, but this post is long enough already, so I’ll just make that into its own post, as the next post in the series! :)

Hugo Is Friggin' Awesome

Series: Hugo 101

This blog is powered by Hugo, a static site generator written by Steve Francia (aka spf13). It is, of course, written in Go. It is pretty similar to Jekyll, in that you write markdown, run a little program (hugo) and html pages come out the other end in the form of a full static site. What’s different is that Jekyll is written in ruby and is relatively slow, and Hugo is written in Go and is super fast… only taking a few milliseconds to render each page.

Hugo includes a webserver to serve the content, which will regenerate the site automatically when you change your content. Your browser will update with the changes immediately, making your development cycle for a site a very tight loop.

The basic premise of Hugo is that your content is organized in a specific way on purpose. Folders of content and the name of the files combine to turn into the url at which they are hosted. For example, content/foo/bar/baz.md will be hosted at <site>/foo/bar/baz.

Every content file has a section of metadata at the top that allows you to specify information about the content, like the title, date, even arbitrary data for your specific site (for example, I have lists of badges that are shown on pages for code projects).

All the data in a content file is just that - data. Other than markdown specifying a rough view of your page, the actual way the content is viewed is completely separated from the data. Views are written in Go’s templating language, which is quick to pick up and easy to use if you’ve used other templating languages (or even if, like me, you haven’t). This lets you do things like iterate over all the entries in a menu and print them out in a ul/li block, or iterate over all the posts in your blog and display them on the main page.

You can learn more about Hugo by going to its site, which, of course, is built using Hugo.

The static content for this site is hosted on github pages at https://github.com/natefinch/natefinch.github.io. But the static content is relatively boring… that’s what you’re looking at in your browser right now. What’s interesting is the code behind it. That lives in a separate repo on github at https://github.com/natefinch/npf. This is where the markdown content and templates live.

Here’s how I have things set up locally… all open source code on my machine lives in my GOPATH (which is set to my HOME). So, it’s easy to find anything I have ever downloaded. Thus, the static site lives at $GOPATH/src/github.com/natefinch/natefinch.github.io and the markdown + templates lives in $GOPATH/src/github.com/natefinch/npf. I created a symbolic link under npf called public that points to the natefinch.github.io directory. This is the directory that hugo outputs the static site to by default… that way Hugo dumps the static content right into the correct directory for me to commit and push to github. I just had to add public to my .gitignore so everyone wouldn’t get confused.

Then, all I do is go to the npf directory, and run

hugo new post/urlofpost.md
hugo server --buildDrafts --watch -t hyde

That generates a new content item that’ll show up on my site under /post/urlofpost. Then it runs the local webserver so I can watch the content by pointing a browser at localhost:1313 on a second monitor as I edit the post in a text editor. hyde is the name of the theme I’m using, though I have modified it. Note that hugo will mark the content as a draft by default, so you need --buildDrafts for it to get rendered locally, and remember to delete the draft = true line in the page’s metadata when you’re ready to publish, or it won’t show up on your site.

When I’m satisfied, kill the server, and run

hugo -t hyde

to generate the final site output, switch into the public directory, and

git commit -am "some new post"

That’s it. Super easy, super fast, and no muss. Coming from Blogger, this is an amazingly better workflow with no wrestling with the WYSIWYG editor to make it display stuff in a reasonable fashion. Plus I can write posts 100% offline and publish them when I get back to civilization.

There’s a lot more to Hugo, and a lot more I want to do with the site, but that will come in time and with more posts :)

First Post

This is the first post of my new blog. You may (eventually) see old posts showing up behind here, those have been pulled in from my personal blog at blog.natefinch.com. I’ve decided to split off my programming posts so that people who only want to see the coding stuff don’t have to see my personal posts, and people that only want to see my personal stuff don’t have to get inundated with programming posts.

Right now the site is pretty basic, but I will add more features to it, such as post history etc.

CI for Windows Go Packages with AppVeyor

I recently needed to update my npipe package, and since I want it to be production quality, that means setting up CI, so that people using my package can know it’s passing tests.  Normally I’d use Travis CI or Drone.io for that, but npipe is a Windows-only Go package, and neither of the aforementioned services support running tests on Windows.

With some googling, I saw that Nathan Youngman had worked with AppVeyor to add Go support to their CI system.  The example on the blog talks about making a build.cmd file in your repo to enable Go builds, but I found that you can easily set up a Go build without having to put CI-specific files in your repo.

To get started with AppVeyor, just log into their site and tell it where to get your code (I logged in with Github, and it was easy to specify what repo of mine to test).  Once you choose the repo, go to the Settings page on AppVeyor for that repo.  Under the Environment tab on the left, set the clone directory to C:\GOPATH\src\<your import path> and set an environment variable called GOPATH to C:\GOPATH.  Under the build tab, set the build type to “SCRIPT” and the script type to “CMD”, and make the contents of the script

go get -v -d -t <your import path>/...
(this will download the dependencies for your package).  In the test tab, set the test type to “SCRIPT”, the script type to “CMD” and the script contents to
go test -v -cover ./...
 (this will run all the tests in verbose mode and also output the test coverage).

That’s pretty much it.  AppVeyor will automatically run a build on commits, like you’d expect.  You can watch the progress on a console output on their page, and get a pretty little badge from the badges page.  It’s free for open source projects, and seems relatively responsive from my admittedly limited experience.

This is a great boon for Go developers, so you can be sure your code builds and passes tests on Windows, with very little work to set it up.  I’m probably going to add this to all my production repos, even the ones that aren’t Windows-only, to ensure my code works well on Windows as well as Linux.

Intro to BoltDB: Painless Performant Persistence

BoltDB is a pure Go persistence solution that saves data to a memory mapped file. I call it a persistence solution and not a database, because the word database has a lot of baggage associated with it that doesn’t apply to bolt. And that lack of baggage is what makes bolt so awesome.

Bolt is just a Go package. There’s nothing you need to install on the system, no configuration to figure out before you can start coding, nothing. You just go get github.com/boltdb/bolt and then import "github.com/boltdb/bolt".

All you need to fully use bolt as storage is a file name. This is fantastic from both a developer’s point of view, and a user’s point of view. I don’t know about you, but I’ve spent months of work time over my career configuring and setting up databases and debugging configuration problems, users and permissions and all the other crap you get from more traditional databases like Postgres and Mongo. There’s none of that with bolt. No users, no setup, just a file name. This is also a boon for users of your application, because they don’t have to futz with all that crap either.

Bolt is not a relational database. It’s not even a document store, though you can sort of use it that way. It’s really just a key/value store… but don’t worry if you don’t really know what that means or how you’d use that for storage. It’s super simple and it’s incredibly flexible. Let’s take a look.

Storage in bolt is divided into buckets. A bucket is simply a named collection of key/value pairs, just like Go’s map. The name of the bucket, the keys, and the values are all of type []byte. Buckets can contain other buckets, also keyed by a []byte name.

… that’s it. No, really, that’s it. Bolt is basically a bunch of nested maps. And this simplicity is what makes it so easy to use. There’s no tables to set up, no schemas, no complex querying language to struggle with. Let’s look at a bolt hello world:

package main

import (
    "fmt"
    "log"

    "github.com/boltdb/bolt"
)

var world = []byte("world")

func main() {
    db, err := bolt.Open("/home/nate/foo/bolt.db", 0644, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    key := []byte("hello")
    value := []byte("Hello World!")

    // store some data
    err = db.Update(func(tx *bolt.Tx) error {
        bucket, err := tx.CreateBucketIfNotExists(world)
        if err != nil {
            return err
        }

        err = bucket.Put(key, value)
        if err != nil {
            return err
        }
        return nil
    })

    if err != nil {
        log.Fatal(err)
    }

    // retrieve the data
    err = db.View(func(tx *bolt.Tx) error {
        bucket := tx.Bucket(world)
        if bucket == nil {
            return fmt.Errorf("Bucket %q not found!", world)
        }

        val := bucket.Get(key)
        fmt.Printf("%s\n", val)

        return nil
    })

    if err != nil {
        log.Fatal(err)
    }
}

// output:
// Hello World!

I know what you’re thinking - that seems kinda long. But keep in mind, I fully handled all errors in at least a semi-proper way, and we’re doing all this:

1.) creating a database
2.) creating some structure (the “world” bucket)
3.) storing data to the structure
4.) retrieving data from the structure.

I think that’s not too bad in 54 lines of code.

So let’s look at what that example is really doing. First we call bolt.Open to get the database. This will create the file if necessary, or open it if it exists.

All reads from or writes to the bolt database must be done within a transaction. You can have as many Readers in read-only transactions at the same time as you want, but only one Writer in a writable transaction at a time (readers maintain a consistent view of the DB while writers are writing).

To begin, we call db.Update, which takes a function to which it’ll pass a bolt.Tx - bolt’s transaction object. We then create a Bucket (since all data in bolt lives in buckets), and add our key/value pair to it. After the write transaction finishes, we start a read-only transaction with DB.View, and get the values back out.

What’s great about bolt’s transaction mechanism is that it’s super simple - the scope of the function is the scope of the transaction. If the function passed to Update returns nil, all updates from the transaction are atomically stored to the database. If the function passed to Update returns an error, the transaction is rolled back. This makes bolt’s transactions completely intuitive from a Go developer’s point of view. You just exit early out of your function by returning an error as usual, and bolt Does The Right Thing. No need to worry about manually rolling back updates or anything, just return an error.

The only other basic thing you may need is to iterate over key/value pairs in a Bucket, in which case, you just call bucket.Cursor(), which returns a Cursor value, which has functions like Next(), Prev() etc that return a key/value pair and work like you’d expect.
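
For example, dumping every key/value pair in the “world” bucket from the hello world above is just this (a quick sketch, reusing the db from the example):

err = db.View(func(tx *bolt.Tx) error {
    c := tx.Bucket(world).Cursor()
    for k, v := c.First(); k != nil; k, v = c.Next() {
        fmt.Printf("%s = %s\n", k, v)
    }
    return nil
})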

There’s a lot more to the bolt API, but most of the rest of it is more about database statistics and some stuff for more advanced usage scenarios… but the above is all you really need to know to start storing data in a bolt database.

For a more complex application, just storing strings in the database may not be sufficient, but that’s ok, Go has your back there, too. You can easily use encoding/json or encoding/gob to serialize structs into the database, keyed by a unique name or id. This is what makes it easy for bolt to go from a key/value store to a document store - just have one bucket per document type. Again, the benefit of bolt is low barrier of entry. You don’t have to figure out a whole database schema or install anything to be able to just start dumping data to disk in a performant and manageable way.
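
For example, something like this sketch works fine (User, the “users” bucket, and the helper names are all made up for illustration, and the obvious encoding/json and bolt imports are assumed):

type User struct {
    Name  string
    Email string
}

func SaveUser(db *bolt.DB, u User) error {
    data, err := json.Marshal(u)
    if err != nil {
        return err
    }
    return db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("users"))
        if err != nil {
            return err
        }
        return b.Put([]byte(u.Name), data)
    })
}

func LoadUser(db *bolt.DB, name string) (*User, error) {
    var u User
    err := db.View(func(tx *bolt.Tx) error {
        b := tx.Bucket([]byte("users"))
        if b == nil {
            return fmt.Errorf("users bucket not found")
        }
        data := b.Get([]byte(name))
        if data == nil {
            return fmt.Errorf("user %q not found", name)
        }
        return json.Unmarshal(data, &u)
    })
    if err != nil {
        return nil, err
    }
    return &u, nil
}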

The main drawback of bolt is that there are no queries. You can’t say “give me all foo objects with a name that starts with bar”. You could make your own index in the database and keep it up to date manually. This could be as easy as a slice of IDs serialized into an “indices” bucket for a particular query. Obviously, this is where you start getting into the realm of developing your own relational database, but if you don’t go overboard, it can be nice that all this code is just that - code. It’s not queries in some external DSL, it’s just code like you’d write for an in-memory data store.
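
A sketch of what such a hand-rolled index might look like, keeping a JSON-encoded slice of IDs for one particular query in an “indices” bucket (the bucket name, key, and ID are made up):

err = db.Update(func(tx *bolt.Tx) error {
    idx, err := tx.CreateBucketIfNotExists([]byte("indices"))
    if err != nil {
        return err
    }
    var ids []string
    if data := idx.Get([]byte("users-named-bar")); data != nil {
        if err := json.Unmarshal(data, &ids); err != nil {
            return err
        }
    }
    ids = append(ids, "bar-the-third") // the ID of the object you just stored
    data, err := json.Marshal(ids)
    if err != nil {
        return err
    }
    return idx.Put([]byte("users-named-bar"), data)
})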

Bolt is not for every application. You must understand your application’s needs and if bolt’s key/value style will be sufficient to fulfill those needs. If it is, I think you’ll be very happy to use such a simple data store with so little mental overhead.

[edited to clarify reader/writer relationship] Bonus Gob vs. Json benchmark for storing structs in Bolt:

BenchmarkGobEncode 1000000 2191 ns/op
BenchmarkJsonEncode 500000 4738 ns/op
BenchmarkGobDecode 1000000 2019 ns/op
BenchmarkJsonDecode 200000 12993 ns/op
Code: http://play.golang.org/p/IvfDUGBpJ6

Autogenerate docs with this one dumb trick

Yesterday, I was trying to think of a way of automating some doc generation for my go packages. The specific task I wanted to automate was updating a badge in my package’s README to show the test coverage. What I wanted was a way to run go test -cover, parse the results, and put the result in the correct spot of my README. My first thought was to write an application that would do that for me … but then I’d have to run that instead of go test. What I realized I wanted was something that was “compatible with go test” - i.e. I want to run go test and not have to remember to run some special other command.

And that’s when it hit me: What is a test in Go? A test is a Go function that gets run when you run “go test”.  Nothing says your test has to actually test anything.  And nothing prevents your test from doing something permanent on your machine (in fact, we usually have to bend over backwards to make sure our tests don’t do anything permanent).  You can just write a test function that updates the docs for you.
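
To make that concrete, here’s a rough sketch of such a test. This is not covergen’s actual code; the badge format, file names, and guard environment variable are all made up for illustration:

// readme_test.go
package mypkg

import (
    "io/ioutil"
    "os"
    "os/exec"
    "regexp"
    "testing"
)

func TestUpdateCoverageBadge(t *testing.T) {
    if os.Getenv("COVERAGE_INNER_RUN") != "" {
        return // we're the nested run spawned below; don't recurse forever
    }
    // Re-run the package's tests with -cover; this is what doubles the test time.
    cmd := exec.Command("go", "test", "-cover")
    cmd.Env = append(os.Environ(), "COVERAGE_INNER_RUN=1")
    out, err := cmd.CombinedOutput()
    if err != nil {
        t.Fatalf("go test -cover failed: %v\n%s", err, out)
    }
    m := regexp.MustCompile(`coverage: (\d+\.\d+)% of statements`).FindSubmatch(out)
    if m == nil {
        t.Fatalf("no coverage figure in output:\n%s", out)
    }
    readme, err := ioutil.ReadFile("README.md")
    if err != nil {
        t.Fatal(err)
    }
    // Swap the number in a shields.io-style badge URL like ...coverage-85.7%25-green...
    badge := regexp.MustCompile(`coverage-\d+(\.\d+)?%25`)
    updated := badge.ReplaceAll(readme, []byte("coverage-"+string(m[1])+"%25"))
    if err := ioutil.WriteFile("README.md", updated, 0644); err != nil {
        t.Fatal(err)
    }
}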

I actually quite like this technique.  I often have some manual tasks after updating my code - usually updating the docs in the README with changes to the API, or changing the docs to show new CLI flags, etc.  And there’s one thing I always do after I update my code - and that’s run “go test”.  If that also updates my docs, all the better.

This is how covergen was born.  https://github.com/natefinch/covergen

Covergen is a particularly heinous example of a test that updates your docs.  The heinous part is that it actually doubles the time it takes to run your tests… this is because that one test re-runs all the tests with -cover to get the coverage percent.  I’m not sure I’d actually release real code that used such a thing - doubling the time it takes to run your tests just to save a few seconds of copy and paste is pretty terrible.

However, it’s a valid example of what you can do when you throw away testing convention and decide you want to write some code in a test that doesn’t actually test anything, and instead just runs some automated tasks that you want run whenever anyone runs go test.  Just make sure the result is idempotent so you’re not continually causing things to look modified to version control.

Diffing Go with Beyond Compare

I love Beyond Compare, it’s an awesome visual diff/merge tool.  It’s not free, but I don’t care, because it’s awesome.  However, there’s no built-in configuration for Go code, so I made one.  I’m not sure what the Venn diagram of Beyond Compare users and Go users looks like; it might be that I’m the one point of crossover, but just in case I’m not, here’s the configuration file for Beyond Compare 3 for the Go programming language: http://play.golang.org/p/G6NWE0z1GC  (please forgive the abuse of the Go playground)

Just copy the text into a file and in Beyond Compare, go to Tools->Import Settings… and choose the file.  Please let me know if you have any troubles or suggested improvements.

Intro++ to Go Interfaces

Standard Interface Intro

Go’s interfaces are one of its best features, but they’re also one of the most confusing for newbies. This post will try to give you the understanding you need to use Go’s interfaces and not get frustrated when things don’t work the way you expect. It’s a little long, but a bunch of that is just code examples.

Go’s interfaces are different from interfaces in other languages: they are implicitly fulfilled. This means that you never need to mark your type as explicitly implementing the interface (like class CFoo implements IFoo). Instead, your type just needs to have the methods defined in the interface, and the compiler does the rest.

For example:

type Walker interface {
    Walk(miles int)
}

type Camel struct {
    Name string
}

func (c Camel) Walk(miles int) {
    fmt.Printf("%s is walking %v miles\n", c.Name, miles)
}

func LongWalk(w Walker) {
    w.Walk(500)
    w.Walk(500)
}

func main() {
    c := Camel{"Bill"}
    LongWalk(c)
}

// prints
// Bill is walking 500 miles.
// Bill is walking 500 miles.


Camel implements the Walker interface, because it has a method named Walk that takes an int and doesn’t return anything. This means you can pass it into the LongWalk function, even though you never specified that your Camel is a Walker. In fact, Camel and Walker can live in totally different packages and never know about one another, and this will still work if a third package decides to make a Camel and pass it into LongWalk.

Non-Standard Continuation

This is where most tutorials stop, and where most questions and problems begin. The problem is that you still don’t know how the interfaces actually work, and since it’s not actually that complicated, let’s talk about that.

What actually happens when you pass Camel into LongWalk?

So, first off, you’re not passing Camel into LongWalk. You’re actually assigning c, a value of type Camel, to a value w of type Walker, and w is what you operate on in LongWalk.

Under the covers, the Walker interface (like all interfaces), would look more or less like this if it were in Go (the actual code is in C, so this is just a really rough approximation that is easier to read).

type Walker struct {
    type InterfaceType
    data *void
}

type InterfaceType struct {
    valtype *gotype
    func0 *func
    func1 *func
}

All interface values are just two pointers - one pointer to information about the interface type, and one pointer to the data from the value you passed into the interface (a void* in C-like languages… this should probably be Go’s unsafe.Pointer, but I liked the explicitness of two actual *’s in the struct to show it’s just two pointers).

The InterfaceType contains a pointer to information about the type of the value that you passed into the interface (valtype). It also contains pointers to the methods that are available on the interface.

When you assign c to w, the compiler generates instructions that looks more or less like this (it’s not actually generating Go, this is just an easier-to-read approximation):

data := c
w := Walker{
    type: &InterfaceType{
              valtype: &typeof(c),
              func0: &Camel.Walk
          },
    data: &data
}

When you assign your Camel value c to the Walker value w, the Camel type is copied into the interface value’s Type.valtype field. The actual data in the value of c is copied into a new place in memory, and w’s Data field points at that memory location.

Implications of the Implementation

Now, let’s look at the implications of this code. First, interface values are very small - just two pointers. When you assign a value to an interface, that value gets copied once, into the interface, but after that, it’s held in a pointer, so it doesn’t get copied again if you pass the interface around.

So now you know why you don’t need to pass around pointers to interfaces - they’re small anyway, so you don’t have to worry about copying the memory, plus they hold your data in a pointer, so changes to the data will travel with the interface.

Interfaces Are Types

Let’s look at Walker again, this is important:

type Walker interface

Note that first word there: type. Interfaces are types, just like string is a type or Camel is a type. They aren’t aliases, they’re not magic hand-waving, they’re real types and real values which are distinct from the type and value that gets assigned to them.

Now, let’s assume you have this function:

func LongWalkAll(walkers []Walker) {
    for _, w := range walkers {
        LongWalk(w)
    }
}

And let’s say you have a caravan of Camels that you want to send on a long walk:

caravan := []Camel{ Camel{"Bill"}, Camel{"Bob"}, Camel{"Steve"}}

You want to pass caravan into LongWalkAll, will the compiler let you? Nope. Why is that? Well, []Walker is a specific type, it’s a slice of values of type Walker. It’s not shorthand for “a slice of anything that matches the Walker interface”. It’s an actual distinct type, the way []string is different from []int. The Go compiler will output code to assign a single value of Camel to a single value of Walker. That’s the only place it’ll help you out. So, with slices, you have to do it yourself:

walkers := make([]Walker, len(caravan))
for n, c := range caravan {
    walkers[n] = c
}
LongWalkAll(walkers)
However, there’s a better way if you know you’ll just need the caravan for passing into LongWalkAll:

caravan := []Walker{ Camel{"Bill"}, Camel{"Bob"}, Camel{"Steve"}}

Note that this goes for any type which includes an interface as part of its definition: there’s no automatic conversion of your func(Camel) into func(Walker) or map[string]Camel into map[string]Walker. Again, they’re totally different types, they’re not shorthand, and they’re not aliases, and they’re not just a pattern for the compiler to match.
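
So, for example, converting a map of Camels takes the same kind of manual copy as the slice above:

camels := map[string]Camel{"bill": Camel{"Bill"}}
walkers := make(map[string]Walker, len(camels))
for name, c := range camels {
    walkers[name] = c // the Camel-to-Walker conversion still happens one value at a time
}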

Interfaces and the Pointers That Satisfy Them

What if Camel’s Walk method had this signature instead?

func (c *Camel) Walk(miles int)

This line says that the type *Camel has a function called Walk. This is important: *Camel is a type. It’s the “pointer to a Camel” type. It’s a distinct type from (non-pointer) Camel. The part about it being a pointer is part of its type. The Walk method is on the type *Camel. The Walk method (in this new incarnation) is not on the type Camel. This becomes important when you try to assign it to an interface.

c := Camel{"Bill"}
LongWalk(c)

// compiler output:
// cannot use c (type Camel) as type Walker in function argument:
//     Camel does not implement Walker (Walk method has pointer receiver)

To pass a Camel into LongWalk now, you need to pass in a pointer to a Camel:

c := &Camel{"Bill"}
LongWalk(c)

// or

c := Camel{"Bill"}
LongWalk(&c)

Note that this is true even though you can still call Walk directly on Camel:

c := Camel{"Bill"}
c.Walk(500) // this works

The reason you can do that is that the Go compiler automatically converts this line to (&c).Walk(500) for you. However, that doesn’t work for passing the value into an interface. The reason is that the value in an interface is in a hidden memory location, and so the compiler can’t automatically get a pointer to that memory for you (in Go parlance, this is known as being “not addressable”).

Nil Pointers and Nil Interfaces

The interaction between nil interfaces and nil pointers is where nearly everyone gets tripped up when they first start with Go.

Let’s say we have our Camel type with the Walk method defined on *Camel as above, and we want to make a function that returns a Walker that is actually a Camel (note that you don’t need a function to do this, you can just assign a *Camel to a Walker, but the function is a good illustrative example):

func MakeWalker() Walker {
    return &Camel{"Bill"}
}

w := MakeWalker()
if w != nil {
    w.Walk(500)  // we will hit this
}

This works fine. But now, what if we do something a little different:

func MakeWalker(c *Camel) Walker {
    return c
}

var c *Camel
w := MakeWalker(c)
if w != nil {
    // we’ll get in here, but why?
    w.Walk(500)
}

This code will also get inside the if statement (and then panic, which we’ll talk about in a bit) because the returned Walker value is not nil. How is that possible, if we returned a nil pointer? Well, let’s go look back to the instructions that get generated when we assign a value to an interface.

data := c
w := Walker{
    type: &InterfaceType{
              valtype: &typeof(c),
              func0: &Camel.Walk
          },
    data: &data
}

In this case, c is a nil pointer. However, that’s a perfectly valid value to assign to the Walker’s Data value, so it works just fine. What you return is a non-nil Walker value, that has a pointer to a nil *Camel as its data. So, of course, if you check w == nil, the answer is false, w is not nil… but then inside the if statement, we try to call Camel’s walk:

func (c *Camel) Walk(miles int) {
    fmt.Printf("%s is walking %v miles\n", c.Name, miles)
}

And when we try to do c.Name, Go automatically turns that into (*c).Name, and the code panics with a nil pointer dereference error.

Hopefully this makes sense, given our new understanding of how interfaces wrap values, but then how do you account for nil pointers? Assume you want MakeWalker to return a nil interface if it gets passed a nil Camel. You have to explicitly assign nil to the interface:

func MakeWalker(c *Camel) Walker {
    if c == nil {
        return nil
    }
    return c
}

var c *Camel
w := MakeWalker(c)
if w != nil {
    // Yay, we don’t get here!
}

And now, finally, the code is doing what we expect. When you pass in a nil *Camel, we return a nil interface. Here’s an alternate way to write the function:

func MakeWalker(c *Camel) Walker {
    var w Walker
    if c != nil {
        w = c
    }
    return w
}

This is slightly less optimal, but it shows the other way to get a nil interface, which is to use the zero value for the interface, which is nil.

Note that you can have a nil pointer value that satisfies an interface. You just need to be careful not to dereference the pointer in your methods. For example, if *Camel’s Walk method looked like this:

func (c *Camel) Walk(miles int) {
    fmt.Printf("I'm walking %d miles!", miles)
}

Note that this method does not dereference c, and therefore you can call it even if c is nil:

var c *Camel
c.Walk(500)
// prints "I'm walking 500 miles!"



I hope this article helps you better understand how interfaces work, and helps you avoid some of the common pitfalls and misconceptions newbies have about them. If you want more information about the internals of interfaces and some of the optimizations that I didn’t cover here, read Russ Cox’s article on Go interfaces; I highly recommend it.

Mocking functions in Go

Functions in Go are first-class citizens: that means you can have a variable that contains a function value, and call it like a regular function.

printf := fmt.Printf
printf("This will output %d line.\n", 1)

This ability can come in very handy for testing code that calls a function which is hard to properly test while testing the surrounding code.  In Juju, we occasionally use function variables to allow us to stub out a difficult function during tests, in order to more easily test the code that calls it.  Here’s a simplified example:

// in install/mongodb.go
package install

func SetupMongodb(path string) error {
    // suppose the code in this method modifies files in root
    // directories, mucks with the environment, etc…
    // Actions you actively don't want to do during most tests.
    return nil
}

// in startup/bootstrap.go
package startup

func Bootstrap() error {
    path := getPath()
    if err := install.SetupMongodb(path); err != nil {
        return err
    }
    // ...
    return nil
}

So, suppose you want to write a test for Bootstrap, but you know SetupMongodb won’t work, because the tests don’t run with root privileges (and you don’t want to setup mongodb on the dev’s machine anyway).  What can you do?  This is where mocking comes in.

We just make a little tweak to Bootstrap:
package startup

var setupMongo = install.SetupMongodb

func Bootstrap() error {
    path := getPath()
    if err := setupMongo(path); err != nil {
        return err
    }
    // ...
    return nil
}

Now if we want to test Bootstrap, we can mock out the setupMongo function thusly:
// in startup/bootstrap_test.go
package startup

type fakeSetup struct {
    path string
    err  error
}

func (f *fakeSetup) setup(path string) error {
    f.path = path
    return f.err
}

func TestBootstrap(t *testing.T) {
    f := &fakeSetup{err: errors.New("Failed!")}
    // this mocks out the function that Bootstrap() calls
    setupMongo = f.setup
    err := Bootstrap()
    if err != f.err {
        t.Errorf("Error from setupMongo not returned. Expected %v, got %v", f.err, err)
    }
    expPath := getPath()
    if f.path != expPath {
        t.Errorf("Path not correctly passed into setupMongo. Expected %q, got %q", expPath, f.path)
    }

    // and then try again with f.err == nil, you get the idea
}

Now we have full control over what happens in the setupMongo function: we can record the parameters that are passed into it, control what it returns, and test that Bootstrap is at least using the API of the function correctly.

Obviously, we need tests elsewhere for install.SetupMongodb to make sure it does the right thing, but those can be tests internal to the install package, which can use non-exported fields and functions to effectively test the logic that would be impossible from an external package (like the startup package). Using this mocking means that we don’t have to worry about setting up an environment that allows us to test SetupMongodb when we really only want to test Bootstrap.  We can just stub out the function and test that Bootstrap does everything correctly, and trust that SetupMongodb works because it’s tested in its own package.

Effective Godoc

I started to write a blog post about how to get the most out of godoc, with examples in a repo, and then realized I could just write the whole post as godoc on the repo, so that’s what I did.  Feel free to send pull requests if there’s anything you see that could be improved.

I actually learned quite a lot writing this article, by exploring all the nooks and crannies of Go’s documentation generation.  Hopefully you’ll learn something too.

Either view the documentation on godoc.org:

https://godoc.org/github.com/natefinch/godocgo

or view it locally using the godoc tool:

go get code.google.com/p/go.tools/cmd/godoc
go get github.com/natefinch/godocgo
godoc -http=:8080

Then open a browser to http://localhost:8080/pkg/github.com/natefinch/godocgo


Unused Variables in Go

The Go compiler treats unused variables as a compilation error. This causes much annoyance to some newbie Gophers, especially those used to writing languages that aren’t compiled, and want to be able to be fast and loose with their code while doing exploratory hacking.

The thing is, an unused variable is often a bug in your code, so pointing it out early can save you a lot of heartache.

Here’s an example:

50 func Connect(name, port string) error {
51     hostport := ""
52    if port == "" {
53        hostport := makeHost(name)
54        logger.Infof("No port specified, connecting on port 8080.")
55    } else {
56        hostport := makeHostPort(name, port)
57        logger.Infof("Connecting on port %s.", port)
58    }
59    // ... use hostport down here
60 }

Where’s the bug in the above? Without the compiler error, you’d run the code and have to figure out why hostport was always an empty string. Did we pass in empty strings by accident? Is there a bug in makeHost and makeHostPort?

With the compiler error, it will say “53, hostport declared and not used” and “56, hostport declared and not used”.

This makes it a lot more obvious what the problem is… inside the scope of the if statement, := declares new variables called hostport. These hide the variable from the outer scope, thus, the outer hostport never gets modified, which is what gets used further on in the function.

50 func Connect(name, port string) error {
51    hostport := ""
52    if port == "" {
53        hostport = makeHost(name)
54        logger.Infof("No port specified, connecting on port 8080.")
55    } else {
56        hostport = makeHostPort(name, port)
57        logger.Infof("Connecting on port %s.", port)
58    }
59    // ... use hostport down here
60 }

The above is the corrected code. It took only a few seconds to fix, thanks to the unused variable error from the compiler. If you’d been testing this by running it or even with unit tests… you’d probably end up spending a non-trivial amount of time trying to figure it out. And this is just a very simple example. This kind of problem can be a lot more elaborate and hard to find.

And that’s why the unused variable declaration error is actually a good thing. If a value is important enough to be assigned to a variable, it’s probably a bug if you’re not actually using that variable.

Bonus tip:

Note that if you don’t care about the variable, you can just assign it to the empty identifier directly:

_, err := computeMyVar()

This is the normal way to avoid the compiler error in cases where a function returns more than you need.

If you really want to silence the unused variable error and not remove the variable for some reason, this is the way to do it:

v, err := computeMyVar() 
_ = v  // this counts as using the variable 

Just don’t forget to clean it up before committing.

All of the above also goes for unused packages. And a similar tip for silencing that error:

_ = fmt.Printf // this counts as using the package

Go and Github

Francesc Campoy recently posted about how to work on someone else’s Go repo from github.  His description was correct, but I think there’s an easier way, and also one that might be slightly less confusing.

Let’s say you want to work on your own branch of github.com/natefinch/gocog - here’s the easiest way to do it:

  1. Fork github.com/natefinch/gocog on github
  2. mkdir -p $GOPATH/src/github.com/natefinch/gocog
  3. cd $GOPATH/src/github.com/natefinch/gocog
  4. git clone https://github.com/YOURNAME/gocog .
  5. (optional) go get github.com/natefinch/gocog

That’s it.  Now you can work on the code, push/pull etc from your github repo as normal, and submit a pull request when you’re done.

go get is useful for getting code that you want to use, but it’s not very useful for getting code that you want to work on.  It doesn’t set up source control.  git clone does.  What go get is handy for is getting the dependencies of a project, which is what step 5 does (only needed if the project relies on outside repos you don’t already have).  (thanks to a post on G+ for reminding me that git clone won’t get the dependencies)

Also note, the path on disk is the same as the original repo’s URL, not your branch’s URL.  That’s intentional, and it’s the key to making this work.  go get is the only thing that actually cares if the repo URL is the same as the path on disk.  Once the code is on disk, go build etc just expects import paths to be directories under $GOPATH.  The code expects to be under $GOPATH/src/github.com/natefinch/gocog because that’s what the import statements say it should be.  There’s no need to change import paths or anything wacky like that (though it does mean that you can’t have both the original version of the code and your branch coexisting in the same $GOPATH).

Note that this is actually the same procedure that you’d use to work on your own code from github, you just change step 1 to “create the repo in github”.  I prefer making the repo in github first because it lets me set up the license, the readme, and the .gitignore with just a few checkboxes, though obviously that’s optional if you want to hack locally first.  In that case, just make sure to set up the path under gopath where it would go if you used go get, so that go get will work correctly when you decide to push up to github.

(updated to mention using go get after git clone)

Go Tips for Newbie Gophers

This is just a collection of tips that would have saved me a lot of time if I had known about them when I was a newbie:

Build or test everything under the current directory and subdirectories:

go build ./...
go test ./...

Technically, both commands take a pattern to match the name of one or more packages, and the ... specifier is a wildcard, so you could do .../foo/... to match all packages under GOPATH with foo in their path.

Have an io.Writer that writes to an in-memory data structure:

b := &bytes.Buffer{}

Have an io.Reader read from a string (useful when you want to use a string as the input data for something):

r := strings.NewReader(myString)

Copy data from a reader to a writer:

io.Copy(toWriter, fromReader)

Timeout waiting on a channel:

select {
case val := <-ch:
    // use val
case <-time.After(time.Second * 5):
    // timed out
}

Convert a slice of bytes to a string:

var b []byte = getData()
s := string(b)

Passing a nil pointer into an interface does not result in a nil interface:

func isNil(i interface{}) bool {
    return i == nil
}

var f *foo = nil
fmt.Println(isNil(f))  // prints false

The only way to get a nil interface is to pass the keyword nil:

if f == nil {
    fmt.Println(isNil(nil))  // prints true
}

How to remember where the arrow goes for channels:

The arrow points in the direction of data flow, either into or out of the channel, and always points left.

The above is generalizable to anything where you have a source and destination, or reading and writing, or assigning.

Data is taken from the right and assigned to the left, just as it is with a := b.  So, like io.Copy, you know that the reader (source) is on the right, the writer (destination) is on the left:  io.Copy(dest, src).
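
In code:

ch <- val          // send: data flows from val (right) into the channel (left)
val := <-ch        // receive: data flows out of the channel (right) into val (left)
io.Copy(dst, src)  // same idea: destination on the left, source on the right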

If you ever think “man, someone should have made a helper function to do this!”, chances are they have, and it’s in the std lib somewhere.

Working at Canonical

I’ve been a developer at Canonical (working on Juju) for a little over 3 months, and I have to say, this is the best job I have ever had, bar none.

Let me tell you why.

1.) 100% work from home (minus ~2 one week trips per year)
2.) Get paid to write cool open source software.
3.) Work with smart people from all over the globe.

#1 can’t be overstated. This isn’t just “flex time” or “work from home when you want to”.  There is literally no office to go to for most people at Canonical.  Working at home is the default.  The difference is huge.  My last company let us work from home as much as we wanted, but most of the company worked from San Francisco… which means when there were meetings, 90% of the people were in the room, and the rest of us were on a crappy speakerphone straining to hear and having our questions ignored.  At Canonical, everyone is remote, so everyone works to make meetings and interactions work well online… and these days it’s easy with stuff like Google Hangouts and IRC and email and online bug tracking etc.

Canonical’s benefits don’t match Google’s or Facebook’s (you get the standard stuff, health insurance, 401k etc, just not the crazy stuff like caviar at lunch… unless of course you have caviar in the fridge at home).  However, I’m pretty sure the salaries are pretty comparable… and Google and Facebook don’t let you work 100% from home.  I’m pretty sure they barely let you work from home at all.  And that is a huge quality of life issue for me.  I don’t have to slog through traffic and public transportation to get to work.  I just roll out of bed, make some coffee, and sit down at my desk.  I get to see my family more, and I save money on transportation.

#2 makes a bigger difference than I expected.  Working on open source is like entering a whole different world.  I’d only worked on closed source before, and the difference is awesome.  There’s purposeful openness and inclusion of the community in our development.  Bug lists are public, and anyone can file one.  Mailing lists are public (for the most part) and anyone can get on them.  IRC channels are public, and anyone can ask questions directly to the developers.  It’s a really great feeling, and puts us so much closer to the community - the people that have perhaps an even bigger stake in the products we make than we do.  Not only that, but we write software for people like us.  Developers.  I am the target market, in most cases.  And that makes it easy to get excited about the work and easy to be proud of and show off what I do.

#3 The people.  I have people on my team from Germany, the UK, Malta, the UAE, Australia, and New Zealand.  It’s amazing working with people of such different backgrounds.  And when you don’t have to tie yourself down to hiring people within a 30 mile radius, you can afford to be more picky.  Canonical doesn’t skimp on the people, either.  I was surprised that nearly everyone on my team was 30+ (possibly all of them, I don’t actually know how old everyone is ;)  That’s a lot of experience to have on one team, and it’s so refreshing not to have to try to train the scrappy 20-somethings to value the things that come with experience (no offense to my old colleagues, you guys were great).

Put it all together, and it’s an amazing opportunity that I am exceedingly pleased to have been given.

60 Days with Ubuntu

At the end of July, I started a new job at Canonical, the makers of Ubuntu Linux.  Canonical employees mostly work from home, and use their own computer for work.  Thus, I would need to switch to Ubuntu from Windows on my personal laptop.  Windows has been my primary operating system for most of my 14-year career.  I’ve played around with Linux on the side a few times, running a mail server on Mandrake for a while… and I’ve worked with CentOS as a server for the software at my last job… but I wouldn’t say I was comfortable spending more than a few minutes on a Linux terminal before I yearned to friggin’ click something already… and I certainly hadn’t used it as my day to day machine.

Enter Ubuntu 13.04 Raring Ringtail, the latest and greatest Ubuntu release (pro-tip, the major version number is the year it was released, and the minor version number is the month, Canonical does two releases a year, in April and October, so they’re all .04 and .10, and the release names are alphabetical).

Installation on my 2 year old HP laptop was super easy.  Pop in the CD I had burned with Ubuntu on it, and boot up… installation is fully graphical, not too different from a Windows installation.  There were no problems installing, and only one cryptic prompt… do I want to use Logical Volume Management (LVM) for my drives?  This is the kind of question I hate.  There was no information about what in the heck LVM was, what the benefits or drawbacks are, and since it sounded like it could be a Big Deal, I wanted to make sure I didn’t pick the wrong thing and screw myself later.  Luckily I could ask a friend with Linux experience… but it really could have done with a “(Recommended)” tag, and a link for more information.

After installation, a dialog pops up asking if I want to use proprietary third party drivers for my video card (Nvidia) or open source drivers.  I’m given a list of several proprietary drivers and an open source driver.  Again, I don’t know what the right answer is, I just want a driver that works, I don’t care if it’s proprietary or not (sorry, OSS folks, it’s true).  However, trying to be a good citizen, I pick the open source one and…. well, it doesn’t work well at all.  I honestly forget exactly what problems I had, but they were severe enough that I had to go figure out how to reopen that dialog and choose the Nvidia proprietary drivers.

Honestly, the most major hurdle in using Ubuntu has been getting used to having the minimize, maximize, and close buttons in the upper left of the window, instead of the upper right.

In the first week of using Ubuntu I realized something - 99% of my home use of a computer is in a web browser… the OS doesn’t matter at all.  There’s actually very little I use native applications for outside of work.  So, the transition was exceedingly painless.  I installed Chrome, and that was it, I was back in my comfortable world of the browser.

Linux has come a long way in the decade since I last used it.  It’s no longer the OS that requires you to drop into a terminal to do everyday things.  There are UIs for pretty much everything that are just as easy to use as the ones in Windows, so things like configuring monitors, networking, printers, etc all work pretty much like they do in Windows.

So what problems did I have?  Well, my scanner doesn’t work.  I went to get drivers for it, and there are third party scanner drivers, but they didn’t work.  But honestly, scanners are pretty touch and go in Windows, too, so I’m not terribly surprised.  All my peripherals worked (monitors, mouse, keyboard, etc), and even my wireless printer worked right away.  However, later on, my printer stopped working.  I don’t know exactly why, I had been messing with the firewall in Linux, and so it may have been my fault.  I’m talking to Canonical tech support about it, so hopefully they’ll be able to help me fix it.

Overall, I am very happy with using Linux as my every day operating system.  There’s very few drawbacks for me.  Most Windows software has a corresponding Linux counterpart, and now even Steam games are coming to Linux, so there’s really very little reason not to make the switch if you’re interested.

Statically typed generic data structures in Go

I gave a talk at the Go Boston meetup last night and figured I should write it up and put it here.

The second thing everyone says when they read up on Go is “There are no generics!”.

(The first thing people say is “There are no exceptions!”)

Both are only mostly true,  but we’re only going to talk about generics today.

Go has generic built-in data structures - arrays, slices, maps, and channels. You just can’t create your own new type, and you can’t create generic functions. So, what’s a programmer to do? Find another language?

No. Many, possibly even most, problems can be solved with the built-in data structures. You can write pretty huge applications just using maps and slices and the occasional channel. There may be a tiny bit of code duplication, but probably not much, and certainly not any tricky code.

However, there definitely are times when you need more complicated data structures. Most people writing Go solve this problem by using interface{}, the empty interface, which is basically like Object in C# or Java or void * in C/C++.  It’s a thing that can hold any type… but then you need a type assertion to get at the actual type. This breaks static typing, since the compiler can’t tell if you make a mistake and pass the wrong type into something that takes an interface{}, and it can’t tell until runtime if a type assertion will succeed or not.
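
For example (a made-up snippet just to show the failure mode):

func PrintStrings(vals []interface{}) {
    for _, v := range vals {
        fmt.Println(v.(string)) // compiles no matter what's in vals...
    }
}

PrintStrings([]interface{}{"a", "b", 3}) // ...but panics at runtime when it hits the 3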

So, is there any solution? Yes. The inspiration comes from the standard library’s sort package. Package sort can sort a slice of any type, it can even sort things that aren’t slices, if you’ve made your own custom data structure. How does it do that? To sort something, it must support the methods on sort.Interface. Most interesting is Less(i, j int). Less returns true if the item at index i in your data structure is Less than the object at index j in your data structure. Your code has to implement what “Less” means… and by only using indices, sort doesn’t need to know the types of objects held in your data structure. 

This use of indices to blindly access data in a separate data structure is how we’ll implement our strongly typed tree. The tree structure will hold an index as its data value in each node, and the indices will index into a data structure that holds the actual objects. To make a tree of a new type, you simply implement a Compare function that the tree can use to compare the values at two indices in your data structure. You can use whatever data structure you like, probably a slice or a map, as long as you can use integers to reference values in the data structure.

In this way we separate the organization of the data from the storage of the data. The tree structure holds the organization, a slice or map (or something custom) stores the data. The indices are the generic pointers into the storage that holds the actual strongly typed values.
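
Here’s a minimal sketch of the idea (this is not the code from my repo, just an illustration): the tree stores only indices, and the compare function you hand it knows how to look up and order the real values.

// tree.go - a sketch of an index-based binary search tree.
package tree

// Compare reports whether the value stored at index i sorts before the value at index j.
type Compare func(i, j int) bool

type node struct {
    idx         int
    left, right *node
}

// Tree organizes indices into someone else's storage; it never sees the values themselves.
type Tree struct {
    root *node
    less Compare
}

func New(less Compare) *Tree {
    return &Tree{less: less}
}

// Insert records the index of a value that already lives in the caller's slice or map.
func (t *Tree) Insert(idx int) {
    t.root = insert(t.root, idx, t.less)
}

func insert(n *node, idx int, less Compare) *node {
    if n == nil {
        return &node{idx: idx}
    }
    if less(idx, n.idx) {
        n.left = insert(n.left, idx, less)
    } else {
        n.right = insert(n.right, idx, less)
    }
    return n
}

// InOrder calls visit with each stored index in sorted order.
func (t *Tree) InOrder(visit func(idx int)) {
    walk(t.root, visit)
}

func walk(n *node, visit func(idx int)) {
    if n == nil {
        return
    }
    walk(n.left, visit)
    visit(n.idx)
    walk(n.right, visit)
}

Using it with a slice of strings then looks something like this:

names := []string{"steve", "bill", "bob"}
t := tree.New(func(i, j int) bool { return names[i] < names[j] })
for i := range names {
    t.Insert(i)
}
t.InOrder(func(i int) { fmt.Println(names[i]) }) // bill, bob, steve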

This does require a little code for each new tree type, just as using package sort requires a little code for each type. However, it’s only a few lines for a few functions, wrapping a tree and your data. 

You can check out an example binary search tree I wrote that uses this technique in my github account


or go get the runnable sample tree:

go get github.com/natefinch/treesample

This required only 36 lines of code to make the actual tree structure (including empty lines and comments).

In some simple benchmarks, this implementation of a tree is about 25% faster than using the same code with interface{} as the values and type-asserting at runtime… plus it’s strongly typed.

Go is for Open Source

The Go programming language is built from the ground up to implicitly encourage Go projects to be open source. If you want your project not only to contribute to open source, but to encourage other people to write open source code, Go is a great language to choose.

Let’s look at how Go does this. These first two points are overly obvious, but we should get them out of the way.

The language is open source

You can go look at the source code for the language, the compilers, and the build tools for the language. It’s a fully open source project. Even though a lot of the work is being done by Google engineers, there are hundreds of names on the list of contributors of people who are not Google employees.

The standard library is open source

Want to see high quality example code? Look at the code in the standard library. It has been carefully reviewed to be of the best quality, and in canonical Go style. Reading the standard library is a great way to learn the best ways to use and write Go.

Ok, that’s great, but what about all the code that isn’t part of Go itself?
The design of Go really shows its embrace of open source in how third party code is used in day to day projects.

Go makes it trivial to use someone else’s code in your project

Go has support for distributed version control built in from the ground up. If you want to use a package from github, for example, you just specify the URL in your imports, as if it were a local package:

import (
	"bytes"               // std lib package
	"github.com/fake/foo" // 3rd party package
)

You don’t have to go find and download fake/foo from github and put it in a special directory or anything. Just run “go get github.com/fake/foo”. Go will then download, build, and install the code, so that you can reference it… nicely stored in a directory defined by the URL, in this case $GOPATH/src/github.com/fake/foo. Go will even figure out what source control system is used on the other side so you don’t have to (support for git, svn, mercurial, and bazaar).

What’s even better is that the auto-download happens for anyone who calls “go get” on your code repository. No more giving long drawn-out installation instructions about getting half a dozen 3rd party libraries first. If someone wants your code, they type “go get path.to/your/code”, and Go will download your code, and any remote imports you have (like the one for github above), any remote imports that code has, etc, and then builds everything.

The fact that this is available from the command line tools that come with the language makes it the de facto standard for how all Go code is written. There’s no fragmentation in the community about how packages are stored, accessed, used, etc. This means zero overhead for using third party code, it’s as easy to use as if it were built into the Go standard library.

Sharing code is the default

Like most scripting languages (and unlike many compiled languages), using source code from another project is the default way to use third party code in Go. Go creates a monolithic executable during its build, so there are no DLLs to create and distribute in the way you often see with other compiled languages. In theory you could distribute the compiled .a files from your project for other people to link to in their project, but this is not encouraged by the tooling, and I’ve personally never seen anyone do it.

All Go code uses the same style

Have you ever gone to read the source for a project you’d like to contribute to, and had your eyes cross over at the bizarre formatting the authors used? That almost never happens with Go. Go comes with a code formatting tool called gofmt that automatically formats Go code to the same style. The use of gofmt is strongly encouraged in the Go community, and nearly everyone uses it. Most text editors have an extension to automatically format your code with gofmt on save, so you don’t even have to think about it. You never have to worry about having a poorly formatted library to work with… and in the very rare situation where you do, you can just run it through gofmt and you’re good to go.

Easy cross platform support

Go makes it easy to support multiple platforms. The tooling can create native binaries for any popular operating system from the same source on a single machine. If you need platform-specific code, it’s easy to specify code that only gets compiled for a single platform, simply by appending _<os> to a file name, e.g. path_windows.go will only be compiled for builds targeting Windows.
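
For example, a hypothetical pair of files (the package and function names are made up purely to illustrate the file-suffix convention) might provide the same function with different bodies, and the go tool picks whichever matches the build target:

// path_windows.go - compiled only when targeting Windows.
package paths

// DefaultConfigDir returns the platform's conventional config location.
func DefaultConfigDir() string { return `C:\ProgramData\myapp` }

// path_linux.go - compiled only when targeting Linux.
package paths

// DefaultConfigDir returns the platform's conventional config location.
func DefaultConfigDir() string { return "/etc/myapp" }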

Built-in documentation and testing

Go comes with a documentation generator that produces HTML or plain text from minimally formatted comments in the code. It also comes with a standard testing package that can run unit tests, performance benchmarks, and runnable example code. Because this is all available in the standard library and with the standard tools, nearly everyone uses it… which means it’s easy to look at the documentation for any random Go package, and easy to check whether the tests pass, without having to go install some third party support tool. Because it’s all standardized, several popular websites have popped up to automate generating (and hosting) the documentation for your project, and you can easily run continuous integration on your package, with only a single line in the setup script - “language: go”.
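
As a tiny, hypothetical sketch (the greet package and Hello function are made up), these plain comments are all the documentation generator needs:

// Package greet provides a trivial greeting helper. This comment, and
// the one on Hello below, are all the documentation generator needs to
// produce the package's HTML or plain text docs.
package greet

import "fmt"

// Hello returns a greeting for the given name.
func Hello(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}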


Everything about Go encourages standardization and openness… which not only makes it possible to use other people’s code, it makes it easy to use other people’s code. I hope to see Go blossom as a language embraced by the open source community, as they discover the strengths that make it uniquely qualified for open source projects.

What I love about Go

The best things about Go have nothing to do with the language.

Single Executable Output

Go compiles into a single executable that runs natively on the target OS. No more needing to install java, .net, mono, python, ruby, whatever. Here’s your executable, feel free to run it like a normal person.  And you can target builds for any major OS (windows, linux, OSX, BSD).

One True Coding Style

gofmt is a tool that comes with Go and formats your source code in the standard Go style. No more arguing about spacing or brace matching or whatever. There is one true format, and now we can all move on… and even better, many editors integrate gofmt so that your code can be automatically formatted whenever you save.

Integrated Testing

Testing is integrated into the language. Name a file with the suffix _test.go and it’ll only be built under test. You run tests simply by running “go test” in the directory. You can also define runnable example code whose output is checked at test time. This example code is then included in the documentation (see below)… now you’ll never have examples in documentation with errors in them. Finally, you can write built-in benchmarks that are controlled by the go tool to automatically run enough iterations to get a significant result, reported as the time taken per operation.
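
Here’s a hypothetical _test.go file showing all three kinds of functions side by side; the strings calls are from the standard library, everything else is made up for illustration:

package demo

import (
	"fmt"
	"strings"
	"testing"
)

// TestUpper is a plain unit test, run by "go test".
func TestUpper(t *testing.T) {
	if got := strings.ToUpper("go"); got != "GO" {
		t.Errorf("ToUpper(%q) = %q, want %q", "go", got, "GO")
	}
}

// BenchmarkUpper is run by "go test -bench ."; the tool picks b.N
// automatically to get a statistically significant timing.
func BenchmarkUpper(b *testing.B) {
	for i := 0; i < b.N; i++ {
		strings.ToUpper("go")
	}
}

// Example is runnable example code; its Output comment is checked at
// test time and included in the generated documentation.
func Example() {
	fmt.Println(strings.ToUpper("go"))
	// Output: GO
}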

Integrated Documentation

HTML documentation is built into the language. No need for ugly HTML in your source or weirdly formatted comments. Plaintext comments are turned into very legible documentation, and see above for examples that actually run and can have their output tested as a part of the tests.


Integrated Package Management

Support for distributed version control is built into the language. Want to reference code from a project on github? Just use the URL of the project as the import path in your code, e.g. import “github.com/jsmith/foo”. When you build your code, it’ll get downloaded and built automatically.

Want to get a tool written in Go? From the command line, type “go get github.com/jsmith/bar” - Go will download the source, build it, and install the executable in your path. Now you can run bar.

Any git, SVN, mercurial, or bazaar repository will work, but all the major public source code sites are supported out of the box - github, bitbucket, google code, and launchpad.

Other Cool Stuff

Debugging with gdb
Integrated profiling tools
Easy to include files only for a targeted OS/architecture (a simple _windows suffix means the file only builds when targeting Windows)
Integrated code parsers and lexers.

Do you even care about the actual language anymore?  I wouldn’t.  But just in case:


I recently got very enamored with Go, and decided that I needed to write a real program with it to properly get up to speed. One thing came to mind after reading a lot on the Go mailing list: a code generator.

I had worked with Ned Batchelder at a now-defunct startup, where he developed cog.py. I figured I could do something pretty similar with Go, except, I could do one better - Go generates native executables, which means you can run it without needing any specific programming framework installed, and you can run it on any major operating system. Also, I could construct it so that gocog supports any programming language embedded in the file, so long as it can be run via command line.

Thus was born gocog - https://github.com/natefinch/gocog

Gocog runs very similarly to cog.py - you give it files to look at, and it reads the files looking for specially tagged embedded code (generally in comments of the actual text). Gocog extracts the code, runs it, and rewrites the file with the output of the code embedded.

Thus you can do something like this in a file called test.html:

<!-- [[[gocog
print "<b>Hello World!</b>"
gocog]]] -->
<!-- [[[end]]] -->

If you run gocog over the file, specifying python as the command to run:

gocog test.html -cmd python -args %s -ext .py

This tells gocog to extract the code from test.html into a file with the .py extension, and then run python <filename> and pipe the output back into the file.

This is what test.html looks like after running gocog:

<!-- [[[gocog
print "<b>Hello World!</b>"
gocog]]] -->
<b>Hello World!</b>
<!-- [[[end]]] -->

Note that the generator code still exists in the file, so you can always rerun gocog to update the generated text.  

By default gocog assumes you’re running embedded Go in the file (hey, I wrote it, I’m allowed to be biased), but you can specify any command line tool to run the code - python, ruby, perl, even compiled languages if you have a command line tool to compile and run them in a single step (I know of one for C# at least).

“Ok”, you’re saying to yourself, “but what would I really do with it?”  Well, it can be really useful for reducing copy and paste or recreating boilerplate. Ned and I used it to keep a schema of properties in sync over several different projects. Someone on Golang-nuts emailed me and is using it to generate boilerplate for CGo enum properties in Go.

Gocog’s source code actually uses gocog - I embed the usage text into three different spots for documentation purposes - two in regular Go comments and one in a markdown file. I also use gocog to generate a timestamp in the code that gets displayed with the version information.

You don’t need to know Go to run Gocog, it’s just an executable that anyone can run, without any prerequisites.  You can download the binaries of the latest build from the gocog wiki here: https://github.com/natefinch/gocog/wiki

Feel free to submit an issue if you find a bug or would like to request a feature.

Go Win Stuff

No, not contests, golang (the programming language), and Win as in Windows.

Quick background - Recently I started writing a MUD in Go for the purposes of learning Go, and writing something that is non-trivial to code.  MUDs are particularly suited to Go, since they are entirely server based, are text-based, and are highly concurrent and parallel problems (which is to say, you have a whole bunch of people doing stuff all at the same time on the server). 

Anyway, after getting a pretty good prototype of the MUD up and running (which was quite fun), I started thinking about using Go for some scripty things that I want to do at work. There’s a bit of a hitch, though… the docs on working in Windows are not very good.  In fact, if you look at golang.org, they’re actually non-existent.  This is because the syscall package changes based on what OS you’re running on, and (not surprisingly) Google’s public golang site is not running on Windows.

So, anyway, a couple notes here on Windowy things that you (I) might want to do with Go:

Open the default browser with a given URL:

import (
	"os/exec"
)

// OpenBrowser asks Windows to open the URL with its default handler,
// which launches the user's default browser.
func OpenBrowser(url string) {
	exec.Command("rundll32", "url.dll", "FileProtocolHandler", url).Start()
}
Example of a wrapper for syscall’s Windows Registry functions:

import (
	"syscall"
	"unsafe"
)

// ReadRegString reads a string value from the Windows registry.
func ReadRegString(hive syscall.Handle, subKeyPath, valueName string) (value string, err error) {
	var h syscall.Handle
	err = syscall.RegOpenKeyEx(hive, syscall.StringToUTF16Ptr(subKeyPath), 0, syscall.KEY_READ, &h)
	if err != nil {
		return "", err
	}
	defer syscall.RegCloseKey(h)

	var typ uint32
	var bufSize uint32

	// First call with a nil buffer just reports how big the value is.
	err = syscall.RegQueryValueEx(h, syscall.StringToUTF16Ptr(valueName), nil, &typ, nil, &bufSize)
	if err != nil {
		return "", err
	}

	data := make([]uint16, bufSize/2+1)

	// Second call fills the buffer with the actual UTF-16 data.
	err = syscall.RegQueryValueEx(h, syscall.StringToUTF16Ptr(valueName), nil, &typ, (*byte)(unsafe.Pointer(&data[0])), &bufSize)
	if err != nil {
		return "", err
	}

	return syscall.UTF16ToString(data), nil
}
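
And a quick usage sketch - the key and value name here are just example data to read, and the snippet assumes “fmt” and “log” are added to the import block above:

func main() {
	// Read the Windows product name from the registry.
	name, err := ReadRegString(syscall.HKEY_LOCAL_MACHINE,
		`SOFTWARE\Microsoft\Windows NT\CurrentVersion`, "ProductName")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(name)
}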