
Testing in Go by example: Part 2

Tests that aren't easy to execute will be ignored.
February 27, 2015

Here's part 2 of our "Testing in Go" series. If you're new, feel free to catch up with part 1 before reading on.


Basics

You've already learned how to execute tests in Go for a single package.

$ go test
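
As a quick refresher, go test compiles and runs any *_test.go files in the package in your current directory. A contrived example (the sum package and Sum function below are made up purely for illustration):

// sum_test.go -- go test picks up any file ending in _test.go and runs
// every function of the form TestXxx(t *testing.T) as a test.
package sum

import "testing"

func Sum(a, b int) int { return a + b } // normally this would live in sum.go

func TestSum(t *testing.T) {
	if got := Sum(2, 3); got != 5 {
		t.Errorf("Sum(2, 3) = %d; want 5", got)
	}
}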

There's a bit more to it, though. You can run the tests for any package from anywhere by providing its import path. For example, this command runs the actual tests for the "testing" package from the standard library:

$ go test -v testing

If you've already run go get github.com/bradfitz/http2, you can execute those tests from anywhere with this:

$ go test -v github.com/bradfitz/http2

Try running tests for multiple packages with a single command:

$ go test -v testing encoding/json

If your project has lots of packages, you can run this at the root of the project to gather and run all tests:

$ go test ./...

Better workflows

If you practice TDD/BDD then you are probably alternating very frequently between a text editor (to write tests and production code) and the command line (to run your tests). This gets really tedious. There are various things you can do to make running tests more automatic:

1. Text editor plugins

There are lots of plugins that allow more seamless integration with the Go toolchain from within your editor of choice.

I hope that JetBrains will jump on the bandwagon soon and create an IDE for Go. Their products always have nice test runners. We'll see if that happens anytime soon.

2. Auto-run script

But even the plugin-style integrations can get in the way. So here's one thing I've tried:

#!/usr/bin/env python

"""
https://gist.github.com/mdwhatcott/9107649

This script scans the current working directory for changes 
to .go files and runs `go test` in each folder where *_test.go 
files are found. It does this indefinitely or until a 
KeyboardInterrupt is raised (<Ctrl+c>). This script passes the 
verbosity command line argument (-v) to `go test`.
"""


import os
import subprocess
import sys
import time


def main(verbose):
    working = os.path.abspath(os.getcwd())
    scanner = WorkspaceScanner(working)
    runner = TestRunner(working, verbose)

    while True:
        if scanner.scan():
            runner.run()


class WorkspaceScanner(object):
    def __init__(self, top):
        self.state = 0
        self.top = top

    def scan(self):
        time.sleep(.75)
        new_state = sum(self._checksums())
        if self.state != new_state:
            self.state = new_state
            return True
        return False

    def _checksums(self):
        for root, dirs, files in os.walk(self.top):
            for f in files:
                if f.endswith('.go'):
                    try:
                        stats = os.stat(os.path.join(root, f))
                        yield stats.st_mtime + stats.st_size
                    except OSError:
                        pass


class TestRunner(object):
    def __init__(self, top, verbosity):
        self.repetitions = 0
        self.top = top
        self.working = self.top
        self.verbosity = verbosity

    def run(self):
        self.repetitions += 1
        self._display_repetitions_banner()
        self._run_tests()

    def _display_repetitions_banner(self):
        number = ' {} '.format(
            self.repetitions if self.repetitions % 50 else
            'Wow, are you going for a top score? Keep it up!')
        half_delimiter = (EVEN if not self.repetitions % 2 else ODD) *\
                         ((80 - len(number)) // 2)  # floor division: works on both Python 2 and 3
        write('\n{0}{1}{0}\n'.format(half_delimiter, number))

    def _run_tests(self):
        self._chdir(self.top)
        if self.tests_found():
            self._run_test()
        
        for root, dirs, files in os.walk(self.top):
            self.search_for_tests(root, dirs, files)

    def search_for_tests(self, root, dirs, files):
        for d in dirs:
            if '.git' in d or '.git' in root:
                continue

            self._chdir(os.path.join(root, d))
            if self.tests_found():
                self._run_test()

    def tests_found(self):
        for f in os.listdir(self.working):
            if f.endswith('_test.go'):
                return True

        return False

    def _run_test(self):
        subprocess.call('go test -i', shell=True)
        try:
            output = subprocess.check_output(
                'go test ' + self.verbosity, shell=True)
            self.write_output(output)
        except subprocess.CalledProcessError as error:
            self.write_output(error.output)

        write('\n')

    def write_output(self, output):
        write(output)

    def _chdir(self, new):
        os.chdir(new)
        self.working = new


def write(value):
    sys.stdout.write(value)
    sys.stdout.flush()


EVEN = '='
ODD  = '-'
RESET_COLOR  = '\033[0m'
RED_COLOR    = '\033[31m'
YELLOW_COLOR = '\033[33m'
GREEN_COLOR  = '\033[32m'


def parse_bool_arg(name):
    for arg in sys.argv:
        if arg == name:
            return True
    return False


if __name__ == '__main__':
    verbose = '-v' if parse_bool_arg('-v') else ''
    main(verbose)

The script has its origins in a now-extinct script by Jeff Winkler called nosy, which I used several years ago and which has since been reborn in the Python community under the same name.

Magical auto-updating web UI

That Python script worked pretty well, but once I had lots of tests spread across lots of packages, it became difficult to sort through the output for a failure or error. As I was using the script I had an idea:

"What if the results were shown in a web browser? And what if failures and errors automatically bubbled up to the top? Yeah, the browser client could maintain a persistent connection to a server that basically does what the auto-run python script did, and pipe the results down to the client whenever a change happened?"

I was already working on a new testing package and decided to bundle this idea along with it. The result is goconvey, which comes with just such a tool.
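
If you haven't seen it before, a goconvey-style test reads something like this. (This is a minimal, contrived sketch of my own; the queue here is just a stand-in for whatever you're testing.)

package queue

import (
	"testing"

	. "github.com/smartystreets/goconvey/convey"
)

func TestQueue(t *testing.T) {
	// Convey blocks describe behavior in nested, readable scopes;
	// So makes assertions using assertion functions like ShouldEqual.
	Convey("Given an empty queue", t, func() {
		q := []int{}

		Convey("When an item is pushed", func() {
			q = append(q, 42)

			Convey("The queue should contain exactly that item", func() {
				So(len(q), ShouldEqual, 1)
				So(q[0], ShouldEqual, 42)
			})
		})
	})
}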

Here's how to install and run the goconvey web UI:

$ go get github.com/smartystreets/goconvey
$ $GOPATH/bin/goconvey

Then, open a browser at http://localhost:8080 and you will see the results of running go test in every package at and below your current working directory. Try creating/deleting/saving a *.go file and watch the UI update with the latest results.

(Screenshot: the GoConvey web UI)

What are you waiting for? go test!
