
Ad hoc unit testing in NodeJS

Published Jun 26, 2018 · Last updated Dec 23, 2018

Lately I've been jamming away at coding up a prototype desktop app using Electron and Vue.

It's been really fun to let go of all those "best practices" and simply code as much and as quickly as I can.

One of those best practices I've passed up is unit testing. I 100% believe in the value of it, but only in the right circumstances.

In this stage of my project, I don't have a defined spec, my features come and go as I feel, and the code I'm writing is very procedural (e.g. hooking up my database to my Vue components).

For me, the real benefit of unit testing shows up when the code has conditional logic (i.e., if this, then that). I don't have a lot of that right now.

But... there was one component that required a little bit of data manipulation. I needed to turn an array of file paths into a structured object.

I needed to turn this:

['./test/specs/a.js', './test/specs/b.js', './test/specs/a/a.js']

Into something like this:

[{
  title: 'test',
  children: [{
    title: 'specs',
    children: [{
      title: 'a.js'
    }, {
      title: 'b.js'
    }, {
      title: 'a',
      children: [{
        title: 'a.js'
      }]
    }]
  }]
}]
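
A rough sketch of that kind of transformation might look something like this (buildTree is just a placeholder name, not the actual function from my project):

function buildTree(paths) {
  const root = []

  paths.forEach((path) => {
    // './test/specs/a.js' -> ['test', 'specs', 'a.js']
    const parts = path.split('/').filter((part) => part && part !== '.')

    let children = root
    parts.forEach((title, index) => {
      // reuse an existing node if we've already seen this folder
      let node = children.find((child) => child.title === title)
      if (!node) {
        node = { title }
        children.push(node)
      }

      // every segment except the last one is a folder, so descend into it
      if (index < parts.length - 1) {
        node.children = node.children || []
        children = node.children
      }
    })
  })

  return root
}

Calling buildTree with the array above produces the nested structure shown.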

At the time I worked on the code, I knew it would be a great chance to utilize unit tests. I knew what my input was and I knew what I wanted my output to be.


Quick note: I put together a video covering all of this on my YouTube channel


A set of unit tests would really help me verify my code was working, plus give me clear goals and immediate feedback, both essential conditions to getting into a good flow state.

Despite that, I didn't want to distract myself from writing the actual code.

I hadn't written any unit tests for the project yet, so I didn't have a test framework set up. They're not too complicated to get running these days, but I really didn't want to end up going down a rabbit hole researching the best framework, mock library, etc., to use and how to incorporate all of that into an Electron/Vue app.

I really needed a cheap, simple alternative to a test framework, and that's where this idea of 'ad hoc' unit testing comes in.

Writing a very basic unit test framework

There are two main features you need to run a unit test: a test runner and an assertion library.

NodeJS comes with a simple assertion library as a core module. And a very basic test runner can be written in about 10 lines of code.

With that, I had a basic plan in place to unit test my code:

  • Move the function I want to test to a separate file, to make it easier to load
  • Create a new test file next to it
  • In that file, load the 'assert' library and my function file, write some tests, and add my mini-runner to the end.
  • Run my tests on the command line using the node cli

Moving my function to a separate file

Technically I didn't need to do this, but there were a lot of good reasons to.

Most important, it makes it a lot easier to load my function in my test file.

Since I'm building out a Vue application, I'm using the .vue file syntax, which is not straight JavaScript.

This means I'd need to do some magic to make my test file understand how to load that Vue component so that I could get to the code I wanted to test.

I didn't want to do any of that, so instead I just moved the code out to a separate file, then required it in my Vue component. Thank goodness for module support in Node/Webpack!

Another good reason for moving the functionality I wanted to test is that it forces me to remove any hard-coded integration into Vue, as that would cause trouble with my unit tests.

For example, at the end of one of my functions, I assign the final parsed value to my Vue component using this.data = parsedData.

This was a dumb line of code for me to write, as it mixed integration code with functional code.

Instead, I should just return that parsedData value back to whatever code called it, and let it handle the integration. This would keep all my functional code separate from the rest, helping with separation of concerns and such.
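
To make that concrete, here's a tiny sketch of the refactor (parseFilePaths and the surrounding names are made up, just to show the shape):

// parseFilePaths.js - the functional code knows nothing about Vue
function parseFilePaths(filePaths) {
  const parsedData = filePaths.map((path) => path.split('/')) // placeholder transformation
  return parsedData // return the result instead of assigning it to the component
}

module.exports = { parseFilePaths }

// In the Vue component, the caller handles the integration:
// const { parseFilePaths } = require('./parseFilePaths.js')
// this.data = parseFilePaths(filePaths)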

Without writing a single test, I've already improved my code by cleaning up a couple of bad habits (throwing everything into a single file and mixing concerns in the same function).

Here's a dummy file (we'll call it doSomething.js) to give you an idea of what my new file looks like:

function doSomething(input) {
  // do some stuff to input
  let output = input * 2

  // if not right, do it again
  if (output < 10) {
    output = doSomething(output)
  }
  
  // some other condition that I need to test
  if (output > 10 && input === 3) {
    // here's some strange edge case I need to handle
    output += ' was 3'  
  }
  
  // return the value
  return output
}

module.exports = {
  doSomething
}

Creating my test file

With my code moved and cleaned up a little, I can now start testing it.

I created my test file in the same folder as my function file, as this keeps them close by so I remember that the test file is there.

To name it, I take whatever name I gave my function file and add .test to it. So given doSomething.js, I name my test file doSomething.test.js.

This way I (and any program I use) can differentiate between code files and test files, despite keeping the two right next to each other.

Now it's time to lay out my test file.

The first thing I need to do is require my function file and Node's assert library. That's easily done:

const assert = require('assert');
const { doSomething } = require('./doSomething.js')

With that, I can write my first test, which will be a simple assertion that doSomething loaded. I do that by checking that it's a function:

const actual = typeof doSomething;
assert(actual === "function", `Expected ${actual} to be "function"`);
console.log('Test Passed')

That's actually all I need to do to have my first test written and ready to run.

If I run that code via node doSomething.test.js and everything is good, the output looks like this:

Terminal output with success message

If there were something wrong with my code (say I forgot to export that function), the assertion would throw an error and look like this:

Terminal output with error message

Because the assertion throws an error, the console message is never written out, as node stops executing immediately after the error is thrown.

Here's the code so far
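
Putting the pieces together, doSomething.test.js looks roughly like this at this point:

const assert = require('assert');
const { doSomething } = require('./doSomething.js')

const actual = typeof doSomething;
assert(actual === "function", `Expected ${actual} to be "function"`);
console.log('Test Passed')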

Simple, effective test organization

I could keep writing my assertions like this, but it would quickly become unwieldy, plus that assertion error message sure is an ugly beast.

I'd really like to name my tests too; that way I can get some good organization going and get a hint at what each test is checking for when I forget next week (along with improving that error message).

Because almost everything in JavaScript is an object, I should make my tests an object too!

I'll show why in a second, but here's what I'm thinking:

const tests = {
  'doSomething should be a function' : function () {
    const actual = typeof doSomething;
    assert(actual === "function", `Expected ${actual} to be "function"`);
  }
}

It's a little bit more code, but it'll really pay off in a second.

In this new format, my check won't automatically run anymore. I need to call it at the end of my file to make the magic happen.

I could do that by running tests['doSomething should be a function']() but gosh that's a bloated solution.

Instead, I can loop through my object properties, running each test function programmatically.

I can do this by getting an array out of the tests object using Object.keys, then looping through that array with forEach.

Object.keys(tests).forEach((test) => {
  tests[test]()
})

No matter what happens out there, just keep testing

With that change, now no matter how many tests I write, they'll all run at the end of the file without any extra work.

Except if one of them doesn't pass, execution will stop immediately at that point.

That kinda sucks.

Let's fix that by using a try...catch block.

Try...catch blocks are perfect for situations where you're running some code (usually calling a separate function), and there's a slight chance it'll explode.

Instead of dealing with a RUD (rapid unscheduled disassembly), the try...catch block allows us to handle the error a bit more gracefully. It also gives us the ability to continue running the remainder of our code, despite the thrown error.

To use it, we wrap the error-prone function in a try block, then handle any errors in our catch block:

Object.keys(tests).forEach((test) => {
  try {
    tests[test]()
    console.log(`Passed: '${test}'`)
  } catch (e) {
    console.error(`Failed: '${test}' - ${e.message}`)
  }
});

Now all our tests will run, even if one of them fails. And we get the success message back, along with a prettier failure message.

Here's a successful run:

Terminal message with custom success message

And here's a failing run:

Terminal message with custom failure message

And here's the updated code

That sure is a lot nicer of an error message, right?

But it failed. Shouldn't that mean something?

There are these little things called 'exit codes' that programs use to let other programs know if they ran successfully or not.

They're really handy for build systems, as you can let the parent process know that the child process messed up somehow, allowing it to stop moving forward and giving you the chance to deal with the problem right away.

In Node, exit codes are automatically sent under a variety of conditions, but the main two are:

  • 0 - Nothing went wrong, file completed running as hoped
  • 1 - Uncaught Fatal Exception (e.g. something blew up)

When we were letting our assertion blow up without that try...catch block, NodeJS would exit with a code of 1, letting any other process know about it.

But when we added our try...catch block, we stopped throwing errors, and Node started returning a code of 0 for every test run, even the ones with failures.

That exit code functionality was pretty nice, and it'd be really cool having it back.

Well, we can do that; all we need to do is call Node's process.exit function and pass in the status we want to send.

To do so, we'll define a variable, set it to 0, then change it over to 1 if any of our tests fail. After all tests have run, we'll send that variable to the process.exit function letting Node know what's up:

let exitCode = 0;
Object.keys(tests).forEach((test) => {
  try {
    tests[test]()
    console.log(`Passed: '${test}'`)
  } catch (e) {
    exitCode = 1
    console.error(`Failed: '${test}' - ${e.message}`)
  }
})

process.exit(exitCode)
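
To see that exit code from another process, here's a quick hypothetical sketch (not something I actually added to my project) that uses Node's built-in child_process module to run the test file and read its status:

// checkTests.js - a hypothetical parent process that runs the tests
const { spawnSync } = require('child_process')

const result = spawnSync('node', ['doSomething.test.js'], { stdio: 'inherit' })
console.log(`Test run exited with code ${result.status}`)

On the command line, you can check the same thing right after a run with echo $?.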

Okay, that fixes it for the computers, but what about us humans? We'd like some sort of hint at the status as well!

Right now, all the messages kind of look the same. It'd be really nice if the failing tests were bold, letting us know something funky happened.

As we're running this code in the terminal, we can send escape sequences into our console output to change how it's displayed.

There are two we'll want:

  • Bright ("\x1b[1m"), which is basically just bolding
  • Reset ("\x1b[0m"), which resets the formatting; important for tests run after a failure

We can pass these codes into our console calls just like we do strings.

Here's what the updated console.error call will be:

console.error('\x1b[1m', `Failed: '${test}' - ${e.message}`, '\x1b[0m')

The 'bright' setting is added at the beginning, then the 'reset' sequence is set at the end to turn down the brightness.

After adding a few more tests (purposefully failing one), here's how the output looks:

Full terminal output with success and 'bright' error messages

And here's the updated code
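
For reference, pulling all of those pieces together, the test file ends up looking roughly like this:

const assert = require('assert');
const { doSomething } = require('./doSomething.js')

const tests = {
  'doSomething should be a function': function () {
    const actual = typeof doSomething;
    assert(actual === "function", `Expected ${actual} to be "function"`);
  }
  // ...more tests go here
}

let exitCode = 0;
Object.keys(tests).forEach((test) => {
  try {
    tests[test]()
    console.log(`Passed: '${test}'`)
  } catch (e) {
    exitCode = 1
    console.error('\x1b[1m', `Failed: '${test}' - ${e.message}`, '\x1b[0m')
  }
})

process.exit(exitCode)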

Did this even save time?!?

So that's my ad hoc testing setup. All said and done, I likely spent more time trying it out and writing this up than I would have spent just sticking with one of the popular frameworks out there.

But I really enjoyed this exercise and think it's a neat approach to simple unit testing, especially when you don't want to install any external dependencies.

It's also nice because I can treat tests as little utensils for writing better code, rather than some chore to check off the "real programmer" list.

And for those of you who are code coverage addicts, here, have a "100% coverage" badge to post on your repo readme:

meaningless "100% coverage" badge :p


Header Photo by Artem Sapegin on Unsplash
