In this post, I’m going to share my strategy for endpoint testing. It rests on a few cornerstones:
- It should test against a running server by sending HTTP requests to it, instead of hooking onto the server instance directly the way supertest does. This way, the strategy becomes agnostic and portable - it can be used to test any endpoint server, even servers written in other languages, as long as they communicate over HTTP (see the sketch after this list).
- Each suite should be written as a narrative. To this end, BDD-style testing is very suitable. As an example, consider the narrative describing the authentication flow for an app:
I register as a user, providing a suitable email and password. The server should return a 200 response and an authentication token. Then, I log in using the same email and password as before. The server should return a 200 response and an authentication token. I log in using a different email and password. This time, the server should return a 401 response. If I register with the same email as before, the server should return a 422 response and an error message in the response body indicating that the email has been taken.
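To make the first point concrete, here is a minimal sketch of what "testing over the wire" means, using nothing but Node's built-in http module against a server assumed to be already listening on localhost:9005 (the /health endpoint is a placeholder): the test only knows about HTTP, not about the framework serving it.

// a throwaway sketch - the suite speaks plain HTTP to a running server,
// which could be Express, Rails, or anything else that answers requests
var http = require("http");

http.get("http://localhost:9005/health", function (res) {
  console.log("status:", res.statusCode); // a real spec would assert on this
});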
A few points to take note of:
- Even though the strategy is meant to be as agnostic as possible, you need to find a way to run the server with an empty test database, and then have some (hopefully scripted) way to drop it once the tests are complete. This part will depend on what database adapter/ORM you are using. I will share my solution for an Express server backed by RethinkDB later.
- Remember that the database is a giant, singular hunk of state. If you’re going to adopt this style of testing, there is no way around that. You’re not just going to be running GET requests - you’re going to be running POST and PUT and DELETE requests as well. This means you need to be very careful about tests running concurrently or in parallel. It’s great to have performant tests, but don’t trade away tests that are easy to reason about, and which clearly reveal which parts of your app are breaking, for raw speed.
I tried Ava first, and was actually halfway through writing the test suite for a project with it. I really liked it, but Ava was built for running tests concurrently and in parallel. There came a point where the test suite would fail unpredictably depending on the order in which the tests were run. Although it’s possible to run Ava tests in serial, I felt like I was fighting against the framework.
I also looked at Tape, but I consider Ava superior to it for stateless unit testing. If you’re using Tape, do consider checking out Ava for future projects: their syntaxes are very similar, and Ava is noticeably faster.
In the end, I settled on Jasmine, although I imagine Mocha would be equally suitable. There are three technical issues I would like to talk about: how I write the Jasmine specs in ES2015 JavaScript, how and why I use Chai, and how to set up and tear down the test database.
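For reference, the jasmine CLI reads its configuration from spec/support/jasmine.json, and mine is close to stock; this is roughly what it looks like (a sketch - the globs are the defaults generated by jasmine init, and the random flag, where your Jasmine version supports it, simply makes the serial, declaration-order execution explicit):

{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["helpers/**/*.js"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}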
ES2015
There are only two words needed to describe why this is so important here:
async/await
(I know - technically, it’s not part of the ES2015 spec, but let’s dispense with the pedantry here.)
Thankfully, jasmine-es6 exists, and installing it is exactly the same as installing plain Jasmine. It ships with async/await support out of the box.
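To see why this matters, compare a promise-chained spec with its async/await equivalent - the latter reads almost exactly like the narrative it encodes. (This sketch borrows the req and expect helpers that are set up in the full example further down.)

// with promise chains
it("should pass login with correct email and password", done => {
  req
    .post("auth/login")
    .send({ email: "a@a.com", password: 12341234 })
    .then(res => {
      expect(res).to.have.status(200);
      done();
    })
    .catch(done.fail);
});

// with async/await
it("should pass login with correct email and password", async () => {
  const res = await req
    .post("auth/login")
    .send({ email: "a@a.com", password: 12341234 });
  expect(res).to.have.status(200);
});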
Chai
Jasmine ships with its own BDD-style expect assertions, but I chose to overwrite them in favour of Chai’s assertions, which come with a much richer plugin ecosystem. In particular, the existence of chai-http prompted the switch. chai-http provides assertions for HTTP testing, as well as a thin superagent wrapper with which to make requests. Perfecto!
It’s not really difficult to roll your own assertions and request wrapper, as I did with Ava, but why bother if you can piggyback on the hard work of others?
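Beyond status codes, chai-http adds assertions for headers, content types and so on, which keeps the specs declarative. A few examples (res here is whatever your request resolved to; the header check is purely illustrative):

// HTTP-specific assertions added by chai-http
expect(res).to.have.status(200);
expect(res).to.be.json; // content-type is application/json
expect(res).to.have.header("content-type", /json/);
// plain Chai still works on the parsed body
expect(res.body).to.have.all.keys(["token"]);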
Database Setup/Teardown
Setup is quite straightforward - depending on what server framework you’re using, configure it (ideally using environment variables passed in through the command line) to connect to a test database using a different set of credentials from your usual development credentials.
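As a minimal sketch of what that configuration might look like (the database names are placeholders - the only shape that matters is config.rethinkdb.test, which the helper and teardown scripts below rely on):

// config.js - a sketch; pick credentials based on NODE_ENV
var env = process.env.NODE_ENV || "development";

module.exports = {
  env: env,
  rethinkdb: {
    development: { host: "localhost", port: 28015, db: "myapp_dev" },
    test: { host: "localhost", port: 28015, db: "myapp_test" }
  }
};

The idea is that the server connects with config.rethinkdb[env], so running it with NODE_ENV=test points it at the test database, while the specs and the teardown script read config.rethinkdb.test directly.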
I also reset the database between narratives (or what Jasmine calls specs). I find that this is a good balance between not resetting at all, which would make keeping track of database state untenable, and resetting after each expectation, which makes setup and teardown much more tedious and slows testing down (e.g. registering a user before each expectation).
With that in mind, a good rule of thumb emerges: if a narrative becomes so long that its database state is confusing to reason about, it’s probably time to split it up.
As for database teardown, I rolled my own solution. For this particular project, I’m using thinky, an ORM for RethinkDB. thinky exposes the RethinkDB driver as r, which allows me to write this:
// spec/utils/teardown.js
var config = require("../../config"); // this file contains database connection credentials
var thinky = require("thinky")(config.rethinkdb.test);

thinky.r.dbDrop(config.rethinkdb.test.db).run(function () {
  console.log("Tests complete. Test database dropped.");
  process.exit();
});
which can then be run after the tests are complete:
"scripts": {
"start-test-server": "env NODE_ENV=test nodemon index.js",
"test": "jasmine ; node ./spec/utils/teardown.js"
}
Generally speaking, as long as you have access to the exposed database driver, you can write a variant of the above.
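For example, if you were on MongoDB with Mongoose instead, the same idea might look like this (a hypothetical variant - the config.mongodb.test.uri field is an assumption, not part of my actual config):

// spec/utils/teardown.js - a hypothetical MongoDB/Mongoose variant
var mongoose = require("mongoose");
var config = require("../../config");

mongoose.connect(config.mongodb.test.uri, function () {
  // drop the whole test database through the underlying native driver
  mongoose.connection.db.dropDatabase(function () {
    console.log("Tests complete. Test database dropped.");
    process.exit();
  });
});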
Code Examples
Below, I show an abridged snippet from the test suite I wrote using this strategy:
// spec/AuthSpec.js
import chai from "chai";
import chaiHttp from "chai-http";
import { resetTables } from "./helpers/databaseHelper";

chai.use(chaiHttp);

// overwrite Jasmine's expect global with Chai's
const expect = chai.expect;
const req = chai.request("http://localhost:9005/");

// helper function to avoid ugly try/catch clauses in async calls
async function tryCatch(promise) {
  try {
    return await promise;
  } catch (e) {
    return e;
  }
}
describe("Authentication", () => {
// we reset the tables before and after each spec
beforeAll(async () => await resetTables());
afterAll(async () => await resetTables());
it("should fail registration without any parameters", async () => {
const res = await tryCatch(req.post("auth/register"));
expect(res).to.have.status(422);
});
it("should pass registration with appropriate email and password", async () => {
const res = await req
.post("auth/register")
.send({ email: "a@a.com", password: 12341234 });
expect(res).to.have.status(200);
expect(res.body).to.have.all.keys(["token"]);
});
it("should fail registration with the same email", async () => {
const res = await tryCatch(
req.post("auth/register").send({ email: "a@a.com", password: 12341234 })
);
expect(res).to.have.status(422);
});
it("should pass login with correct email and password", async () => {
const res = await req
.post("auth/login")
.send({ email: "a@a.com", password: 12341234 });
expect(res).to.have.status(200);
expect(res.body).to.have.all.keys(["token"]);
});
it("should fail login with incorrect email and password", async () => {
const res = await tryCatch(
req.post("auth/login").send({ email: "b@a.com", password: 12341234 })
);
expect(res).to.have.status(401);
});
});
The code for resetting the database tables between each spec is as follows:
// spec/helpers/databaseHelper.js
var config = require("../../config");
var thinky = require("thinky")(config.rethinkdb.test);
var r = thinky.r;
var testDb = config.rethinkdb.test.db;

export async function resetTables() {
  // first, get the list of all the tables in the database
  const tableList = await r.db(testDb).tableList();
  // then build an Array of the promises returned by r.table(table).delete(),
  // and wait for all of them to complete before the function returns
  await Promise.all(tableList.map(table => r.table(table).delete()));
}
I spent a week, on and off, refining this strategy, and I hope it will prove portable across my future projects.