At Wealthfront, we believe that all code should look like it was written by the same person. In practice, that means that across our frontend stack we enforce consistent patterns, follow code style guidelines, and use the same technologies. This enables engineers to easily contribute to projects they have never worked on before by reducing the cognitive overhead associated with onboarding.
All of this configuration code is duplicated in each repo, but at least our test suites all behave the same. However, when we decided we wanted to stop using
chai.assert.equal, we realized we would have to copy and paste the following snippet into every project we maintained!
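The snippet itself isn't preserved in this copy of the post; a hedged reconstruction of the kind of change it describes might look like the following, where banLooseEqual is a hypothetical helper name:

```javascript
// Hypothetical reconstruction of the per-repo snippet (the original is elided):
// disable chai's loose-equality assert so tests must use assert.strictEqual.
function banLooseEqual(chai) {
  chai.assert.equal = function equal() {
    throw new Error(
      'chai.assert.equal compares with loose equality (==); use assert.strictEqual instead.'
    );
  };
}

module.exports = banLooseEqual;
```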
There has to be a better way!
A test-setup package
We wanted to be able to abstract out our test-setup into its own package so that we could replace all of our test setup files with the following:
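That is, each project's setup file would shrink to a single require of the shared package (assuming the package is named test-setup, as it is later in this post):

```javascript
// The entire contents of each project's test setup file:
require('test-setup');
```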
If we could do this, we’d be able to consistently and reliably set up our test suite in each package. This would then enable us to make the
chai.assert.equal change in one place and get the benefits everywhere.
One of the challenges of pulling our test setup out into its own package is that our projects all have slight variations in their dependencies. Some projects use React, others use a library to write DOM fixtures to the document, and not all of them need
sinon-as-promised. Each of those modules requires slightly different handling in the test setup. The commonality across all of our test suites was that if a project used a certain package, we wanted that library configured consistently.
For our test-setup package to work for different variations of dependencies, we need some way to detect what is being used. If we could wrap our
require statements in a try / catch, we’d be able to do that. Something like this:
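A sketch of that probe, assuming chai is the optional dependency being detected (this runs fine in plain Node whether or not chai is installed):

```javascript
// Probe for an optional dependency: if the host project doesn't depend on
// chai, the require throws and we simply skip chai's configuration.
let chai;
try {
  chai = require('chai');
} catch (err) {
  // chai isn't installed in this project -- nothing to configure
}

if (chai) {
  // Configure chai the same way in every project that uses it
  global.assert = chai.assert;
}
```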
This won’t work!
This works exactly how we want it to in Node, but it won't work when using a module bundler like browserify or webpack. Browserify will throw an exception when processing code that tries to require a non-existent module. We want to be able to catch non-existent requires and continue, so we need to convert those compile-time exceptions into runtime ones.
A handy package,
browserify-optional, will do just that.
We can use
browserify-optional by putting the following in our test-setup's package.json:
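A sketch of that configuration (the version range is illustrative), registering browserify-optional as a browserify transform via the package.json "browserify" field:

```json
{
  "name": "test-setup",
  "browserify": {
    "transform": ["browserify-optional"]
  },
  "dependencies": {
    "browserify-optional": "^1.0.0"
  }
}
```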
browserify-optional is the only dependency we need.
When a project depends on
test-setup and runs through browserify (which we do when running karma),
browserify-optional would convert the above try/catch into the following code snippet if
chai doesn’t exist.
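Conceptually, the bundled output then behaves like this (a sketch of the effect, not browserify-optional's literal output):

```javascript
let chai;
try {
  // browserify-optional replaces the unresolvable require('chai') with code
  // that throws at runtime instead of failing the build
  chai = (function () {
    const err = new Error("Cannot find module 'chai'");
    err.code = 'MODULE_NOT_FOUND';
    throw err;
  })();
} catch (err) {
  // caught at runtime -- the rest of the setup continues without chai
}
```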
Using this structure, we can now check for the existence of modules and configure them, giving ourselves a consistent test runner across all of our packages.
Testing our test-setup
In order to feel confident that
browserify-optional works for us and that
test-setup properly handles different combinations of dependencies, we need to be able to test this setup package.
To do this, we created a number of fixture sub-packages, each with its own test suite. The test runner for test-setup can then iterate through each package fixture, run
npm install and
npm test. Below is the file structure of our
test-setup repo and the fixture projects that exist in the test suite.
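The exact layout isn't preserved in this copy of the post; an illustrative structure (fixture names other than chai-and-sinon are hypothetical) might look like:

```
test-setup/
├── package.json
├── index.js
└── test/
    └── fixtures/
        ├── chai-and-sinon/
        │   ├── package.json
        │   └── test/
        └── react-and-chai/
            ├── package.json
            └── test/
```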
chai-and-sinon is exactly what you’d expect:
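A sketch of that fixture's package.json (version ranges, the file: path, and the test script are illustrative):

```json
{
  "name": "chai-and-sinon",
  "private": true,
  "devDependencies": {
    "chai": "^3.5.0",
    "sinon": "^1.17.0",
    "test-setup": "file:../../.."
  },
  "scripts": {
    "test": "karma start"
  }
}
```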
We previously had problems keeping our test suite configuration up to date since we maintain many internal NPM packages. By creating this test-setup package, we have enabled our engineers to have consistent expectations about how the test suites behave. When we want to change what we can use in our tests or the behavior of a package, we can make that change in one place and have it propagated everywhere. Having this package has drastically reduced the overhead of managing multiple NPM packages on our frontend. It might do the same for you.