Reflective Metatesting: Protecting You from Yourself and Everybody Else

Infrastructure projects often take user-defined configuration classes as input; I’ve worked on several such projects during my tenure at Wealthfront. You cannot trust the validity of these configurations, even if you wrote them yourself. A pattern I’ve grown to love is writing JUnit tests that use reflection to gather all the input classes and run a suite of checks against each of them. Tests that automatically validate every class of a given type increase the chance that failures are caught at build time while reducing the amount of test code engineers have to write.

A nice feature of this methodology is that when you add coverage for a case you missed, or handle a new precondition, you only need to add a check in one place. This eliminates duplicated test code and lets engineers who have never worked with the system be confident that their configurations are valid when the JUnit tests pass, without writing any additional tests themselves.

To illustrate the concept of testing classes via reflection, here is a very simplified version of the classes that define jobs to be run in our scheduler framework.

An example configuration:
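Something like the following, a minimal sketch assuming a simplified JobConfigurationBase with getRetries() and getCommand() accessors (these names, and the concrete job, are illustrative rather than the actual Wealthfront API):

```java
// Hypothetical, simplified base class for scheduler job configurations.
abstract class JobConfigurationBase {
  // How many times the scheduler retries this job after a failure.
  abstract int getRetries();

  // The command the scheduler executes to run the job.
  abstract String getCommand();
}

// A hypothetical concrete configuration for a nightly job.
class NightlyReportJobConfiguration extends JobConfigurationBase {
  @Override
  int getRetries() {
    return 3; // retry up to three times before giving up
  }

  @Override
  String getCommand() {
    return "generate_nightly_report.sh";
  }
}
```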

The goal here is to write a unit test that is applied to all job configurations to ensure their validity at build time. There are obvious things to test, such as ensuring the retry count is nonnegative and the run command is non-null. We also want to ensure each of these jobs can be instantiated. To gather all classes of a certain type, we use the Reflections library, which offers a variety of ways to collect groups of classes, from finding all subtypes of a class to grouping by annotation. Here we want all subtypes of the JobConfigurationBase class, excluding abstract and anonymous classes (such as test fixtures), since those cannot be instantiated.
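A sketch of such a metatest, under the assumptions above. The Reflections call that gathers the classes is shown in a comment (the package name there is made up); a hardcoded set keeps the sketch self-contained, and the base class is repeated so the snippet compiles on its own:

```java
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Repeated from the earlier sketch so this snippet compiles standalone.
abstract class JobConfigurationBase {
  abstract int getRetries();
  abstract String getCommand();
}

// A deliberately invalid configuration, used to show the failure output.
class BadJobConfiguration extends JobConfigurationBase {
  @Override
  int getRetries() {
    return -1; // invalid: negative retry count
  }

  @Override
  String getCommand() {
    return null; // invalid: missing run command
  }
}

class JobConfigurationMetatest {
  // In the real test the classes are gathered reflectively, e.g. with the Reflections library:
  //   new Reflections("com.example.jobs").getSubTypesOf(JobConfigurationBase.class)
  static List<String> validate(Set<Class<? extends JobConfigurationBase>> configClasses) {
    List<String> failures = new ArrayList<>();
    for (Class<? extends JobConfigurationBase> clazz : configClasses) {
      // Skip abstract and anonymous classes (e.g. test fixtures): they cannot be instantiated.
      if (Modifier.isAbstract(clazz.getModifiers()) || clazz.isAnonymousClass()) {
        continue;
      }
      JobConfigurationBase config;
      try {
        config = clazz.getDeclaredConstructor().newInstance();
      } catch (ReflectiveOperationException e) {
        failures.add("Could not instantiate " + clazz.getSimpleName());
        continue;
      }
      if (config.getRetries() < 0) {
        failures.add("Retries must be non-negative for " + clazz.getSimpleName());
      }
      if (config.getCommand() == null || config.getCommand().isEmpty()) {
        failures.add("Command must exist for " + clazz.getSimpleName());
      }
    }
    // A JUnit test would finish with an assertion that this list is empty,
    // reporting every collected failure at once.
    return failures;
  }
}
```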

Now that we have written this test, we can be sure that every single job configuration is tested for instantiability, a nonnegative retry count, and a non-null, non-empty run command. I prefer to collect all failures in a list rather than failing the test on the first invalid class. If we ran this check, it would fail with the output “Retries must be non-negative for BadJobConfiguration, Command must exist for BadJobConfiguration.”

This is a very simple example that can be extended with more sophisticated checks. In the real job configuration repository we have the notion of dependencies, whereby each configuration lists the other configurations that must succeed before the job runs. By collecting all job configuration classes that have dependencies, we can build a graph and ensure that no dependency cycles exist. Another important check tests the serializability of every class, since we update our scheduler with JSON-serialized versions of these classes. These tests have saved me from myself more than once.
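The cycle check itself can be a standard depth-first search. A rough sketch, assuming the dependencies have already been collected into a map from each job’s name to the names it depends on (the reflective graph-building step is omitted here):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DependencyCycleCheck {
  // Returns true if the dependency graph (job -> jobs it depends on) contains a cycle.
  static boolean hasCycle(Map<String, List<String>> deps) {
    Set<String> visiting = new HashSet<>(); // jobs on the current DFS path
    Set<String> done = new HashSet<>();     // jobs fully explored, known cycle-free
    for (String job : deps.keySet()) {
      if (dfs(job, deps, visiting, done)) {
        return true;
      }
    }
    return false;
  }

  private static boolean dfs(String job, Map<String, List<String>> deps,
                             Set<String> visiting, Set<String> done) {
    if (done.contains(job)) {
      return false;
    }
    if (!visiting.add(job)) {
      return true; // job is already on the current path: back edge, i.e. a cycle
    }
    for (String dep : deps.getOrDefault(job, List.of())) {
      if (dfs(dep, deps, visiting, done)) {
        return true;
      }
    }
    visiting.remove(job);
    done.add(job);
    return false;
  }
}
```

In a metatest this would be a single assertion that hasCycle returns false, so a new configuration that accidentally closes a dependency loop fails the build.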

We use this pattern to ensure that ETL jobs on production databases hit existing tables and that the ETL schema matches the database schema. Since we use Google Guice for dependency injection, we have injector tests that ensure all classes are instantiable. We also use reflection to verify that our Spark jobs can be run locally with default data and that they run in the order of their specified dependencies. Because we only have to write a single test to capture potential failures, we are extra careful to think through all possible edge cases, and we can easily extend coverage for every class by adding a check to that one “metatest”.

I’ve found reflection to be a powerful tool for ensuring good test coverage across similar classes. Having access to all the classes lets us perform powerful checks on how they interact, especially where dependencies are involved. My metatests often provide better coverage than my individual tests because I can improve coverage for every class at once whenever a new requirement or edge case arises. Individual tests remain important for functionality, but whenever you find yourself writing duplicate test code for multiple classes, consider consolidating it into a single suite that applies the checks to each class.