At Wealthfront, we are committed to automation – both in the products we build, and in how we ensure quality. In this post we’ll explore how our automated test harness enables Android developers to quickly build tests for almost any scenario.
Part 1: Writing “fluent” Espresso test code
Because they are designed for black-box testing, Espresso's matchers and actions can be disorienting to developers. Consider a test that needs to click a login button and then acknowledge a message on the following screen. The series of commands might look something like:
onView(withText("Please login")).check(matches(isDisplayed()))
onView(withId(R.id.email)).perform(replaceText(username))
onView(withId(R.id.password)).perform(replaceText(password))
onView(withId(R.id.submitButton)).perform(scrollTo(), click())
onView(withText("Account Dashboard")).check(matches(isDisplayed()))
To interpret what a test of this form is doing, developers must mentally map each view action to relevant high-level objects and verbs in their codebase: arrive at the Login screen, enter a username and password, click the login button, and finally arrive at the Dashboard screen. While the example above is relatively readable, you can imagine how the pattern wouldn’t scale to more complex cases involving many screens or subviews.
A first step toward making this more readable might be to simply group related actions and assertions into functions with descriptive names, e.g. fun loginToDashboard(username: String, password: String), to hint at what is happening.
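Concretely, such a grouping helper might look like the following sketch, reusing the matchers and view IDs from the snippet above:

```kotlin
// A grouping helper: a readable name, but no chaining or IDE code completion.
fun loginToDashboard(username: String, password: String) {
  onView(withText("Please login")).check(matches(isDisplayed()))
  onView(withId(R.id.email)).perform(replaceText(username))
  onView(withId(R.id.password)).perform(replaceText(password))
  onView(withId(R.id.submitButton)).perform(scrollTo(), click())
  onView(withText("Account Dashboard")).check(matches(isDisplayed()))
}
```

This works, but every such function is an island; nothing guides the developer from one screen's helpers to the next.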
We can do much better, however: by defining chainable test APIs for each screen, we get the same readability plus the advantages of IDE code completion. Given some scoped action and assertion classes:
class LoginScreenAction {
  fun loginWithUserNameAndPassword(username: String, password: String): DashboardScreenAction {
    onView(withId(R.id.email)).perform(replaceText(username))
    onView(withId(R.id.password)).perform(replaceText(password))
    onView(withId(R.id.loginButton)).perform(scrollTo(), click())
    return DashboardScreenAction()
  }

  fun check() = LoginScreenAssertion()
}
class LoginScreenAssertion {
  fun onLoginScreen(): LoginScreenAssertion {
    onView(withClassName(equalTo(LoginView::class.java.name))).check(matches(isDisplayed()))
    return this
  }

  fun action() = LoginScreenAction()
}
class DashboardScreenAction {
  fun check() = DashboardScreenAssertion()
}

class DashboardScreenAssertion {
  fun onDashboardScreen(): DashboardScreenAssertion {
    onView(withClassName(equalTo(DashboardView::class.java.name))).check(matches(isDisplayed()))
    return this
  }
}
The same test can now be:
LoginScreenAssertion()
  .onLoginScreen()
  .action()
  .loginWithUserNameAndPassword(username, password)
  .check()
  .onDashboardScreen()
Part 2: Supporting testability of HTTP responses
Most of our integration tests make API calls against a near-copy of our production environment that we call our “integration server”. This setup is an excellent choice for verifying “happy paths” – in fact, we automatically fail these tests whenever any kind of API error is detected.
However, we sometimes need fine-grained control over API behavior (for example, to simulate a network timeout or a request that results in a 5xx response), which is very difficult with this paradigm. For those tests, replacing our API client with a test double provides a simple solution, although it carries the risk of diverging from production behavior.
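As a minimal illustration of the idea, a test double might look like the sketch below. The ApiClient interface and its names are hypothetical, not Wealthfront's actual client:

```kotlin
// Hypothetical data type returned by the API client.
data class Dashboard(val accounts: List<String> = emptyList())

// Hypothetical interface implemented by both the real and fake clients.
interface ApiClient {
  fun fetchDashboard(): Result<Dashboard>
}

// Test double: each test configures the next response to simulate
// failures such as a 5xx error or a timeout.
class FakeApiClient : ApiClient {
  var nextResult: Result<Dashboard> = Result.success(Dashboard())

  override fun fetchDashboard(): Result<Dashboard> = nextResult
}
```

A test could then set nextResult to Result.failure(...) before exercising the UI, and assert that the error state is rendered.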
This blog post won’t go into the details of how best to employ test doubles to simulate desired API behavior. Instead we will describe how our test harness splits our integration test suite into those which employ a mock API client, and those that do not.
Multiple Gradle tasks to invoke Flank jobs
We separate the two types of tests by package name in our codebase, which comes in handy when we need to specify test targets to our Gradle tasks (see below). We also give each kind of test its own base class to contain that type's setup logic.
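As a sketch, those base classes might look like the following; the names and setup details are illustrative:

```kotlin
import org.junit.Before

// Each base class holds the setup logic shared by its type of test.
abstract class BaseMockApiTest {
  @Before
  fun setUpMockApi() {
    // Install the mock API client and queue any default responses.
  }
}

abstract class BaseIntegrationServerTest {
  @Before
  fun setUpIntegrationServer() {
    // Point the real API client at the integration server.
  }
}
```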
To build and run the tests, we maintain two Android flavors that provide different API client implementations (one mocked client, one real client that is pointed at our integration server). There is an implicit pairing of each build variant to a type of test, for example mockApiDebug and integrationServerDebug.
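In build.gradle.kts, such flavors might be declared along these lines; the dimension name is an assumption for illustration:

```kotlin
android {
  flavorDimensions += "apiMode"
  productFlavors {
    create("mockApi") { dimension = "apiMode" }
    create("integrationServer") { dimension = "apiMode" }
  }
}
// Each flavor's source set (src/mockApi/, src/integrationServer/)
// supplies its own ApiClient implementation.
```

Combined with the debug build type, these yield the mockApiDebug and integrationServerDebug variants mentioned above.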
Let’s define a dedicated Gradle task for running each variant of integration tests. Here is (a simplified version of) our script to generate a mocked-API task:
tasks {
  register("runMockApiTest", Exec::class.java) {
    workingDir("${project.flankDir}/")
    environment(mapOf("GOOGLE_APPLICATION_CREDENTIALS" to projectConfig.serviceAccountCredentials))
    commandLine("java", "-jar", "flank_v21.09.0.jar", "firebase", "test", "android", "run")

    val assembleReleaseApkTask = findByName("assembleProductionRelease")
    val assembleAndroidTestApkTask = findByName("assembleProductionReleaseAndroidTest")
    dependsOn(
      named("writeMockApiConfigYaml"),
      assembleReleaseApkTask,
      assembleAndroidTestApkTask)
  }
}
Configuring Flank jobs
You might have spotted that our task depends on another, called writeMockApiConfigYaml. Flank is a tool that makes it easy to parallelize test runs on Firebase Test Lab, and this Gradle task creates the configuration file that Flank will look for. We register the task, with its implementation defined by YamlConfigWriterTask, like so:
register("writeMockApiConfigYaml", YamlConfigWriterTask::class.java, someProjectConfig).configure {
outputs.upToDateWhen { false }
}
The details of YamlConfigWriterTask will differ greatly for each team's needs, and are largely left as an exercise for the reader. For the purposes of this post, though, it's important to mention that the output YAML must declare test-targets, app, and test in a way that respects our testing flavors. To allow this, we pass the necessary configuration via a constructor argument (here, a data class instance named someProjectConfig). For example, the test-targets entry for our mock API build might be package com.wealthfront.test.mockapi.
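For illustration, the generated Flank YAML for the mock-API variant might look roughly like this; the APK paths and shard count are assumptions:

```yaml
gcloud:
  app: app/build/outputs/apk/production/release/app-production-release.apk
  test: app/build/outputs/apk/androidTest/production/release/app-production-release-androidTest.apk
  test-targets:
    - package com.wealthfront.test.mockapi
flank:
  max-test-shards: 8
```

The integration-server variant would differ only in its test-targets package and APK paths.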
With these dual Flank jobs defined in Gradle, we can easily invoke them from our CI system of choice: ./gradlew runMockApiTest runIntegrationServerTest.
Conclusion
I hope this post helps you build your own team’s testing infrastructure. Keep an eye out for a future post where we discuss how our team battles unreliability (AKA “flakiness”) in our UI tests. If you’re as committed to quality test coverage as we are, and would like to help Wealthfront build amazing financial products, take a look at our careers page!
© 2022 Wealthfront Corporation. All rights reserved.