“Best Practices in Automated Testing”
Supercharging your test suite for robustness and performance
It’s an honor to present at Mobile DevOps Summit and to share the latest best practices in automated testing:
- Creating test suites that are performant, robust, and scale well over time.
- Getting past the typical anti-patterns of Test-Driven Development (TDD).
- Building reliable applications by integrating QA into the development process itself.
Please see the complete video on YouTube (above).
NOTES:
The talk is intended to be conceptual, aspirational (and hopefully inspirational).
Some of the recommendations are different from common TDD and QA practices, though, and there are legitimate questions around putting it all into practice.
So, some thoughts …
TDD & Unit Testing
The term “unit testing” is often used interchangeably with “automated testing.” However, unit testing is just one style of how automated tests can be done.
And as explained in the talk, it’s a very limiting style.
Classic TDD usually teaches us to write implementation code — and then, in tandem, write tests with a 1:1 correspondence with those classes/functions/etc.
That can be a fast path to technical debt.
- When we couple our tests to our implementation, those tests become brittle: when implementation details change, the tests break. Or we become reluctant to refactor the implementation, for fear of breaking the tests.
- That tight coupling also tends to encourage breaking encapsulation in order to test the underlying code. This can happen in many ways: for example, by injecting special objects to inspect private variables. Sometimes, breaking encapsulation is even formalized in the language itself (e.g., @testable in Swift).
When we feel the need to break encapsulation in order to test, that’s a code smell.
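To make this concrete, here’s a minimal Swift sketch of the anti-pattern. The CartCalculator type, its discount code, and its cached subtotal are all hypothetical (not from the talk); the point is how a test that reaches for internal state becomes hostage to implementation details:

```swift
import XCTest

// Hypothetical implementation, working in integer cents to keep the math exact.
// In a real project this type would live in the app module, and the test below
// would need `@testable import` to reach the internal `cachedSubtotalCents`.
final class CartCalculator {
    internal private(set) var cachedSubtotalCents = 0
    private var priceCents: [Int] = []

    func add(priceCents cents: Int) {
        priceCents.append(cents)
        cachedSubtotalCents = priceCents.reduce(0, +)
    }

    func totalCents(discountCode: String) -> Int {
        let discount = (discountCode == "SAVE10") ? cachedSubtotalCents / 10 : 0
        return cachedSubtotalCents - discount
    }
}

final class CartCalculatorImplementationTests: XCTestCase {
    func test_discount_readsInternalCache() {
        let calculator = CartCalculator()
        calculator.add(priceCents: 1000)
        calculator.add(priceCents: 2000)

        _ = calculator.totalCents(discountCode: "SAVE10")

        // Asserts on an implementation detail. Replace the cached subtotal with
        // an on-the-fly computation and this test breaks, even though the
        // behavior the user cares about hasn't changed.
        XCTAssertEqual(calculator.cachedSubtotalCents, 3000)
    }
}
```

Rename the cache, compute the subtotal on the fly, or restructure the class, and this test fails even though nothing a user can observe has changed.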
Behavioral Testing & Overall Architecture
Effective behavioral testing requires good application architecture. And architecture — especially clean architecture — is a topic unto itself.
Here’s part of it:
Conceptually, it’s helpful to think of our business logic as a “black box.” We shouldn’t care how the work gets done; we simply focus on the results we get back. [And worth noting: the illustration used in the video literally uses a black box.]
This elevates our thinking, and that of our tests, to a higher conceptual level. It then becomes easier to collaborate with the rest of the team in defining those requirements, because we’re already operating at that level.
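For contrast, here’s what a black-box test of the same hypothetical CartCalculator (from the sketch in the previous section) might look like. Inputs go in, a result comes out, and the test never peeks inside:

```swift
import XCTest

final class CartCalculatorBehaviorTests: XCTestCase {
    func test_save10Code_takesTenPercentOffTheSubtotal() {
        let calculator = CartCalculator()
        calculator.add(priceCents: 1000)
        calculator.add(priceCents: 2000)

        let total = calculator.totalCents(discountCode: "SAVE10")

        // Only the observable result matters. How the calculator arrives at it
        // (a cached subtotal, an on-the-fly sum, anything else) is free to change.
        XCTAssertEqual(total, 2700)
    }
}
```

Because the test only states a requirement (“the SAVE10 code takes 10% off”), the implementation underneath is free to change without breaking it.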
As for other architectural considerations (and sample implementations), please see: Reduxion-iOS, or The Composable Architecture.
BDD-Testing Frameworks
A good BDD workflow takes the behavioral scenarios the team creates, and auto-generates stubbed-out testing functions in the code. The developer’s job, then, is “simply” to fill in that testing code … and then create the implementation that makes it work.
And if we automatically ingested the BDD scenario text files into the code — say, directly from a project management system or CMS — the entire team could collaborate directly, with a single source of truth. And that might even supply both the iOS and Android codebases.
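As a rough sketch of what a generated stub could look like (the scenario text, feature name, and test name here are hypothetical, and the exact shape depends on the BDD tooling you choose), a scenario might be turned into an XCTest skeleton along these lines:

```swift
import XCTest

// Hypothetical scenario text, as the team might write it in a BDD tool or CMS:
//
//   Scenario: Returning user sees their saved cart
//     Given a user with two items saved in their cart
//     When the user signs back in
//     Then the cart shows those two items
//
// A generator could emit a stub like the one below; the developer's job is to
// fill in the body, then create the business logic that makes it pass.
final class ReturningUserSeesSavedCartTests: XCTestCase {
    func test_givenTwoSavedItems_whenUserSignsBackIn_thenCartShowsThoseItems() {
        XCTFail("Pending: implement this generated scenario stub")
    }
}
```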
UI-Testing Frameworks
As described in the talk, UI-based testing easily becomes brittle and bloated, and doesn’t scale well. It can also become its own form of tech debt.
So be wary of UI-testing frameworks, and the tradeoffs of that style of testing. Use them for the minimum needed — and not for testing behavioral scenarios.
If you find yourself doing much of that, consider it a code smell. Something about your testing methodology or architecture might benefit from being done differently.
Presentation Layer vs. Business Logic
One ongoing question:
When does our code belong in the presentation layer (activities / view controllers), and when in the business logic?
Here’s a simple rule of thumb …
If we need to write test(s) for it, put it into business logic.
Examples:
Data formatting for display
If it’s trivial or inconsequential, and you don’t need to test it, then it may not matter. But if you find yourself writing test(s) that instantiate view controllers or activities in order to test it — think twice. As described in the talk, that approach doesn’t scale well.
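Here’s a minimal sketch of the alternative, using a hypothetical PriceFormatter: the formatting lives in plain business logic, and the test never instantiates a view controller or activity:

```swift
import Foundation
import XCTest

// Hypothetical business-logic formatter: a plain type, no UIKit involved.
struct PriceFormatter {
    func displayString(forCents cents: Int) -> String {
        let dollars = cents / 100
        let remainder = cents % 100
        return String(format: "$%d.%02d", dollars, remainder)
    }
}

// The view controller would simply assign the returned string to a label.
final class PriceFormatterTests: XCTestCase {
    func test_formatsWholeAndFractionalDollars() {
        let formatter = PriceFormatter()
        XCTAssertEqual(formatter.displayString(forCents: 1999), "$19.99")
        XCTAssertEqual(formatter.displayString(forCents: 500), "$5.00")
    }
}
```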
Navigation
Navigational logic is often embedded in activities / view controllers / storyboards / XIB / XML files, etc. However, UI is tricky and expensive to test.
A “coordinator”-style pattern allows us to encapsulate those navigation decisions into a piece of business logic instead, and that logic becomes lightweight and efficient to test, as needed.
This is one key part of making the UI layer “dumb as a rock.”
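A minimal sketch of the idea, with hypothetical route names and a hypothetical LaunchCoordinator: the decision about where to go lives in a plain type, and the UI layer simply renders whatever route it’s handed:

```swift
import XCTest

// Hypothetical routes and coordinator. No UIKit types are involved, so the
// decision logic is cheap to test.
enum Route: Equatable {
    case onboarding
    case login
    case home
}

struct LaunchCoordinator {
    func route(hasCompletedOnboarding: Bool, isLoggedIn: Bool) -> Route {
        guard hasCompletedOnboarding else { return .onboarding }
        return isLoggedIn ? .home : .login
    }
}

// The view controller / activity only has to map a Route to a screen;
// it makes no decisions of its own.
final class LaunchCoordinatorTests: XCTestCase {
    func test_newUser_isRoutedToOnboarding() {
        let coordinator = LaunchCoordinator()
        XCTAssertEqual(coordinator.route(hasCompletedOnboarding: false, isLoggedIn: false), .onboarding)
    }

    func test_returningLoggedInUser_isRoutedHome() {
        let coordinator = LaunchCoordinator()
        XCTAssertEqual(coordinator.route(hasCompletedOnboarding: true, isLoggedIn: true), .home)
    }
}
```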
Code Coverage
Be wary of metrics that incentivize the wrong things.
For the reasons described, simply maximizing “unit test” coverage … might put us on the fast path to a new form of technical debt. Though that’s something we may not realize until later, after that debt has accrued.
And even when done well, we won’t get to significant test coverage overnight. That’s especially true for existing projects with legacy code.
However, given a clean architecture, 100% coverage is an attainable goal for “net-new” work. Using our behavioral scenarios, we ideally test all code paths, to the degree necessary to assure correctness.
Existing code can be refactored into these new patterns as time allows.
Snapshot Testing
Automatically capturing screen snapshots can be valuable.
- It may take the place of some of our manual QA testing.
- It makes defects directly visible, for the whole team to see.
Caveat:
As mentioned in the talk, graphical user interfaces are computationally ‘expensive’ to construct, as well as brittle to test. A focus on testing UI may also reinforce some of the anti-patterns we’ve discussed, under the guise of a new form of automation.
So, be wary of snapshots as a substitute for behavioral testing of our underlying business logic. That’s what should comprise the bulk of our automated test suite.
Continuous Integration
Many projects use a C.I. server to generate builds, and to run automated tests.
But that isn’t enough. An automated test suite is only as good as what it tests.
Be mindful of automation for the sake of automation. Just because something’s automated doesn’t mean it’s good … or effective at furthering our goals as an engineering organization.
Manual QA Testing and Continuous Delivery
Back-end teams use continuous delivery — so why shouldn’t we?
Because mobile applications, and almost anything user-facing, are qualitatively different.
One key difference is the type of user “interface.”
- With a purely textual interface (e.g., an API with a REST query string), it’s possible to test functionality thoroughly and automatically, since that interface is free from all the nuances of a GUI. Thus, back-end services lend themselves more readily to C.D.
- Mobile apps (and even web apps) do have graphical user interfaces. And because of the many nuances of a GUI, continuous delivery without the benefit of human eyes and hands might be impractical … if the goal is to ensure an application that looks and feels right to a human end-user.
However, we can still minimize the time and effort spent on QA after the C.I. build. If we’ve designed our automated test suite well, the QA effort at this stage more closely resembles smoke-testing.
Last, keep in mind that simply delivering a new version of a mobile app doesn’t guarantee any given user is running the latest build. This is different from back-end services and web apps, which update for the end user automatically, or with a simple refresh of the page.
Full Regression Testing vs. Basic Acceptance Testing (BAT)
If our full test suite is performant, then running the whole suite should happen quickly — ideally, in 15 minutes or less.
By contrast, having a “BAT” is a workaround … for when the full suite takes much longer to perform (manually or automatically).
If we can embrace the newer set of practices described in the talk, there’s no reason to run only a subset of tests. We just run the full suite, early and often.
And with that, our confidence in what we’re delivering increases accordingly.
Platforms and Frameworks and Dependencies — Oh, My!
Always beware: the patterns embodied in frameworks, even those provided by platform vendors, don’t necessarily represent ideal architecture.
Clean architectural principles are violated all the time. Learn the difference, and be ever-vigilant about keeping those dependencies at “arm’s length” from your precious application code.
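One common way to keep a dependency at arm’s length is to put it behind a small interface that the application owns. A minimal sketch, with hypothetical names (the AnalyticsTracking protocol and the adapter are illustrative, not from the talk):

```swift
// The application owns this small protocol; business logic depends only on it.
protocol AnalyticsTracking {
    func track(event: String, properties: [String: String])
}

// A thin adapter is the only place that knows about the vendor framework.
struct VendorAnalyticsAdapter: AnalyticsTracking {
    func track(event: String, properties: [String: String]) {
        // Call into the third-party SDK here (omitted; vendor-specific).
    }
}

// In tests, a simple spy stands in for the real SDK.
final class AnalyticsSpy: AnalyticsTracking {
    private(set) var trackedEvents: [String] = []
    func track(event: String, properties: [String: String]) {
        trackedEvents.append(event)
    }
}
```

Business logic talks only to the protocol; swapping vendors, or stubbing analytics in tests, touches a single adapter and nothing else.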
If you care about architecture, this talk from “Uncle Bob” Martin is worth an hour of your time:
You may never think of the subject the same way again. :)
Thanks for watching — and happy testing! 🚀