More than a Hot Take: An Assertion Statement Should Serve as the Focal Point for Any Well-Written Automated Test Specification

Photo by Balint Mendlik on Unsplash
What software development organizations need from automated tests are the same two things they need from any testing: serviceability and efficiency.
In order to be serviceable, automated tests need not only to function as expected (consistently) but also to provide coverage of the system under test, so that they can gather sufficient feedback to inform an assessment of the functional state of the work product. In order to be efficient, in addition to running efficiently (and consistently), automated tests need to be easy to read, easy to refactor/rework, easy to organize, and potentially easy to remove from service.
Some of the most time-consuming activities an organization will undertake related to automated testing involve maintaining those tests, so the easier the organization makes any of the above within a test specification, the more it does itself a favor, one that becomes more likely to pay off the longer the specification remains in service.
Within this, assertion statements play a critical role that might not be immediately obvious. Assertion statements provide three main benefits (arguably closer to three-and-a-half at first glance) that help make maintenance of automated tests more efficient:
- They define a clear objective for the inquiry that the specification executes in service of.
- They provide clear constraints (by way of criteria for success) for test runtime, as well as clear reporting when a behavior exhibited by the system under test falls outside those constraints.
- They provide a clear focal point to organize test composition around. Much like the role a topic sentence plays within a paragraph, assertion statements provide a description of the topic (by way of the objective for inquiry) for the specification.
At the same time assertion statements make it clear what is being evaluated (i.e. which output is being compared to expectations) and how that evaluation will be conducted (i.e. what the expectations actually are), they also provide a clear point that test composition (and test composers) can focus on. In these ways, an assertion statement functions much like a target for the associated test specification. By using them as a target (much like the targets used in certain forms of archery), organizations can hopefully enjoy tests that are not just serviceable but also efficient to maintain throughout their service life.
Despite the clear benefits, I have encountered each of the following three test composition issues in nearly every suite of automated tests I've worked within:
- Test specifications that do not use assertion statements at all (for example: the test passes if all operations complete as expected, but at no time is an assertion statement used to compare output extracted from the system under test to expectations).
- Test specifications that do not establish a clear relationship between the way output is compared to expectations within test runtime and the inquiry the specification is responsible for executing in service of.
- Test specifications that use many assertion statements (whether to evaluate different sets of output or to stop test runtime in cases where state or status do not conform to expectations) with varying relationships (if any) to the specification's general focus for inquiry.
This post will develop more clearly how assertion statements provide each of these benefits within automated test specifications that use them. It will also (if one accepts the metaphor comparing an assertion statement to targets used in certain forms of archery) develop briefly how the sorts of test composition issues listed above effectively miss the mark.
An Assertion Statement Sets a Clear Objective for Inquiry
Any scripted test (automated or manual) that is relevant executes in support of an inquiry of concern to the software development organization that depends on it.
As is often the case, the organization is likely aware of a concern (or set of concerns) relevant to stakeholders that defines part of the reasoning why testing is executed to begin with. In response to this concern, a test is devised to evaluate (in essence, inquire into) the current functional state of the feature set the concern relates to. By way of comparing output extracted from (or, with manual testing, observed/identified within) the work product/the system under test to a set of predefined expectations, it should be possible to provide feedback (in the form of what is effectively a true-or-false answer) on whether output from the feature set being evaluated matches the expectations. The organization can then use this feedback to assess whether the current functional state of the feature set meets organizational expectations more generally, as they relate to its concerns.
Within this, an assertion statement (for automated test specifications) does the work of comparing output extracted from the system under test (often referred to as the actual result) to expectations (the expected result). This is how automated test specifications produce the feedback that the organizations that depend on them use to inform assessments relevant to the concerns that (most likely) informed the need for the test to begin with.
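As a minimal sketch of that comparison (using Python's built-in unittest runner purely for illustration; format_full_name is a hypothetical feature under test, not anything from a real codebase), the assertion statement is the single line where the actual result is weighed against the expected result:

```python
import unittest


def format_full_name(first, last):
    """Hypothetical feature under test: formats a name for display."""
    return f"{last}, {first}"


class TestFormatFullName(unittest.TestCase):
    def test_formats_last_name_first(self):
        # The predefined expectation (the expected result).
        expected = "Lovelace, Ada"

        # Output extracted from the system under test (the actual result).
        actual = format_full_name("Ada", "Lovelace")

        # The assertion statement: the comparison that produces the
        # specification's feedback.
        self.assertEqual(expected, actual)


if __name__ == "__main__":
    unittest.main()
```

Everything else in the specification exists to set this comparison up; the assertion is where the feedback the organization depends on actually gets produced.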
At the same time, then, that an assertion statement serves as a sort of functional goal for test runtime, it also serves as the goal for the inquiry that an automated test specification is tasked with executing. At the same time it makes clear what needs to happen in runtime, it makes clear what needs to happen (from a value perspective) in order for the feature set being tested to meet expectations.
So from multiple perspectives (here, from the perspective of test runtime and the perspective of the set of organizational concerns that informed the inquiry the test executes in support of), an assertion statement works a lot like a target: it sets a clear objective for the inquiry the specification is responsible for executing.
By contrast, a test specification with no assertions has no clear objective. Imagine, for example, an archer who shoots all day without aiming at a target: despite all of the activity, assessing the success (or failure) of the archer's activity would not be focused, meaningful, or even simple. Along the same lines, the looser the relationship (if any) between the inquiry the specification executes in service of and the feedback the test produces, the more complex it is to understand what the specification's objective is. Automated tests with multiple assertion statements (especially the more loosely they are related) suffer from this problem and others: imagine, for example, that the same archer was given one arrow to hit multiple targets.
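To make the contrast concrete, here is a small sketch of the assertion-free anti-pattern next to a version that aims at a target (again in Python's unittest; register_user is a hypothetical feature under test):

```python
import unittest


def register_user(name):
    """Hypothetical feature under test: creates a record for a new user."""
    return {"name": name, "active": True}


class TestRegistration(unittest.TestCase):
    def test_registration_completes(self):
        # No target: every operation runs, nothing is compared to
        # expectations, so this passes regardless of what comes back.
        register_user("Ada")

    def test_registration_returns_an_active_user(self):
        # A target: this can only pass if the output matches the
        # expectation the inquiry actually cares about.
        result = register_user("Ada")
        self.assertTrue(result["active"])


if __name__ == "__main__":
    unittest.main()
```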
An Assertion Statement Provides Clear Constraints for Success (and Clear Reporting in Case of Failure)
If a scripted test executes in support of an inquiry, it should make conditions for success clear, and it should report clearly if the behavior exhibited by the system under test does not meet the specified conditions.
As noted above, what automated tests provide that is key to organizations that depend on them (as they attempt to make informed assessments related to the functional state of work product) is clear feedback. It seems reasonable to imagine that this is a lot like the feedback provided by the rings on a target: if an archer did not strike the bullseye, it's also valuable to assess how close they got. The rings on a standard archery target (and the point system they help to define) provide a means of making this feedback clear.
Automated tests also provide feedback, and in order for that feedback to be usable, tests need to manage the signal and noise they produce as they execute. If an automated test passes, it should be clear that it passed because (and only because) the system under test (and, really, the rest of the automated test) behaved as expected; passing for any other reason produces noise. And if a test fails, it should be clear both why the failure occurred and that the failure occurred because (and only because) the system under test failed to behave in a manner that meets expectations.
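A hypothetical sketch of how that noise tends to creep in: a conditional around the assertion means the specification can pass without the comparison ever happening (fetch_discount is an invented feature under test):

```python
import unittest


def fetch_discount(code):
    """Hypothetical feature under test: looks up a discount percentage."""
    return {"VIP10": 10}.get(code)


class TestDiscounts(unittest.TestCase):
    def test_noisy_pass(self):
        # Noise: the assertion only runs if the lookup returned something,
        # so an unrecognized code still produces a "pass" that says nothing.
        discount = fetch_discount("TYPO-CODE")
        if discount is not None:
            self.assertEqual(10, discount)

    def test_clear_signal(self):
        # Signal: this passes because (and only because) the behavior
        # under inquiry met the expectation.
        self.assertEqual(10, fetch_discount("VIP10"))


if __name__ == "__main__":
    unittest.main()
```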
In case of failure, the more clearly an automated test can report on the nature of the failure, the less overhead it presents to those responsible for troubleshooting. In a number of ways, it can help to think of any scripted test (manual or automated) as working like a ready-made bug report. The clearer a bug report makes what led to an issue and what the issue looks like when it occurs, the easier it should be (read: the less overhead it should require) for Software Engineers to determine the root cause, for Product Owners to assess impact, for Quality Engineers to plan necessary testing, and so on.
In much the same way, the clearer the feedback a failing test can provide about what exactly went wrong, the less effort should be needed (and, as noted previously, the less overhead should be required) to troubleshoot the failure.
Additionally, if one agrees that there are two distinct ways test runtime can produce a non-passing result (a test failure and a test error), a clear distinction between the two can serve as signal about the nature of whatever issue caused a test to exit with non-passing status. Specifically:
- A test failure occurs when a specification exits runtime with a non-passing status because the comparison defined by an assertion statement (between the expected result and actual result) failed.
- A test error occurs when a specification exits runtime with a non-passing result for any reason other than an assertion failure. For example, if an error is thrown arbitrarily in test runtime, the test will encounter a test error, and most test runners will report the specification as having exited with non-passing status.
If the specification is written well, then, a test failure can serve as clear signal that output from the system under test (specifically the feature set or behavior being examined within the inquiry that the test specification executes in support of) failed to meet expectations. And if a specification can enforce integrity during runtime (for example, using exceptions to report on situations where state and status deviate from expectations), it can increase the amount of clarity this distinction provides.
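Python's unittest, for example, already draws this line: an AssertionError raised by an assertion method is reported as a failure, while any other exception raised during the test is reported as an error. A sketch of a specification leaning on that split deliberately (load_profile is a hypothetical feature under test):

```python
import unittest


def load_profile(user_id):
    """Hypothetical feature under test: returns a user's profile record."""
    return {"id": user_id, "plan": "free"}


class TestProfilePlan(unittest.TestCase):
    def test_new_profiles_default_to_the_free_plan(self):
        profile = load_profile(42)

        # Integrity guard: if the test's own preconditions are broken,
        # raise a plain exception so the runner reports an ERROR rather
        # than a FAILURE.
        if "plan" not in profile:
            raise RuntimeError("precondition broken: profile has no plan")

        # The assertion statement: if this comparison fails, the runner
        # reports a FAILURE, which is clear signal that the behavior under
        # inquiry (the default plan) did not meet expectations.
        self.assertEqual("free", profile["plan"])


if __name__ == "__main__":
    unittest.main()
```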
Finally, assertion statements define a clear set of constraints within test runtime for a test to pass. Again: in order to provide clear signal, a well-written specification should pass if (and only if) the behavior it seeks to evaluate conforms to expectations. The more adeptly a specification can use any of the above, the more it narrows itself from a specification that could pass for any of a number of reasons to a specification that passes for only one reason.
As an alternative (where signal is less clear), imagine a target in archery with no rings and/or no bullseye. A test specification without an assertion statement amounts to a target with no bullseye (and possibly no rings): without a clear indication of conditions for success, it is difficult to assess how well performance meets expectations beyond the fact that the archer hit the target somewhere (or perhaps in a location with an unclear relationship to the main goal: the bullseye). A test specification that does not make the relationship between the inquiry and the assertion statement clear amounts to an inconsistent (or unpredictable) ring layout. And the more loosely any of a number of assertion statements connect to either this constraint or the feedback the test is expected to return, the more the specification resembles a target with multiple bullseyes.
An Assertion Statement Establishes a Focal Point around Which to Organize the Specification
In order to be easy to refactor/rewrite as needed, easy to organize, and potentially easy to remove from service, a test specification also needs to be easy to read.
To begin with, all of the above applies here. The clearer the specification's objective (as a function of supporting a specific inquiry), and the clearer the specification makes how it will provide feedback in case of failure and constrain conditions for success, the easier it will likely be to maintain.
At the same time, an assertion statement establishes a focal point that provides much the same value to the readability of a specification as a topic sentence to a paragraph.
Composing a paragraph (however unaware we may be of it) in support of a single topic or argument rendered within a topic sentence makes the paragraph easier to read. It's easier to follow because it is focused and well-organized. Just as it leaves the composer with a limited number of strategies to thread through additional ideas in a manner that connects them to the main idea, it also leaves the reader with a limited number of paths the paragraph can follow to ultimately resolve. So the better (in conventional writing) a writer can organize thoughts around a topic, an argument, an idea, or a specific train of thought, the better the chances the reader will be able to follow what's been rendered within the paragraph (and likely with less effort).
This principle is directly transferable to composing readable test specifications as well: the clearer an assertion statement is within a test specification, and the clearer the specification can make how the operations defined within it relate or lead to the assertion statement, the easier it will be to follow that thread. And where organizing a paragraph around a topic sentence provides a useful set of constraints that support readability, organizing a test specification around an assertion statement provides the same sorts of constraints that limit opportunities for very similar sorts of readability problems.
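In code, one way this might read (a sketch, with apply_late_fee as a hypothetical feature under test): the test name restates the expectation, every line above the assertion exists only to set it up, and the assertion resolves the thread the rest of the specification started.

```python
import unittest


def apply_late_fee(balance, days_overdue):
    """Hypothetical feature under test: adds a flat fee after 30 days."""
    return balance + 5 if days_overdue > 30 else balance


class TestLateFees(unittest.TestCase):
    def test_balances_more_than_thirty_days_overdue_include_a_flat_fee(self):
        # Setup: only the details the assertion depends on.
        starting_balance = 100
        days_overdue = 45

        # Exercise the behavior under inquiry.
        balance = apply_late_fee(starting_balance, days_overdue)

        # Resolve: the focal point the rest of the specification led to.
        self.assertEqual(105, balance)


if __name__ == "__main__":
    unittest.main()
```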
Archers do not simply aim for a point on a target: they focus on their goal, they align how they position themselves (and with that, the bow and the arrow) in support of that focus, and they release. Repeating this over time, they generally develop a sense of awareness and muscle memory that allows them to make minor adjustments with confidence in how those adjustments will affect the outcome once the arrow is released. With discipline, they are able to read their own movements and use that reading to predict the effectiveness of their performance.
Conclusion
Well-written automated test specifications should make it as clear as possible how they mean to compare extracted output to expectations, how that comparison will be executed, and how (if there is a mismatch in this comparison) tests can be expected to report on the nature of any mismatch. Ideally, this comparison should serve as both a point of constraint within test runtime (in essence a moment of truth where the system under test either conforms to expectations or it does not) and a point of focus within test composition that informs how the rest of the specification will be organized. Much like targets are used in target archery (and, really, any of a number of activities involving shooting), they should serve both as the objective and as a source of feedback. They do both by providing a test (literally) of whether the system under test hits or misses the mark as expected and by making the terms of that evaluation clear and easy to assess.
Meanwhile, some of the most expensive activities involved in automated testing (beyond executing test runs) frequently involve maintaining the tests. Troubleshooting failed test runs, refactoring existing specifications and test support code, and organizing or migrating specifications or suites of tests can all become the sort of operations problem that challenges the strategic benefit of testing efforts if they do not deliver on the following:
- Clear value/relevance. Does the specification add value by delivering feedback that suits the needs of the organization?
- Clear signal over noise. In case of failure, is the feedback returned by the specification actionable? In case of success, is the feedback meaningful?
- Test readability. Is it easy to get an idea of what the specification is doing without intimate knowledge of how either the test/test support code or the system under test is implemented?
- Openness to modification/revision. How simple does the specification make it to modify either the test composition or the way test operations have been defined in code?
Good use of assertion statements can contribute to all of these things because (as developed throughout this post, even if not by name), within well-written test specifications, they help improve and enforce test determinism. By defining (and limiting focus to) their own target, making the conditions for success clear, and establishing a focal point to organize the rest of the test around, assertion statements make the test specification's inquiry, operations, feedback provided in case of failure, and general compositional organization (all) deterministic. Meanwhile, specifications that use no assertion statement, specifications whose assertion statements bear only a loose relationship to the line of inquiry, and specifications with many loosely related assertion statements all ultimately challenge the determinism of an automated test specification.
Somewhat more abstractly (although, arguably, no less relevantly), the same way assertion statements provide a target for evaluating the system under test itself, they also provide a target for the authors to focus on, to aim for and seek feedback from, and to align efforts to. When repeated over time, the discipline it takes to define and reliably strike a target of this nature pays off.
As a general rule of thumb, then: the fewer assertions a test specification makes (especially assertions defining comparisons not directly linked to the inquiry the specification has been tasked with executing in support of), and the better a test specification can focus on (and thereby organize around) the limited assertion/s it does make, the better the specification will generally serve as a test. Where this improves test determinism, it can also generally be expected to help make testing more serviceable (by way of deliberately targeting and aligning around clear conditions for success and failure) and more efficient (by way of making tests easier to troubleshoot, easier to read, easier to rewrite/refactor, and easier to organize and potentially remove from service if needed).
In this, making the assertion statement the focal point of a test specification works a lot like beginning with the end in mind: at the same time assertion statements serve as a sort of destination to aim for in automated test runtime, they also serve as a great place to start when defining that runtime, as well as the inquiry it has been composed to execute in support of.