Why test automation is easy, but testing is tough
As a Test Lead for DWP Digital, I wanted to provide some insight into the tools and techniques we are using on our products and services, and how test automation fits into our plans for service modernisation.
We are evolving from building monolithic products to microservice-based ones. Doing this allows a large application to be separated into smaller independent parts, with each part having its own realm of responsibility.
This transformation needs to be supported by a robust test strategy. To keep pace with the speed and scale of the transformation, test automation is the only answer. DWP Digital’s delivery squads are also on the journey of automating everything – from infrastructure to build, test and deploy – and then repeating this cycle.
Procuring the right tools
We have invested in GitLab Ultimate, with its capability for Continuous Integration (CI) and Continuous Delivery (CD). GitLab Ultimate provides tools and capabilities that were traditionally used only at the end of the delivery lifecycle – for example, IT health checks for security, accessibility and performance testing.
Test engineers in DWP Digital are also using accessibility tools like Pa11y, which allows user interfaces to be verified for accessibility compliance on every code merge, as and when code changes. GitLab also provides k6, an open-source load testing tool, as an inbuilt performance testing capability for engineering teams. Finding the right tool for the right job is important in maintaining efficiency.
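As an illustrative sketch only (not our actual pipeline), checks like these can be wired into a GitLab merge-request pipeline. The job names, images and the target URL below are assumptions for the example, not real configuration:

```yaml
# Illustrative .gitlab-ci.yml fragment: run Pa11y accessibility checks
# and a k6 load test on every merge request.
accessibility:
  stage: test
  image: node:20   # in practice Pa11y also needs a browser-capable image
  script:
    - npm install -g pa11y
    - pa11y http://localhost:8080/   # URL of the service under test (assumed)
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

load_test:
  stage: test
  image: grafana/k6:latest
  script:
    - k6 run load-test.js   # k6 script kept alongside the code (assumed)
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

Running both jobs on merge requests is what gives the "verified on every code merge" feedback loop described above.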
The evolution of tooling
Tooling keeps evolving, and newer tools often have better features than their older counterparts. In general, if they are used effectively, they can and do add value.
However, while embracing new tools and new ways of working, we also need to remember some of the old tried and tested principles.
The testing pyramid, for example. Whether you’re working with monolithic or microservice applications, using waterfall or agile delivery methodology, manual or automation testing, the testing pyramid principles are like a north star.
The key principles of the test pyramid are:
- More granular tests in lower environments and a continuous integration pipeline
- Shift left (in a test pyramid context – test more in lower environments)
- Fast feedback in lower environments
- As we move up to higher environments, the focus of testing needs to shift to interface design and business-driven scenarios, covered by integration testing and end-to-end testing respectively.
- Fewer tests in higher environments, as they are expensive due to dependencies on different products. Test all possible scenarios in lower environments so there is no need to repeat them at a higher level.
There are other test techniques which are equally important, such as boundary value analysis and test coverage.
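To illustrate boundary value analysis with a made-up example: suppose a rule applies to people aged 18 to 65 inclusive. The function and the age values below are hypothetical, not taken from a real DWP service; the point is that tests sit on either side of each boundary, where defects most often hide:

```python
def is_eligible(age: int) -> bool:
    """Hypothetical rule: eligible if aged 18 to 65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: test each side of both boundaries,
# not just arbitrary mid-range values.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"failed at age {age}"
print("all boundary cases pass")
```

A common implementation bug – writing `<` instead of `<=` – would pass a mid-range test at age 40 but fail immediately at the age 65 boundary case.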
Test automation doesn’t make the quality of your software good. It is the testing which is carried out – the quality of the test cases and test scenarios – that drives the quality of the software. Test automation just makes it run fast, repeatedly and without human intervention.
Remove the blinkers
In microservice architecture-based products, it’s very easy to focus only on your own product’s behaviour. And, to a large extent, that is the right approach. However, as testers, we need to think from a business or end user perspective.
Ask questions: what does the business want? What is this data telling us? Is your product sending the right output based on the data? Is this really what the business wants?
It’s so important to speak to your business analyst and, if possible, the business analyst for the end product, to clarify the product behaviour based on the data.
Remember, humans can apply common sense, but products cannot. So, this common sense needs to be built into the product’s behaviour.
Testing for microservices
The objective of the contract testing phase is to test at the boundary of an external service, verifying that it honours the contract agreed between consumer and provider services. In simple words, contract testing is testing against the ‘live contract’ published by the consumer or provider to a broker accessible by both parties. By doing so, any breaking change by either side is immediately caught in their continuous integration pipeline.
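To make the idea concrete, here is a minimal, dependency-free sketch of the mechanism – in practice a tool such as Pact and a real broker would be used; the field names, types and functions below are my own illustrative assumptions. The consumer publishes the response shape it relies on, and the provider’s CI pipeline checks its actual response against it:

```python
# Contract the consumer has published (field name -> expected type).
consumer_contract = {"claim_id": str, "status": str, "amount_pence": int}

def provider_response() -> dict:
    # Stand-in for calling the provider's real endpoint (assumed shape).
    return {"claim_id": "C123", "status": "APPROVED", "amount_pence": 5400}

def verify_contract(response: dict, contract: dict) -> list[str]:
    """Return a list of breaking changes (empty means the contract holds)."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

# Run in the provider's CI: fails the build on any breaking change.
assert verify_contract(provider_response(), consumer_contract) == []
```

If the provider renamed `amount_pence` or changed it to a string, this check would fail in their pipeline before the change ever reached the consumer.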
The topic of end-to-end testing has polarised the test community. Before we go into end-to-end testing, let me clarify a few testing terms which are used by different testing resources to mean different things.
The first is assuring that each function integrates with the others within the microservice boundary (resources, service layer/domain, gateway, data mapper) to provide working software that satisfies the agreed acceptance criteria. Some teams call this integration testing, as it integrates components within the microservice.
The second, in the microservice world, is testing the interactions between the product under test and all of its interfaces, for example other microservices or the backend. This is driven by the product under test. Some teams call this an end-to-end test. Maybe it’s an end-to-end test with the blinkers on!
The third, in the microservice world, is testing the end-to-end business scenario to achieve business objectives. This may span multiple microservices or backends and is driven by the end customer or the business.
Rather than getting into what a specific testing phase is called, it is important to talk to interfacing teams and understand the objective of what they are trying to achieve.
End-to-end tests are expensive: data set-up can be complex, they can run slowly, and they can fail for unexpected and unforeseeable reasons.
To make end-to-end testing valuable, the following principles need to be followed:
- Write as few end-to-end tests as possible – limit to happy path / critical business scenarios
- Focus on personas and user journeys
- Choose your ends wisely
- Rely on infrastructure-as-code for repeatability
- Make tests data-independent
- The business needs to own the end-to-end test scenarios
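One way to read "make tests data-independent" is that each test creates the data it needs, rather than relying on pre-seeded records that other tests might mutate. A hypothetical sketch of that pattern – the claimant fields, the `TEST-` prefix convention and the API helper are all illustrative assumptions:

```python
import uuid

def create_test_claimant(api_create) -> dict:
    """Create a fresh, uniquely identified claimant for this test run,
    instead of depending on shared pre-seeded data."""
    nino = f"TEST-{uuid.uuid4().hex[:8].upper()}"  # unique reference per run
    return api_create({"nino": nino, "name": "E2E Test User"})

# Stand-in for the real 'create claimant' API call (assumed for the example).
def fake_api_create(payload: dict) -> dict:
    return {**payload, "id": uuid.uuid4().hex}

claimant = create_test_claimant(fake_api_create)
assert claimant["nino"].startswith("TEST-")
# Two runs never collide, so the suite can run repeatedly or in parallel.
assert create_test_claimant(fake_api_create)["nino"] != claimant["nino"]
```

Because every run generates its own records, a failed or interrupted run cannot poison the data for the next one.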
Value add – there is much more to it than test reports
Testing is much more than just assessing the acceptance criteria and focusing on a binary pass or fail. Checking for functional correctness is important, but testing shouldn’t be limited to that.
Also, the testing process should not make testers alone responsible for reporting on the status of testing.
The whole delivery squad needs to actively listen to the testing heartbeat. What I mean by this is paying attention to test outcomes. Pain points can drive operational intelligence and feed into the operational team that supports your application. User researchers can learn from application journeys. Business analysts and product owners can test and learn the end-to-end process, and identify waste in the business process to bring efficiency. Project managers can learn about business risk, rather than only focusing on project risk.
Overall, we should focus on quality in our testing, rather than just correctness. Squash the perception of ‘passed test = quality’.
DWP Digital has a complex landscape, with multiple business lines, lots of third-party interfaces and a need for consistent data flow between these systems. The transformation journey to microservices and event-driven architecture will help to achieve improved self-service and increased trust, to provide a holistic customer and colleague experience.
To achieve these objectives, I believe that we should try to automate as much as possible – infrastructure, build, test and deploy – and repeat this cycle. Use the right tools, and use their reporting features effectively to avoid creating technical debt. Do the right testing in the right environment. Think about your business objectives while developing your product. And make testing your project team’s heartbeat.
Are you ready to tackle digital transformation on a huge scale? We are currently recruiting test engineers.