The Space Race of the 20th century left multiple casualties: the deaths of several astronauts, space debris left behind after each mission, and economic damage so significant that it helped decide the outcome of the Cold War in favour of the USA.
Likewise, the race for software-testing automation can leave casualties of its own: unmet customer expectations; countless “junk” scripts that no one will ever use again because of a single functional change or a change in the system’s architecture; and, needless to say, a long wait for the promised return on investment and expected savings. All this for not clearly establishing the direction and goals of our automation initiatives from the start of the journey.
A clear path starts with the why
To avoid these dire consequences, it is crucial to answer several questions before starting the automation journey. In this article, we help you analyse some of the most important ones yourself:
Why automate software testing?
This question might seem obvious, and many organisations may already have several clear answers in mind: “to save hours in regression testing”, “to promote agility”, or “to monitor pre-production environments”, to name a few. Regardless, it is still the most important question of all – and the one that will guide the rest of your journey.
And as key as it is to define the why, it is equally crucial to communicate it to your team, so that no one loses sight of the automation goals.
We once had a case where, even though the automation objectives were clear and had been shared with the partner, the partner took a different path and created a huge number of scripts over the same workflow, each with different validations. With so many scripts, managing and monitoring them became very difficult. Then the software changed. Maintaining all the scripts became impossible, and it was more convenient to develop everything again from scratch – this time performing all the validations of the flow in a single script.
The many hows of automation
Once the objective of the automation is clear, the next important question is: “how do we do it?”. Several others emerge from it, such as:
- What do we know about automation?
The first thing I’d recommend is identifying the AS-IS: what do we already know about the subject? The gap between the AS-IS and the TO-BE is one of the first things to eliminate or reduce, because anything we do must rest on solid knowledge of the subject. Having skilled people internally, or an experienced partner, can be very helpful at this phase.
- Do I have a team or partner with the skills to do it?
Having the right people, or the right partner, is key to success – not only for their technical knowledge, but also for the ownership they take of the objective.
- How much will the project cost?
Usually, a PoC (proof of concept) or an MVP (minimum viable product) is enough to give us an idea of costs. And while many PoCs end up staying in production, the way they were built should by no means become the process for the future. A PoC aims to answer whether “it can be done” – and if it can, it’s time to start planning!
- When should I expect to see results?
At the end of the day, automation is a development project, which makes planning straightforward. It can also be handled within sprints, so timelines are determined by the size of what we want to do. The ideal scenario: take a system that is a good candidate for automation, automate a flow of medium complexity, and then use that reference time to plan the rest of the system’s flows.
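The reference-time idea can be sketched in a few lines. All numbers, flow names and complexity factors below are hypothetical, purely to illustrate how a reference flow can anchor the rest of the plan:

```python
# A rough planning sketch based on the reference-flow idea above.
# Every figure here is an assumption for illustration, not a benchmark.

reference_hours = 16  # measured: hours to automate one medium-complexity flow
complexity_factor = {"low": 0.5, "medium": 1.0, "high": 2.0}

# Hypothetical backlog of remaining flows and their assessed complexity.
backlog = [("login", "low"), ("checkout", "high"), ("search", "medium")]

# Scale the measured reference time by each flow's complexity.
estimate = sum(reference_hours * complexity_factor[c] for _, c in backlog)
print(f"Estimated effort: {estimate:.0f} hours")  # → Estimated effort: 56 hours
```

The point is not the arithmetic but the method: one real measurement on a representative flow turns the rest of the plan from guesswork into an estimate you can defend.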
- Should I automate everything?
Not necessarily: quality over quantity. Technically, I would dare say that “100% of tests can be automated”. However, achieving this would incur such high costs that we would be forced to discard the idea.
- What is really convenient to automate?
Prioritising the urgent over the important is a good technique, as is the well-worn 80/20 rule (Pareto). After all, what matters is being very clear about the value each script I develop contributes to the fulfilment of the business objectives.
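One way to make that value judgement explicit is a simple value-per-cost score. The following sketch ranks candidate flows by business value and execution frequency against the cost of automating them; the scoring formula, weights and flow names are all assumptions for illustration:

```python
# A minimal sketch of value-based prioritisation of candidate flows.
# The scoring formula and all figures are hypothetical.

def automation_score(business_value, execution_frequency, automation_cost):
    """Rough value-per-cost score: higher means automate sooner."""
    return (business_value * execution_frequency) / automation_cost

candidates = [
    {"flow": "checkout",      "value": 9, "freq": 30, "cost": 5},
    {"flow": "login",         "value": 8, "freq": 50, "cost": 2},
    {"flow": "export report", "value": 3, "freq": 4,  "cost": 8},
]

# Rank flows from best to worst automation candidate.
ranked = sorted(
    candidates,
    key=lambda c: automation_score(c["value"], c["freq"], c["cost"]),
    reverse=True,
)

print([c["flow"] for c in ranked])  # → ['login', 'checkout', 'export report']
```

Even a crude score like this forces the conversation the paragraph above calls for: what is this script actually worth to the business, and what does it cost us to build and keep?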
- How will I measure the result of my automation?
Set KPIs. As in any strategic planning, KPIs must be based on the main goal and secondary objectives. Also consider quality indicators, which are equally relevant in this area. Some example KPIs:
- Defect rate: defects found by the script (ideally expressed as defect density).
- Types of defects: coding, environment, data, etc.
- Failure rate: the percentage of a script’s executions that end in error.
- Minutes of manual execution vs. minutes of automated execution: this quantifies the time savings generated by your script.
- % automation coverage: automatable flows over the total universe of test flows in the system (a high target is not recommended here; arguably there should be no target at all).
- % progress in automation development: automated flows vs. automatable flows. Depending on your plan, compare the planned against the realised.
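Several of these KPIs reduce to simple ratios over data you are probably already collecting. Here is a minimal sketch of how they might be computed; the run records, field names and all figures are hypothetical:

```python
# Hypothetical execution records; the field names are assumptions.
runs = [
    {"script": "login",    "passed": True},
    {"script": "login",    "passed": False},
    {"script": "checkout", "passed": True},
    {"script": "checkout", "passed": True},
]

# Failure rate: share of executions ending in error.
failure_rate = sum(not r["passed"] for r in runs) / len(runs) * 100

# Manual vs. automated execution time, per regression cycle.
manual_minutes, automated_minutes = 240, 15
minutes_saved = manual_minutes - automated_minutes

# Coverage and progress, from hypothetical flow counts.
automatable_flows, total_flows, automated_flows = 40, 60, 25
coverage = automatable_flows / total_flows * 100       # % automation coverage
progress = automated_flows / automatable_flows * 100   # % progress in development

print(f"Failure rate: {failure_rate:.1f}%")
print(f"Minutes saved per cycle: {minutes_saved}")
print(f"Coverage: {coverage:.1f}%  Progress: {progress:.1f}%")
```

Whatever tool produces your execution logs, the habit that matters is computing these numbers on a schedule and reviewing them against the targets you set at planning time.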
Key lesson: planning
So far, then, we understand that automating is not as trivial as we were told: assembling a random server (usually any spare workstation), generating a script with whatever tool, and dedicating ourselves to executing it over and over, as if the more executions or scripts we have, the closer we are to our goal. It just doesn’t work like that.
The takeaway from this analysis is that before embarking on the path of automation, you must stop, for as long as it takes, to clearly: define objectives > plan > prioritise > define KPIs and targets > establish control points and tracking methods focused on the value of the automation, not on the number of scripts we can show off.
And if you are already on this path and the outlook is not positive, ask yourself these same questions to find opportunities to improve your team.
A final word on automation
To finish, and coming back to the “why”, here are some approaches to automation that add great value to the organisation:
- Automate to promote agility
Under this perspective, it is not convenient to automate 100% of test cases. Rather, consider only the most critical functional workflows, to avoid any business impact from a new version (regression tests). It is also good to include the tests that take the longest to execute manually, freeing manual testing to focus on the new cases and flows created by the code change. Finally, include the most complex tests, those that require advanced functional knowledge (this also frees the expert functional analyst to focus on other flows). These tests are ideally triggered in a continuous-integration scheme, before the push of a new code modification and after the corresponding code inspection (ideally also automated).
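The three selection criteria above (critical, slow to run manually, functionally complex) can be encoded as a simple filter over the flow inventory. The flow names, tags and threshold below are hypothetical, just to show the shape of the rule:

```python
# A sketch of selecting which flows belong in the CI regression suite,
# following the three criteria above. All names and figures are assumptions.

flows = [
    {"name": "payment",         "critical": True,  "manual_minutes": 20, "complex": False},
    {"name": "profile edit",    "critical": False, "manual_minutes": 5,  "complex": False},
    {"name": "tax calculation", "critical": False, "manual_minutes": 10, "complex": True},
    {"name": "bulk import",     "critical": False, "manual_minutes": 45, "complex": False},
]

SLOW_THRESHOLD = 30  # minutes of manual execution considered "slow"

# A flow enters the suite if it meets any one of the three criteria.
ci_suite = [
    f["name"] for f in flows
    if f["critical"] or f["complex"] or f["manual_minutes"] >= SLOW_THRESHOLD
]
print(ci_suite)  # → ['payment', 'tax calculation', 'bulk import']
```

In practice the same idea is often expressed through test tags or markers in the framework you use, so the CI job simply runs everything labelled for regression.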
- Automate to monitor the stability of the systems
When making a major modification to a system, impact analyses often miss some integrations with other systems, and it is common for an unidentified impact to affect the operation of one or more of them, halting tests until the change is corrected or rolled back. With this automation focus, it is advisable to cover only “happy paths” and run these workflows several times a day on a schedule (for example, with Jenkins), letting an online dashboard quickly reveal whether a system, service or server is having problems. To go one step further with the “online” notice, we can trigger an email, an SMS or a warning message in tools such as HipChat, so that team members find out quickly without having to check the dashboard.
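The shape of such a monitor is small: a set of happy-path checks, run on a schedule, with any failure pushed to a notification channel. In this sketch the checks and the notifier are stubs; a real version would exercise the actual flows and call a genuine email, SMS or chat integration, with a scheduler such as cron or Jenkins invoking the script:

```python
# A minimal sketch of scheduled happy-path monitoring. The checks and
# the notify() function are stubs for illustration; a real version would
# drive the actual UI/API and a real alerting channel.

def check_login():
    return True        # stub: a real check would exercise the login flow

def check_checkout():
    return False       # stub: pretend the checkout flow is currently failing

CHECKS = {"login": check_login, "checkout": check_checkout}

def notify(channel, message):
    # Stand-in for an email/SMS/chat integration.
    print(f"[{channel}] {message}")

def run_happy_paths():
    """Run every check once and alert on each failure; return failures."""
    failures = [name for name, check in CHECKS.items() if not check()]
    for name in failures:
        notify("alerts", f"Happy path '{name}' is failing")
    return failures

failing = run_happy_paths()
```

The design choice worth noting is that the monitor only asserts “the flow still works”, not every validation a regression script would make: that keeps it fast enough to run several times a day and its alerts unambiguous.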