Testing is evolving: new tools and technologies keep coming to the market, so there is always something to learn. However, the basics are still as important as ever. Writing test cases is still part of our day-to-day responsibility, and every QA engineer writes and documents hundreds of test cases during their career. But are they written effectively? Were they planned carefully? And were they documented well enough for another tester to update and troubleshoot them? There is plenty of information available on how to document tests well, yet the same mistakes keep appearing. In this article we will focus on the most common ones.
A common problem in documenting test cases is a lack of detailed steps, or even missing steps altogether. When the author writes a test case, the scenario may be obvious to them. But if another person uses it for execution, it may not be clear whether steps or details are missing. Detailed steps give the test case clarity and save the executor's time. Adding every step, even when it sounds too obvious, is a must.
Assume that at some point you'll share the scenario with someone who is not familiar with the app. Make each test case standalone: specify complete details, specify the data, and don't leave room for ambiguity.
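One way to force yourself to spell out every detail is to capture test cases in a structured form. A minimal sketch, assuming a simple in-house format (the field names and the sample case below are invented for illustration, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A self-contained test case: someone unfamiliar with the app should be able to run it."""
    case_id: str
    title: str
    preconditions: list   # environment, version, user, initial state
    steps: list           # every step, even the "obvious" ones
    test_data: dict       # concrete inputs, never "use any valid value"
    expected_result: str

# Hypothetical example of a fully specified case.
login_case = TestCase(
    case_id="TC-101",
    title="Login with valid credentials",
    preconditions=["Staging environment, build 2.4.0", "User account exists and is active"],
    steps=[
        "Open the app and wait for the login screen",
        "Enter the email from test_data into the Email field",
        "Enter the password from test_data into the Password field",
        "Tap the 'Log in' button",
    ],
    test_data={"email": "qa.user@example.com", "password": "Passw0rd!"},
    expected_result="Home screen is shown with the user's name in the header",
)
```

A structure like this makes a missing precondition or an unstated data value visible at a glance, because the field is simply empty.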
Another common mistake is not specifying the initial state before test case execution. Even detailed steps might not help here, because test cases can be executed from different invocation points, environments, or versions. In the end, the test might run against different functionality, which can distort the reporting and the real testing status of the product.
To avoid these misunderstandings, include a detailed initial state so executors can run the tests for the right functionality in the right environment.
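In automated tests, the same idea appears as an explicit setup step that puts the system into a known state before every run (in pytest this is usually done with fixtures). A framework-free sketch, where the "cart" is just an invented stand-in for application state:

```python
def reset_cart(cart):
    """Bring the cart to a known, empty initial state before the test runs."""
    cart.clear()
    return cart

def test_add_single_item():
    # Never assume the cart is empty -- establish the precondition explicitly.
    cart = reset_cart(["leftover item from a previous run"])
    cart.append("notebook")
    assert cart == ["notebook"]

test_add_single_item()
```

Without the explicit reset, this test would pass or fail depending on what an earlier run left behind, which is exactly the ambiguity a documented initial state removes.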
Two QA engineers run the same test. One says, "Great, it works!" The other says, "Bummer, I need to open a ticket." It sounds like a joke, but in reality two QA engineers will assess the desired outcome in different ways. Be specific about how you assert the desired outcome. Is it a text? An element? Consistency between backend and frontend? Persistence?
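In automated form, "be specific" means asserting the exact text, element, or cross-layer consistency rather than just "no error happened". An illustrative sketch with invented page text and API data:

```python
def assert_order_confirmed(page_text, api_order):
    """Vague check would be 'the page loaded'. Specific checks are below:
    exact text, plus consistency between backend data and what the UI shows."""
    # Exact text, not just "some confirmation appeared".
    assert "Order confirmed" in page_text
    # Backend state matches what the test expects.
    assert api_order["status"] == "confirmed"
    # Frontend/backend consistency: the UI shows the same order ID.
    assert f"Order #{api_order['id']}" in page_text

assert_order_confirmed(
    page_text="Order confirmed. Order #482 will ship tomorrow.",
    api_order={"id": 482, "status": "confirmed"},
)
```

With assertions this explicit, both engineers from the joke above reach the same verdict.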
Shifting gears to planning: test cases are written against requirements (documentation, user stories, and so on). It is important to test functionality based on the requirements, but that is only half of the job. Negative test cases frequently catch bugs that positive scenarios miss. That is why adding negative test cases in addition to positive ones will improve overall quality and potentially catch more defects during execution.
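As a small illustration, suppose the requirement is "a username must be 3 to 20 alphanumeric characters" (an invented rule): the positive case checks a valid name, while the negative cases probe everything the system must reject.

```python
def is_valid_username(name):
    """Hypothetical requirement: 3-20 characters, letters and digits only."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Positive case: the requirement as written.
assert is_valid_username("alice42")

# Negative cases: inputs the system must reject.
assert not is_valid_username("ab")          # too short
assert not is_valid_username("a" * 21)      # too long
assert not is_valid_username("bad name!")   # forbidden characters
assert not is_valid_username("")            # empty input
```

Notice that the negative cases outnumber the positive one; that ratio is typical once you start asking "what should happen when the input is wrong?"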
There are different techniques that can be used for test design, such as equivalence partitioning, boundary value analysis, decision table testing, and state transition testing.
Using these techniques helps cover as many scenarios as possible and can prevent tricky bugs from leaking into production.
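To make one of these techniques concrete, here is a boundary value analysis sketch for an invented "quantity must be 1-100" rule: the interesting values sit at the edges of the valid range and one step outside them.

```python
def is_valid_quantity(qty):
    """Hypothetical requirement: order quantity must be between 1 and 100 inclusive."""
    return 1 <= qty <= 100

# Boundary value analysis: test at each edge and just beyond it.
assert not is_valid_quantity(0)    # just below the lower bound
assert is_valid_quantity(1)        # lower bound
assert is_valid_quantity(100)      # upper bound
assert not is_valid_quantity(101)  # just above the upper bound
```

Off-by-one mistakes live exactly at these edges, which is why the technique pays off so often.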
One of the mistakes is to check several functionalities in one test case. It is possible to do so, but if one of the points in the expected result fails, the whole test case fails, which gives misleading data for the test execution.
A better approach is to test only one sub-functionality per test case. In case of failure, it is easier to track what exactly is broken. It is also better for reporting, giving clear visibility into testing status.
Keeping each test this focused lets you quickly discover what's wrong and fix it.
If you have test cases covering the same functionality across different projects or platforms, maintaining separate copies everywhere is time-consuming.
Making test cases reusable saves time on future maintenance: you update the test case itself, and it is updated automatically everywhere it is used.
As an example of reuse across platforms: when you have an app for Android and iOS with similar functionality, you can use a single test case instead of two separate ones per platform. Including a parameter for which platform to use can cut your test-writing time roughly in half.
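In automated suites this is exactly what parameterization does (in pytest, typically `@pytest.mark.parametrize`). A framework-free sketch, with an invented deep-link helper standing in for real platform-specific setup:

```python
def launch_url(platform):
    """Hypothetical helper: build the launch deep link for each platform."""
    schemes = {"android": "myapp://home", "ios": "myapp-ios://home"}
    return schemes[platform]

def check_home_screen(platform):
    """One test case, parameterized by platform, instead of two near-identical copies."""
    url = launch_url(platform)
    assert url.endswith("://home")

# The same logic runs once per platform.
for platform in ("android", "ios"):
    check_home_screen(platform)
```

When the flow changes, you edit one function instead of hunting down the Android and iOS copies separately.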
Many products have different user profiles. Some profiles might have access to functionality that other profiles don't. You should test both that the right functionality is served to the right users and that certain profiles are restricted from certain functionality.
Imagine the following horror scenario: your app is an HR app, and a buggy release leads to regular users having access to everyone's salaries.
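A sketch of the access test for that scenario, with an invented permission table (real products would check this against the actual authorization layer):

```python
# Hypothetical role-to-permission mapping for the HR app.
PERMISSIONS = {
    "hr_admin": {"view_salaries", "view_profile"},
    "regular_user": {"view_profile"},
}

def can_access(role, action):
    """Return True only if the role is explicitly granted the action."""
    return action in PERMISSIONS.get(role, set())

# Positive: the right functionality is served to the right profile.
assert can_access("hr_admin", "view_salaries")
# Negative: the horror scenario above must stay impossible.
assert not can_access("regular_user", "view_salaries")
# Unknown roles get nothing by default.
assert not can_access("anonymous", "view_profile")
```

The negative assertions are the important ones here: a permissions bug usually means access was granted, not denied.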
It is very common to use dependencies when documenting test cases. A dependency here means that one test case depends on the outcome of another. When a test management tool is used, there is usually a way to link the test case you depend on. Otherwise, you can simply add the ID of the prerequisite test case to the dependent one, which helps identify what should be executed first.
Ideally, reduce such dependencies and keep tests independent: if a test case that others depend on fails, all of its dependents are blocked.
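A common way to break such a chain is to have each test create its own preconditions instead of consuming another test's output. A sketch with an invented in-memory "store":

```python
def create_user(store, name):
    """Test helper: build the data a test needs, inside that test."""
    store[name] = {"active": True}
    return store[name]

def test_deactivate_user():
    # A dependent version would reuse the user created by an earlier
    # "create user" test; this independent version builds its own,
    # so it can never be blocked by that test failing.
    store = {}
    user = create_user(store, "alice")
    user["active"] = False
    assert store["alice"]["active"] is False

test_deactivate_user()
```

The small duplication in setup buys you tests that can run, fail, and be debugged in isolation, in any order.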
Properly planned and documented test cases can save a significant amount of maintenance time in the future. Moreover, they give clarity to whoever executes them. Unclear tests can lead to wrong output, testing in the wrong environment, testing with the wrong user, and more, all of which can affect your report and may cause bugs to leak into production.
By following best practices, and keeping in mind what should not be done when writing tests, you can improve quality while saving time on execution and maintenance.
This post was written by Alona Tupchei. Alona has six years of experience in automation testing and the manual testing of web-based and mobile applications. She’s been working on different projects in the domains of e-commerce, real estate, and airlines. She executes her testing not only from a technical point of view but from the customer’s point of view and believes that usability of a product is as important as functionality.