Best practices for the automation of integration testing

Our senior QA Analyst, Sakina Abedi, describes the vital role software testing plays in delivering high-quality products and the best practices involved in automating testing.


Why do we do automation testing?

Automation tests speed things up – a machine can do testing for us, as many times as we like, and much faster than if we were to test manually.

A great automation testing setup delivers quality assurance fast, regularly, reliably and cost-effectively.

A well-designed automation test suite is made up of a broad range of tests covering critical pieces of functionality that can be run in a short amount of time.

This allows testing to be run more regularly, quickly providing feedback that new features don’t negatively impact the user experience or break existing pieces of functionality.

The overall release process is faster and smarter, because the test execution can be triggered automatically in the release pipeline whenever a code change is introduced. The feedback from this execution can determine the quality of the code change introduced.


Some context around automation on different testing levels

Automation tests can be run on many test levels – unit testing level, integration testing level and so on.

While automation of tests is great, it is equally important to understand the context of testing for the given test level.

For example, unit testing is done to ensure the given component works as it should, so that any changes made to the code do not break the individual component’s expected behaviour.

Multiple components come together to deliver a piece of functionality. Integration testing is done to ensure the functionality works as expected, so that any changes made to the code do not break the functionality.

Integration testing is usually performed by the tester to validate that the functionality works as intended. Integration tests that cover the critical paths are added to the regression testing suite, because every time there is a code change, we want to ensure that the existing functionality hasn’t been impacted. As regression testing must be performed regularly, it makes sense for machines to step in and do this job for us. This leads to the need for automation of integration tests.


15 best practices for the automation of integration testing


1. Tying tests to functionality rather than the implementation

Integration tests should be written such that they continue to pass unless the functionality breaks. A change in implementation shouldn’t affect them as long as the functionality stays the same.

For example, if a button moves from one side of the page to another, the tests should continue to pass.

This can be achieved by following best practice #2 below.

2. Unique identifiers

When selecting elements during integration tests, two things need to be kept in mind.

Firstly, use a unique identifier to select the element.

In order of precedence, the attributes used to select could be data-test-id, aria-label, id or class.

For example, a selector for a select dropdown element or an input element that has one of these attributes is a good one, as long as it’s unique enough to identify that one element on the page.

Secondly, use a selector that aligns with the user’s perspective when interacting.

For example, a user may look for a button containing certain text. Hence, a selector that looks for it is a good one, as long as it’s unique enough to identify that one element on the page. 
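As a rough sketch, here is how both kinds of selectors might look in a Cypress test written in TypeScript (the data-test-id value and button text are hypothetical):

```ts
// Cypress spec file -- `cy` is a Cypress global, so no imports are needed.

// Select by a unique attribute, following the order of precedence above.
cy.get('[data-test-id="country-select"]').select('New Zealand');

// Select the way a user would: a button containing certain text.
cy.contains('button', 'Save contact').click();
```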

3. Test one piece of functionality at a time

To get the best results, each functionality needs to be tested in a test of its own. 

Having one test execute multiple functionalities would mean that if the first functionality breaks, the status of the remaining functionalities is masked until the first one is fixed. At that point it is difficult to predict the overall impact of the code change. So always test one functionality per test.

For example, creating a contact would be one test, updating a contact would be another test and deleting a contact would be the third.

The number of functionalities that are broken by a code change, and the severity of those breaking changes, could help make informed decisions for the upcoming release. 

In a project where Continuous Integration/Continuous Delivery (CI/CD) has been implemented, this would notify the developer of the impact made by the new code change, and the developer could work on fixing it immediately, so that the pipeline passes before the code is merged.
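A minimal sketch of this structure in Cypress, assuming a hypothetical contacts page and selectors, with one test per functionality:

```ts
describe('Contacts', () => {
  it('creates a contact', () => {
    cy.visit('/contacts');
    cy.contains('button', 'New contact').click();
    cy.get('[data-test-id="contact-name"]').type('Jane Doe');
    cy.contains('button', 'Save').click();
    cy.contains('Jane Doe').should('be.visible');
  });

  it('updates a contact', () => {
    // ...its own steps and assertions...
  });

  it('deletes a contact', () => {
    // ...its own steps and assertions...
  });
});
```

If creating a contact breaks, the update and delete tests still report their own results, so the overall impact of the code change stays visible.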

4. Bidirectional traceability

While this may differ depending on the software development lifecycle being followed for a given project, it is crucial to be able to trace an automated test all the way back to its requirements and vice versa. This helps measure the value of the test, the test coverage achieved for a given application, the impact made by a code change to the requirements and so on.

For example, Requirement X can have four test cases, and each test case can have an automated test. The test cases for a high-priority requirement have high value, and so do their tests.
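One lightweight way to keep that trace, as a convention rather than a rule, is to embed the requirement and test case IDs in the test titles so that test reports map straight back to the requirements (REQ-X and TC-1 to TC-4 are hypothetical IDs):

```ts
describe('REQ-X: Contact management', () => {
  it('TC-1: creates a contact', () => { /* ... */ });
  it('TC-2: updates a contact', () => { /* ... */ });
  it('TC-3: deletes a contact', () => { /* ... */ });
  it('TC-4: searches for a contact', () => { /* ... */ });
});
```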

5. Wait until no loader is present

Although different automation tools strive to do such things under the hood for us, waiting until no loader is present (when there are one or more) before proceeding to the next step makes the test more reliable and avoids premature interactions with the User Interface (UI).
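In Cypress, for instance, this can be a single assertion placed before the next interaction (the loader selector is hypothetical):

```ts
// Proceed only once every loader has disappeared from the page.
cy.get('[data-test-id="loader"]').should('not.exist');
```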

6. Avoid using delays

Although loaders are a common indicator that a page is still loading, they are not always present. Without one, the automation test simply rushes to interact with the element the very instant it is present in the UI. Hard-coded delays may seem to work, but they are not good practice. Moreover, they are not reliable, since it is not possible to predict a duration short enough to keep tests fast yet long enough to work every time.

To deal with this, let’s think of it from a human perspective. What do we look for before we proceed to interact with an element in the UI? We wait for it to appear on the UI.

That’s exactly what we’ll automate to achieve human-like behaviour in our automation test. For any element that will be interacted with, always wait for it to be visible before interacting.

Note: sometimes due to window size and with certain automation tools, you may further need to scroll to the element before interacting with it.
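A sketch of the difference in Cypress (the selector is hypothetical):

```ts
// Avoid: a fixed delay is either too short (flaky) or too long (slow).
// cy.wait(5000);

// Prefer: wait for the element to be visible, then interact with it.
cy.get('[data-test-id="submit-button"]')
  .should('be.visible')
  .click();
```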

7. Each test should run independently

As a best practice, we need each test to run in isolation. This means the test does not depend on the one(s) executed before it in a sequence as a precondition for running successfully. Each test that mutates the data or the state of the application should restore it after execution so that it does not interfere with the test(s) executed after it.

Running tests in isolation enables the tests to be executed in any order and also in parallel on different machines to optimise the test run, which is a crucial factor to achieve CI/CD.
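As a sketch, a hypothetical reset endpoint can restore the application state after each test, so the tests can run in any order:

```ts
describe('Contacts', () => {
  afterEach(() => {
    // Restore anything this test mutated so that the tests running
    // after it (in any order, on any machine) are unaffected.
    cy.request('POST', '/api/test/reset');
  });
});
```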

8. Each test can create its own test data

While a fixed test dataset can be seeded before the test suite is executed, this may not be a sustainable option. Whenever a test is outdated or descoped, any data that was created for it in the dataset has to be removed. This step is easily missed or skipped, leaving unwanted data sitting in the dataset, and it is difficult to say which tests depend on which part of the dataset. On the other hand, new data might need to be added to the dataset every time a new test with new test data requirements is written.

Instead, we can create the data that a test needs as a test fixture, at the beginning of each test. This will not only save the time taken for the seeding process before the test suite is run but will help create deterministic data in each test to get deterministic results. When tests are run in parallel, the time taken to create test fixtures will also be parallelised.
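A sketch of per-test data creation, assuming a hypothetical contacts API:

```ts
it('updates a contact', () => {
  // Create exactly the data this test needs, at the start of the test,
  // so the test is deterministic and independent of any seeded dataset.
  cy.request('POST', '/api/contacts', { name: 'Fixture Contact' })
    .then((response) => {
      cy.visit(`/contacts/${response.body.id}`);
    });
  // ...perform the update and assert the result...
});
```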

9. Strong-typed data

Make sure that data is never hard-coded in the test. Anything that is bound to change can be declared as a variable at the beginning of the test. This also enables switching to a test-fixture implementation in future, where data utilised from the seeded dataset can easily be replaced with the test fixture. This way, the rest of the test stays unaffected.
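For example, in TypeScript the test data can be declared once, with a type, at the top of the test (the names are illustrative):

```ts
interface ContactData {
  name: string;
  email: string;
}

// Declared once; the rest of the test only references these fields,
// so swapping in a test fixture later touches just this block.
const contact: ContactData = {
  name: 'Jane Doe',
  email: 'jane.doe@example.com',
};

cy.get('[data-test-id="contact-name"]').type(contact.name);
cy.get('[data-test-id="contact-email"]').type(contact.email);
```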

10. Modularise where possible

When a particular set of steps is repeated across multiple tests, it can be modularised into a helper method (aka a partial test case). However, we need to carefully assess what we’re trying to modularise here.

For example, if we’re creating a contact in multiple tests, it means we are unnecessarily testing the contact creation functionality in multiple tests (refer to best practice #3 to see why that is not a good idea). This means we need one test for the contact creation functionality, and the remaining tests just need a contact as test data.

On the other hand, if multiple tests open the browser, visit a given page and wait for the page to load, this is a good candidate. Another example would be a method to wait for a loader to not be present since this would apply regardless of which page of the application or which test it is.
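A sketch of such a helper, reusable across pages and tests (the loader selector is hypothetical):

```ts
// Visit a page and wait until no loader remains before continuing.
function visitAndWaitForLoad(path: string): void {
  cy.visit(path);
  cy.get('[data-test-id="loader"]').should('not.exist');
}

visitAndWaitForLoad('/contacts');
```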

11. … but avoid excessive modularising

While modularising is a good practice, in the case of automation testing it is also about keeping things simple, maintainable and readable. The goal is to have the machine do testing for us that is close to human-like behaviour. It is totally okay to have tests that simply visit a page, interact with a few elements and assert the expected behaviour.

Trying to modularise simple things like interacting with input elements or select dropdowns would not really serve the purpose since different elements on different pages will be selected by different attributes based on what is unique.

For example, one select dropdown may have a data-test-id that we can use, whereas another may not, resulting in us utilising, let’s say, its class attribute.

12. Separate the actual test execution from the setup

Not every line in the test is the actual test itself. The first few lines may just be the test fixture setup. Depending on which automation tool and language is used, it is a good idea to separate the setup process into a before() block, or to use appropriate comments to differentiate between the two.
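With Cypress and Mocha, for instance, the setup can sit in a before() block so the it() block contains only the test itself (the endpoint, data and selectors are illustrative):

```ts
describe('Contacts list', () => {
  before(() => {
    // Setup only: create the test fixture and open the page.
    cy.request('POST', '/api/contacts', { name: 'Jane Doe' });
    cy.visit('/contacts');
  });

  it('shows the contact in the list', () => {
    // The actual test: interaction and assertion only.
    cy.contains('Jane Doe').should('be.visible');
  });
});
```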

13. Usage of flow control is a big no-no

Automation tests are basically code written to test the application code. The results from each test for a given test case have to be deterministic all the time. Introducing any flow control indicates that different behaviour is expected from an individual test, which shouldn’t be the case.

Any conditional flow means there is logic involved that expects different behaviour based on different outcomes.

Any looping involved would mean the same set of steps is being executed multiple times, which is the opposite of best practice #3. Not to mention, any flow control logic creates the need to test that logic itself, which defeats the purpose of automation testing.
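A sketch of the anti-pattern and its deterministic alternative in Cypress (the selectors are hypothetical):

```ts
// Anti-pattern: branching on UI state means the test no longer
// asserts one expected behaviour, so its result is not deterministic.
// cy.get('body').then(($body) => {
//   if ($body.find('[data-test-id="banner"]').length) {
//     cy.get('[data-test-id="banner-close"]').click();
//   }
// });

// Deterministic alternative: assert the single expected state outright.
cy.get('[data-test-id="banner"]').should('not.exist');
```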

14. Automation of good candidates only

While automation of tests is great, the cost, effort, and time for setup, implementation, maintenance, and management of test automation should not be underestimated.

For software development lifecycles (especially iterative and incremental ones) where the project evolves over time and code changes are introduced regularly, frequent testing of existing functionalities (regression testing) needs to be executed.

Hence, test cases in the regression testing suite are good candidates for automation.

Automation testing should not be looked at as a replacement in cases where manual testing would be better.

15. Complete assertions

Oftentimes, during the visits and clicks in our automation tests, we may miss a very important step: the expected result itself! This leaves incomplete assertions scattered across the test. When an interaction is done in the UI, it is crucial to complete the test step by asserting that the expected result for that particular step was achieved.

For example, if clicking on a button with certain text results in an overlay on the screen, make sure to assert it. If this step fails, you’ll know straight away that either the click itself failed or the overlay didn’t appear on click. Failing to assert it would mean that once the test clicks the button, it moves on to the next test step, interacting with the overlay, and that is where the test would fail instead. This makes debugging the failed test a lot more complex: only after inspecting the artefacts would we know that the failed click or the missing overlay caused the original issue.
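A sketch of the idea in Cypress (the button text and overlay selector are hypothetical):

```ts
cy.contains('button', 'Edit contact').click();

// Complete the step: assert the expected result of the click before
// moving on, so a failure points at the click rather than a later step.
cy.get('[data-test-id="edit-overlay"]').should('be.visible');
```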

To conclude, the automation of integration tests is powerful and gives us a lot of benefits, from facilitating the CI/CD pipeline to never having to manually re-test functionality that has already been automated, and lots more. By keeping these best practices in mind, we can make our automated integration tests a lot more efficient, valuable, reliable, simple and maintainable.
