Test automation in software development

// IT Quality Improvement

That which functions without any error is good.

It sounds simple, yet it is of central importance: Software that functions without any error is good, as software errors cost time and money. That applies to standard software, but of course also to custom applications such as those used at many companies in day-to-day business. Through automated tests, it is possible to continuously measure and assure software quality. Undesired side effects of software changes can be identified and avoided directly through automated tests.


Manual and automated tests are quite similar in their approach and build on one another where necessary. An automation specialist can create valid test scripts from existing test scenarios, even without in-depth knowledge of the application under test.

Modern test suites provide the option to record scripts and execute them as often as needed. However, each specific case is an individual one, and so a reworking of the scripts is generally necessary to make test execution dynamic. This is necessary, for instance, to be able to create different users with different roles within the test.
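The reworking described above can be sketched as turning static recorded steps into a parameterised script. The step names and the `run_step` stub below are assumptions for illustration; a real test suite would drive the recorded UI actions instead.

```python
# Sketch: a recorded user-creation dialog reworked into a dynamic script
# that can create different users with different roles.

def build_user_script(username, role):
    """Build the step list for creating one user with a given role."""
    return [
        ("open_user_dialog", {}),
        ("enter_username", {"value": username}),
        ("select_role", {"value": role}),
        ("submit", {}),
    ]

def run_step(action, params):
    # Stub: a real runner would perform the action in the application here.
    return f"{action}({', '.join(f'{k}={v}' for k, v in params.items())})"

# The same script template now serves any combination of user and role.
for user, role in [("alice", "admin"), ("bob", "reader")]:
    for action, params in build_user_script(user, role):
        run_step(action, params)
```

The design point is simply that the recorded steps become data, so one script covers many test variants.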

At suitable spots in the test scripts, e.g. after the data from a multi-page dialog has been sent, the script has to compare the system's response with a pre-defined response. When programming the comparison, great care has to be taken to achieve stable test scripts and to keep change costs down. The latter are incurred, for example, when the current date is included in the system response, so that the response changes from run to run.
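One common way to keep such comparisons stable is to mask the volatile parts of the response before comparing. The sketch below, a minimal illustration rather than any particular tool's API, normalises dates in ISO format so that the date in the system response no longer breaks the comparison.

```python
import re

# Replace any ISO-format date with a placeholder before comparing,
# so a changing current date does not make the test script unstable.
DATE = re.compile(r"\d{4}-\d{2}-\d{2}")

def normalise(response: str) -> str:
    return DATE.sub("<DATE>", response)

def responses_match(actual: str, expected: str) -> bool:
    return normalise(actual) == normalise(expected)

expected = "Order 4711 created on 2023-01-15"
actual = "Order 4711 created on 2024-06-02"   # date differs, rest identical
assert responses_match(actual, expected)
```

Real responses may contain other volatile fields (IDs, timestamps); each gets its own normalisation rule in the same fashion.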

A test has to work with plausible but artificially generated data. Said data has to be generated and, upon completion of the tests, removed from the system. Here, too, the implementation of automated processes is helpful. With the help of scripts, input data for tests can be generated and a data cleanup can be performed automatically upon completion of the tests.


Performing automated tests, especially when using suitable test tools, is very simple to achieve – more or less at the push of a button. The challenge lies in analysing the test results.

Test scripts generate actions in individual steps to which the system responds, just like it would to a real user. For each test run, a results report is automatically generated and distributed upon completion of the test. If the system's response in one of the test steps does not match the specified one, the test run is terminated. In the test report, the spot at which the test was terminated is documented and reproducible. An analysis then determines whether the test case no longer works correctly due to an intended software change or whether an actual programming error exists.
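A minimal sketch of this run-and-report behaviour, with illustrative stand-ins for the real steps and responses, looks like this:

```python
def run_test(steps, expected, respond):
    """Execute steps in order; terminate at the first mismatching response."""
    report = {"executed": [], "failed_at": None}
    for step, want in zip(steps, expected):
        got = respond(step)
        report["executed"].append(step)
        if got != want:
            # Record where and why the run stopped, so the report
            # makes the failure reproducible for the analysis.
            report["failed_at"] = {"step": step, "expected": want, "actual": got}
            break
    return report

steps = ["login", "open_order", "submit"]
expected = ["ok", "order shown", "confirmed"]
# Simulated system: the last step answers with an error.
respond = {"login": "ok", "open_order": "order shown", "submit": "error 500"}.get

report = run_test(steps, expected, respond)
```

The resulting report records both the executed steps and the exact point of termination, which is the input for deciding between an intended change and a genuine bug.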

In combination with automated software builds, e.g. through nightly builds, constant quality control within the development process becomes possible.


At times, there are complaints that test automation is expensive and does not pay off. When no due diligence is applied prior to implementing automated tests, this can actually be the case. So, what needs to be taken into consideration?

One prerequisite for automating tests are established manual test processes that contain test case descriptions documented in detail. Ideally, the test cases are kept together in a tool which versions the individual test cases and logs their execution. Manual test cases serve as the basis for creating test scripts through which tests can run automatically. Based on the change history of the test cases, it can be determined how stable the tested software is with respect to the specific test content. The number and history of test executions provide an indication of how important and current a test case is.

The following rule applies: If a test case is no longer changed but continues to be frequently used, it has reached a high degree of maturity. Here, automation will definitely pay off, since the manual effort can be reduced considerably.

Another point of entry into test automation is a planned migration to a new software version, for instance a pending technology switch from one database manufacturer to another. Since test automation allows for end-to-end testing, the use of automated tests can check in an easy fashion whether the new system delivers the same results as the old one. If the tests are not successful, the test reports provide an indication of the root cause. This way, manual testing expenses can be reduced without sacrificing quality.
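Such a migration check boils down to running the same test cases against both systems and comparing the results. The two system functions below are stand-ins for the real old and new backends; any mismatch is collected for the test report.

```python
# Stand-ins for the old and new systems; real tests would query both backends.
def old_system(case):
    return {"balance": 100, "rows": 3}

def new_system(case):
    return {"balance": 100, "rows": 3}

def compare_systems(cases):
    """Run each test case against both systems and collect mismatches."""
    mismatches = {}
    for case in cases:
        old, new = old_system(case), new_system(case)
        if old != new:
            mismatches[case] = {"old": old, "new": new}
    return mismatches

# An empty result means the new system delivers the same results as the old.
assert compare_systems(["case_1", "case_2"]) == {}
```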


To test the system's behaviour under everyday load, performance tests are conducted. Therein, the system load is gradually increased. A so-called load generator imitates user actions during the test, up to the expected number of users and/or up to the planned system crash due to overload. With the help of these stress tests, the maximum possible system load is determined. They furthermore allow for preventive measures in case of load peaks and for statements with respect to the system's readiness for the future.
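A load generator of this kind can be sketched with threads: the number of simulated users is ramped up step by step, and each "user" repeatedly performs an action. The `user_action` function is a stand-in for a real recorded test script; a production load test would use a dedicated tool instead.

```python
import threading

results = []
lock = threading.Lock()

def user_action(user_id):
    # Stand-in for one user action; a real test would call the system here.
    with lock:
        results.append(user_id)

def simulate_user(user_id, actions_per_user):
    for _ in range(actions_per_user):
        user_action(user_id)

def run_load(max_users, actions_per_user=2):
    """Gradually increase the number of concurrent simulated users."""
    for load in range(1, max_users + 1):
        threads = [
            threading.Thread(target=simulate_user, args=(u, actions_per_user))
            for u in range(load)
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

run_load(3)
```

Measuring response times per load level in `user_action` would then yield the load curve the paragraph above refers to.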

Existing automated test cases can be adapted in modified form for use in performance or stress tests. This saves costs and increases the authenticity of the test.


Automated tests – used at the right spots – can serve as a quick and meaningful path to direct feedback within software development. Undesired side effects of software changes can be identified early on and avoided. Therefore, the quality of the overall product is testable at any time "at the push of a button". Automated and manual tests are an important aspect of quality control and economic prevention in software development and change. In each individual case, the setup and scope of the tests can be specified and limited together with experienced experts.

noventum consulting's experts have been supporting their customers for many years in establishing testing procedures, selecting and using the respective tools, as well as implementing test automation.


noventum consulting

Marcus Baetz

