How not to implement test automation – An illustrative example

In the previous blog post, we took a look at test automation and how it adds value in the face of increasingly complex software and demanding users. We need to keep in mind, though, that test automation is not a “get out of jail free” card that magically solves every problem. Let’s walk through an illustrative scenario demonstrating how not to implement test automation.

An organization set out to evaluate the various commercial automation tools available on the market. It brought in sales staff from the vendors, sat through product demonstrations, and performed a thorough internal evaluation of each automation tool. After careful deliberation, the organization chose one vendor and placed an initial order worth several million dollars in product licensing, maintenance contracts, and onsite training. Finally, the tools and training were distributed throughout the organization to the various test departments, each with its own set of projects.

There were numerous projects, and the respective test efforts had nothing in common: the applications were vastly different. Just as importantly, each project had its own schedule and deadlines to meet.

Every one of the test departments separately began coding functionally identical common libraries: routines for setting up the appropriate test environment and for accessing the requisite programming interfaces. They went on to write file-handling routines, string utilities, and database-access routines, which eventually led to duplicated code and designs and increased complexity.
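
To make that duplication concrete, here is a minimal sketch, in Python and with entirely hypothetical names, paths, and schemas, of the sort of common library each department was building independently. Multiply it by the number of test departments to see the wasted effort.

import configparser
import sqlite3
from pathlib import Path

def load_test_environment(config_path: str) -> dict:
    # Environment-setup helper: read URLs, credentials, and timeouts
    # from the [environment] section of an INI-style config file.
    parser = configparser.ConfigParser()
    parser.read(config_path)
    return dict(parser["environment"])

def read_test_data(data_file: str) -> list[str]:
    # File-handling helper: one test input per line, blanks skipped.
    lines = Path(data_file).read_text().splitlines()
    return [line.strip() for line in lines if line.strip()]

def fetch_expected_results(db_path: str, test_id: str) -> list[tuple]:
    # Database-access helper: pull expected results for one test case.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT field, expected_value FROM expected_results"
            " WHERE test_id = ?",
            (test_id,),
        ).fetchall()

Nothing here is application-specific, which is exactly the point: every department wrote, debugged, and maintained its own copy of the same plumbing.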

Likewise, for their respective test designs, the teams each captured application-specific interactive tests using the typical capture/replay tools. Some test groups went further and modularized key reusable sections, creating libraries of application-specific test functions or scenarios. This was intended to reduce the code duplication and maintenance burden that purely captured test scripts frequently incur. For some of the projects this might have been very appropriate and helpful, had it been done with sufficient planning on top of a suitable automation framework; that was seldom the case. Still, with these modularized libraries, testers could create automated functional tests in the automation tool’s proprietary scripting language through a combination of interactive test capture, manual editing, and manual scripting.
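
The tools in this story used proprietary scripting languages, but the modularization idea translates directly to an open-source tool such as Selenium, which serves here purely as an illustration; the URL and every locator below are hypothetical.

from selenium import webdriver
from selenium.webdriver.common.by import By

def login(driver: webdriver.Chrome, username: str, password: str) -> None:
    # Reusable application-specific scenario: the login flow lives in
    # one place instead of being re-recorded inside every test.
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def test_valid_login() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")  # hypothetical URL
        login(driver, "testuser", "s3cret")
        assert driver.find_element(By.ID, "welcome-banner").is_displayed()
    finally:
        driver.quit()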

The only problem was that, since the test teams were entirely separate, none of them thought past its own project. Although each team was setting up something of a reusable framework, every framework was completely unique, even where the common library functions were the same. This meant extra work in the form of duplicate development, duplicate debugging, and duplicate maintenance. Meanwhile, each project still had tight deadlines and was forced to limit its automation efforts in order to get real testing done.

Worse still, changes to the various applications started breaking the existing automated tests, and script maintenance and debugging became a significant challenge for the teams. Upgrades to the automation tools themselves also caused significant and unexpected script failures; in some cases, teams were forced to downgrade to older versions of the tools. Allocating resources for continued test development and test-code maintenance became a difficult issue. Eventually, most of these test automation projects were put on hold, and the teams began revisiting their approach to understand what had gone wrong and how to fix it in order to arrive at a comprehensive test automation solution.
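
The maintenance trap is easy to see in miniature. Continuing the hypothetical Selenium sketch from above: a raw captured script hard-codes every interaction, so a single renamed field breaks every script that recorded it, whereas a shared helper concentrates the fix in one place.

from selenium import webdriver
from selenium.webdriver.common.by import By
from test_library import login  # the shared helper sketched earlier (hypothetical module)

def captured_checkout_test(driver: webdriver.Chrome) -> None:
    # Recorded verbatim: if the "username" field is renamed in the next
    # release, this locator (and its copies in every other recorded
    # script) must be edited by hand.
    driver.get("https://app.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()
    driver.find_element(By.LINK_TEXT, "Checkout").click()

def modular_checkout_test(driver: webdriver.Chrome) -> None:
    # The same flow through the shared login() helper: the rename is
    # fixed once, inside login(), and every calling test keeps working.
    login(driver, "testuser", "s3cret")
    driver.find_element(By.LINK_TEXT, "Checkout").click()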
