Tuesday, September 30, 2008

One Approach to Automation Frameworks

This document is the product of a cursory reflection on my experiences in software development and quality assurance practices. An Automation Framework is a collection of processes, methodologies, tools, and data that helps increase product stability and provides metrics over time. It is therefore important to make a clear distinction between Test Frameworks (strategies) and Test Harnesses (tools). In my opinion, an Automation Framework cannot be developed without first evaluating the existing software development practices throughout the organization; the Automation Framework ties everything else together.

All too often, automation is viewed as an immediate time and money saver. Quite the contrary: the initial ramp-up is demanding, and while the new automation project is underway, manual testing must continue to meet production targets. The payoff with automation is realized over time; as test cases are automated, the regression suite grows. Eventually, metrics become available to illustrate growth in test coverage and product stability as the development cycle nears code freeze. By the second iteration, performance baselines can be integrated, and subsets of the test cases can be moved to a post-build (or smoke-test) suite to short-circuit the QA cycle.

Automation places new demands on a QA organization. The availability of a tools framework enables manual testers to elevate their creativity by spending their time trying to break the product in new ways, while the automated regression run takes care of the mundane. Test Automation Developers apply their creativity in a different way: by determining how to automate test cases. Meanwhile, framework tool developers must meet the needs of the automators by providing test execution, analysis, result management and metrics, and treat the framework with as much care as production software.

I believe the best approach to developing something as complex as an automation framework is to evaluate the existing practices and tools in a software development organization, assess the skill levels of various contributors, and take into account project release cycles and deadlines to produce a Requirements Document. From there, the most common use cases should be highlighted and the project’s development should provide usable tools to testers at each milestone. In this way, an organization can transition into a new methodology over time.

Some of the requirements include integration of third-party tools (such as Silktest or WinRunner for UI testing), consideration of existing test harnesses and legacy tools, language selection, proof of test coverage (metrics), and product release schedules. As with all automation frameworks, environment management for test setup and tear-down is a major consideration.


Documentation

Several departments must cooperate to automate effectively. Each produces one or more documents that provide enough information to align testing efforts with customer expectations. I will very briefly summarize these here:

SRS – Software Requirements Specification: provided by Product Management, this document should present the feature requirements and clearly state the Use Cases for the feature.

ERS – Engineering Requirements Specification: drafted by Engineering to specify the components which must be developed or altered to support the new feature, including database schemas, network protocols, operating systems, etc., and demonstrate that all Use Cases are supported. This document is based on the SRS.

TP, TC – Test Plan, Test Cases: The Test Plan is drafted by the QA department and is based on the SRS and ERS. Test Cases document the input and output parameters for each configuration possibility, starting with the most common and ending with corner cases. Each Use Case must be supported by one or more Test Cases (see the traceability sketch after this list).

Test Automation Framework: A collection of documents outlining practices, methodologies, tools and infrastructure for test automation.

TAP – Test Automation Plan: The Test Automation Plan is drafted by the QA department and is based on the TP and TCs. The approach must fit within the Automation Framework guidelines.
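
As a rough illustration of the Use Case-to-Test Case relationship described above, here is a minimal sketch of how traceability might be recorded. The class names, fields, and example identifiers are assumptions for illustration only, not part of any particular specification format:

# Hypothetical Use Case -> Test Case traceability sketch (Python).
# Names and fields are illustrative; real SRS/TP/TC documents will differ.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str      # e.g. "TC-101"
    inputs: dict      # input parameters for this configuration
    expected: dict    # expected output parameters

@dataclass
class UseCase:
    use_case_id: str  # e.g. "UC-7" from the SRS
    description: str
    test_cases: list = field(default_factory=list)

    def is_covered(self) -> bool:
        # Every Use Case must be supported by at least one Test Case.
        return len(self.test_cases) > 0

# Example: flag uncovered Use Cases before the Test Plan is signed off.
use_cases = [
    UseCase("UC-7", "User logs in with valid credentials",
            [TestCase("TC-101", {"user": "alice", "pw": "ok"}, {"result": "success"})]),
    UseCase("UC-8", "User resets a forgotten password"),  # no Test Cases yet
]
uncovered = [uc.use_case_id for uc in use_cases if not uc.is_covered()]
print("Uncovered Use Cases:", uncovered)

A check like this can run as part of Test Plan review, so coverage gaps surface before automation work begins.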


Elements Requiring Test Coverage


Software can be tested at various levels of operation. The most common are outlined here:

Unit – Smallest granularity at the code function or module level

Component – Tests the behaviors/services of elements that provide functionality to one or more features, according to the Engineering Requirements Spec (ERS); interaction with other components is kept minimal to localize bugs

Feature – Tests a subset of the overall product according to Product Management’s Software Requirement Spec. (SRS), ERS and feature Test Plan to prove Use Cases are adequately addressed


Closed or Open Testing

Depending on the openness of the software and the skill level of the tester, different approaches can be used to find software defects:

Black Box – The code isn’t available or considered. Given a set of inputs, a known environment, and stated expected behavior, the output can be validated to demonstrate the system is functioning as designed (see the sketch after this list).

White Box – The code is open (source debuggers can step through it or it can be viewed in clear text, for example). The source code is evaluated by humans or code coverage tools for syntax and logic errors.

Gray Box – A mix and match of the above techniques.

Sand Box – Oh dear. This shouldn’t have made it to QA.
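
As a rough, hypothetical illustration of the black box approach, the sketch below exercises a command-line build of the product purely through its inputs and observable outputs. The command name ("calctool"), its flags, and the expected output are assumptions for illustration only:

# Hypothetical black box check (Python): run the product as an external
# command and validate only its output -- the source code is never consulted.
import subprocess

def test_addition_feature():
    # "calctool" and its CLI are stand-ins for the real product under test.
    completed = subprocess.run(
        ["calctool", "--add", "2", "3"],
        capture_output=True, text=True, timeout=30,
    )
    assert completed.returncode == 0, completed.stderr
    assert completed.stdout.strip() == "5"   # expected behavior per the SRS

if __name__ == "__main__":
    test_addition_feature()
    print("PASS")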


Types of Testing

Unit – Tests a specific function, collection of functions, or interaction with other components; should be done within development (or provided as test automation to the automation/build team for smoke tests)

Functional – Tests a component to see if it does what the specification says it should do.

System – End-to-end test of the entire system, best focused on one feature at a time.

Stability/Performance/Load – These should always go together; without stability, you can’t have performance. Stability and Performance tests (sometimes called Load tests) strain the software and system to high levels to assess whether the software executes within required thresholds (the SRS should state the performance requirements; see the sketch after this list).

Scenario – Assess the performance of the software under specific environmental configurations (e.g., using a database provided by a helpful customer to assess feature performance).
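
As a hedged sketch of a threshold-style performance check: the operation, iteration count, and threshold below are illustrative assumptions, not values taken from any SRS. In practice, the threshold would come from the performance requirements stated there.

# Minimal performance/load sketch (Python): time a repeated operation and
# compare the average against a threshold sourced from the SRS.
import time

def operation_under_test():
    # Stand-in for the real feature call; purely illustrative.
    sum(range(10_000))

def test_meets_latency_threshold(iterations=1_000, max_avg_seconds=0.001):
    start = time.perf_counter()
    for _ in range(iterations):
        operation_under_test()
    avg = (time.perf_counter() - start) / iterations
    assert avg <= max_avg_seconds, f"average {avg:.6f}s exceeds {max_avg_seconds}s"

if __name__ == "__main__":
    test_meets_latency_threshold()
    print("PASS")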


Possible Test Results


I have reduced all results to 4 possible states:

PASS (XFAIL) – The outcome of the test case meets the Expected result (PASS includes eXpected Failures; some organizations like to note this in their results)

FAIL – The test results did not match the expected results.

ERROR – The test case module/controller crashed, unexpected data wasn’t handled, an exception was thrown and not caught, etc.

BLOCK – Test cannot execute. An automated test should never set a result to BLOCK; rather, the test harness can determine a test is blocked when it has stated dependencies that either FAILed or ERRORed. For example, test execution depended on a previous environmental configuration step which failed, so the Harness refuses to execute.
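
A minimal sketch of how a harness might derive BLOCK from stated dependencies follows. The function and field names are assumptions for illustration, not part of any particular harness:

# Hypothetical harness logic (Python): a test never sets its own result to
# BLOCK; the harness marks it BLOCK when a declared dependency FAILed or ERRORed.
PASS, FAIL, ERROR, BLOCK = "PASS", "FAIL", "ERROR", "BLOCK"

def resolve_result(test, results):
    # "results" maps already-executed test names to PASS/FAIL/ERROR.
    for dep in test.get("depends_on", []):
        if results.get(dep) in (FAIL, ERROR):
            return BLOCK          # refuse to execute; a dependency did not succeed
    return None                   # no blocking dependency; run the test normally

# Example: the environment-setup step failed, so the feature test is blocked.
results = {"setup_database": FAIL}
feature_test = {"name": "test_search_feature", "depends_on": ["setup_database"]}
if resolve_result(feature_test, results) == BLOCK:
    results[feature_test["name"]] = BLOCK
print(results)   # {'setup_database': 'FAIL', 'test_search_feature': 'BLOCK'}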


The Environmental Problem


Configuring the test environment to a known state before an automated run has, in my experience, been the most difficult challenge of automation. At 2 a.m. when the tests kick off, there isn’t a robot that can swap CDs in the drive to wipe the system, and if the database you needed is corrupt, all your tests will end up in a BLOCK state.

Testers commonly step on each other’s toes, and a computer will do so consistently if scheduling isn’t handled correctly. Databases should be set to a known, verifiable state, as should an OS configuration’s file permissions and data files (if that is what you are testing). Automated result checking can be made a little more resilient if the comparison allows a “fuzzy” tolerance (in other words, dynamic results can still be used to evaluate the outcome; see the sketch below).
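
As a hedged example of such a fuzzy comparison, the field names and tolerance below are illustrative assumptions:

# Illustrative fuzzy result check (Python): numeric fields may drift within a
# tolerance, while exact fields must still match precisely.
import math

def results_match(expected, actual, rel_tol=0.05):
    for key, expected_value in expected.items():
        actual_value = actual.get(key)
        if isinstance(expected_value, float):
            # Allow 5% relative drift for dynamic values (timings, scores, ...).
            if not math.isclose(expected_value, actual_value, rel_tol=rel_tol):
                return False
        elif expected_value != actual_value:
            return False
    return True

expected = {"status": "ok", "rows": 120, "query_seconds": 1.40}
actual   = {"status": "ok", "rows": 120, "query_seconds": 1.45}
print(results_match(expected, actual))   # True: 1.45 is within 5% of 1.40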

Conclusion


This document is by no means a complete analysis. However, it should illustrate certain ways of thinking about this problem space on a cursory level. A project plan for a complex framework can produce a fairly thick document, not to mention diagrams, and that plan should state the current practices and tools already in place in the development organization.

Architecting an Automation Framework is no small undertaking, and careful attention needs to be applied in the planning phase. Deliverables should produce useful tools at short milestone intervals. Growing a framework from scratch isn’t always the answer, and there are some decent open-source and commercial products available, but prototype deployments of these are necessary to ensure they will meet the needs of an organization whose automation needs are bound to grow.

Good luck with your Automation Framework deployment! Feel free to comment or request more in-depth material and I will write another article.
