Tuesday, September 30, 2008

One Approach to Automation Frameworks

This document is the product of a brief reflection on my experiences in software development and quality assurance practices. An Automation Framework is a collection of processes, methodologies, tools and data that help increase product stability and provide metrics over time. It is therefore important to make a clear distinction between Test Frameworks (strategies) and Test Harnesses (tools). In my opinion, an Automation Framework cannot be developed without first evaluating the existing software development practices throughout the organization; the Automation Framework is what ties everything else together.

All too often, automation is viewed as an immediate time and money saver. Quite the contrary: the initial ramp-up is demanding, and while the new automation project is underway, manual testing must continue to meet production targets. The payoff with automation is realized over time; as test cases are automated, the regression suite grows. Eventually, metrics become available to illustrate growth in test coverage and product stability as the development cycle nears code freeze. By the second iteration, performance baselines can be integrated, and subsets of the test cases can be moved to a post-build (or smoke-test) suite to short-circuit the QA cycle.

Automation places new demands on a QA organization. The availability of a tools framework enables manual testers to elevate their creativity by spending their time trying to break the product in new ways, while the automated regression run takes care of the mundane. Test Automation Developers apply their creativity in a different way: by determining how to automate test cases. Meanwhile, framework tool developers must meet the needs of the automators by providing test execution, analysis, result management and metrics, and treat the framework with as much care as production software.

I believe the best approach to developing something as complex as an automation framework is to evaluate the existing practices and tools in a software development organization, assess the skill levels of various contributors, and take into account project release cycles and deadlines to produce a Requirements Document. From there, the most common use cases should be highlighted and the project’s development should provide usable tools to testers at each milestone. In this way, an organization can transition into a new methodology over time.

Some of the requirements include integration of 3rd-party tools (like SilkTest or WinRunner for UI testing), consideration of existing test harnesses and legacy tools, language selection, proof of test coverage (metrics), and product release schedules. As with all automation frameworks, environmental management for test setup and tear-down is a big consideration.


Documentation

Several departments must cooperate to automate effectively. Each produces one or more documents that provide enough information to align testing efforts with customer expectations. I will very briefly summarize these here:

SRS – Software Requirements Specification: provided by Product Management, this document should present the feature requirements and clearly state the Use Cases for the feature.

ERS – Engineering Requirements Specification: drafted by Engineering to specify the components which must be developed or altered to support the new feature, including database schemas, network protocols, operating systems, etc., and demonstrate that all Use Cases are supported. This document is based on the SRS.

TP, TC – Test Plan, Test Cases: The Test Plan is drafted by the QA department and is based on the SRS and ERS. Test Cases document the input and output parameters for each configuration possibility, starting with the most common and ending with corner cases. Each Use Case must be supported by one or more Test Cases (a minimal sketch of a test case record follows this list).

Test Automation Framework: A collection of documents outlining practices, methodologies, tools and infrastructure for test automation.

TAP – Test Automation Plan: The Test Automation Plan is drafted by the QA department and is based on the TP and TCs. The approach must fit within the Automation Framework guidelines.
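
To make the Test Case idea concrete, here is a minimal sketch of how a test case record might be captured so that a harness can consume it. It is written in Python, and the `TestCase` dataclass and its field names are hypothetical placeholders, not a standard schema; a real framework would align them with the organization's own TP/TC templates.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case record, traceable back to a Use Case in the SRS.

    All field names here are illustrative, not a standard schema.
    """
    case_id: str                                     # unique identifier, e.g. "TC-LOGIN-001"
    use_case: str                                    # the SRS Use Case this case supports
    description: str                                 # the behavior being verified
    inputs: dict = field(default_factory=dict)       # input parameters for this configuration
    expected: dict = field(default_factory=dict)     # expected output parameters
    depends_on: list = field(default_factory=list)   # IDs of setup steps or cases this one requires

# Example: the most common configuration comes first; corner cases are added later.
login_ok = TestCase(
    case_id="TC-LOGIN-001",
    use_case="UC-01: User logs in with valid credentials",
    description="A valid username/password pair returns a session token",
    inputs={"username": "alice", "password": "correct-horse"},
    expected={"status": "authenticated"},
)
```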


Elements Requiring Test Coverage


Software can be tested at various levels of operation. The most common are outlined here:

Unit – Smallest granularity, at the code function or module level (a minimal unit-level example follows this list)

Component – Tests the behaviors/services of elements that supply functionality to one or more features, per the Engineering Requirements Spec (ERS); interaction with other components is kept to a minimum to localize bugs

Feature – Tests a subset of the overall product according to Product Management's Software Requirements Spec (SRS), the ERS and the feature Test Plan to prove the Use Cases are adequately addressed
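
To illustrate the smallest of these levels, here is a minimal unit-level test sketched in Python with the standard unittest module. The `discount_price` function is a hypothetical stand-in for whatever function or module your own product exposes at that granularity.

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceUnitTest(unittest.TestCase):
    """Unit level: exercises a single function in isolation, with no other components."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```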


Closed or Open Testing

Depending on the openness of the software and the skill level of the tester, different approaches can be used to find software defects:

Black Box – The code isn’t available or considered. Given a set of inputs, a known environment and stated expected behavior, the output can be validated to demonstrate the system is functioning as designed

White Box – The code is open (source debuggers can step it or it can be viewed in clear-text, for example). The source code is evaluated by humans or code coverage tools for syntax and logic errors.

Gray Box – A mix and match of the above techniques.

Sand Box – Oh dear. This shouldn’t have made it to QA.


Types of Testing

Unit – Tests a specific function, collection of functions, or interaction with other components; should be done within development (or provided as test automation to the automation/build team for the smoke test)

Functional – Tests a component to see if it does what the specification says it should do.

System – End-to-end test of the entire system, best focused on one feature at a time.

Stability/Performance/Load – These always belong together: without stability, you can't have performance. Stability and performance tests (sometimes called load tests) strain the software and system to high levels to assess whether the software executes within the required thresholds. (The SRS should state the performance requirements.) A minimal load-test sketch follows this list.

Scenario – Assesses the behavior of the software under specific environmental configurations (e.g., using a database provided by a helpful customer to assess feature performance).
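
As one possible shape for a load test that checks execution against a threshold, here is a minimal sketch in Python. The `submit_request` operation, the 500 iterations and the 200 ms p95 threshold are hypothetical stand-ins for whatever operation and SRS requirement apply to your product.

```python
import statistics
import time

def submit_request() -> None:
    """Hypothetical operation under load; replace with a real call into the product."""
    time.sleep(0.01)  # simulate a 10 ms round trip

def run_load_test(iterations: int = 500, threshold_ms: float = 200.0) -> bool:
    """Drive the operation repeatedly and compare latency against the SRS threshold."""
    latencies_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        submit_request()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    p95 = statistics.quantiles(latencies_ms, n=20)[-1]  # 95th percentile latency
    print(f"p95 latency: {p95:.1f} ms (threshold {threshold_ms} ms)")
    return p95 <= threshold_ms

if __name__ == "__main__":
    raise SystemExit(0 if run_load_test() else 1)
```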


Possible Test Results


I have reduced all results to 4 possible states:

PASS (XFAIL) – The outcome of the test case matches the expected result. (PASS includes eXpected FAILures; some organizations like to note these separately in their results.)

FAIL – The test results did not match the expected results.

ERROR – The test case module/controller crashed, unexpected data wasn’t handled, an exception was thrown and not caught, etc.

BLOCK – The test cannot execute. An automated test should never set its own result to BLOCK; rather, the test harness determines that a test is blocked when one of its stated dependencies FAILed or ERRORed. For example, if test execution depended on a previous environmental configuration step that failed, the harness refuses to execute it. A minimal sketch of this dependency check follows this list.
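
Here is one way these four states and the harness-side BLOCK rule could look in Python. The `Result` enum and the `resolve_status` helper are hypothetical names used for illustration, not part of any particular harness.

```python
from enum import Enum
from typing import Optional

class Result(Enum):
    PASS = "PASS"    # outcome matched the expected result (includes XFAIL)
    FAIL = "FAIL"    # outcome did not match the expected result
    ERROR = "ERROR"  # the test module/controller crashed or threw an unhandled exception
    BLOCK = "BLOCK"  # assigned only by the harness, never by the test itself

def resolve_status(test_id: str, depends_on: list, results: dict) -> Optional[Result]:
    """Harness-side check: mark a test BLOCKed if any dependency failed or errored.

    `results` maps already-executed test IDs to their Result. Returns Result.BLOCK
    when the test must not run, or None when it may proceed.
    """
    for dep in depends_on:
        if results.get(dep) in (Result.FAIL, Result.ERROR):
            print(f"{test_id}: BLOCKED, dependency {dep} ended in {results[dep].value}")
            return Result.BLOCK
    return None

# Example: the environment setup step failed, so the dependent test is blocked.
results = {"setup_database": Result.FAIL}
print(resolve_status("TC-QUERY-001", ["setup_database"], results))
```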


The Environmental Problem


In my experience, configuring the test environment to a known state before an automated run has been the most difficult challenge of automation. When the tests kick off at 2 a.m., there isn't a robot that can swap CDs in the drive to wipe the system, and if the database you needed is corrupt, all your tests will end up in a BLOCKed state.

Human testers occasionally step on each other's toes, but a computer will do so consistently if scheduling isn't handled correctly. Databases should be set to a known, verifiable state, as should an OS configuration's file permissions and data files (if that is what you are testing). Automated result checking can be made a little more resilient if comparisons allow a "fuzzy" tolerance; in other words, dynamic results can still be used to evaluate the outcome. A small sketch of both ideas follows.
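
As a small illustration of both points, here is a sketch using Python's unittest fixtures for setup and tear-down around each test, plus an approximate comparison for a dynamic result. The database-reset helper, the measured value and the 5% tolerance are all hypothetical; the point is that the environment is restored to a known state around every test and that result checking doesn't demand an exact match.

```python
import math
import unittest

def reset_test_database() -> None:
    """Hypothetical placeholder: restore the database to a known, verifiable state."""
    pass

def measure_report_total() -> float:
    """Hypothetical operation whose result varies slightly from run to run."""
    return 1003.7

class ReportTotalTest(unittest.TestCase):
    def setUp(self):
        # Bring the environment to a known state before each test, not just before the run.
        reset_test_database()

    def tearDown(self):
        # Leave the environment clean for whoever (or whatever) runs next.
        reset_test_database()

    def test_total_within_tolerance(self):
        expected = 1000.0
        actual = measure_report_total()
        # "Fuzzy" check: accept results within 5% rather than demanding exact equality.
        self.assertTrue(
            math.isclose(actual, expected, rel_tol=0.05),
            f"{actual} is not within 5% of {expected}",
        )

if __name__ == "__main__":
    unittest.main()
```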

Conclusion


This post is by no means a complete analysis, but it should illustrate some ways of thinking about this problem space at a cursory level. A project plan for a complex framework can run to a fairly thick document, not to mention diagrams, and it should state the current practices and tools already in place in the development organization.

Architecting an Automation Framework is no small undertaking, and careful attention must be paid in the planning phase. Deliverables should provide useful tools at short milestone intervals. Growing a framework from scratch isn't always the answer; there are some decent open-source and commercial products available, but prototype deployments are necessary to ensure they will meet the needs of an organization whose automation demands are bound to grow.

Good luck with your Automation Framework deployment! Feel free to comment or request more in-depth material and I will write another article.

Thursday, April 3, 2008

Common Sense for the Flex Developer

Ah, it's nice to see a common-sense contribution to the world of software development from my old friends and colleagues, Tariq Ahmed, Jonathan Hirschi and Frank Krul. I just got an update from Tariq on their new book, "Flex 3 in Action".

Flex is an application framework that sprang up somewhere within the walls of Macromedia (now Adobe) around 2003. It aims to solve many of the problems facing the Rich Internet Application (RIA) development community by going beyond what traditional HTML applications can do. Relatively new technologies are tougher to grasp because of the limited information available via Google, Amazon or your local computer bookstore. The best resources are most often the early adopters, the users who forge the methods and approaches for getting the most productivity out of a tool, but writing a book is a daunting task.

Leading Flex developers Tariq Ahmed, Jonathan Hirschi and Frank Krul have been busy authoring the new reference, "Flex 3 in Action", a common-sense guide to Flex. With all the RIA madness going on in the web development industry, it's good to know a book is being written to help the common web application developer understand what Flex is all about and decide whether it's right for them. As most of us have experienced, too many books are too light on information, when depth of subject material is exactly what you need when you're facing a problem and need to get unstuck.

Flex 3 in Action strives to give the reader a high-level view of each feature and then digs deep into the options available. The authors take the time to weigh the pros and cons of different approaches so that you fully understand how the language works and what's available in your arsenal. The learning curve is tackled by relating HTML-based interface methodologies to how a developer might approach the same goals with a Flex mindset.

If you're thinking about adopting this web application technology or you are familiar with Flex, help them out and give their work-in-progress a review:

http://www.manning.com/ahmed/

Tuesday, January 1, 2008

Common Sense

Common sense. The problem seems to be it's not all that common.

In today's world of software products, countless frustrations are encountered by users every day. These range from obviously bad UI design to mystical error messages that leave the user (and sometimes the PC) hanging, wondering what to do. A common phrase seen on message boards by frustrated customers is "Was this even tested?!"

I have professionally filled the roles of System Administrator, Quality Assurance Engineer and Software Developer, but fundamentally we are all users of software in some capacity or another. I understand how development and product marketing function under various circumstances - from well-planned and well-executed projects to those that never had a design document and were hacked out in a myriad of languages. I even understand that sometimes it's better to release a product with 90% functionality and make the deadline than to release 100% and miss the deadline entirely. And as an advanced user, I accept that most software today is complex and requires teams of people to compose. But none of this excuses the developer, the QA staff or the product management organization for overlooking common-sense items that will clearly frustrate their users.

Programming logic is one thing, while common sense is another. I recently encountered a web page that was supposed to send me a code so I could authenticate my bank account login. It had one radio button available and a "Send" button. I checked the radio button and clicked Send. What did I see?

An error occurred sending e-mail. Try again.

Obediently I tried again, but of course nothing changed. In shock, I e-mailed the website, ranting that there didn't seem to be a whole lot of room for human error on my part. The error left me confused, not knowing whether it was a mistake on my end, an application error, or a problem with the address I had previously provided in my bank account settings. Furthermore, now that I couldn't authenticate, I had to go through the insecure medium of e-mail to contact support.

Frustrating user interfaces (FUIs) abound. Take the popular personal networking site, www.facebook.com. Facebook provides excellent features, but navigating the user interface is extremely frustrating. Common sense dictates that if a site is difficult to navigate, many users will just give up and go do something else. I returned many times over the course of several days and finally discovered how to retrieve a personal message, a success that happened almost at random. When I click on someone's name, I expect to go to their profile page, but I seem to be taken to all sorts of random pages; it is unclear whether I will land on another page within my own profile or somewhere on my friend's. Facebook is but one of thousands of bad interfaces.

I feel that many engineers either refuse to step outside of their engineering brain to see things as a user, or simply don't care about anything but hitting their deadline, at all costs. The end result is undocumented code, poorly organized source files, untested code paths and strings containing unintelligible instructions or spelling errors. These elements create a perception of poor quality in the eyes of an everyday customer.

Could most of us do any better? When designing software from scratch, with nothing to reference as a template, I'd probably say no. But with so many products out there, it seems reasonable that we should be constantly improving on software design and learning from the mistakes of others (including our competitors). Yet common sense in software remains... not so common after all.