Monday, February 2, 2009

The 70/30 Rule to Testing & Automation Strategies

Introduction

Most of us have heard of the 80/20 rule which, simply put, states that 80 percent of the work gets done with 20 percent of the effort, and 80 percent of the effort goes into finishing the last 20 percent of the work. Designing, developing and rolling out a test automation framework is no different once you get rolling -- but that's only if you've made it past the hardest part of all:

The 70/30 Rule.

This rule says that 70% of your testing and automation effort lies in nailing down the methodologies and strategies used for testing; only 30% depends on the tools. Companies often make the mistake of going through a tool purchasing process or writing up automation frameworks without first fully understanding what it is they are testing. Equally overlooked is spending the time to document how manual testing is performed and which methodologies are most successful in reducing the number of issues that make it to production.

The tools deployed should streamline or enforce existing methodologies, policies and practices. Methodology is therefore the most important factor in a quality-driven Engineering organization.

This article focuses on some of the things that should be addressed during the 70% phase, which I call the Test Methodology & Strategy Phase.

Test Methodology & Strategy Phase

1. Know your product, know your customer

An experienced tester knows what the product does, at least at the marketing and sales pitch level if not in in-depth technical detail. Understanding the customer base is also important. Thinking about both helps improve test plans and focuses the team on targeting the more technically complex areas of the product.


2. Know why you're testing.

Every so often in my career I ask myself what the product means for our customer. Does my work provide any real benefit? Have I reduced the frustrations of customer support? I question whether the tests I run accomplish the goal of improving the quality of the product. In my last position, the security product was deployed into financial institutions, government agencies, medical research facilities, hospitals and military divisions. If the system failed, the customer might not have received their security reports in a timely manner. My current employer's product is delivered to millions of retail customers. It has to work, because a product recall could be costly. Knowing why they are testing is an important motivating factor for the testing team.


3. Document the Existing Development Process

It is important to take stock of the current environment. This includes development methodologies, source control and build systems, issue tracking and project planning. While there are industry-accepted practices across the board, a new framework of process should evolve at a pace that matches the culture of the organization. It is also important to gain a sense of how testing is viewed in the development organization: is it an afterthought, on an as-needed basis, or is it an integral part of the development methodology?

At present I have the fortune of working with a very quality-minded development team. Quality is so important that unit tests were put in place to execute after each nightly build well before a QA department had been established. While an automated nightly smoke screen is a necessity, it is only the tip of the iceberg, and other QA practices must be established for full coverage.
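
To make that concrete, here is a minimal sketch of the kind of post-build hook I have in mind. Everything in it (the paths, pytest as the unit-test runner, cron or the build system as the scheduler) is an assumption made for illustration, not a description of any particular team's setup.

```python
"""Minimal nightly smoke-test hook (illustrative sketch only).

Assumes a hypothetical layout: the nightly build drops artifacts into
BUILD_DIR, the unit tests live in TEST_DIR and are runnable with pytest,
and this script is scheduled (cron or the build system) after the build.
"""
import subprocess
import sys
from datetime import datetime
from pathlib import Path

BUILD_DIR = Path("/builds/nightly/latest")   # hypothetical artifact location
TEST_DIR = Path("/src/product/tests/unit")   # hypothetical unit-test location
LOG_DIR = Path("/qa/results/smoke")          # hypothetical results archive

def main() -> int:
    if not BUILD_DIR.exists():
        print(f"No nightly build found at {BUILD_DIR}; skipping smoke run.")
        return 2

    LOG_DIR.mkdir(parents=True, exist_ok=True)
    log_file = LOG_DIR / f"smoke-{datetime.now():%Y%m%d-%H%M}.log"

    # Run the unit suite against the fresh build and archive the output
    # alongside the other nightly results.
    with log_file.open("w") as log:
        result = subprocess.run(
            [sys.executable, "-m", "pytest", str(TEST_DIR), "-q"],
            stdout=log,
            stderr=subprocess.STDOUT,
        )
    print(f"Smoke run finished with exit code {result.returncode}; log: {log_file}")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```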


4. Document Existing Testing Practices

Write down all of the manual and automated tests that are currently executed. Next, begin documenting the gaps: areas where testing needs improvement or is non-existent. For example, if a set of internal tools is mandatory for building the product but is never tested, it is conceivable that those tools contain bugs that affect the build. A more obvious example is a lack of deployment testing, where the product moves straight from the build system into the customers' environments. Instead, the build should go through a staging area in a testing environment for independent evaluation and follow a release process that ends in delivery to the customer.

At this stage I find it advantageous to understand the components of a system. Some components are tightly bound to others, while some stand alone without a dependency. Documenting the interaction and cross-over in some systems can be daunting, but component testing in QA is just as valuable as end-to-end (system-wide) testing. Understanding the cogs of a system helps in documenting the gaps in testing.


5. Instituting New Policies (Code Freeze and Release Practices)

A difficult hurdle on the way to a consistent methodology is instituting practices and policies that work well, improve product quality, and reduce impact to the customer while still allowing the culture of R&D to continue. In other words, policies that stifle creativity and productivity ultimately hurt innovation and product quality. Obviously it is important to balance new practices and policies against resource constraints. As always, involving representatives from each team impacted by the proposed changes should increase the adoption rate.

  • Code freeze implies that no new features are checked in unless Product Management is willing to slip the schedule. Otherwise, only bug fixes are allowed. Clamping down on this practice will help in future scheduling decisions, increase the chance of timely releases to market and improve product quality.

  • Release policy documents the "gating" mechanism between R&D and production (be that the end user, the customer base or an internal group like customer support). Essentially, I believe that no software should ever reach production before QA has signed off. This process should include release notes (change documents, known issues, user documentation updates, etc.), and the test results should be documented.


6. Nailing Down the Issue Lifecycle - Don't Drop the Ball

Not everything that needs attention is a bug. Yes, defects and design issues need to be documented in a central location and assigned to the individual responsible for resolution. But new features, code cleanup tasks, infrastructure changes and even automated tools development also fit perfectly into an issue tracking system. Sometimes a cultural change can occur in a subtle manner just by altering the language we use. Move away from the term "bug" and use "issue" instead to encompass all types of work that should be tracked.

As I said in the beginning, 70/30 implies that methodology is more important than the tools used. The bug system should be tailored to the methods that best fit the organization. Document the desired workflow, including the product components, versions and target release schedules known to the development organization. The bug tracking tool should provide meaningful ways to categorize issues against specific components or features. (I recommend Bugzilla or Atlassian's Jira for small organizations.)

Workflow is again the most important item here. Who can open an issue? Does it go straight to development, or to a triage group first? Can a developer close an issue, or do all issues get resolved and then await QA verification? These are important questions to answer.
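
Before configuring any tool, it helps to sketch the desired workflow as a simple state machine. The states and transitions below are hypothetical, chosen only to illustrate the questions above; your organization's answers will differ, and the real configuration lives in whatever tracker you adopt.

```python
"""Illustrative sketch of an issue workflow; not any particular tracker's API."""
from enum import Enum

class State(Enum):
    NEW = "new"              # anyone may open an issue
    TRIAGED = "triaged"      # a triage group accepts and prioritizes it
    IN_PROGRESS = "in progress"
    RESOLVED = "resolved"    # the developer believes the work is done
    VERIFIED = "verified"    # QA confirms the fix against a build
    CLOSED = "closed"

# Allowed transitions: developers resolve, but only QA verification
# moves an issue toward CLOSED.
TRANSITIONS = {
    State.NEW: {State.TRIAGED},
    State.TRIAGED: {State.IN_PROGRESS, State.CLOSED},     # e.g. duplicates closed at triage
    State.IN_PROGRESS: {State.RESOLVED},
    State.RESOLVED: {State.VERIFIED, State.IN_PROGRESS},  # QA may bounce it back
    State.VERIFIED: {State.CLOSED},
    State.CLOSED: set(),
}

def move(current: State, target: State) -> State:
    """Apply a transition, refusing anything the documented workflow forbids."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"{current.value} -> {target.value} is not allowed")
    return target

if __name__ == "__main__":
    state = State.NEW
    for nxt in (State.TRIAGED, State.IN_PROGRESS, State.RESOLVED,
                State.VERIFIED, State.CLOSED):
        state = move(state, nxt)
        print("issue is now", state.value)
```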


7. Infrastructure

Document your existing test and development lab infrastructure. This should include a hardware inventory and a diagram of how everything is connected, and why. Identify gaps where new hardware is required, or where hardware isn't being used to its full potential. Then write down how the hardware is used for testing, both manual and automated. A consistent environment is of utmost importance to successful software testing (especially for benchmarking), and this includes the lab hardware as well as the system configuration.


8. Write a Test Strategy

A Test Strategy is a very high-level document stating how testing takes place in order to demonstrate product quality. It should reference the build system, the test lab environment, documentation repositories (like a wiki or a file server), where results are stored and how the QA team operates. This document should also discuss the Issue Tracking Lifecycle ("bug lifecycle") and how R&D and QA interact. It should state obvious items such as "every feature or component will have a test plan" and "every test plan will be supported by one or more test cases". Keep this document high level; it need not be more than a few pages.


9. Write Test Plans

A Test Plan is a more granular strategy for a specific feature or product area that is to be tested. It should state methodology for testing, the goals of the testing, required environment and any other information that is important to the technician executing the test. A test plan, however, does not need to contain individual test cases.


10. Write Test Cases

Each test case documents a series of steps used to execute a test, the expected outcome and the final result of Pass or Fail. Many test cases may be required to fully cover a feature. Test cases typically evolve more quickly than the Test Plan, because with each new bug fix, regression found or feature enhancement, more test cases are added.
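
As a rough illustration, a test case can be captured as structured data so that the steps, the expected outcome and the recorded result travel together. The fields and the sample case below are invented for this sketch; they are not a prescription for any particular test management tool.

```python
"""Minimal sketch of a test case as structured data (fields are hypothetical)."""
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str
    title: str
    steps: List[str]                    # the series of steps to execute
    expected: str                       # the expected outcome
    result: Optional[str] = None        # "Pass" or "Fail" once executed
    linked_issue: Optional[str] = None  # regression cases point back at the bug

# A made-up example case, including a hypothetical issue reference.
login_lockout = TestCase(
    case_id="TC-0042",
    title="Account locks after three failed logins",
    steps=[
        "Create a test account",
        "Attempt to log in three times with a wrong password",
        "Attempt to log in with the correct password",
    ],
    expected="The final login is rejected and a lockout message is shown",
    linked_issue="ISSUE-1187",
)

if __name__ == "__main__":
    login_lockout.result = "Pass"
    print(login_lockout)
```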


11. Develop or Purchase Tools to Support Test Methodology

With a strategy laid out before you that is tailored to your organization, you can make more informed decisions on which tools you need to increase testing efficiency based on your established methodology. The remaining 30% of the work - the tools - will have its own 80/20 rule. In the last 20% of the work, make sure your tools and their configurations are supporting your overall methodology.

Conclusion

Improving methodology and process is a never-ending cycle. Having a grip on where you are and where you're going, by documenting some of the areas I've mentioned here, will allow you to see gaps that may not have been apparent. Performing this groundwork also lets you know what is already working (if it ain't broke, don't fix it!) and lets your organization concentrate on improving the process in pursuit of better product quality. Striving to make smart changes while minimally impacting your development organization builds trust and lessens the natural resistance to change, and that will go a long way towards successful testing and automation strategies.

Comments most welcome!


Tuesday, September 30, 2008

One Approach to Automation Frameworks

This document is the product of a cursory reflection on my experiences in software development and quality assurance practices. An Automation Framework is a collection of processes, methodologies, tools and data that will aid in increasing product stability and provide metrics over time. Thus, it is important to make a clear distinction between Test Frameworks (strategies) and Test Harnesses (tools). In my opinion, an Automation Framework cannot be developed without first evaluating the existing software development practices throughout the organization. The Automation Framework ties everything else together.

All too often, automation is viewed simply as a time and money saver. Quite the contrary: the initial ramp-up is demanding, and while the new automation project is underway, manual testing must continue to meet production targets. The payoff with automation is realized over time; as test cases are automated, the regression suite grows. Eventually, metrics become available to illustrate growth in test coverage and product stability as the development cycle nears code-freeze. By the second iteration, baselines for performance can be integrated, and subsets of the test cases can be moved to a post-build (or smoke-screen) suite to short-circuit the QA cycle.

Automation places new demands on a QA organization. The availability of a tools framework enables manual testers to elevate their creativity by spending their time trying to break the product in new ways, while the automated regression run takes care of the mundane. Test Automation Developers apply their creativity in a different way: by determining how to automate test cases. Meanwhile, framework tool developers must meet the needs of the automators by providing test execution, analysis, result management and metrics, and treat the framework with as much care as production software.

I believe the best approach to developing something as complex as an automation framework is to evaluate the existing practices and tools in a software development organization, assess the skill levels of various contributors, and take into account project release cycles and deadlines to produce a Requirements Document. From there, the most common use cases should be highlighted and the project’s development should provide usable tools to testers at each milestone. In this way, an organization can transition into a new methodology over time.

Some of the requirements include integration of 3rd party tools (like Silktest or WinRunner for your UI testing, etc.), consideration of existing test harnesses and legacy tools, language selection, proof of test coverage (metrics), and product release schedules. As with all automation frameworks, environmental management for test setup and tear-down is a big consideration.


Documentation

Several departments are required to cooperate to effectively automate. These departments each produce one or more documents to provide enough information to align testing efforts with customer expectations. I will very briefly summarize these here:

SRS – Software Requirements Specification: provided by Product Management, this document should present the feature requirements and clearly state the Use Cases for the feature.

ERS – Engineering Requirements Specification: drafted by Engineering to specify the components which must be developed or altered to support the new feature, including database schemas, network protocols, operating systems, etc., and demonstrate that all Use Cases are supported. This document is based on the SRS.

TP, TC – Test Plan, Test Cases: The Test Plan is drafted by the QA department and is based on the SRS and ERS. Test Cases document the input and output parameters for each configuration possibility, starting with the most common and ending with corner cases. Each Use Case must be supported by one or more Test Cases.

Test Automation Framework: A collection of documents outlining practices, methodologies, tools and infrastructure for test automation.

TAP – Test Automation Plan: drafted by the QA department and based on the TP and TCs. The approach must fit within the Automation Framework guidelines.


Elements Requiring Test Coverage


Software can be tested at various levels of operation. The most common are outlined here:

Unit – Smallest granularity, at the code function or module level (a minimal example follows this list)

Component – Tests the behaviors/services of elements that provide functionality to one or more features, according to the Engineering Requirements Spec. (ERS); interaction with other components is kept minimal to localize bugs

Feature – Tests a subset of the overall product according to Product Management’s Software Requirement Spec. (SRS), ERS and feature Test Plan to prove Use Cases are adequately addressed
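
As a tiny, self-contained illustration of the unit level, here is a made-up function and its tests. Both are invented for this sketch and stand in for the smallest piece of product code you would exercise in isolation.

```python
"""Tiny illustration of the unit level: one invented function, one test class."""
import unittest

def clamp(value: int, low: int, high: int) -> int:
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(99, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```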


Closed or Open Testing

Depending on the openness of the software and the skill level of the tester, different approaches can be used to find software defects:

Black Box – The code isn’t available or considered. Given a set of inputs, a known environment and stated expected behavior, the output can be validated to demonstrate the system is functioning as designed

White Box – The code is open (source debuggers can step through it, or it can be viewed in clear text, for example). The source code is evaluated by humans or code coverage tools for syntax and logic errors.

Gray Box – A mix and match of the above techniques.

Sand Box – Oh dear. This shouldn’t have made it to QA.


Types of Testing

Unit – Tests a specific function, collection of functions, or interaction with other components, and should be done within development (or test automation provided to automation/build team for smoke test)

Functional – Tests a component to see if it does what the specification says it should do.

System – End-to-end test of the entire system, best focused on one feature at a time.

Stability/Performance/Load – These should always go together; without stability, you can't have performance. Stability and performance tests (sometimes called load tests) strain the software and system to high levels to assess whether or not the software executes within required thresholds. (The SRS should state the performance requirements; a minimal threshold check is sketched after this list.)

Scenario – Assesses the behavior of the software under specific environmental configurations (e.g., using a database provided by a helpful customer to assess feature performance).
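
As promised above, here is a minimal sketch of a threshold check inside a stability/performance test. The operation being timed, the iteration count and the threshold are all hypothetical; in practice the threshold would come from the SRS.

```python
"""Hedged sketch of a performance-threshold check (all numbers are made up)."""
import time

THRESHOLD_SECONDS = 0.5   # hypothetical requirement, normally taken from the SRS
ITERATIONS = 1000

def operation_under_test() -> None:
    # Stand-in for the real product operation being measured.
    sum(i * i for i in range(10_000))

def test_operation_meets_threshold() -> None:
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        operation_under_test()
    elapsed = (time.perf_counter() - start) / ITERATIONS
    assert elapsed <= THRESHOLD_SECONDS, (
        f"average {elapsed:.4f}s per call exceeds the {THRESHOLD_SECONDS}s threshold"
    )

if __name__ == "__main__":
    test_operation_meets_threshold()
    print("performance within threshold")
```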


Possible Test Results


I have reduced all results to 4 possible states:

PASS (XFAIL) – The outcome of the test case meets the Expected result (PASS includes eXpected Failures; some organizations like to note this in their results)

FAIL – The test results did not match the expected results.

ERROR – The test case module/controller crashed, unexpected data wasn’t handled, an exception was thrown and not caught, etc.

BLOCK – The test cannot execute. An automated test should never set its own result to BLOCK; rather, the test harness determines that a test is blocked when one of its stated dependencies either FAILed or ERRORed. For example, if test execution depends on a previous environmental configuration step that failed, the harness refuses to execute it.
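
Here is a short sketch of how a harness might apply these states. The case names and the dependency graph are invented; only the BLOCK rule itself comes from the description above.

```python
"""Sketch of the four result states and the dependency rule for BLOCK."""
from enum import Enum

class Result(Enum):
    PASS = "pass"    # expected outcome met (including expected failures)
    FAIL = "fail"    # outcome did not match the expectation
    ERROR = "error"  # the test itself crashed or threw unexpectedly
    BLOCK = "block"  # harness-assigned: a stated dependency did not succeed

def blocked(case: str, dependencies: dict, results: dict) -> bool:
    """A case is blocked when any of its stated dependencies FAILed or ERRORed."""
    return any(
        results.get(dep) in (Result.FAIL, Result.ERROR)
        for dep in dependencies.get(case, [])
    )

if __name__ == "__main__":
    # Hypothetical cases: the report query depends on a database restore step.
    dependencies = {"restore_database": [], "report_query": ["restore_database"]}
    results = {"restore_database": Result.FAIL}
    if blocked("report_query", dependencies, results):
        results["report_query"] = Result.BLOCK  # the harness assigns BLOCK; the test never runs
    print("report_query ->", results["report_query"].value)  # -> block
```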


The Environmental Problem


In my experience, configuring the test environment to a known state before an automated run has been the most difficult challenge of automation. At 2am when the tests kick off, there isn't a robot that can swap CDs in the drive to wipe the system, and if the database you needed is corrupt, all of your tests will end up in a Blocked state.

Commonly, testers step on each other's toes, and a computer will do so consistently if scheduling isn't handled correctly. Databases should be set to a known, verifiable state, as should an OS configuration's file permissions and data files (if that is what you are testing). Automated result checking can be made a little more resilient if the resulting data is given a "fuzzy" tolerance (in other words, dynamic results can still be used to evaluate the outcome).
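
Here is one possible shape for that "fuzzy" tolerance, as a hedged sketch: exact matching on stable fields, tolerant matching on fields that drift from run to run. The tolerances and the sample record are invented for illustration.

```python
"""Sketch of "fuzzy" result checking for values that vary between runs."""
import math

def fuzzy_equal(actual: float, expected: float, rel_tol: float = 0.05) -> bool:
    """Treat values within a relative tolerance of each other as a match."""
    return math.isclose(actual, expected, rel_tol=rel_tol)

def check_record(actual: dict, expected: dict) -> bool:
    """Exact match on stable fields, tolerant match on fields that drift."""
    return (
        actual["status"] == expected["status"]                             # must match exactly
        and fuzzy_equal(actual["rows"], expected["rows"], 0.02)            # row counts drift slightly
        and fuzzy_equal(actual["elapsed_s"], expected["elapsed_s"], 0.25)  # timing varies the most
    )

if __name__ == "__main__":
    expected = {"status": "ok", "rows": 10_000, "elapsed_s": 4.0}
    actual = {"status": "ok", "rows": 10_050, "elapsed_s": 4.6}
    print("match" if check_record(actual, expected) else "mismatch")  # prints: match
```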

Conclusion


This document is by no means a complete analysis, but it should illustrate certain ways of thinking about this problem space at a cursory level. A project plan for a complex framework can produce a fairly thick document, not to mention diagrams, and it should state the current practices and tools already in place in the development organization.

Architecting an Automation Framework is no small undertaking, and careful attention needs to be applied in the planning phase. Deliverables should produce useful tools at short milestone intervals. Growing a framework from scratch isn't always the answer; there are some decent open-source and commercial products available, but prototype deployments are necessary to ensure they will meet the needs of an organization whose automation needs are bound to grow.

Good luck with your Automation Framework deployment! Feel free to comment or request more in-depth material and I will write another article.

Thursday, April 3, 2008

Common Sense for the Flex Developer

Ah, it's nice to see a common-sense contribution to the world of software development from my old friends and colleagues, Tariq Ahmed, Jonathan Hirschi and Frank Krul. I just got an update from Tariq on their new book, "Flex 3 in Action".

Flex is an application framework that sprang up somewhere within the walls of Macromedia (now Adobe) around 2003. It aims to solve many problems facing the Rich Internet Application (RIA) development community by extending the power of HTML applications. Relatively new technologies are tougher to grasp because of the limited information available via Google, Amazon or your local computer bookstore. The best resources are most often the early adopters, the users who forge the methods and approaches to getting the most productivity from a tool, but writing a book is a daunting task.

Leading Flex developers Tariq Ahmed, Jonathan Hirschi and Frank Krul have been busy authoring the new reference, "Flex 3 in Action", a common-sense guide to Flex. With all the RIA madness going on in the Web Development industry, it's good to know there's a book being written for the common web application developer, to help them understand what Flex is all about and decide if it's right for them. As most of us have experienced, too many books are too light on information, when it's really depth of subject material that is needed when you're facing a problem and need to get unstuck.

Flex 3 in Action strives to give the reader a high-level view of each feature and then digs deep into what options are available. The authors then take the time to weigh the pros and cons of different approaches so that you fully understand how the framework works and what's available in your arsenal. The learning curve is tackled by relating HTML-based interface methodologies to how a developer might approach the same goals with a Flex mindset.

If you're thinking about adopting this web application technology or you are familiar with Flex, help them out and give their work-in-progress a review:

http://www.manning.com/ahmed/

Tuesday, January 1, 2008

Common Sense

Common sense. The problem seems to be it's not all that common.

In today's world of software products, countless frustrations are encountered by users every day. These range from obviously bad UI design to mystical error messages that leave the user (and sometimes the PC) hanging, wondering what to do. A common phrase seen on message boards by frustrated customers is "Was this even tested?!"

I have professionally filled the roles of System Administrator, Quality Assurance Engineer and Software Developer, but fundamentally we are all users of software in some capacity or another. I understand how development and product marketing function under various circumstances - from well-planned and well-executed projects, to those that never had a design document and were hacked out in a myriad of languages. I even understand that sometimes it's better to release a product with 90% functionality and make the deadline than to release 100% and miss the deadline entirely. And as an advanced user, I accept that most software today is complex and requires teams of people to compose. But none of this excuses the developer, the QA staff or the product management organization when they overlook common-sense items that will clearly frustrate their users.

Programming logic is one thing, while common sense is another. I recently encountered a web page that was supposed to send me a code so I could authenticate my bank account login. It had one radio button available and a "Send" button. I checked the radio button and clicked Send. What did I see?

An error occurred sending e-mail. Try again.

Obediently I tried again, but of course nothing changed. In shock, I e-mailed the website, ranting that there didn't seem to be a whole lot of room for human error on my part. The error left me confused, not knowing whether it was a mistake on my end, an application error or a problem with the address I had previously provided in my bank account settings. Furthermore, now that I couldn't authenticate, I had to go through the insecure medium of e-mail to contact support.

Frustrating User Interfaces (FUIs) abound. Take the popular social networking site, www.facebook.com. Facebook provides excellent features, but navigating the user interface is extremely frustrating. Common sense dictates that if a site is difficult to navigate, many users will just give up and go do something else. I returned many times over the course of several days before I finally discovered how to retrieve a personal message, and even that success happened almost by accident. When I click on someone's name, I expect to go to their profile page, but I seem to be taken to all sorts of random pages. It is unclear whether I will land on another page within my own profile or on my friend's page, and where exactly I will end up. Facebook is but one of thousands of bad interfaces.

I feel that many engineers either refuse to step outside of their engineering brain to see things as a user, or simply don't care about anything but hitting their deadline, at all costs. The end result is undocumented code, poorly organized source files, untested code paths and strings containing unintelligible instructions or spelling errors. These elements create a perception of poor quality in the eyes of the everyday customer.

Could most of us do any better? When designing software from scratch, with nothing to reference as a template, I'd probably say no. But with so many products out there, it seems reasonable that we should be constantly improving on software design and learning from the mistakes of others (including our competitors). Yet common sense in software remains... not so common after all.