Monday, February 2, 2009

The 70/30 Rule to Testing & Automation Strategies

Introduction

Most of us have heard of the 80/20 rule which, simply put, states that 80 percent of the work is done with 20 percent of the effort, while 80 percent of the effort goes into finishing the last 20 percent of the work. Designing, developing and rolling out a test automation framework is no different in terms of work once you get rolling -- but that's only if you've made it past the hardest part of all:

The 70/30 Rule.

This rule says that 70% of your testing and automation effort goes into nailing down the methodologies and strategies used for testing; only 30% depends on the tools. Companies often make the mistake of going through a tool purchasing process or writing up automation frameworks without first fully understanding what it is they are testing. Equally overlooked is spending time documenting how manual testing is performed and which methodologies are most successful in reducing the number of issues that make it to production.

The tools deployed should be used to help streamline or enforce existing methodologies, policies and practices. Thus, methodology is the most important factor to a quality-driven Engineering organization.

This article focuses on some of the things that should be addressed during the 70% phase, which I call the Test Methodology & Strategy Phase.

Test Methodology & Strategy Phase

1. Know your product, know your customer

An experienced tester knows what the product does, at least at the marketing and sales pitch level if not with in-depth technical knowledge. Understanding the customer base is also important. Thinking about both can help to improve test plans and focus the team on targeting the more technically complex areas of the product.


2. Know why you're testing.

Every so often in my career I ask myself what the product means for our customer. Does my work provide any real benefit? Have I reduced the frustrations of customer support? I question whether the tests I run accomplish the goal of improving the quality of the product. In my last position, the security product was deployed into financial institutions, government agencies, medical research facilities, hospitals and military divisions. If the system failed, the customer might not have received their security reports in a timely manner. My current employer's product is delivered to millions of retail customers. It has to work, because a product recall could be costly. Knowing why they are testing is an important motivating factor for the testing team.


3. Document the Existing Development Process

It is important to take stock of the current environment. This includes development methodologies, source control and build systems, issue tracking and project planning. While there are industry-accepted practices across the board, a new framework of process should evolve at a pace that matches the culture of the organization. It is also important to gain a sense of how testing is viewed in the development organization: is it an afterthought, on an as-needed basis, or is it an integral part of the development methodology?

At present I have the good fortune of working with a very quality-minded development team. Quality is so important that unit tests were put in place to execute after each nightly build well before a QA department had been established. While an automated nightly smoke test is a necessity, it is only the tip of the iceberg, and other QA practices must be established for full coverage.
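A nightly smoke test of this kind can be sketched as a small driver script. The build and test commands below are hypothetical placeholders, not taken from any particular build system:

```python
import subprocess

# Hypothetical build steps -- substitute the commands your build system uses.
STEPS = [
    ("build", ["make", "all"]),
    ("unit tests", ["make", "test"]),
]

def run_smoke(steps, runner=subprocess.run):
    """Run each step in order; return the name of the first failing step, or None."""
    for name, cmd in steps:
        result = runner(cmd)
        if result.returncode != 0:
            return name  # stop at the first failure so the report is unambiguous
    return None

# A nightly cron job might call run_smoke(STEPS) and mail the result to the team.
```

Keeping the step list as data makes it easy to grow the smoke test as other QA practices come online.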


4. Document Existing Testing Practices

Write down all of the manual and automated tests that are currently executed. Next, begin documenting the gaps where testing needs improvement or is non-existent. For example, if a set of internal tools is mandatory for building the product but is never tested, it is conceivable that these tools contain bugs that affect the build. A more obvious example is the lack of deployment testing, where the product moves straight from the build system into the customers' environments. Instead, the build should go through a staging area in a testing environment for independent evaluation and follow a release process which ends in delivery to the customer.

At this stage I find it advantageous to understand the components of a system. Some components are tightly bound to others, while some stand alone without a dependency. Documenting the interaction and cross-over in some systems can be daunting, but component testing in QA is just as valuable as end-to-end (system-wide) testing. Understanding the cogs of a system helps in documenting the gaps in testing.


5. Instituting New Policies (Code Freeze and Release Practices)

A difficult hurdle to achieving a consistent methodology is instituting practices and policies that work well, improve product quality, and reduce impact to the customer while still allowing the culture of R&D to continue. In other words, policies that counteract creativity and productivity ultimately impact innovation and product quality. Obviously it is important to balance new practices and policies against resource constraints. As always, involving representatives from each team impacted by the proposed changes should increase the adoption rate.

  • Code freeze implies that no new features are checked in unless Product Management is willing to slip the schedule. Otherwise, only bug fixes are allowed. Clamping down on this practice will help in future scheduling decisions, increase the chance of timely releases to market and improve product quality.

  • Release policy documents the "gating" mechanism between R&D and production (be that the end user, customer base or an internal group like customer support). Essentially, I believe that no software should ever reach production before QA has signed off. This process should include release notes (change documents, known issues, user documentation updates, etc.), and test results should be documented.
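The gating idea can be made concrete with a small check that refuses a release candidate until every gate is satisfied. The gate names here are illustrative, not drawn from any particular tool:

```python
# Illustrative gates -- adapt this list to your own release checklist.
REQUIRED_GATES = ["qa_signoff", "release_notes", "test_results_archived"]

def release_allowed(candidate, required=REQUIRED_GATES):
    """Return (ok, missing): ok is True only when every gate is satisfied."""
    missing = [gate for gate in required if not candidate.get(gate)]
    return (not missing, missing)

build = {"qa_signoff": True, "release_notes": True, "test_results_archived": False}
ok, missing = release_allowed(build)
# ok is False; missing == ["test_results_archived"]
```

Reporting *which* gate blocked a release keeps the policy transparent rather than bureaucratic.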


6. Nailing Down the Issue Lifecycle - Don't Drop the Ball

Not everything that needs attention is a bug. Yes, defects and design issues need to be documented in a central location and assigned to the individual responsible for resolution. But new features, code cleanup tasks, infrastructure changes and even automated tools development also fit perfectly into an issue tracking system. Sometimes a cultural change can occur in a subtle manner just by altering the language we use. Move away from the term "bug" and use "issue" instead to encompass all types of work that should be tracked.

As I said in the beginning: 70/30 implies that methodology is more important than the tools used. The bug system should be tailored to meet the methods that fit the organization the best. Document the desired workflow. This includes the product components, versions and target release schedules that are known to the development organization. The bug tracking tool should provide meaningful ways to categorize issues against certain components or features. (I recommend Bugzilla or Atlassian's Jira for small organizations).

Workflow is again the most important item here. Who can open an issue? Does it go straight to development, or to a triage group first? Can a developer close an issue, or do all issues get resolved and then await QA verification? These are important questions to answer.
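Those workflow questions can be answered by encoding the lifecycle as a simple state machine. The states and transitions below are one possible answer (triage before development, QA verification before close), not a prescription:

```python
# One possible issue lifecycle: issues are triaged before development,
# and only QA-verified issues may be closed (a developer cannot close directly).
TRANSITIONS = {
    "new":         {"triaged", "closed"},
    "triaged":     {"in_progress", "closed"},
    "in_progress": {"resolved"},
    "resolved":    {"verified", "in_progress"},  # QA verifies, or reopens to dev
    "verified":    {"closed"},
    "closed":      {"in_progress"},              # reopen if the issue resurfaces
}

def move(state, target, transitions=TRANSITIONS):
    """Advance an issue to a new state, rejecting transitions the workflow forbids."""
    if target not in transitions.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target
```

Writing the workflow down in this form forces the organization to answer the hard questions before configuring the tracking tool.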


7. Infrastructure

Document your existing test and development lab infrastructure. This should include a hardware inventory and a diagram of how everything is connected, and why. Identify gaps where new hardware is required, or where hardware isn't being used to its full potential. Then write down how the hardware is used for testing (manual and automated). A consistent environment is of utmost importance to successful software testing (especially for benchmarking), and this includes the lab hardware as well as system configuration.


8. Write a Test Strategy

A Test Strategy is a very high level document stating how testing takes place in order to demonstrate product quality. It should reference the build system, test lab environment, documentation repositories (like a wiki or a file server), where results are stored and how the QA team operates. This document should also discuss the Issue Tracking Lifecycle ("bug lifecycle") and how R&D and QA interact. It should state obvious items such as "every feature or component will have a test plan" and "every test plan will be supported by one or more test cases". Keep this document high level; it need not be more than a few pages.


9. Write Test Plans

A Test Plan is a more granular strategy for a specific feature or product area that is to be tested. It should state methodology for testing, the goals of the testing, required environment and any other information that is important to the technician executing the test. A test plan, however, does not need to contain individual test cases.


10. Write Test Cases

Test cases each document a series of steps used to execute a test, its expected outcome and the final result of Pass or Fail. Many test cases may be required to successfully cover a feature. Test cases typically evolve more quickly than the Test Plan, because with each new bug fix, regression found or feature enhancement, more test cases are added.
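Test cases in this sense map naturally onto a unit-test framework. As a hypothetical example using Python's unittest, each test method documents its steps and expected outcome against a made-up function under test:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    """TC-104 (pricing): verify discount calculation.

    Steps: call apply_discount with a known price and percentage.
    Expected: the price is reduced by exactly that percentage.
    """

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_percent_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

# Run with: python -m unittest <module name>
```

Each new regression or bug fix then becomes one more small method, which is exactly why test cases grow faster than the test plan.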


11. Develop or Purchase Tools to Support Test Methodology

With a strategy laid out before you that is tailored to your organization, you can make more informed decisions on which tools you need to increase testing efficiency based on your established methodology. The remaining 30% of the work - the tools - will have its own 80/20 rule. In the last 20% of the work, make sure your tools and their configurations are supporting your overall methodology.

Conclusion

Improving methodology and process is a never-ending cycle. Having a grip on where you are and where you're going by documenting some of the areas I've mentioned here will allow you to see gaps that may not have been apparent. Performing this groundwork also lets you know what is already working (if it ain't broke, don't fix it!) and lets your organization concentrate on improving the process in pursuit of improved product quality. Striving to make smart changes while minimally impacting your development organization builds trust and lessens the natural resistance to change, and that will go a long way towards successful testing and automating strategies.

Comments most welcome!