Written by Vardhini Srinivasan and Steven Condon
In the last couple of years, many positive changes have been introduced to the quality practices and processes within IBM Social Program Management. One of these changes was to transition the test engineer role into a quality analyst role, because a test engineer is no longer just a “gatekeeper” at the end of the development process. Instead, a test engineer is now an analyst who ensures that quality is engineered into every step of the development process.
Traditionally, test engineers focused on creating test cases based on requirements and other available artifacts. They executed the tests manually, logging results and defects as they went.
Interaction outside the testing team was limited. Because the development team and the test engineers operated in silos, there was no transparency into the automation that was done earlier in the cycle. This approach resulted in repetitive tests that targeted the same code paths at both the white-box and black-box levels.
We started to focus on automating at an earlier stage and implemented process changes to introduce transparency into the process of writing and automating acceptance tests. The need for the test engineer to evolve into a quality analyst then became very evident.
A quality analyst’s scope of work extends beyond that of a test engineer: it is no longer confined to verifying against requirements, but also encompasses analyzing risks and identifying test coverage and regression impact for code changes.
A quality analyst’s involvement starts with analyzing the backlog items for quality impact, assessing both their functional and non-functional aspects. Quality analysts begin by working with business analysts and developers to identify acceptance tests that verify the acceptance criteria, and they then continue to work with developers to automate those acceptance tests. Quality analysts also execute tests that cannot be automated, run complex scenarios, and conduct exploratory testing.
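To give a flavour of what an automated acceptance test can look like, here is a minimal JUnit sketch. The `BenefitCalculator` class, the income-disregard rule, and the acceptance criteria are hypothetical, not taken from the product; they simply show how criteria agreed with business analysts can be expressed directly as automated checks.

```java
// Hypothetical example only: the domain class and the acceptance criterion below
// are illustrative, not part of IBM Social Program Management.
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import java.math.BigDecimal;
import static org.junit.jupiter.api.Assertions.assertEquals;

class IncomeDisregardAcceptanceTest {

    /** Simplified stand-in for the component under test. */
    static class BenefitCalculator {
        // Acceptance criterion (hypothetical): the first 100.00 of earned income
        // is disregarded; the remainder reduces the benefit dollar for dollar.
        BigDecimal calculate(BigDecimal maxBenefit, BigDecimal earnedIncome) {
            BigDecimal disregard = new BigDecimal("100.00");
            BigDecimal countable = earnedIncome.subtract(disregard).max(BigDecimal.ZERO);
            return maxBenefit.subtract(countable).max(BigDecimal.ZERO);
        }
    }

    @Test
    @DisplayName("Income below the disregard does not reduce the benefit")
    void incomeBelowDisregardDoesNotReduceBenefit() {
        BenefitCalculator calculator = new BenefitCalculator();
        BigDecimal benefit = calculator.calculate(new BigDecimal("500.00"), new BigDecimal("80.00"));
        assertEquals(new BigDecimal("500.00"), benefit);
    }

    @Test
    @DisplayName("Income above the disregard reduces the benefit by the countable amount")
    void incomeAboveDisregardReducesBenefit() {
        BenefitCalculator calculator = new BenefitCalculator();
        BigDecimal benefit = calculator.calculate(new BigDecimal("500.00"), new BigDecimal("250.00"));
        assertEquals(new BigDecimal("350.00"), benefit);
    }
}
```

Tests written in this style double as executable documentation of the acceptance criteria, which is what makes the early collaboration between quality analysts, business analysts, and developers pay off.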
To implement the change, we made improvements to both tooling and the testing process.
To hone testing skills, we conducted workshops on analysis techniques, such as mind maps and risk-based analysis, and on test case design techniques, such as combinatorial tests, state-transition diagrams, cause-and-effect graphs, and fishbone diagrams. We also ran training on how to do exploratory testing, and we developed a generic exploratory test template.
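As an illustration of one of these design techniques, the sketch below expresses a combinatorial (pairwise) selection as a JUnit parameterized test. The parameters, their values, the case-search scenario, and the `runCaseSearch` stub are hypothetical; the point is that six rows cover every pair of parameter values that the full cross product of twelve combinations would cover.

```java
// Illustrative only: the parameters and the scenario are hypothetical, chosen to
// show how a pairwise (combinatorial) selection shrinks a full cross product.
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CaseSearchPairwiseTest {

    // Full cross product: 3 browsers x 2 roles x 2 locales = 12 combinations.
    // The 6 rows below are a pairwise selection: every pair of values from any
    // two parameters appears in at least one row.
    @ParameterizedTest(name = "case search works on {0} as {1} in locale {2}")
    @CsvSource({
            "Chrome,Caseworker,en",
            "Chrome,Supervisor,fr",
            "Firefox,Caseworker,fr",
            "Firefox,Supervisor,en",
            "Edge,Caseworker,en",
            "Edge,Supervisor,fr"
    })
    void caseSearchReturnsResults(String browser, String role, String locale) {
        // Placeholder for driving the real scenario (for example, through a UI or API layer).
        boolean searchSucceeded = runCaseSearch(browser, role, locale);
        assertTrue(searchSucceeded, "case search should succeed for this combination");
    }

    // Hypothetical stub standing in for the actual test harness call.
    private boolean runCaseSearch(String browser, String role, String locale) {
        return browser != null && role != null && locale != null;
    }
}
```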
The following diagram shows the flow that we follow for test case development. We apply analysis techniques such as mind maps and risk-based analysis to the backlog items to identify the coverage scope and feature priorities. We then use test design techniques to identify tests and, again, use risk-based analysis to prioritize them. Finally, we document the tests and develop or capture the test data for them.
We implemented further process updates to support the change:
- Clarified the criteria for documenting test cases in RQM (Rational Quality Manager)
- Added custom fields to RQM to enable better categorizing and archiving of test cases
- Enhanced our RTC (Rational Team Concert) workflow to support the overall process of writing and reviewing acceptance tests
Quality architects supported the transition with workshops, process improvements, coaching, and the sharing of best practices.
The changes have improved collaboration between developers, business analysts, and quality analysts. Quality is now the focus from the outset of a release, and the changes have also deepened the product knowledge of the quality analysts. Developers, in turn, have gained exposure to writing and automating acceptance tests.
The transformation is still ongoing and will continue to mature as teams adjust their mindset toward shared ownership of quality and focus more on automation. In a future article, we will discuss the automation strategy and its challenges.