Written by: Caitriona Nic Lughadha and Steven Condon.
The IBM Curam Social Program Management product development team has over 20 years of experience with the challenges of testing a large-scale, on-premises enterprise application. Over the life of the product, the team has continually evolved and refined its test processes to keep pace with the product's growth.

A couple of years ago, we recognized that while our testing process was well defined and integrated into the development lifecycle, a lot of testing was still happening very late in that lifecycle. In the days and weeks leading up to a release, testers were under pressure to catch up on an already squeezed schedule, in circumstances where there was little or no time left to fix any defect they might find. In some cases there was a tendency to “shoot the messenger” for delaying the schedule, or at the very least for not finding the defect earlier.

A restructure of the product development organisation towards more agile ways of working presented the opportunity to transform the testing organisation and to change the mindset of how we viewed testing.
The breadth and depth of the product had led to our test teams being organized by specialty, focusing on functional testing (including accessibility and globalization), system testing (including performance, reliability, and other non-functional requirements), or security testing.
Because development and test operated as separate teams, it was difficult to get a full picture of what testing was being done. The test team could not see what unit and component-level tests the development team was running, and there was duplication at the verification stage because some of those tests were also covered by the development team.
The verification stage was heavily dependent on manual testing, which was a significant time constraint and an expensive overhead when the same fix had to be tested across multiple versions.
Our solution was to move the focus away from “Testing” towards a more holistic view in which quality is not just about ensuring that we did what we agreed to do, but about making a high-quality, high-performing product. In line with the changes happening across the whole product development organisation, we devolved our QA organisation into new multi-disciplinary agile teams. Each team includes all the skills needed to plan, implement, and deliver a piece of work (Business Analysts, Coders, Quality Analysts, and Technical Writers). A key part of this change involved reframing the role of our Testers as Quality Analysts, who are no longer simply “gatekeepers” at the end of the process but are involved from the earliest stages in shaping the product and keeping the whole team’s focus on achieving high quality through all stages of a delivery.
The Quality Analysts are supported by Quality Architects, who work across all the teams on ways of working, selection of tools, and sharing of best practices, and who educate, coach, and mentor individuals to operate effectively within their teams.
Shifting the quality assurance expertise into the agile teams means the entire team shares the responsibility for building in quality from the earliest stage of the cycle.
The Quality Analysts and all the other disciplines are involved from the start of the process, bringing the test perspective beyond technical accuracy to a more holistic view of quality: how the product behaves, its usability, and consistency in design and implementation.
Because development and quality analysts work as one team, tests that can be implemented as unit tests and process-level tests early in the cycle are defined by the Quality Analysts and implemented by the developers. These tests are automated, reducing the level of manual regression testing and freeing the Quality Analysts to focus on higher-order activities, such as more complex scenarios or edge cases that are difficult to automate.
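To make this concrete, here is a minimal sketch of the kind of automated unit test a Quality Analyst might define and a developer might implement. All names here (the rule, the class, the thresholds) are hypothetical, invented for illustration; they are not actual Curam APIs or product rules.

```java
// Illustrative only: a hypothetical eligibility rule and an automated
// check for it, standing in for a QA-defined unit test.
public class IncomeEligibilityTest {

    // The "production" rule under test: a claimant qualifies when
    // weekly income falls strictly below the threshold.
    static boolean isEligible(double weeklyIncome, double threshold) {
        return weeklyIncome < threshold;
    }

    public static void main(String[] args) {
        // Happy path: income below the threshold qualifies.
        check(isEligible(150.0, 200.0), "below threshold should qualify");

        // Edge case a Quality Analyst would define up front:
        // income exactly at the threshold does not qualify.
        check(!isEligible(200.0, 200.0), "at threshold should not qualify");

        System.out.println("All checks passed");
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```

Because a check like this runs automatically on every build, the boundary behaviour it encodes never needs to be re-verified by hand across versions.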