Written by Steven Condon and David O’Connor
When moving from a traditional waterfall model to an agile model, one of the first things to examine in an automation strategy is how to break the silos that a waterfall methodology creates. Standing back and taking a holistic view of automation ensures that the team works as a whole to provide automation at the right technical layers and in the best possible manner. We will look at how we broke our silos and structured our automation to target the different technical layers. This is our high-level automation strategy viewpoint; we will delve into each layer in more detail in further guidance.
When considering an overall automated test strategy, the traditional starting point is the test pyramid. The test pyramid is designed to ensure suitable test coverage across the various layers of the application, so that we get the most coverage for the least cost. The cost should take into account the time spent writing, maintaining, and executing the tests.
In this testing paradigm, we aim to push as much functional testing lower in the pyramid as possible to increase value, while ensuring that the entire pyramid gets covered. The following version of the test pyramid is taken from Martin Fowler's blog article about the topic. The article also deals with many of the issues that we have faced in product development while modernising our test infrastructure.
At the top of the pyramid we have the UI layer, the slowest and most expensive to write, run, and maintain. Many organisations use test recorders, such as Selenium or TestCafe Studio, to record huge test suites automatically in an effort to make tests at the top of the pyramid easier and cheaper.
Using test recorders can result in an undue focus on UI tests, as Martin Fowler notes:
“…this kind of approach quickly runs into trouble, becoming an ice-cream cone (inverted). Testing through the UI like this is slow, increasing build times…Usually these cannot easily be run in a “headless” mode, monitored by scripts to put in a proper deployment pipeline.
Most importantly such tests are very brittle. An enhancement to the system can easily end up breaking lots of such tests, which then have to be re-recorded. You can reduce this problem by abandoning record-playback tools, but that makes the tests harder to write.”
One solution, and the one we recommend, is to invest in creating more robust tests at lower technical layers of the application. We have worked extensively on this solution, which we represent with our own testing pyramid as shown in the following diagram. By pushing our tests to the lowest levels of the pyramid, we have stopped it becoming inverted, that is, UI test heavy. However, we do acknowledge that we still need UI coverage. This is particularly important for a product like IBM Social Program Management, which is heavily customisable with numerous possible configurations. In such a product, it is difficult to ensure that your unit tests are testing UI features and workflows in a manner similar to how a customer would use the UI features and navigate the application.
Our UI testing focus has shifted from functional testing through the UI (testing all aspects of the functionality), to focussing on only testing the functional items that need to be tested through the UI, for example, navigational flow, layout, styling, alignments of screens, and so on. Functional testing should be pushed as low down the technical layers as possible to provide greater stability, quicker execution, and faster feedback.
All layers of the test pyramid require coverage, which, in product development, is provided by the following strategy that we have defined for each of these layers.
Unit Tests
At this layer we are testing individual, atomic actions. Specifically, in Java, these are tests that verify a single method. Unit tests should adhere to the FIRST principles:
- Fast: tests should execute quickly, typically in milliseconds.
- Isolated: any test class or method can be run on its own, and tests are indifferent to the order of execution.
- Independent: tests do not depend on a database, time zone, date/time, or files.
- Repeatable: tests run anywhere: with or without a network connection, on any OS, at any time of day, any day of the year.
- Self-validating: each test results in a distinct pass or fail; failures are never subjective.
- Timely: in a product development context, tests are written just before the production code that makes them pass.
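As a minimal sketch of these principles, the following unit test exercises a pure method with no external dependencies, so it is fast, isolated, repeatable, and self-validating. The TaxCalculator class and its behaviour are invented for illustration, not taken from the product.

```java
// Minimal sketch of a FIRST-compliant unit test for a hypothetical
// TaxCalculator. No database, files, or network: the test runs anywhere.
public class TaxCalculatorTest {

    // The unit under test: a pure method, so tests stay fast and isolated.
    static class TaxCalculator {
        double applyRate(double amount, double rate) {
            if (amount < 0 || rate < 0) {
                throw new IllegalArgumentException("negative input");
            }
            return amount * rate;
        }
    }

    public static void main(String[] args) {
        TaxCalculator calc = new TaxCalculator();

        // Self-validating: each check yields a distinct pass or fail.
        check(calc.applyRate(100.0, 0.2) == 20.0, "20% of 100 is 20");
        check(calc.applyRate(0.0, 0.2) == 0.0, "zero amount yields zero tax");

        // Negative input is rejected with an exception.
        boolean threw = false;
        try {
            calc.applyRate(-1.0, 0.2);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        check(threw, "negative amount is rejected");

        System.out.println("all unit tests passed");
    }

    static void check(boolean condition, String description) {
        if (!condition) {
            throw new AssertionError("FAILED: " + description);
        }
    }
}
```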
The broad idea behind unit tests is that each action is isolated; if, for example, a database call is required, this violates the Independent principle of unit tests, and the test actually becomes a code integration test.
A good way to avoid this common pitfall is to use techniques like mocking and stubbing. These techniques preserve the unit tests' independence and isolation, encapsulating test functionality away from dependencies on outside resources and code. The techniques also increase the speed of the unit tests, which should make up the bulk of the test suite. Integration tests are useful and important, but live at a higher layer in the pyramid, and there should be fewer of them than unit tests.
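A hand-rolled stub can illustrate the idea without any mocking library. In this sketch, the CaseRepository interface and CaseService class are hypothetical names invented for illustration; the stub supplies canned data in place of a real database call, keeping the test independent and fast.

```java
import java.util.HashMap;
import java.util.Map;

public class StubbingExample {

    // The dependency we want to isolate the unit test from.
    interface CaseRepository {
        String findCaseName(int caseId);
    }

    // The unit under test: depends only on the interface, not a database.
    static class CaseService {
        private final CaseRepository repository;

        CaseService(CaseRepository repository) {
            this.repository = repository;
        }

        String describeCase(int caseId) {
            String name = repository.findCaseName(caseId);
            return name == null ? "Unknown case" : "Case: " + name;
        }
    }

    // Hand-rolled stub: returns canned data from an in-memory map.
    static class StubCaseRepository implements CaseRepository {
        private final Map<Integer, String> data = new HashMap<>();

        void add(int id, String name) {
            data.put(id, name);
        }

        public String findCaseName(int caseId) {
            return data.get(caseId);
        }
    }

    public static void main(String[] args) {
        StubCaseRepository stub = new StubCaseRepository();
        stub.add(42, "Benefit Review");

        CaseService service = new CaseService(stub);

        if (!service.describeCase(42).equals("Case: Benefit Review")) {
            throw new AssertionError("known case lookup failed");
        }
        if (!service.describeCase(99).equals("Unknown case")) {
            throw new AssertionError("unknown case handling failed");
        }
        System.out.println("stubbed unit test passed");
    }
}
```

Because the stub lives entirely in memory, the test observes the Independent principle while still exercising the service's real logic.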
Code Integration Tests
Code integration tests verify that multiple units integrate and work together as expected.
Again, test double techniques can be used to isolate the methods or API under test from other classes and components with which they do not interact. One way that we can write integration tests is to look at the unit tests for the public methods of a Java class and create code integration tests from them by removing mocks. This will help to quickly create tests that verify a chain of method calls or an internal API.
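The idea of removing the test double can be sketched as follows, assuming two simple collaborating units; NameFormatter and ParticipantService are illustrative names, not product classes. Where a unit test would stub the formatter, the integration test wires in the real one, so the chain of method calls is verified end to end.

```java
public class IntegrationExample {

    // A real collaborator that a unit test would normally stub out.
    static class NameFormatter {
        String format(String first, String last) {
            return last.toUpperCase() + ", " + first;
        }
    }

    // The class under test, composed with its real dependency.
    static class ParticipantService {
        private final NameFormatter formatter;

        ParticipantService(NameFormatter formatter) {
            this.formatter = formatter;
        }

        String displayName(String first, String last) {
            return formatter.format(first.trim(), last.trim());
        }
    }

    public static void main(String[] args) {
        // No test double here: the real NameFormatter is exercised
        // through ParticipantService, verifying the two units together.
        ParticipantService service = new ParticipantService(new NameFormatter());

        String result = service.displayName(" Jane ", " Doe ");
        if (!result.equals("DOE, Jane")) {
            throw new AssertionError("unexpected: " + result);
        }
        System.out.println("integration test passed");
    }
}
```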
Acceptance Tests
Acceptance tests verify that the functional requirements from a user story have been met. More specifically, acceptance tests map to the acceptance criteria for a user story or the reproduction steps for a defect. We write these in a Given-When-Then style, which allows all disciplines to understand the acceptance tests easily when working with them.
Techniques like boundary value analysis, equivalence partitioning and decision tables are used to identify positive, negative, and boundary functional tests.
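As an illustrative sketch only, the following applies boundary value analysis to a hypothetical eligibility rule (ages 18 to 65 inclusive), with Given-When-Then comments spelling out the intent of each check; the rule and its limits are invented for this example.

```java
public class EligibilityAcceptanceTest {

    // Hypothetical rule under test: eligible from age 18 to 65 inclusive.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Given an applicant just below the lower boundary
        // When eligibility is checked
        // Then the applicant is rejected
        expect(isEligible(17), false, "age 17 rejected");

        // Boundary values: the first and last valid ages both pass
        expect(isEligible(18), true, "age 18 accepted");
        expect(isEligible(65), true, "age 65 accepted");

        // Given an applicant just above the upper boundary
        // Then the applicant is rejected
        expect(isEligible(66), false, "age 66 rejected");

        System.out.println("boundary tests passed");
    }

    static void expect(boolean actual, boolean expected, String description) {
        if (actual != expected) {
            throw new AssertionError("FAILED: " + description);
        }
    }
}
```

Equivalence partitioning would reduce the interior of each range to one representative value; the boundary pairs (17/18 and 65/66) catch the off-by-one defects that cluster at range edges.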
Acceptance tests can cover just one API that our customers can code against (either internal/Java, used by our frontends or external/REST/SOAP) or a sequence of API calls.
Acceptance tests are reviewed within the team by both the business and implementation teams to ensure adequate coverage, and are then implemented as automated tests. Automating them builds up a regression suite as the product grows. It also frees the team to perform higher-value manual testing: edge-case scenarios that are harder to automate, and exploratory testing that does not follow a specific set of steps but is more evolutionary in its execution.
Scenario Tests
Scenario tests verify that the functional requirements from an epic, or a group of user stories within an epic, have been met. More specifically, scenario tests ensure that user stories that work in isolation at the acceptance test level also work as a whole when chained together.
Scenario tests are defined at the same time as the acceptance tests are created. However, if the functionality under test is complex, sometimes scenario tests can be automated only when all of the stories that provide the functionality have been implemented.
Again, both the business and implementation teams review the scenario tests to ensure adequate coverage, before the tests are implemented as automated tests. If navigational items are included in the scenario, a decision needs to be made as to the actual purpose of the test. If the purpose of the scenario test is specifically to test functionality, then it should be implemented below the UI layer. However, if the purpose of the scenario test is to test a user flow scenario, it might need to be considered as a candidate for automation at the UI layer. This is a decision that the team should discuss and agree upon, with the main goal being to limit the number of UI driven tests, while making sure that the purpose of the test is verified fully.
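A hedged sketch of the chaining idea, below the UI layer: several steps that each pass in isolation are composed into one scenario, and the test verifies the combined state. The CaseTracker API is hypothetical, invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class ScenarioExample {

    // Hypothetical component spanning several user stories.
    static class CaseTracker {
        private final List<String> evidence = new ArrayList<>();
        private String status = "OPEN";

        void addEvidence(String item) {
            evidence.add(item);
        }

        void submit() {
            // Submission is only valid once evidence exists.
            if (evidence.isEmpty()) {
                throw new IllegalStateException("no evidence recorded");
            }
            status = "SUBMITTED";
        }

        String status() {
            return status;
        }
    }

    public static void main(String[] args) {
        // Story 1: create a case (verified alone by its acceptance tests).
        CaseTracker tracker = new CaseTracker();

        // Story 2: record evidence against the case.
        tracker.addEvidence("proof of address");

        // Story 3: submit the case. The scenario verifies the stories
        // compose: submission succeeds only because evidence came first.
        tracker.submit();

        if (!tracker.status().equals("SUBMITTED")) {
            throw new AssertionError("unexpected status " + tracker.status());
        }
        System.out.println("scenario test passed");
    }
}
```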
User Journeys
We can think of user journeys as an end-to-end test of the product. User journeys are typically a long but narrow path through the product that verifies a chain of functionality and workflow through different components. When we define a user journey, we try to find critical paths that cover high traffic in connected core areas of the product.
User journeys should be thought of as a strand of wire through the application. The user journeys will not touch all areas of functionality, but will typically follow the manual “happy path” through the application that is taken by the real-life end-user. As a result, these tests are the most likely candidates for automation at the UI layer.
1. Fowler, M. (2012) TestPyramid [online]. Available at: https://martinfowler.com/bliki/TestPyramid.html [Accessed 23 July 2019]