This article was originally posted on the main developerWorks site.
Integration testing is where delivered systems are validated. It’s where the business can really see applications and determine whether development has built what was required. As software systems become increasingly componentized and are composed of more and more services, the lag time from code change to integration testing is a key predictor of time to market and developer productivity.
The ideal process is simple: every time a developer changes code, all tests run quickly and feedback is delivered to the developer. The changed components are built, unit tested, and deployed to an integration environment, and all integration tests run in just a few minutes.
Unfortunately, that ideal is not reality for many teams. Automated tests can be too few or take too long. Continuous integration might not be set up. Automated deployments of complex applications can require special tools.
Solutions to these challenges are fairly well understood today. Tests should be automated, with a heavy weighting toward API testing. Setting up a continuous automated build process is simple, so there is no excuse for not having one. Deployment automation tools are now well established.
However, an increasingly common challenge for many organizations is a lack of integration testing environments. They may be incomplete. They may be inconsistent. There just may not be enough of them. This article looks at why these problems exist and what to do about it.
Limitations on environments
To understand how to get additional and higher-quality testing environments to speed feedback, you need to understand the constraints on environments. That knowledge helps you resolve the issues.
- Limited hardware: Resources are required to run test environments. Those resources aren’t free.
- Expensive to set up: Setting up a new test environment requires provisioning servers, configuring middleware, and getting the applications to run. Those tasks take considerable effort.
- Expensive to maintain: It takes effort to maintain configuration, patch levels, etc. as the number of test environments increases.
- Inconsistent utilization: Sometimes a team needs many environments; at other times it needs only a few.
- Precious components: Some application components are expensive to use for testing, which limits how frequently you want to test against them. Examples include third-party web services that charge by the transaction, mainframe components, and appliance-based applications.
- Missing components: Sometimes another team owns a service you need to test against but they haven’t yet delivered it. This leaves you with an incomplete solution.
- Broken components: When numerous components change frequently, the likelihood that a given component is broken at any given time is high.
In general, these characteristics of integration test environments reinforce each other. For example, expensive environment setup is tolerable if it is for long-lived environments, but due to inconsistent utilization the need may be short-lived. Maintaining those environments is easier if they are always turned on. Unfortunately, because of hardware costs it’s desirable to shut them down when not utilized.
Techniques to resolve the bottleneck
There are three techniques that smooth out the issues with integration test environments and promote their availability: Environment reservation, environments as a service, and service virtualization. Each technique solves different parts of the problem.
The simplest strategy is to actively schedule and manage environments. This is usually the responsibility of a release manager. Integration environments are treated as precious resources and allocated to release testing based on release priority and distance from release date. Modern release management tooling, such as IBM UrbanCode Release, can provide formal environment tracking, scheduling, and conflict detection, but spreadsheets are still commonly used.
Clearly delineating which releases can use an environment and when gives development and test teams needed predictability and maximizes the value derived from limited resources.
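As a minimal sketch of the scheduling idea, the core of any environment reservation system is overlap detection: before a release books an environment, check whether an existing reservation on the same environment collides with the requested dates. The names and data shapes below are illustrative, not taken from any particular release management tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Reservation:
    environment: str   # e.g. "INT" or "PERF"
    release: str       # the release that holds the environment
    start: date
    end: date

def conflicts(existing: list[Reservation], requested: Reservation) -> list[Reservation]:
    """Return reservations on the same environment whose dates overlap the request."""
    return [
        r for r in existing
        if r.environment == requested.environment
        and r.start <= requested.end
        and requested.start <= r.end
    ]

# Example: release 2.1 wants INT while release 2.0 still holds it.
booked = [Reservation("INT", "release-2.0", date(2024, 3, 1), date(2024, 3, 14))]
request = Reservation("INT", "release-2.1", date(2024, 3, 10), date(2024, 3, 20))
clash = conflicts(booked, request)  # finds the overlapping release-2.0 booking
```

A spreadsheet encodes exactly this check by eye; tooling simply makes the conflict detection automatic and visible.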
While environment reservation helps ensure that limited resources are used well, it doesn’t provide more environments or assist with environment inconsistencies.
Software Defined Environments
The ability to request a test environment tailored to your application and have it provisioned and configured in a few minutes is extremely powerful. Cloud technologies (public or private) are combined with an environment pattern engine, such as UrbanCode Deploy with Patterns, to spin up environments, configure them, and retire them when no longer needed.
Using an environments-as-a-service solution drastically reduces the labor of setting up test environments. Solutions that also update the configuration in place keep maintenance under control, while improving environment fidelity to production. Overall, teams can get the integration environments they need when they need them. Environments-as-a-service technology should be a cornerstone of your integration testing strategy.
Cheaply creating environments tends to encourage more environments. Hardware expenses can be a concern, especially with very large integration environments. Environments built on top of relatively cheap cloud resources help reduce setup costs. Because environments are both easy to create and retire, there is less inclination to hold onto seldom used environments. Instead, you can reclaim the resources when unused and spin the environment back up if needed.
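The provision-and-reclaim lifecycle can be modeled in a few lines. This is a toy sketch, not the API of any real EaaS product: environments are stamped out from a named pattern, consume cloud capacity only while active, and are retired the moment they are no longer needed, freeing resources for the next request.

```python
import uuid

class EnvironmentPool:
    """Toy model of an environments-as-a-service workflow: environments are
    cheap to create from a named pattern and just as cheap to retire, so
    cloud resources are only consumed while an environment is needed."""

    def __init__(self, capacity_vms: int):
        self.capacity_vms = capacity_vms
        self.active = {}  # env_id -> (pattern name, VM count)

    def vms_in_use(self) -> int:
        return sum(vms for _, vms in self.active.values())

    def provision(self, pattern: str, vms: int) -> str:
        if self.vms_in_use() + vms > self.capacity_vms:
            raise RuntimeError("insufficient cloud capacity; retire an idle environment")
        env_id = uuid.uuid4().hex[:8]
        self.active[env_id] = (pattern, vms)
        return env_id

    def retire(self, env_id: str) -> None:
        # Reclaim the resources; the same pattern can recreate the environment later.
        del self.active[env_id]

pool = EnvironmentPool(capacity_vms=20)
team_env = pool.provision("marketplace-team-int", vms=6)
spike_env = pool.provision("marketplace-feature-branch", vms=6)
pool.retire(spike_env)  # reclaim VMs once the extra code line is merged
```

The pattern names (`marketplace-team-int`, `marketplace-feature-branch`) are hypothetical; the point is that retiring an idle environment is a routine operation rather than a loss, because the pattern can recreate it on demand.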
Because this strategy is built on cloud and virtualization, many of the “precious” components do not fit neatly into the environments as a service strategy. They either need to be shared by multiple provisioned environments or virtualized.
Broken components from other teams can be a concern, but if each big team has its own integration environments and uses automated deployments of only known-good versions of other components, they can use environments as a service for isolation.
Service virtualization replaces some of the components in the system with “stubs” or “mocks”. Mocking is an approach that has been in use for a long time: developers write a service that functions like the full service and test against it. For example, if a stock quoting service provided by a third party charges per transaction, a developer might create a stand-in service that has the same API but always returns the same value for testing. Service virtualization tools, like those in IBM Rational Test Workbench, streamline the process of creating these stubs and managing where they run and how they perform.
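The stock quote stub described above can be written by hand in a few lines. This sketch (endpoint path and payload fields are invented for illustration) exposes the same kind of HTTP API the real metered service would, but always returns the same canned quote, so tests incur no per-transaction charges.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class QuoteStub(BaseHTTPRequestHandler):
    """Stand-in for a per-transaction-metered stock quote service:
    same API shape, but always returns the same canned quote."""

    def do_GET(self):
        body = json.dumps({"symbol": "IBM", "price": 125.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), QuoteStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/quote?symbol=IBM"
quote = json.loads(urlopen(url).read())  # {"symbol": "IBM", "price": 125.0}
server.shutdown()
```

Service virtualization tools generate and manage stubs like this one at scale, often by recording real traffic, instead of requiring developers to hand-write each stand-in.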
Service virtualization provides a clean way to handle the “precious” components. Stubs can stand in for components that charge per use or are unique (mainframes, expensive middleware, or appliances).
Stubs can also stand in for components from other teams. This has three key advantages:
- Isolation: You have an integration test environment that stays useful even when another team breaks their components.
- Reduced resource use: The server capacity required to run those other components is no longer needed.
- Earlier testing: A stub can stand in for components that aren’t complete yet, giving you access to integrated testing scenarios earlier in the lifecycle.
Tests are more relevant when you test against the real components rather than a stub. The same isolation that keeps another team from breaking your work also defers testing of the fully integrated system, and deferring testing has a cost because it slows feedback. Service virtualization also doesn’t help with managing the environments of the components that are present.
A realistic scenario that brings the techniques together
The fictitious example of a major system called Marketplace shows how to use the tools together. Marketplace is made up of many pieces.
- 60 web services that are somewhat tightly coupled. Four teams each own 15 services.
- Mainframe components contribute to 20% of transactions; the components rarely change and are owned by another team.
- The front end website, in front of the services, is owned by the dot-com team.
- Data feeds from 2 third parties are used (via web service). One is metered on transactions, the second is not.
The Marketplace release team has one large integration test environment (INT) and a performance testing environment (PERF). Each of the six teams has a small test lab where they can test some of the components, but they cannot test any integrated scenarios. Integration testing is on the release schedule, and release management governs access to the INT and PERF environments.
Team level integration environments
To drive developer productivity, the Marketplace organization decided to ensure that each development team had their own integration test environments and the ability to spin up extra environments if they want to test extra code lines or if development spikes.
- Environments as a Service (EaaS): The EaaS tooling spins up a copy of the application using the company’s internal cloud, provisioning only enough virtual machines for that dev team’s services plus the latest working copy of the UI.
- Service virtualization: Services from the other web service teams and the mainframe are virtualized. The metered third party service is virtualized, but the other feed is used live.
- Environment reservation: For visibility, the reservation system in the Release Management tool knows which environment hosts work for which release. However, since there is no shortage of environments, team level environments are not reserved.
In the end, each development team has a number of small, cheap environments that use service virtualization heavily. They are able to test their components within the larger system, yet they are isolated from other teams that might break them, and they need not worry about breaking the other teams’ components. However, the heavy use of virtualization means that integration issues across services will not be found immediately.
Release level integration environments
System integration testing environments for each release are provisioned. These are mostly complete and only use minimal service virtualization. To enter this environment, changes must have successfully passed a robust set of automated tests in the team level environments.
- Environments as a Service: EaaS can provision many environments that tend to be long-lived:
- A test environment for patches to the current release
- A heavily used environment for the upcoming release
- An occasionally used one for big development efforts coming later
The EaaS tooling is chiefly responsible for keeping these environments consistent with the correct infrastructure.
- Service virtualization: Only the mainframe and the metered web service are virtualized.
- Environment reservation: These environments are big, and therefore expensive in terms of hardware. Environment reservation is used to watch for instances when extra environments might be needed and to minimize them if possible. Like the team environments, this system is primarily used to ensure that everyone uses the right environment for the right release.
Because of the heavy integration testing that occurs in the team level environments, changes that interfere with testing in this environment are very rare. For the manual testers, these environments are where they spend their time, and they benefit from a combination of high availability and always being on the latest good code.
Performance testing is largely unchanged and remains the largest environment. Service virtualization is available for both the web services and the mainframe. For the mainframe and the metered service, virtualization is occasionally used during high-transaction-count testing. In other performance testing scenarios, stubs for both third-party web services are set to respond slowly to requests to validate the behavior of the application when the third-party providers are having trouble.
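The slow-responding stub is just a canned response with injected latency. A minimal sketch, with an invented payload and a hypothetical `make_slow_stub` helper, shows the idea; real service virtualization tools let you dial this latency up and down per stub without code changes.

```python
import time

def make_slow_stub(canned_response: dict, latency_s: float):
    """Wrap a canned stub response with injected latency, so tests can
    observe how the application behaves when a third-party provider
    is responding slowly."""
    def stub(_request: dict) -> dict:
        time.sleep(latency_s)  # simulate a provider that is having trouble
        return canned_response
    return stub

# Stub for the third-party quote feed, degraded to a 250 ms response time.
quote_stub = make_slow_stub({"symbol": "IBM", "price": 125.0}, latency_s=0.25)

start = time.monotonic()
response = quote_stub({"symbol": "IBM"})
elapsed = time.monotonic() - start  # at least 0.25 s, so caller timeouts are exercised
```

Pointing the application at a stub like this exercises its timeout, retry, and fallback behavior without waiting for the real provider to actually have an outage.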
Final integration testing
The integration testing environment is used for final integration validation. Because it contains access to scarce resources like the live version of the metered service and the mainframe components, it is managed and scheduled. The environment reservation system is still used to allocate this to expected releases.
The pattern examined in the example above is pretty common. Small, heavily virtualized environments are used closest to developers; because they are likely to come and go, they also make aggressive use of environment provisioning. In the middle, environment provisioning is used to build out more complete integration environments, and fewer components are provided by service virtualization. In late test environments, precious resources are scheduled and allocated to releases. Use each approach in its own sweet spot, and compensate for the limitations of one approach with the strengths of another. This way, teams get the most mileage out of the combination of scheduling, service virtualization, and environments as a service.