Do you worry that your code won't behave in a production environment the same way it does locally? Do topics like Test-Driven Development (TDD), code refactoring, and test suites make your head spin? Join us as we explore TDD in the Enterprise.

In this article, we will cover why TDD is important in the Software as a Service (SaaS) world. We will discuss how we are using TDD in multiple real-world software development projects. We will cover how TDD can help your projects in all phases of development – from initial design to production monitoring. We will show how you can expedite application development by using TDD alongside tools such as Mocha, SuperTest, and Swagger to define a common REST interface that both back-end and front-end developers can code against simultaneously. We will also cover how to test your microservices locally, in a Kubernetes cluster, and through the ingress controller. Lastly, we will show how to use your TDD tests to monitor your environment.

This article concentrates on the use of Node.js, but the concepts apply to other languages as well. Our objectives for this article are to:

  • Explain TDD and its benefits at a high level
  • Show how it can be implemented at the enterprise level
  • Show how TDD tests can be used for monitoring

What is test-driven development (TDD)?

Figure 1. The TDD cycle

Test-driven development is much like it sounds: the tests you write drive your development. To get started with TDD, choose a testing framework (we use Mocha in this article) and write a test case describing the output you are trying to produce. Figure 1 shows the TDD cycle. You start at the top by writing a test. When you initially run the test it will fail (obviously), but in the next step (Test Passes) your goal is to write the minimal code possible to make the test case pass. When you first start you may not have much to refactor, as shown in the blue state; however, as you continue to write more tests and thus more code, the blue Refactor state becomes an important part of the process for keeping your code clean and minimal.

Why use TDD?

One of the best things about this process is that you will never write a piece of code that is untested, so you will know immediately whether new code breaks existing behavior. As your programs grow larger, this is key to keeping your development time low. TDD has also been shown to lead to better code structure, since it forces you to think about the implementation before writing the code.

Example: Creating a RESTful API using TDD

In this example, we want to create an API route that returns the roll of two dice. Our input is a GET request asking for the roll; the output is a JSON object containing the result of the roll. The following example demonstrates the creation of an end-to-end test that exercises the HTTP requests made to our service.

Start with failure – It can only get better from here

We start by creating a sample test that calls a locally hosted API route that does not yet exist. The request is made using the SuperTest module, an HTTP client library. We use Chai for the assertions, checking that the result is a number and that it is between 2 and 12, as you would expect when rolling two dice.

Figure 2. Start at failure by writing a test that fails
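
The test code in Figure 2 is shown as an image; below is a minimal sketch of such a test, in which the file layout, the app module path, and the response shape are assumptions:

    // test/roll.test.js - sketch of the failing test (paths are assumptions)
    const request = require('supertest');
    const { expect } = require('chai');
    const app = require('../app'); // the Express app under test

    describe('GET /roll', function () {
      it('returns the roll of two dice', function (done) {
        request(app)
          .get('/roll')
          .expect(200)
          .end(function (err, res) {
            if (err) return done(err);
            expect(res.body.result).to.be.a('number');
            expect(res.body.result).to.be.within(2, 12);
            done();
          });
      });
    });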

Figure 2.1 below shows the execution of Mocha via the npm test command and the subsequent output. The test fails, as we would expect, since the route does not yet exist. Though not explicitly shown, if you include a debugging package such as “debug”, you can enable debugging output from the command line by specifying DEBUG=debug before the npm run test command.

Figure 2.1. Result from running the initial failing test
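
For reference, the two invocations described above look like this (assuming the test script in package.json runs Mocha):

    npm run test                 # run the test suite
    DEBUG=debug npm run test     # same run, with "debug" output enabled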

Write code for passing tests

Create a route

Now that we have a failing test, we can begin writing the minimal amount of code. To start our dice API example, we define the route in Swagger (Figure 3). The route is called /roll, and it invokes our “roll” controller. A successful response returns an object whose result is a number between 2 and 12. Note that we have not implemented any logic yet; we have only set up the route to call.

Figure 3. Sample Swagger definition
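
Figure 3 is an image; a sketch of what such a Swagger (OpenAPI 2.0) definition might look like follows, where the controller annotation and property names are assumptions:

    # Sketch of a swagger-node style route definition
    paths:
      /roll:
        get:
          x-swagger-router-controller: roll   # invokes the "roll" controller
          operationId: roll
          responses:
            "200":
              description: The result of rolling two dice
              schema:
                type: object
                properties:
                  result:
                    type: integer
                    minimum: 2
                    maximum: 12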

Test a route without functionality implemented

If we run our test again, we still get a failure. However, this failure is no longer due to the route not being found, but rather to the logic not yet being implemented.

Figure 4. Testing a route without its functionality implemented

Test with mock data

Mocking the data allows the API to serve sample data that can be consumed by another development team, usually a front-end team. They get sample data for developing their application without having to wait for the API to be fully implemented. For our use case, when developing APIs with Swagger, there is an environment variable called swagger_mockMode that can be set when running your API so that it returns mock data. The mock data can be generated by Swagger if the type and range of the data are defined in the route definition (as shown in Figure 3 above). Another option is to create your own samples of mock data and return those when a route is invoked.
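
Assuming the API is started through npm, enabling mock mode might look like this:

    swagger_mockMode=true npm start   # serve Swagger-generated mock data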

Figure 5. Test case passes with mock data

This method of mocking data is crucial for allowing multiple development teams to work in parallel. Figure 5 shows the test being run with mock mode on, by setting the swagger_mockMode environment variable to true. Now we can actually pass our test using mock data! The API now gives its consumers sample data to work with, and gives the back-end team time to implement the logic behind the API.

Although we are writing a simple function that only takes a minute to implement, you can imagine how useful mock data becomes when the API gets more complex and may be calling other APIs or services.

Implement functionality

To continue with our example, let’s assume that the back-end team has finally implemented the highly complex roll function! You can see in Figure 6 that when a request comes in, we “roll” two dice by generating random numbers and return their sum.

Figure 6. Functionality implemented for the route
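
A sketch of such a controller follows; the file path and export style are assumptions:

    // api/controllers/roll.js - sketch of the "roll" controller
    function roll(req, res) {
      // "Roll" two dice by generating two random integers from 1 to 6
      const die1 = Math.floor(Math.random() * 6) + 1;
      const die2 = Math.floor(Math.random() * 6) + 1;
      res.json({ result: die1 + die2 });
    }

    module.exports = { roll };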

Figure 6.1 shows that when we run our test without mock mode enabled, we get a passing test case!

Figure 6.1. Test case passes with functionality implemented

With this being the first iteration of our TDD process, we don’t have anything to refactor yet. However, this step becomes important later on as the codebase grows more complex. Imagine that we wanted to create another route to roll one die and then another to roll three dice. Rather than constructing multiple new routes, it would be good practice to refactor the roll route to roll n dice, as sketched below. Now we are ready to write another failing test case for our next feature!
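
Such a refactor might extract a single helper that all the dice routes share:

    // Refactoring sketch: one helper rolls n dice
    function rollDice(n) {
      let result = 0;
      for (let i = 0; i < n; i += 1) {
        result += Math.floor(Math.random() * 6) + 1; // one die
      }
      return result;
    }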

Extending the TDD test suite to other environments

TDD is great for local development, but how can we leverage TDD in development, stage, and production environments? And how can we involve QA in executing and extending the test suite, giving them better visibility into the products and a clearer understanding of which tests pass or fail, and why?

Can we extend TDD to assist in health monitoring? Usually another group would have to write separate tests to check the application in production. However, since the developer knows what the outputs should be, can we leverage TDD here as well?

Different environments

Local HTTP

Taking the same test suite that was generated for local testing, we can add environment variables to direct the target of the tests. By default, when no environment variable is set, we still want the tests to run locally as they normally would. By using environment variables, we can set the base URL for the SuperTest client, pointing it at the different environments. In the following example, on lines 44-46 of Figure 7, specifying the environment variable ENV with the value LOCAL_HTTP lets us execute the tests against an already running local server. If we do not specify the ENV variable (lines 47-52), the suite defaults to starting an Express server as specified in the server code.

Figure 7. Test pointing to Local HTTP
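
The check in Figure 7 might look roughly like the following sketch, where the local port is an assumption:

    // Point SuperTest either at a running server or at the Express app
    const request = require('supertest');

    let api;
    if (process.env.ENV === 'LOCAL_HTTP') {
      api = request('http://localhost:8080'); // already running local server
    } else {
      api = request(require('../app'));       // default: start the Express app
    }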

Specifying LOCAL_HTTP as the value of the ENV environment variable prior to test execution allows you to test an already running server.
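
For example (again assuming the test script runs Mocha):

    ENV=LOCAL_HTTP npm run test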

Figure 7.1. Results of test pointing to Local HTTP

Kubernetes

Another useful scenario is testing your application running in a remote environment. In this case, we use a proxy to connect to our Kubernetes cluster. Figure 8 shows how to add a new environment variable check to our test case, and how to set that variable when calling the tests. As you can see, setting the environment variable ENV to KUBE assigns the appropriate baseURL for testing the application while connected to the Kubernetes cluster. This allows us to exercise the same REST API against the remote cluster using the same tests. This is powerful for the DevOps group, who can ensure the code is functioning exactly as the developer intended prior to pushing it to stage and/or production, and who can run all the tests automatically with a single command.

Figure 8. Test pointing to Kubernetes Cluster
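
The added branch might extend the earlier sketch like this, where the proxy address is an assumption:

    if (process.env.ENV === 'KUBE') {
      // Address exposed by our proxy into the Kubernetes cluster
      api = request('http://localhost:8001');
    }

Running ENV=KUBE npm run test then executes the same suite against the cluster.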

Figure 8.1 shows the command being run, with the same test suite now executing against the Kubernetes environment.

Figure 8.1. Result of test pointing to Kubernetes Cluster

Container with an ingress controller

Now let’s expand this to our remote development and production environments, which have a publicly accessible URL. We can again configure the appropriate baseURL for our environment by setting the environment variable ENV to either DEV or PROD.

Figure 9. Test pointing to Ingress Controller
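
Those branches might be sketched as follows; the hostnames here are placeholders:

    if (process.env.ENV === 'DEV') {
      api = request('https://dev.example.com');  // remote development ingress
    } else if (process.env.ENV === 'PROD') {
      api = request('https://api.example.com');  // production ingress
    }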

As previously shown, by simply altering the ENV environment variable prior to executing the script, we can control which environment the test suite is run against.

Figure 9.1. Result of test pointing to Ingress Controller

This also allows for a top-down approach to debugging issues by following the data flow. If you encounter an error, you can start by debugging your public-facing route, setting the environment variable ENV to PROD. If that is unsuccessful, you can test your production Kubernetes cluster by setting ENV to KUBE. If that test succeeds, the issue likely lies in the firewalls or load balancers between the production route and the Kubernetes cluster. If the tests are still unsuccessful, you can continue down to the individual microservice(s) for further debugging.

Doing this allows you to easily drill down into your environment and find issues that surface at different levels. For example, say you have just developed a new service that connects to a new database. Your tests for this service work on your machine, since you set up all the new configuration during development. Now you push the service to your DEV environment, which does not have the same configuration you have locally. Using the same test suite, you see your tests passing with ENV pointing locally but failing when pointing to your DEV environment. Although this is a simple scenario, you can imagine the benefits when large development teams push multiple services to multiple environments continuously.

QA

As developers write the passing tests that define what the code should do, the QA group can run the tests and confirm that the output is what is desired. This takes care of the rote, mundane testing tasks. The QA group can then spend their time looking for fringe test cases, such as buffer overflows, that the developers might have overlooked. They can feed these findings back to the developers to correct as part of the development cycle and add to the test suite.

Release cycle

When a developer releases to the DEV environment, continuous integration tools such as Travis can run the tests automatically. This again streamlines the job of QA and, once they are satisfied, the test suite can be turned over to the DevOps group, who run the tests after the updated code is pushed to the production environment to ensure it is functioning as desired.

Monitoring

In production, it’s not enough to say that the product is up and has a heartbeat; we want to know whether the production services are functioning the way the developer intended. Since microservices can be connected to different applications on the back end, such as databases and other APIs, we want to be notified when those services are unavailable or are not returning the desired data.

When we run our tests (we are using Mocha here, but other testing tools behave similarly), the tool returns a non-zero exit code if it encounters an error condition. We can use this fact to wrap the call to the tests in a script.

Figure 10. Monitoring

Figure 10 shows a sample script wrapping the call to the tests. In the highlighted area, we specify which environment we want to test and run the command, capturing both standard output (stdout) and standard error (stderr) to a file called output_file. Because we also want to ensure the services respond in a timely manner, we specify a timeout value; this causes an error if any one test exceeds the timeout threshold.

Then, on line 9 of the script, we check the return code for a non-zero value. If it is non-zero, meaning an error occurred, the recipient of the email receives the output of the test run, including the failures, as shown above. You can add this script to a cron file to perform monitoring on a recurring basis.
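
A minimal sketch of such a wrapper follows; the timeout value, file names, recipient address, and use of the mail utility are assumptions (the actual script is shown in Figure 10):

    #!/bin/bash
    # Run the suite against production, capturing stdout and stderr
    export ENV=PROD
    npm run test -- --timeout 5000 > output_file 2>&1

    # A non-zero exit code means at least one test failed or timed out
    if [ $? -ne 0 ]; then
      mail -s "Production test failure" ops@example.com < output_file
    fi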

This is extremely useful, as you are notified immediately when a test fails, that is, when the data being returned no longer matches what you expect. By manually running the tests with the ENV variable set to the different layers of your production environment, you can identify at which layer the issue resides. You can then dig deeper by looking at the logs for that service to identify the root cause of the error. You can even re-run the specific test in question to watch the execution in the logs in real time.

Summary

Leveraging TDD can expedite development, which everyone wants. TDD, combined with mock data served by the microservice, unblocks the front-end consumers of the data, allowing for parallel development between the two teams.

The ability to point the tests at different environments ensures each environment is functioning as it should. This allows you to pinpoint which route in which microservice is having the issue, and more specifically which return code or data isn’t matching what is expected, allowing for easier problem resolution. Users may not be able to give the full picture of what happened when they encountered an error; you can run the entire test suite and identify the error yourself.

Wrapping the TDD tests in a script allows for monitoring of the production environment. This in turn lets us find problems before the users do. With Mocha we can also specify a timeout value, ensuring that data is returned within the specified time and providing a better user experience.