Error-free code is critical to keeping your apps and business processes running smoothly. Testing and linting (running a tool to spot potential errors in your code) are just two of the many actions you can take to ensure the quality of your code. And the best thing about testing and linting? Both processes can be fully automated. In Unit 9 of my Node.js Learning Path, learn how to automate testing and linting in order to build error-free Node.js applications. You’ll also be introduced to some of the most popular tools for test automation in Node:

  • Mocha: A test framework
  • Chai: An assertions library
  • Sinon: A library of test doubles
  • Istanbul: A library for code coverage
  • ESLint: A linting utility

All of these tools are part of the npm ecosystem and can be plugged into your npm lifecycle.

Get the code

The code you need to follow along with the examples in this learning path is in my GitHub repo.

Overview of test automation

For the purpose of this unit, testing is the act of checking your code against program requirements to ensure the two match. Automated testing is the act of letting the computer run this check for you, rather than doing it manually.

Here’s the test process in a nutshell (note that I advocate test-driven development):

  1. Write your test cases, meaning code that is governed by your application requirements.
  2. Write the code to implement your application requirements. This is the code under test.
  3. Run your test cases against the code under test until all the test cases pass.

Automated testing has a few key advantages:

  • Writing test cases in the form of code leads to higher code quality because it reduces the possibility of human error in the testing process. You write test cases up front, and then the computer can run them over and over again against the code you are testing.

  • You can also add new test cases to your test suite (your collection of test cases) any time you need to. Should you forget to write a test case, you can simply add it to the collection later.

  • If you make a mistake when writing a test case, you can debug the test case itself, since it’s code. From that point forward, it should run predictably.

Contrast this to manual testing, where you have to remember to run all of your test cases every time you change something. This results in longer test cycles that are susceptible to human error, including forgetting to run a test.

In the next sections, I introduce you to the tools we use for automated testing in this tutorial. These are by no means the only testing tools available in the Node ecosystem. They’re just a few of the more popular tools, which also happen to work very well together.

Mocha: A test framework

A test framework is software that defines and provides several things:

  • A test API, which specifies how to write test code.
  • Test discovery, to determine which JavaScript files are test code.
  • The test lifecycle, which defines what happens before, during, and after a test is run.
  • Test reporting, to log what happened when the tests were run.

Mocha is one of the most popular testing frameworks for JavaScript, so you’re very likely to come across it in your development. Jest is another popular testing framework for Node. For this tutorial I don’t have time to introduce both, so I chose Mocha.

To tell Mocha your JavaScript code is a test, you use special keywords that are part of Mocha’s test API:

  • describe() denotes an arbitrarily nested grouping of test cases (a describe() can contain other describe()s).
  • it() denotes a single test case.

Both functions take two arguments:

  • A description that appears in the test report
  • A callback function

We’ll come back to these later in the tutorial.

Writing test suites in Mocha

The simplest possible test suite contains just one test:

Listing 1. A Mocha test suite with just one test

const {describe, it} = require('mocha');

const assert = require('assert');

describe('Simple test suite:', function() {
    it('1 === 1 should be true', function() {
        assert(1 === 1);
    });
});

In the above listing, describe and it are already in the global context when Mocha runs your tests, so the require('mocha') call is unnecessary, but I wanted to include it for illustrative purposes.

Here’s the output of this test:

$ cd ~/src/projects/IBM-Developer/Node.js/Course/Unit-9
$ ./node_modules/.bin/mocha test/example1.js


  Simple test suite:
    ✓ 1 === 1 should be true


  1 passing (5ms)

In this example, I’ve used Node’s assert module, which is not the most expressive assertion library. Fortunately, Mocha doesn’t care what library you use, so you’re free to choose whichever library you like best.

Chai: An assertion library

Chai is one of the most popular assertion libraries for JavaScript testing. It’s easy to use, works well with Mocha, and offers two styles of assertion:

  • Assert: assert.equal(1, 1)
  • BDD (behavior-driven development): expect(1 === 1).to.be.true or expect(1).to.equal(1)

Chai also lets you plug in your own assertion library, but I won’t cover that in this tutorial.

I like BDD-style assertions because they’re more expressive than assert-style assertions. Expressive assertions make test code more readable and easier to maintain. We’ll use Chai’s BDD-style assertions for this tutorial.
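To see why a chained assertion reads so naturally, here is a toy sketch of how such a fluent chain might be built in plain JavaScript. This illustrates only the chaining style; it is not Chai’s actual implementation.

```javascript
// A toy expect() that supports a tiny fluent chain: expect(x).to.equal(y).
function expect(actual) {
  const chain = {
    equal(expected) {
      if (actual !== expected) {
        throw new Error('expected ' + actual + ' to equal ' + expected);
      }
      return chain; // returning the chain allows further chained calls
    },
  };
  chain.to = chain; // `to` is a readability link that points back at the chain
  return chain;
}

expect(1).to.equal(1); // passes silently
```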

A test suite with Chai

Let’s modify the test suite in Listing 1 to use Chai.

Listing 2. A Mocha test suite using Chai’s BDD-style assertion library

const {describe, it} = require('mocha');

const {expect} = require('chai');

describe('Simple test suite (with chai):', function() {
    it('1 === 1 should be true', function() {
        expect(1).to.equal(1);
    });
});

Here’s the output:

$ cd ~/src/projects/IBM-Developer/Node.js/Course/Unit-9
$ ./node_modules/.bin/mocha test/example2.js


  Simple test suite (with chai):
    ✓ 1 === 1 should be true


  1 passing (6ms)

The example in Listing 2 may not look much different from the first test suite, but using Chai’s BDD-style assertion syntax makes the test more readable.

Sinon: A library of test doubles

A test double is a block of code that replaces some portion of production code for testing purposes. Test doubles are useful when it’s inconvenient, or even impossible, to run test cases against production code. This could happen when the production code needs to connect to a database, or when you need to obtain the precise system time, as you’ll see shortly.

Writing test doubles yourself is tedious and error prone. Fortunately, the Node community has developed several packages for test doubles. One of the most popular is Sinon. You’re likely to run across it, so let’s take a look here.

Testing with spies, stubs, and mocks

There are several types of test doubles:

  • A spy wraps a real function in order to record information about it, like how many times it was called and with what arguments.
  • A fake is a standalone function that does nothing except pretend to be a real function, so that it can record information about how it is called.
  • A mock is like a fake function, but with expectations you specify, such as how many times the function is called and with what arguments.
  • A stub is like a spy, but replaces the real function with behavior you specify.
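To make the spy idea concrete before reaching for Sinon, here is a hand-rolled spy in plain JavaScript. This is a sketch of the concept only, not Sinon’s implementation.

```javascript
// Wrap a real function so that calls pass through but are recorded.
function makeSpy(fn) {
  function spy(...args) {
    spy.calls.push(args);        // record the arguments of every call
    return fn.apply(this, args); // still call the real function
  }
  spy.calls = [];
  return spy;
}

const spiedMax = makeSpy(Math.max);
spiedMax(1, 5);
spiedMax(2, 3);

console.log(spiedMax.calls.length); // 2
console.log(spiedMax.calls[0]);     // [ 1, 5 ]
```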

Listing 3 shows how to create a spy.

Listing 3. A Sinon spy that spies on console.log()

const {describe, it} = require('mocha');

const {expect} = require('chai');

const sinon = require('sinon');

describe('When spying on console.log()', function() {
    it('console.log() should still be called', function() {
        let consoleLogSpy = sinon.spy(console, 'log');
        let message = 'You will see this line of output in the test report';
        console.log(message);
        expect(consoleLogSpy.calledWith(message)).to.be.true;
        consoleLogSpy.restore();
    });
});

The output looks like this:

$ cd ~/src/projects/IBM-Developer/Node.js/Course/Unit-9
$ ./node_modules/.bin/mocha test/example3.js


  When spying on console.log()
You will see this line of output in the test report
    ✓ console.log() should still be called


  1 passing (7ms)

Notice the console.log output shows up in the test report. This is because a spy doesn’t replace the function; it just spies on it. You use a spy when you want the real function to be called, but you need to make assertions about it. You can see this in Listing 3, where the test stipulates that console.log() must be called with a precise message.

How to write a stub

To replace console.log() with a function where you provide the implementation, you would use a stub or a mock. Listing 4 shows how to write a very simple stub.

Listing 4. A Sinon stub that replaces console.log() with a function that does nothing

const {describe, it} = require('mocha');

const {expect} = require('chai');

const sinon = require('sinon');

describe('When stubbing console.log()', function() {
    it('console.log() is replaced and the stub is called instead', function() {
        let consoleLogStub = sinon.stub(console, 'log');
        let message = 'You will NOT see this line of output in the test report';
        console.log(message);
        consoleLogStub.restore();
        expect(consoleLogStub.calledWith(message)).to.be.true;
    });
});

Here’s the output from this example:

$ cd ~/src/projects/IBM-Developer/Node.js/Course/Unit-9
$ ./node_modules/.bin/mocha test/example4.js


  When stubbing console.log()
    ✓ console.log() is replaced and the stub is called instead


  1 passing (8ms)

Notice that in this example, the message does not show up in the test report. This is because the real console.log() was replaced with the stub. Also note that consoleLogStub.restore() must be called before the expect() assertion: if the assertion were to fail while console.log() was still stubbed, the failure output in the test report would be swallowed by the stub.

Mocks are very similar to stubs, so I won’t show you how to write one here. We’ll revisit mocks and stubs later in this unit.

Istanbul: A library for testing code coverage

Code coverage is a code quality metric that measures how much of the potentially executable code under test was actually executed when the tests ran (that is, during a single invocation of npm test, as you’ll see shortly).

In automated tests, you want to exercise as much executable code as possible. The more code you exercise, the more bugs you uncover. So for code coverage you want to strive for 100 percent, meaning that every possible line of executable code is run with your automated tests.

Istanbul is a popular library for testing code coverage in JavaScript, and there is a Node module for it. Istanbul instruments your code on the fly, before your tests run, and then keeps track of how much of your code was executed during the test run.

You’ll use Istanbul’s command-line interface, nyc, later in this unit.
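To get a feel for what instrumentation means, here is a toy, hand-instrumented function that counts which branches a test run actually executed. (Istanbul does this rewriting automatically, and at much finer granularity; the classify() function here is invented for illustration.)

```javascript
// Toy coverage tracking: each branch increments a counter when it runs.
const hits = {ifBranch: 0, elseBranch: 0};

function classify(n) {
  if (n >= 0) {
    hits.ifBranch++; // instrumentation
    return 'non-negative';
  } else {
    hits.elseBranch++; // instrumentation
    return 'negative';
  }
}

// A "test run" that only exercises one branch...
classify(5);
classify(10);

// ...reports less than full branch coverage.
const branches = Object.values(hits);
const covered = branches.filter((n) => n > 0).length;
console.log('branch coverage: ' + covered + '/' + branches.length); // 1/2
```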

ESLint: A pluggable linting utility

A linter is a tool that analyzes your code for potential errors, a process sometimes called static code analysis.

Running a linter on your code is called linting, a technique that can be very handy for discovering issues like:

  • Undeclared variables
  • Unused variables or functions
  • Long source code lines
  • Poorly formatted comments
  • Missing documentation comments
  • Lots more

You might remember from Unit 7 that the npm registry contains many, many tools for linting. ESLint is my tool of choice for this tutorial. ESLint reports different issues depending on the configuration (config) you apply, and some configs are more forgiving than others. In addition to the eslint:recommended config, we use the shareable eslint-config-google config. There are lots of shareable configs to choose from: search the npm registry and see for yourself.

Later in this unit, I show you how to configure and run the linter as two separate npm scripts in your npm test lifecycle.

Set up the example project

So far you’ve had an overview of testing techniques and some of the tools you can use to automate testing in your npm lifecycle. In this section we set up an example project and write a few tests. The examples are hands on, so you should install all of the packages as instructed in the next sections. Likewise, you should carefully follow the instructions for writing tests, in order to study and learn from the test code.

Project setup

To set up the example project and tests, you need to do the following:

  1. Initialize the project
  2. Install the following packages:
    • Mocha
    • Chai
    • Sinon
    • Istanbul CLI – nyc
  3. Configure package.json

We’ll go through these steps together. Once you have the example project set up, you can begin writing test code.

Step 1. Initialize the project

The easiest way to get started is to open a terminal window or command prompt, navigate to the Unit-9 directory and enter this command: touch package.json. This will create an empty package.json file.

Replace the empty package.json with the following:

{
  "name": "logger",
  "version": "1.0.0",
  "main": "logger.js",
  "license": "Apache-2.0",
  "scripts": {
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/jstevenperry/node-modules.git"
  }
}

Step 2. Install the test packages

Next, you’ll install each of the test packages.

Install Mocha

From the terminal window, enter this command: npm i --save-dev mocha

The output looks like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm i --save-dev mocha
npm notice created a lockfile as package-lock.json. You should commit this file.
+ mocha@5.2.0
added 24 packages from 436 contributors and audited 31 packages in 1.739s
found 0 vulnerabilities

This installs the latest version of Mocha and saves mocha in the devDependencies section of your package.json.

Install Chai

From the terminal window, enter this command: npm i --save-dev chai.

The output looks like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm i --save-dev chai
+ chai@4.1.2
added 7 packages from 20 contributors and audited 39 packages in 1.157s
found 0 vulnerabilities

This installs the latest version of Chai and saves chai in the devDependencies section of your package.json.

Install Sinon

From the terminal window, enter this command: npm i --save-dev sinon.

The output looks like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm i --save-dev sinon
+ sinon@6.1.3
added 11 packages from 317 contributors and audited 57 packages in 1.687s
found 0 vulnerabilities

This installs the latest version of Sinon and saves sinon in the devDependencies section of your package.json.

Install the Istanbul CLI

From the terminal window, enter this command: npm i --save-dev nyc.

The output looks like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm i --save-dev nyc
+ nyc@12.0.2
added 284 packages from 148 contributors and audited 2246 packages in 5.744s
found 0 vulnerabilities

This installs the latest version of the Istanbul command-line interface (called nyc) and saves nyc to the devDependencies section of your package.json.

Step 3. Configure package.json

Now open package.json and add the following to the scripts element:

    "test": "nyc mocha ./test"

This script tells npm to invoke the Istanbul CLI (nyc) along with Mocha, which will discover and run tests that are located in the ./test directory.

Before you go any further, make sure your package.json looks like Listing 5 below.

Listing 5. package.json with Mocha, Chai, Sinon, and Istanbul installed, along with the test script

{
  "name": "logger",
  "version": "1.0.0",
  "main": "logger.js",
  "license": "Apache-2.0",
  "scripts": {
    "test": "nyc mocha ./test"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/jstevenperry/node-modules.git"
  },
  "devDependencies": {
    "chai": "^4.1.2",
    "mocha": "^5.2.0",
    "nyc": "^12.0.2",
    "sinon": "^6.1.3"
  }
}

Don’t worry if your installed package versions are different from mine in Listing 5. I’ve installed the latest versions at the time of this writing, but these will be updated over time.

Writing test code

This section prepares you for the final exercise in this unit, where you’ll complete test and implementation code for logger.js. Before you actually write code, I suggest reading through the entire section. It will help you to have a good overview of what you need to do before jumping into the logger.js and test-logger.js modules. You may also refer back to this section as you begin writing code, which you will do very soon.

Write a test using Mocha and Chai

For this example, we use Mocha as the test framework and Chai as the assertions library supporting BDD-style assertions.

In Listing 2 you saw that writing tests with Mocha and Chai is pretty easy. You saw how to create whatever test suite grouping you need by nesting a describe() function call within another describe() function call (and another within that, and so on). Now you’ll write a test case using the it() function, as shown here:

Listing 6. Test code from test-logger.js

.
.
describe('Module-level features:', function() {
    // TRACE
    describe('when log level isLevel.TRACE', function() {
        it('should have a priority order lower than Level.DEBUG', function() {
            expect(Level.TRACE.priority).to.be.lessThan(Level.DEBUG.priority);
        });
        it('should have outputString value of TRACE', function() {
            expect(Level.TRACE.outputString).to.equal('TRACE');
        });
    });
.
.

The nested test suite in Listing 6 contains two test cases:

  • The first test case ensures that the Level.TRACE priority property value is lower than that of Level.DEBUG.
  • The second test case ensures that the outputString property is TRACE.

Notice the use of function chains in the assertions expect().to.be.lessThan() and expect().to.equal(). Function chains are common in BDD-style assertions and make them very readable. It should be obvious what each of these assertions is doing just by looking at its function chain.

When you run the test case from Listing 6, the output will look like this (don’t try this just yet or you will get lots of errors):

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm test
.
.
> logger@1.0.0 test /Users/sperry/home/development/projects/IBM-Developer/Node.js/Course/Unit-9
> nyc mocha ./test

.
.
  Module-level features:
    when log level isLevel.TRACE
      ✓ should have a priority order lower than Level.DEBUG
      ✓ should have outputString value of TRACE
.
.

I’ve removed some of the output from the full test suite, just to show you the output from the part that appeared in Listing 6.

Write a test using Mocha and Sinon

Sinon is a test library that lets you use test doubles in your tests. In this section, you’ll learn more about using stubs and mocks in your Mocha tests, with examples for both.

Write a stub

A stub function is a test double that replaces the behavior of some function with custom behavior you write yourself. This is particularly handy when you want to, say, replace a call to Date.now() with a value of your choosing.

A stub is a type of spy, but whereas a stub replaces a real function with your function, a spy does not. Instead, it merely observes and reports back, just like a real spy would.

Open the logger.js module, and look at the log() function. You’ll see this:

 01 function log(messageLogLevel, message, source, logFunction) {
 02   let computedMessage = null;
 03   if (messageLogLevel.priority >= logLevel.priority) {
 04     let now = Date.now();
 05     let outputString = now.toString() + ':' + messageLogLevel.outputString;
 06     computedMessage = outputString + ': ' + ((source) ? source + ': ' : '') +
 07       message;
 08     (logFunction) ? logFunction(computedMessage) : logMessage(computedMessage);
 09   }
 10   return computedMessage;
 11 }

Test cases are all about predictability, and the logger uses Date.now() (line 4) as part of the computedMessage (line 6) which is the string that actually gets logged (line 8).

Using stubs for timestamps

As you probably know, Date.now() returns the number of milliseconds since the Unix Epoch, and so is always changing. How can you properly assert the value of an expected message?

The solution is to stub out the real Date.now() function and replace it with one that returns a value of your choosing, and use that value in your assertions.

Creating a stub is super easy using Sinon:

let dateStub = sinon.stub(Date, 'now').returns(1111111111);
.
// Use the stubbed version
.
dateStub.restore();

First, you call the sinon.stub() function, passing the object to be stubbed (the Date class) and the function you want to stub (now()). Then you add a call to returns() onto the returned stub (the Sinon API is fluent), instructing the stub to return 1111111111 whenever it is called.

From that point forward (until dateStub.restore() is called), whenever Date.now() is called, it will be replaced by your stub. The value 1111111111 will be returned instead of the running millisecond value normally returned by Date.now().

Remember that stubs replace real functions, so you need to call the restore() function on the stub if you want Date.now() to work correctly after your test runs.

Write a mock

A mock function is a test double that replaces a real function with a set of expectations. Unlike a stub, a mock does not provide an implementation.

If you examine the log() function of the logger.js module, you’ll notice that the last parameter is a function that performs the actual logging of the message. Take a look at the helper functions and you’ll notice that they all use this parameter, as well.

If you omit this parameter, log() calls logMessage(), an internal function that delegates to console.log(). For testing, we don’t want to clutter the test report output (which goes to the console by default) with program output. We’ll replace the last parameter to each logger helper method with a mock. We can then verify that it has been invoked the expected number of times.

Creating a mock is super easy using Sinon:

 01      describe('Default log level of INFO', function() {
 02          let mockLogFunction;
 03
 04          before(function() {
 05              // Make sure the default is correct
 06              logger.setLogLevel(logger.DEFAULT_LOG_LEVEL);
 07              mockLogFunction = sinon.mock().exactly(4);
 08          });
 09          after(function() {
 10              mockLogFunction.verify();
 11          });
 12          // TRACE
 13          it('should not log a TRACE level message and return null', function() {
 14              expect(logger.trace('BAR', 'foo()', mockLogFunction)).to.be.null;
 15          });

Remember that one of the things a test framework does is define a test lifecycle. You can see this in the above example, where Mocha’s before() function runs once before any of the test cases in the group, and its after() function runs once, after all of the test cases have run.

The mock is set up on line 7, where the expectation is that the function in this particular test group will be called four times.

Once all the test cases have run, after() calls the mock’s verify() function to check the expectation. If the expectation wasn’t met, verify() throws an exception.
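The expectation-then-verify pattern can be sketched in plain JavaScript, to show conceptually what sinon.mock().exactly(n) and verify() are doing. This is an illustration only, not Sinon’s implementation.

```javascript
// A toy mock: records calls and verifies an expected call count.
function makeMock() {
  function mock() {
    mock.callCount++;
  }
  mock.callCount = 0;
  mock.expectedCount = null;
  mock.exactly = (n) => {
    mock.expectedCount = n; // set the expectation
    return mock; // fluent API, like Sinon's
  };
  mock.verify = () => {
    if (mock.callCount !== mock.expectedCount) {
      throw new Error('expected ' + mock.expectedCount +
          ' calls, got ' + mock.callCount);
    }
  };
  return mock;
}

const mockLogFunction = makeMock().exactly(2);
mockLogFunction('INFO: first message');
mockLogFunction('INFO: second message');
mockLogFunction.verify(); // passes: called exactly twice
```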

That’s enough test examples with Mocha. Before you start writing code, let’s install, configure, and run the linter.

Install ESLint

From the terminal window, enter this command: npm i --save-dev eslint eslint-config-google.

The output looks like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm i --save-dev eslint eslint-config-google
+ eslint-config-google@0.9.1
+ eslint@5.2.0
added 119 packages from 146 contributors and audited 2491 packages in 7.696s
found 0 vulnerabilities

This installs the latest version of ESLint and the Google shared config for ESLint. npm also saves both eslint and eslint-config-google to the devDependencies section of your package.json, which now should look like this:

  "devDependencies": {
    "chai": "^4.1.2",
    "eslint": "^5.2.0",
    "eslint-config-google": "^0.9.1",
    "mocha": "^5.2.0",
    "nyc": "^12.0.2",
    "sinon": "^6.1.3"
  }

Configure and run the linter

There are a number of ways to configure ESLint, but I recommend using package.json in order to minimize the number of metadata files you have to drag around in your projects. For this example, we use both the eslint:recommended and google configs for ESLint.

To start, paste the following snippet into your package.json just after the devDependencies section. You will need to add a comma to the end of that section so the JSON is well-formed:

  "eslintConfig": {
    "extends": ["eslint:recommended", "google"],
    "env": {
        "node" : true
    },
    "parserOptions": {
      "ecmaVersion": 6
    },
    "rules" : {
      "max-len": [2, 120, 4, {"ignoreUrls": true}],
      "no-console": 0
    }
  },
  "eslintIgnore": [
    "node_modules"
  ]

The extends element is an array of the configs to use, in this case eslint:recommended and google.

Adding the env element and the node value of true enables using global variables like require and module.

We’ll use ES6 syntax (parserOptions). Also note that under rules I’ve made the max-len more to my liking (the default is 80).

We also definitely don’t want to lint all of the JavaScript code in node_modules; that would produce a frightening amount of output.

Additional scripts

Next, add the following snippets to the scripts element in your package.json:

    "lint": "eslint .",
    "lint-fix": "npm run lint -- --fix",
    "pretest": "npm run lint"

The first script, lint, actually runs the linter (eslint) and tells it to lint everything in the current directory. ESLint will also lint every subordinate directory, except ones you’ve explicitly told it to ignore (via the eslintIgnore element, as shown above).

I created the second script, lint-fix, to allow the linter to automatically fix common code gotchas, like extra spaces on a line, or no space after a single-line comment. With this script installed, all you have to do is run npm run lint-fix and the linter will clean up your code for you. (Maybe one of these days I’ll remember to add a space after the // when I comment out a line of code, but it isn’t looking good.)

Finally, the pretest script takes advantage of npm’s built-in pre-script hook: npm runs pretest automatically before test, so your code gets linted every time you run npm test. You can remove the pretest script if it gets to be too annoying, but I like having it in there so I don’t forget to lint the code every time I make a change.
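With these additions, the scripts element of your package.json should look something like this:

```json
  "scripts": {
    "lint": "eslint .",
    "lint-fix": "npm run lint -- --fix",
    "pretest": "npm run lint",
    "test": "nyc mocha ./test"
  }
```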

A note about directory structure (keep it clean)

If you ask 10 developers where to put unit tests in a Node project, you’re likely to get a few different answers. Personally, I like to keep the source as close to package.json as possible, with minimal directory structure. I want a layout that is clean and maintainable, so that I can easily find what I’m looking for.

The upcoming exercise uses the following directory layout:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ tree .
.
├── logger.js
├── solution
│   ├── exercise-1
│   │   ├── logger.js
│   │   └── test
│   │       └── test-logger.js
│   ├── logger.js
│   ├── package.json
│   └── test
│       └── test-logger.js
└── test
    ├── example1.js
    ├── example2.js
    ├── example3.js
    ├── example4.js
    └── test-logger.js

5 directories, 11 files

Bottom line: If your company has a standard, follow it. Otherwise, just adopt a minimal directory structure that keeps your code clean.

Exercise: Unit testing logger.js

You’ve seen how to write Mocha tests with Chai and Sinon, and how to run the linter. Now it’s time to write some code, which you’ll do on your own.

To help get you started, I’ve created skeleton versions of the following:

  • The test module you need to write first, located in ./test/test-logger.js
  • The implementation code, located in logger.js

I recommend test-driven development (TDD), so make sure to write all the test code first, then run npm test.

Because the linter runs first in pretest, you will need to work through any linting errors before your test cases will run at all. Don’t get down on yourself if you see lots of errors at first. The two configs for this exercise are pretty picky, but the end result is super clean code.

Once the test cases finally begin to run, your first task is to watch them all fail. Then (and only then), should you begin writing the logger.js implementations until the tests pass.

There are TODO: comments in the skeletons where you will need to write code. Everywhere you see a TODO: comment, follow the instructions. I’ve also left a few reference blocks of code to assist you.

Good luck!

If all else fails

If you just can’t seem to get the code working, and you really want to see it work, there is a special script in solution/package.json called test-solution that runs the solution code. To run it:

  1. Copy solution/package.json to the Unit-9 directory (this will overwrite any changes you’ve made so far to your Unit-9/package.json).
  2. Invoke npm to run the script: npm run test-solution.

You should see output like this:

Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ cp solution/package.json .
Ix:~/src/projects/IBM-Developer/Node.js/Course/Unit-9 sperry$ npm run test-solution

> logger@1.0.0 test-solution /Users/sperry/home/development/projects/IBM-Developer/Node.js/Course/Unit-9
> npm run pretest && nyc mocha ./solution/test


> logger@1.0.0 pretest /Users/sperry/home/development/projects/IBM-Developer/Node.js/Course/Unit-9
> npm run lint


> logger@1.0.0 lint /Users/sperry/home/development/projects/IBM-Developer/Node.js/Course/Unit-9
> eslint .



  Module-level features:
    when log level isLevel.TRACE
      ✓ should have a priority order lower than Level.DEBUG
      ✓ should have outputString value of TRACE
    Level.DEBUG
      ✓ should have a priority order less than Level.INFO
      ✓ should have an outputString value of DEBUG
    Level.INFO
      ✓ should have a priority order less than Level.WARN
      ✓ should have specific outputString values
    Level.WARN
      ✓ should have a priority order less than Level.ERROR
      ✓ should have an outputString value of WARN
    Level.ERROR
      ✓ should have a priority order less than Level.FATAL
      ✓ should have an outputString value of ERROR
    Level.FATAL
      ✓ should have a priority order less than Level.OFF
      ✓ should have an outputString value of FATAL
    Level.OFF
      ✓ should have an outputString value of OFF
    Default log level of INFO
      ✓ should not log a TRACE level message and return null
      ✓ should not log a DEBUG level message and return null
      ✓ should log an INFO level message
      ✓ should log a WARN level message
      ✓ should log an ERROR level message
      ✓ should log a FATAL level message

  Log Levels and the API:
    when current log Level=TRACE
      ✓ should output TRACE level message
      ✓ should output DEBUG level message
      ✓ should output INFO level message
      ✓ should output WARN level message
      ✓ should output ERROR level message
      ✓ should output FATAL level message
    when current log Level=DEBUG
      ✓ should not output TRACE level message
      ✓ should output DEBUG level message
      ✓ should output INFO level message
      ✓ should output WARN level message
      ✓ should output ERROR level message
      ✓ should output FATAL level message
    when current log Level=INFO
      ✓ should not output TRACE level message
      ✓ should not output DEBUG level message
      ✓ should output INFO level message
      ✓ should output WARN level message
      ✓ should output ERROR level message
      ✓ should output FATAL level message
    when current log Level=WARN
      ✓ should not output TRACE level message
      ✓ should not output DEBUG level message
      ✓ should not output INFO level message
      ✓ should output WARN level message
      ✓ should output ERROR level message
      ✓ should output FATAL level message
    when current log Level=ERROR
      ✓ should not output TRACE level message
      ✓ should not output DEBUG level message
      ✓ should not output INFO level message
      ✓ should not output WARN level message
      ✓ should output ERROR level message
      ✓ should output FATAL level message
    when current log Level=FATAL
      ✓ should not output TRACE level message
      ✓ should not output DEBUG level message
      ✓ should not output INFO level message
      ✓ should not output WARN level message
      ✓ should not output ERROR level message
      ✓ should output FATAL level message
    when current log Level=OFF
      ✓ should not output TRACE level message
      ✓ should not output DEBUG level message
      ✓ should not output INFO level message
      ✓ should not output WARN level message
      ✓ should not output ERROR level message
      ✓ should not output FATAL level message

  Code Coverage:
    ✓ should invoke logMessage() at least once so coverage is 100%


  62 passing (42ms)

-----------------|----------|----------|----------|----------|-------------------|
File             |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
-----------------|----------|----------|----------|----------|-------------------|
All files        |      100 |      100 |      100 |      100 |                   |
 solution        |      100 |      100 |      100 |      100 |                   |
  logger.js      |      100 |      100 |      100 |      100 |                   |
 solution/test   |      100 |      100 |      100 |      100 |                   |
  test-logger.js |      100 |      100 |      100 |      100 |                   |
-----------------|----------|----------|----------|----------|-------------------|

Conclusion to Unit 9

This unit started with a high-level introduction to testing and linting. I then showed you how to:

  • Write Mocha tests using Chai and Sinon
  • Plug in Istanbul for code coverage metrics
  • Configure and run ESLint to analyze your code for potential mistakes and bugs

The unit concludes with a solo exercise, which is to finish writing the tests and implementation code found in the skeleton logger files.

In Unit 10 you’ll learn how to use Winston, one of the most popular logging packages in the Node ecosystem.

Answers to challenge questions

Challenge question #1: How did I get that test (example1.js) to run?

First, I installed Mocha:

npm i --save-dev mocha

Unless you install a package globally (using the -g flag), all packages are installed relative to the current directory. Packages like Mocha that include executable programs always install them in ./node_modules/.bin. The Mocha executable (mocha) takes as arguments a list of test files, or a directory where the tests are located.

The source code is available in GitHub, and when you clone it to your computer, the relative path to the source for Unit 9 is ./IBM-Developer/Node.js/Course/Unit-9. On my computer the full path is ~/src/projects/IBM-Developer/Node.js/Course/Unit-9.

To run a single test, just pass the name of the JavaScript module that contains the test, like this:

./node_modules/.bin/mocha test/example1.js
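Rather than typing the full path each time, you can also wire the local binary into your npm scripts, since npm adds ./node_modules/.bin to the PATH when it runs a script. A minimal package.json sketch (the test directory and the ESLint pretest hook are assumptions based on this unit's setup):

```json
{
  "scripts": {
    "pretest": "eslint .",
    "test": "mocha test/"
  }
}
```

With this in place, running npm test invokes the pretest script (ESLint) first, then Mocha.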

Challenge question #2: Why is it necessary to call restore() on a Sinon stub before invoking any assertion logic?

When Sinon creates a test double such as a stub, it replaces the original function with the implementation you specify, directly in the JavaScript code running in the V8 engine. If you don’t call restore(), the stub remains in place for all subsequent calls to the stubbed function in that instance of the V8 engine.
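To see what restore() protects you from, here is a plain-Node sketch (no Sinon, hypothetical values) of what stubbing and then restoring Date.now() boils down to; Sinon does this save-and-restore bookkeeping for you:

```javascript
// Save the original so it can be put back later (this is what Sinon
// records internally when it creates the stub).
const realDateNow = Date.now;

// "Stub": replace the real function with one that returns a fixed value.
Date.now = function () {
  return 1111111111;
};
console.log(Date.now()); // 1111111111, the stub is live

// "Restore": put the original back. Skip this, and every later test in
// this V8 instance would still see the frozen value.
Date.now = realDateNow;
console.log(Date.now() > 1111111111); // true, the real clock again
```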


Quiz: Test your understanding

Answer true or false

  1. A function stub allows you to replace a real function whose behavior you specify.

  2. A linter is used to run automated unit tests based on source code comments, or “lints.”

  3. A code coverage tool is used to ensure that you have a unit test for every function in your module.

Check your answers

Choose the best answer

  1. Which of the following statements are true about the pretest script in package.json?

    A. It runs before every unit test in the suite.

    B. It runs before the npm test script.

    C. It never runs, and is used to document the test script.

    D. It runs both before and after the test script.

  2. Which of the following is not a benefit of using a test framework like Mocha?

    A. It discovers tests to be run.

    B. It defines an API for writing tests.

    C. It reports test results.

    D. It automatically runs a code coverage tool that you plug into its configuration.

  3. Which of the following are not reasons to perform automated testing?

    A. Automated tests are prone to human error because they run so often that the test results have lost all meaning.

    B. Automated testing reduces the development lifecycle because tests can be run by the computer instead of running them manually.

    C. Automated tests can be auto-generated by npm and plugged into the autotest phase of the npm lifecycle.

    D. Automated testing reduces the possibility of human error because the test code itself can be debugged, and then run consistently and predictably thereafter.

    E. None of the above

Check your answers

Fill in the blank

  1. The two major topics covered in this unit are _ and _.

  2. When writing a Mocha test, you use the _() function to group test cases, and the _() function to write a test case.

  3. If you just need to replace a real function with a custom implementation that you provide (and don’t care about checking expectations), you should use a __ type of test double.

  4. In this unit you used _ for creating test doubles, __ for code coverage, _ for assertions, and _ as the test framework.

Check your answers

Bonus programming exercises

  1. Add a new helper function called severe() to the logger API. The new function should have a corresponding Level that has a priority that is between ERROR and FATAL, and an outputString of SEVERE. Add unit tests to ./test/test-logger.js first, then write the implementation until the tests pass.

  2. Replace the Sinon dateNowStub stub in test-logger.js with a fake. Does it work the same? Why can you not replace dateNowStub with a spy?

Check your answers

Check your answers

Answers for true or false questions

  1. True: By stubbing the real function, you replace it with your own implementation. This is useful for live functions like Date.now(), whose output you want to control.

  2. False: A linter is a tool that runs static analysis on your code to spot potential errors and violations of the coding conventions defined by the configuration you use.

  3. False: A code coverage tool like Istanbul’s CLI (nyc) instruments your code dynamically. It then runs your unit tests and reports the percentage of executable code that is actually executed.
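To make "instruments your code" concrete, here is a toy sketch in plain JavaScript (not Istanbul's actual mechanism, and this logMessage() is a simplified stand-in): instrumented code bumps a counter per statement, and the report is just the ratio of counters that fired:

```javascript
// Toy coverage instrumentation: each tracked statement increments a
// counter; the "report" is how many counters ended up non-zero.
const hits = { line1: 0, line2: 0 };

function logMessage(level) {
  hits.line1++;                       // this statement always runs
  if (level === 'OFF') {
    hits.line2++;                     // only runs for the OFF branch
    return null;
  }
  return '[' + level + '] message';
}

logMessage('INFO');                   // a "test" that misses the OFF branch

const covered = Object.values(hits).filter((n) => n > 0).length;
console.log(covered + '/2 statements covered'); // 1/2 statements covered
```

A real tool rewrites every statement, branch, and function this way before running your tests, which is how it can report the per-file percentages shown in the coverage table above.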

Answers for multiple choice questions

  1. B: The pretest script runs before the test script when you run npm test.

  2. D: While you can use a code coverage tool with a test framework, you do this outside the test framework. Code coverage is not something the framework provides.

  3. A and C: A is gibberish, and there is no such thing as the autotest phase of the npm lifecycle.

Answers for fill in the blank questions

  1. Testing, Linting

  2. describe, it

  3. Stub (though if you do care about setting and checking expectations, such as how many times the function was called, you should use a mock).

  4. Sinon, Istanbul (nyc), Chai, Mocha

Solutions to the bonus programming exercises

  1. See ./solution/exercise-1/logger.js for the modified logger.js code, and ./solution/exercise-1/test/test-exercise-1.js for the modified ./test/test-logger.js code.

  2. Your code should look like this:

In test-logger.js:

.
.
// FORMERLY The Date.now() stub - NOW A FAKE!!!
let dateNowFake = null;

// Do this before every test
beforeEach(function() {
    dateNowFake = sinon.fake.returns(1111111111);
    sinon.replace(Date, 'now', dateNowFake);
});
// Do this after every test
afterEach(function() {
    sinon.restore();
});
.
.

You can’t replace dateNowStub with a spy because a spy only wraps an existing function to record information about its calls; it doesn’t let you replace the function’s behavior.
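To see why, here is a plain-Node sketch of the difference (hypothetical names, no Sinon): a spy-style wrapper records the call and then delegates to the real function, so the real return value still comes back and the test never gets the controlled timestamp:

```javascript
// A spy WRAPS the real function: it records the call, then calls
// through, so the real behavior (and return value) still happens.
const calls = [];

function realNow() {
  return Date.now();               // the real clock
}

function spiedNow() {              // spy-like wrapper (hypothetical)
  calls.push('called');            // record the call...
  return realNow();                // ...then delegate to the real function
}

console.log(typeof spiedNow());    // number: the real value came back
console.log(calls.length);         // 1: the call was recorded
```

A stub or fake, by contrast, replaces the function body entirely, which is what lets the tests pin Date.now() to a known value.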