Performance testing smart contracts developed within VS Code using Hyperledger Caliper

The IBM Blockchain Platform extension for VS Code helps developers create, test, and debug smart contracts. However, a key concern for developers remains: How performant is the developed smart contract?

Hyperledger Caliper is a blockchain benchmarking tool that’s designed to perform benchmarks on deployed smart contracts, enabling analysis of throughput, latency, and resource consumption of the blockchain network as the smart contract is being used. This tutorial shows you how to use Hyperledger Caliper to performance test a Hyperledger Fabric smart contract that’s developed and deployed using the IBM Blockchain Platform VS Code extension.

Prerequisites

To complete this tutorial, you will need:

  • Visual Studio Code with the IBM Blockchain Platform extension installed
  • Node.js and npm, which are used to install the Caliper CLI
  • To have completed the IBM Blockchain Platform VS Code extension tutorial, leaving the demoContract smart contract deployed on the local Fabric network

Estimated time

Once you’ve completed the VS Code tutorial, which should take about 20 – 30 minutes, you will build from the tutorial end point to:

  1. Obtain and configure Caliper
  2. Create Caliper test assets
  3. Run the performance benchmark

These subsequent steps should take approximately 30 – 40 minutes.

Steps

To complete this tutorial, you’ll need to complete three main steps, detailed below:

  1. Obtain Caliper
  2. Create Caliper test assets
  3. Run the benchmark

Step 1. Obtain Caliper

In this tutorial, you will use the Caliper CLI, a Node.js module that can be installed through npm. Once you’ve installed the Caliper CLI, you will need to bind the tool to a nominated SDK for a target blockchain. This tutorial has been produced using Caliper-CLI version 0.1.0 and Fabric-SDK-Node version 1.4.4.

Tasks:

  1. Globally install the Caliper CLI module using the following terminal command:

    npm install -g --only=prod @hyperledger/caliper-cli@0.1.0
    
  2. Bind to the desired SDK using the terminal command:

    caliper bind --caliper-bind-sut fabric --caliper-bind-sdk 1.4.4 --caliper-bind-args=-g
    

You can now use Caliper to benchmark a Fabric network using clients that are configured to use the nominated 1.4.4 SDK.

Step 2. Create Caliper test assets

Caliper requires two configuration files:

  • A network configuration file that describes the system under test and provides connection requirements for the network
  • A benchmark configuration that describes the performance benchmark workload and references user-specified test files

All required test assets will be created in a new directory, which acts as a workspace.

Tasks:

  1. Within VS Code, create a new folder named benchmarks.
  2. Right-click the folder and select Open in Terminal.

You will now populate this directory with the network configuration file and the benchmark test assets required by Caliper.

Network configuration file

The network configuration file describes the system under test and provides connection requirements for clients that interact with the network. It can be specified in either YAML or JSON format.

For Fabric networks, the network configuration file is an extended common connection profile, augmented by elements that are required by Caliper.

Tasks:

  1. Switch to the IBM Blockchain Platform extension window and disconnect from any gateways.
  2. Export the connection profile from VS Code:
    • Under Fabric Gateways -> Local Fabric, right-click and select Export Connection Profile.
    • Save this under the benchmarks folder as network_config.json.
  3. Export the local Fabric wallet from VS Code:
    • Under Fabric Wallets -> Local Fabric Wallet, right-click and select Export Wallet.
    • Save this under the benchmarks folder as myWallet.

Next, edit the exported connection profile to add the following required attributes:

{
    "caliper": {},
    "wallet": "",
    "clients": {},
    "channels": {}
}

Tasks:

  1. Open the exported connection profile network_config.json.
  2. Identify the Distributed Ledger Technology (DLT) engine that is being tested. At the top of the file add a caliper object to the schema that contains a single property named blockchain with the string value fabric.

    "caliper": {
           "blockchain": "fabric"
    },
    
  3. Identify the wallet that contains the identities required to interact with the network:

    • Add a key named wallet.
    • Within VS Code, right-click the wallet folder, select Copy Path, and provide this as a value for the wallet property:

       "wallet": "<fully-qualified-path-wallet>",
      
  4. Specify the clients that Caliper can use to interact with the network by providing a mapping of client identities to client objects:

    • Nest the existing client object within a new JSON object that has the name of one of the identities in the exported wallet. In this scenario, the name is admin.
    • Nest the mapped identity created above within a new JSON object named clients.

      "clients": {
         "admin": {
             "client": {
                 "organization": "Org1",
                 "connection": {
                     "timeout": {
                         "peer": {
                             "endorser": "300"
                         },
                         "orderer": "300"
                     }
                 }
             }
         }
      },
      
  5. Identify the channels that are available, and the smart contracts that are deployed to these channels:

    • Add a channels object to the schema under the clients object.
    • Within the channels object, add another object named mychannel, which is the name of the default channel created by VS Code and within which the smart contract is deployed.
  3. Within the mychannel object, add an array named chaincodes. Within this array, add an object containing key/value pairs for the ID and version of the deployed smart contract, which are demoContract and 0.0.1, respectively:
     "channels": {
         "mychannel": {
             "chaincodes": [
                 {
                     "id": "demoContract",
                     "version": "0.0.1"
                 }
             ]
         }
     },
    
  6. Save the modified file.

You now have a network configuration file that can be used by Caliper.
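Pulling the additions together, the Caliper-specific entries in the modified network_config.json now look as follows (abbreviated; the exported profile’s original top-level entries, such as its peers and certificate authorities, remain in place, and the wallet path placeholder is yours to fill in):

```json
{
    "caliper": {
        "blockchain": "fabric"
    },
    "wallet": "<fully-qualified-path-wallet>",
    "clients": {
        "admin": {
            "client": {
                "organization": "Org1",
                "connection": {
                    "timeout": {
                        "peer": { "endorser": "300" },
                        "orderer": "300"
                    }
                }
            }
        }
    },
    "channels": {
        "mychannel": {
            "chaincodes": [
                { "id": "demoContract", "version": "0.0.1" }
            ]
        }
    }
}
```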

Benchmark configuration

A benchmark consists of repeatedly executing named test callback files over a series of rounds. Each round is controlled in duration, and the load during each round is driven by one or more clients, which are themselves governed by a rate-control mechanism.

Benchmark configuration requires:

  • One or more test callback files that interact with the deployed smart contract and define the operation to be investigated
  • A configuration file that defines the benchmark rounds and references the defined callbacks

Next, you will create a single test callback file for interacting with the deployed smart contract, and a configuration file that references the test callback file within a single test round.

The test callback file

The test callback file is the point of interaction with the deployed smart contract during the benchmark round. Each test callback file must export the following functions:

  • init — used to initialise any required items for use in the run section
  • run — used to interact with the smart contract method during the monitored phase of the benchmark
  • end — used to clean up after completion of the run phase

The deployed smart contract contains the complete set of CRUD operations for an asset; for brevity, we will only investigate the readMyAsset smart contract method.

The Caliper blockchain object uses the following methods to interact with a deployed smart contract:

  • invokeSmartContract(ctx, contractId, contractVersion, args)
  • querySmartContract(ctx, contractId, contractVersion, args)

    Where:

    • ctx is a user context
    • contractId is the smart contract name
    • contractVersion is the smart contract version
    • args is an object that contains:
      • chaincodeFunction — the name of the smart contract function to call
      • invokerIdentity — the identity to use when performing the function call
      • chaincodeArguments — an array of arguments to pass to the function when it is being called

Here is a template for a basic test callback that interacts with the deployed smart contract named demoContract at version 0.0.1:

'use strict';

module.exports.info  = 'Template callback';

const contractID = 'demoContract';
const version = '0.0.1';

let bc, ctx, clientArgs, clientIdx;

module.exports.init = async function(blockchain, context, args) {
};

module.exports.run = function() {
    return Promise.resolve();
};

module.exports.end = async function() {
};

Tasks:

  1. Create a subfolder named callbacks within the benchmarks folder.
  2. Create a file called queryAssetBenchmark.js in the callbacks folder.
  3. Open the file and insert the above template code.
  4. Save the file.

Now it’s time to populate the init, run, and end template functions.

init

This function is used to persist passed arguments and prepare any items that are required within the run function. At a minimum, you need to persist the blockchain and context arguments; since the readMyAsset function requires a set of assets to query, you also need to create those here. You can pass a set of user-specified arguments to the init function, which means the number of assets to create during the test can be specified as a variable. Note that multiple clients may be used during the benchmark round, all calling the same test callback, so it is important to disambiguate between clients. This is most easily achieved by using the unique client identifier that is a property of the context.
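As a standalone illustration of the disambiguation scheme (this sketch is not part of the callback file itself): because each client prefixes its asset IDs with its own index, two clients creating the same number of assets never collide.

```javascript
// Sketch: per-client asset IDs built from the client index and a loop
// counter, mirroring the `${clientIdx}_${i}` pattern used in the callback.
function makeAssetIds(clientIdx, assetCount) {
    const ids = [];
    for (let i = 0; i < assetCount; i++) {
        ids.push(`${clientIdx}_${i}`);
    }
    return ids;
}

console.log(makeAssetIds(0, 3).join(',')); // → 0_0,0_1,0_2
```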

Tasks:

  1. Persist blockchain, context, and args as global variables bc, ctx, and clientArgs, respectively.
  2. Assume that the number of desired assets to create is given as clientArgs.assets, and create a for loop that’s bounded between 0 and the number of assets to be created.
  3. You will create assets within the for loop using the smart contract method createMyAsset. Since the method may throw an error, you should guard against this with a try-catch block and print the error to the console to ease debugging.
    • Create a try-catch block in the for loop.
    • In the catch, add an information statement reporting the error.
    • In the try, await the completion of an invokeSmartContract call on the blockchain object, passing the known context, contract name, contract version, and an object that contains:
      • chaincodeFunction set as createMyAsset
      • invokerIdentity set as admin, an identity in the exported wallet
      • chaincodeArguments with an array that contains:
        • a unique asset identity that’s formed by the client identifier and the current for loop index
        • a string to persist under the asset identity
module.exports.init = async function(blockchain, context, args) {
    bc = blockchain;
    ctx = context;
    clientArgs = args;
    clientIdx = context.clientIdx.toString();
    for (let i=0; i<clientArgs.assets; i++) {
        try {
            const assetID = `${clientIdx}_${i}`;
            console.log(`Client ${clientIdx}: Creating asset ${assetID}`);
            const myArgs = {
                chaincodeFunction: 'createMyAsset',
                invokerIdentity: 'admin',
                chaincodeArguments: [assetID, `UUID: ${assetID}`]
            };
            await bc.bcObj.invokeSmartContract(ctx, contractID, version, myArgs);
        } catch (error) {
            console.log(`Client ${clientIdx}: Smart Contract threw with error: ${error}` );
        }
    }
};

run

This is the function that is run repeatedly during the recorded benchmark test phase; consequently, it should be as concise as possible. Your goal is to evaluate the readMyAsset smart contract function, performing a query on one of the assets created within the init phase. The function must return an unresolved promise and must not block, which enables the driving client to make multiple concurrent run calls.

Tasks:

  1. Create a string identity for the asset to query, formed by concatenating the test client index with a random integer between 0 (inclusive) and the number of created assets (exclusive).
  2. Return the call on querySmartContract, passing the known context, contract name, contract version, and an object that contains:
    • chaincodeFunction set as readMyAsset
    • invokerIdentity set as admin, an identity in the exported wallet
    • chaincodeArguments with an array that contains the asset to query in this invocation
module.exports.run = function() {
    const randomId = Math.floor(Math.random()*clientArgs.assets);
    const myArgs = {
        chaincodeFunction: 'readMyAsset',
        invokerIdentity: 'admin',
        chaincodeArguments: [`${clientIdx}_${randomId}`]
    };
    return bc.bcObj.querySmartContract(ctx, contractID, version, myArgs);
};
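To see why run returns the promise rather than awaiting it, consider this standalone sketch, where submitTx is a hypothetical stand-in for querySmartContract that resolves asynchronously:

```javascript
// Sketch: because each call returns immediately with a promise, the
// driving client can keep several transactions in flight at once.
function submitTx(id) {
    return new Promise(resolve => setImmediate(() => resolve(id)));
}

async function driver() {
    // Fire several `run`-style calls without awaiting each in turn…
    const inFlight = [submitTx(1), submitTx(2), submitTx(3)];
    // …then gather the results; Promise.all preserves input order.
    return Promise.all(inFlight);
}

driver().then(results => console.log(results.join(','))); // → 1,2,3
```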

end

The end function is used to clean up after a test. To ensure test repeatability, you need to delete all assets created within the init phase. You can use the same for loop from the init phase, modified to call the smart contract function deleteMyAsset and passing only the asset identity to delete.

module.exports.end = async function() {
    for (let i=0; i<clientArgs.assets; i++) {
        try {
            const assetID = `${clientIdx}_${i}`;
            console.log(`Client ${clientIdx}: Deleting asset ${assetID}`);
            const myArgs = {
                chaincodeFunction: 'deleteMyAsset',
                invokerIdentity: 'admin',
                chaincodeArguments: [assetID]
            };
            await bc.bcObj.invokeSmartContract(ctx, contractID, version, myArgs);
        } catch (error) {
            console.log(`Client ${clientIdx}: Smart Contract threw with error: ${error}` );
        }
    }
};

You have now completed the specification of a test callback, which creates test assets in the init phase, queries the created assets in the run phase, and deletes the test assets in the end phase.

The benchmark configuration file

The benchmark configuration file is a YAML file that defines the complete performance test to be run against the deployed smart contract through the specification of:

  • The number of test clients to use when generating the test load
  • The number of test rounds
  • The duration of each round
  • The load generation method during each round
  • The callback (test interaction) to use within each round

Now you’ll start building a YAML benchmark configuration file that uses the queryAssetBenchmark.js test callback. Note that YAML files are case sensitive; all labels are to be specified in lower case.

Tasks:

  1. Create a new file named myAssetBenchmark.yaml within the benchmarks folder and open the file for editing.
  2. Add a root-level literal block named test that describes the test to run and contains:
    • A name key with the value my-asset-benchmark
    • A description key with a short description as the value
    • A literal block named clients that defines the type and number of test clients to use. For now, add the following key/value pairs:
      • type: local
      • number: 2
    • A literal block named rounds that is left blank
  3. Add a root-level literal block named monitor that contains a single key named type with a single array entry of none as a value. This indicates that you will not be performing any resource monitoring during the benchmark testing.
---
test:
  name: my-asset-benchmark
  description: Benchmarking for VS Code sample
  clients:
    type: local
    number: 2
  rounds:

monitor:
  type:
  - none

The rounds literal block contains each benchmark test round that is to be run, in a sequence format, headed by a unique round label. Rounds may be used to benchmark different smart contract methods, or the same method in a different manner. Each test round block contains the following:

  • label — a unique label to use for the round
  • description — a description of the round being run
  • chaincodeId — the chaincode (smart contract) ID under test
  • [txDuration | txNumber] — a specifier for the length of the round, which may be duration or transaction based
  • rateControl — a rate control method with options
  • callback — a relative path to a user-defined test file for the smart contract that is being investigated
  • arguments — an optional array of arguments to be passed to the user test file (callback) when being invoked

You will now populate these.

Tasks:

  1. Start a new sequence with a key named label and the value ‘queryAsset’.
  2. Within the queryAsset sequence, add a key named description with the value “Query asset benchmark test.”
  3. Within the queryAsset sequence, add a key named chaincodeId with the value demoContract.
  4. Within the queryAsset sequence, add a literal block named txDuration with a single sequence entry of 30. This indicates that the benchmark test will be run once for 30 seconds.
  5. Within the queryAsset sequence, add a literal block named rateControl that contains a single sequence entry with:
    • A key named type with the string value of fixed-backlog. This indicates that you will be driving the benchmark to maintain a fixed transaction backlog of pending transactions.
    • A literal block named opts with a key named unfinished_per_client that has the value 2. This indicates that each client will be driven at a rate that maintains 2 pending transactions.
  6. Within the queryAsset sequence, add a key named callback whose value is the relative path from the benchmarks folder to the queryAssetBenchmark.js file.
  7. Within the queryAsset sequence, add a literal block named arguments. Add a single key named assets with the value 10. This will be passed to the test callback during the init phase.
---
test:
  name: my-asset-benchmark
  description: Benchmarking for VS Code sample
  clients:
    type: local
    number: 2
  rounds:
    - label: queryAsset
      description: Query asset benchmark test
      chaincodeId: demoContract
      txDuration:
      - 30
      rateControl:
      - type: fixed-backlog
        opts:
          unfinished_per_client: 2
      callback: callbacks/queryAssetBenchmark.js
      arguments:
        assets: 10
monitor:
  type:
  - none
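The fixed-backlog rate control used above can be pictured with a toy model (this is a simplified sketch of the idea, not Caliper’s implementation):

```javascript
// Sketch: a client only submits a new transaction while its count of
// pending (unfinished) transactions is below the configured target,
// matching unfinished_per_client: 2 in the benchmark configuration.
function shouldSubmit(pendingCount, unfinishedPerClient) {
    return pendingCount < unfinishedPerClient;
}

console.log(shouldSubmit(1, 2)); // → true  (backlog below target, submit)
console.log(shouldSubmit(2, 2)); // → false (target reached, wait)
```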

You now have a benchmark configuration file with companion test callback files that can be used by Caliper.

Step 3. Run the benchmark

You will now use the Caliper CLI to complete a performance benchmark against the default IBM Blockchain Platform VS Code network, using the resources that you created in the preceding steps. The command to be issued is caliper benchmark run, and it must be provided with details of the network configuration file, the benchmark configuration file, and the workspace that is being used. Based on the resources that you have created, you must supply the following argument pairings:

  • caliper-networkconfig: network_config.json
  • caliper-benchconfig: myAssetBenchmark.yaml
  • caliper-workspace: ./

Since the network has already been configured with chaincode installed and instantiated, the only action that Caliper needs to perform is the test phase, using a Fabric gateway with discovery enabled. To specify these options, pass the following additional flags to the CLI command:

  • caliper-flow-only-test
  • caliper-fabric-usegateway
  • caliper-fabric-discovery

Tasks:

  1. Ensure that you are in the benchmarks directory created in Step 2, within which the following resources should now exist:

    .
    ├── callbacks
    │   └── queryAssetBenchmark.js
    ├── myAssetBenchmark.yaml
    ├── myWallet
    │   └── admin
    │       ├── <UUID>-priv
    │       ├── <UUID>-pub
    │       └── admin
    └── network_config.json
    
  2. Run the Caliper CLI command

    caliper benchmark run --caliper-benchconfig myAssetBenchmark.yaml --caliper-networkconfig network_config.json --caliper-workspace ./ --caliper-flow-only-test --caliper-fabric-usegateway --caliper-fabric-discovery
    

You will see the operation of Caliper on the console as the testing progresses, culminating in a summary output of the benchmark. An HTML report will also be generated containing the same information that was printed to the console during the benchmarking process.

The report will detail the following items of information for each benchmark round:

  • Name — the round name, which correlates to the test round label from the benchmark configuration file
  • Succ/Fail — the number of successful/failing transactions
  • Send Rate — the rate at which Caliper issued the transactions
  • Latency (max/min/avg) — statistics relating to the time taken in seconds between issuing a transaction and receiving a response
  • Throughput — the average number of transactions processed per second
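As a rough illustration of how these summary statistics relate to raw transaction timings (a toy calculation, not Caliper’s code):

```javascript
// Each record is [sendTime, finishTime, success], with times in seconds.
const txs = [
    [0.0, 0.5, true],
    [0.1, 0.7, true],
    [0.2, 0.4, false]
];

const succ = txs.filter(t => t[2]).length;          // successful transactions
const fail = txs.length - succ;                     // failing transactions
const latencies = txs.map(t => t[1] - t[0]);        // per-tx response times
const maxLatency = Math.max(...latencies);
const minLatency = Math.min(...latencies);
const avgLatency = latencies.reduce((a, b) => a + b, 0) / txs.length;
const elapsed = Math.max(...txs.map(t => t[1])) - Math.min(...txs.map(t => t[0]));
const sendRate = txs.length / elapsed;              // transactions issued per second

console.log(`Succ/Fail: ${succ}/${fail}, avg latency: ${avgLatency.toFixed(2)}s`);
// → Succ/Fail: 2/1, avg latency: 0.43s
```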

Summary

You have now successfully benchmarked the deployed smart contract on the default local network available from the IBM Blockchain Platform VS Code extension. You can repeat the test, varying the benchmark parameters. For information on the full set of parameters, please see the official Caliper documentation.

Nick Lincoln