Skill Level: Beginner

CI/CD has become the norm in the digital commerce space, and enterprises at large are adopting it at scale, taking it to the next level with Kubernetes and container orchestration techniques alongside their existing CI/CD automation toolsets.


  1. Kubernetes
  2. Docker
  3. Continuous Integration and Continuous Deployment
  4. Container Orchestration


  1. Overview

    In the era of smart digital commerce on the cloud, automation and CI/CD patterns are being adopted massively across industries, predominantly in the fast-moving digital commerce sector. With the introduction of cloud platforms, Kubernetes, and container orchestration techniques, CI/CD has reached the next level, and it provides tremendous benefits to stakeholders across the value chain: SI vendors, customers' business and IT teams, and end users. With growing research and investment in these platforms by cloud vendors as well as large enterprises, adopting one of the industry best practices in this area is becoming part of standard digital release hygiene. Depending on the maturity of their IT and business ecosystem, e-commerce customers join this journey at different levels, from entry-level CI/CD (preparing a build-and-deploy pipeline and automating application-layer releases) to more sophisticated CI/CD of infrastructure deployment using container orchestration. The container orchestration can vary from simple web containers to custom e-commerce containers fulfilling the needs of monolithic packages.

    Planning and putting the right strategy in place for frequently deploying new features, updates, or fixes to the production environment is essential to achieve optimal results. This document uses the WebSphere Commerce runtime environment, running on Docker-based containers, as an example to illustrate a Continuous Integration and Continuous Delivery (CI/CD) pipeline. A CI/CD pipeline helps you automate processes in the development and operations life cycle, from the moment a developer checks code into a source code repository to deploying that code to your production environment after running the essential automation and regression test suites. Proper configuration of the pipeline is essential for seamless releases, whether for an incremental change or a complete product or application release, and it becomes critical when the release frequency is high. Such situations are experienced by large e-commerce retailers such as Amazon, Costco, or Walmart, and by banking customers such as Standard Bank.

  2. Planning a CI/CD Pipeline

    The continuous integration and continuous deployment (CI/CD) pipeline can be logically separated into two parts: the CI portion facilitates packaging custom code and building custom Docker images, while the second portion (CD) covers deploying the custom images to new or existing environments. It is essential to analyze and plan properly before choosing the tools to implement the CI/CD pipeline. Well-known CI/CD tools for commerce package deployments include Jenkins and IBM UrbanCode Deploy; various other tools are available on the market and can be selected by assessing their benefits for the customer environment and the type of projects being executed.

     Pipeline Overview 

    Commerce DevOps automation is achieved by automating the CI/CD pipelines. To make the pipeline easier to understand, we have split it into two separate pipelines, (i) Continuous Integration and (ii) Continuous Deployment, and we illustrate the detailed steps involved in each.

  3. Continuous Integration (CI) Pipeline

    The continuous integration (CI) pipeline links the various stages of development: a developer commits code to a source repository; the build utility picks up the source code or configuration changes and prepares a custom package; Docker utilities then build a custom Docker image and push it to a Docker repository (for example, Azure Container Registry (ACR) in the case of the Azure cloud model). The diagram below illustrates the detailed steps involved in a typical continuous integration pipeline.




    1. The developer implements the code in the development environment using a development IDE, for example Rational Application Developer (RAD) or Eclipse in the case of WebSphere Commerce or Hybris package development.

    2. The developer pushes the code to a source code repository such as Git or SVN.

    3. A Jenkins or IBM UrbanCode Deploy pipeline, with the help of scripting, automatically invokes the WebSphere Commerce Build (WCB) utility to pull the code from the source code repository and build a customization package.

    4. Push the custom package to an artifact repository such as Nexus.

    5. Implement scripting for the pipeline to pull the customization package from the Nexus repository and create the updated Docker images by consuming the Dockerfile(s).

    6. Push the Docker image to a predefined Docker repository (for example, a private Docker repository in the case of IBM Private Cloud, or Azure Container Registry in the case of the Azure cloud).

    a. Run Engine commands are needed to configure the application within the Docker images, for instance to configure a data source connecting the application to the commerce database, or a search core connecting the application to the search engine.
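    The CI steps above can be sketched as a shell script. All names here (the registry host, the build script, the Nexus URL) are illustrative placeholders rather than actual product commands, and the `run` helper only echoes each command so the sketch can be dry-run safely without Docker or a build server.

```shell
#!/usr/bin/env bash
# Dry-run sketch of CI steps 3-6. Every path, host, and script name is a
# placeholder; redefine run() to execute for real: run() { "$@"; }
run() { echo "+ $*"; }

REGISTRY="myregistry.azurecr.io"                  # hypothetical Azure Container Registry
IMAGE="$REGISTRY/commerce-app:${BUILD_NUMBER:-1}" # tag by CI build number

run ./wcb_build.sh --out dist/custom-package.zip  # 3. WCB builds the customization package
run curl --upload-file dist/custom-package.zip \
    "https://nexus.example.com/repository/releases/"  # 4. publish package to Nexus
run docker build -t "$IMAGE" .                    # 5. bake the package into a custom image
run docker push "$IMAGE"                          # 6. push the image to the registry
```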

  4. Continuous Delivery (CD) Pipeline

    The next step in this journey is to define the Continuous Deployment (CD) pipeline, which depends on the container platform and on the software used to build an infrastructure that can support the containers. This document refers to the WebSphere Commerce application, and the container nomenclature is chosen accordingly. Depending on the microservices architecture or the layering of another commerce package's architecture, the container nomenclature needs to be refined by updating the corresponding container configuration files as part of setting up the continuous deployment pipeline process.

    Below is a generic diagram illustrating the Continuous Deployment (CD) pipeline, which the development (IT) or operations team can leverage, customizing the existing automation toolset within the enterprise to automate continuous deployment. Automating the CD pipeline involves defining the deployment templates and configurations needed to achieve compartmentalized deployments to specific infrastructure layers. For example, certain packages need to be deployed to the web server layer, whereas others target the search or commerce application layers; a few might even be deployed to the foundation or network layers for very specific needs.



    The following steps specialize the Continuous Deployment (CD) process for WebSphere Commerce by utilizing the WCB build tools and the Docker Compose utility. Below is a gist of the high-level steps involved in deploying a Commerce Live environment.

    1. Install Docker (the specific version recommended by the product specification)
    2. Install Docker Compose (the version recommended by the product specification)
    3. Download and store the Docker Compose files (.yml) in the predefined Docker repository path
    4. Execute the docker-compose commands with the appropriate control parameters, including the docker-compose input file
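    Steps 3 and 4 above can be sketched as follows. The directory path, service names, and image names are illustrative only, not the actual WebSphere Commerce compose definitions, and the final command is echoed rather than executed so the sketch runs without a Docker daemon.

```shell
#!/usr/bin/env bash
# Hypothetical compose path and service layout; adjust to the compose files
# shipped with the product.
mkdir -p /tmp/commerce-deploy

# 3. Store the compose file (.yml) in the predefined path
cat > /tmp/commerce-deploy/docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: myregistry.azurecr.io/commerce-app:latest    # custom image from the CI pipeline
    ports:
      - "8443:8443"
  search:
    image: myregistry.azurecr.io/commerce-search:latest
EOF

# 4. Run docker-compose with the input file (echoed here as a dry run)
echo "+ docker-compose -f /tmp/commerce-deploy/docker-compose.yml up -d"
```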




    Docker Images vs. Containers 

    A Docker image is an immutable file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run. In other words, Docker images are templates for containers: they include a runtime environment along with all of the libraries, configuration files, and dependencies.

    A Docker container is a virtualized runtime environment in which users can isolate applications from the underlying system. Containers are compact, portable units in which you can start up an application. Since Docker containers provide strong isolation, they do not interfere with other containers running on the same node or on other nodes. Containers virtualize at the application layer, whereas legacy virtual machines virtualize at the hardware level; this difference makes containers extremely lightweight compared to the previous generation of VM techniques.

     Docker Repository 

    A Docker repository is a place where different versions of a Docker image and its associated files are stored. By executing docker commands such as docker pull, the required images are pulled from these repositories to the local machine; when the next stable version of an image is built and ready, it is pushed back to the repository for reuse. Docker Hub and other third-party repository hosting services are called Docker registries; a Docker registry stores a collection of Docker repositories.
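    The pull/tag/push flow against a registry follows the `registry-host/repository:tag` naming pattern. The private registry host below is a placeholder, and the commands are echoed via a dry-run helper so the sketch runs without Docker installed.

```shell
#!/usr/bin/env bash
# "run" echoes each command: this is a dry-run sketch, not a live session.
run() { echo "+ $*"; }

run docker pull ubuntu:22.04                                # pull from Docker Hub (a public registry)
run docker tag ubuntu:22.04 registry.example.com/base/ubuntu:22.04
run docker push registry.example.com/base/ubuntu:22.04      # store in a private repository for reuse
```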


  5. Container Orchestration and Tools

    This section explains the different layers involved in container orchestration and virtualization. It covers container orchestration in general, and the common platform and deployment architecture limitations it can solve for IT enterprises and customers at large.

    Container orchestration is the automated arrangement, coordination, and maintenance of software or application containers: the automation of all aspects of coordinating and maintaining the life cycle of containers and their dependent environments or dependencies. Popular container orchestration platforms are based on open-source tools such as Kubernetes and Docker Swarm, or on commercial offerings such as Red Hat OpenShift.

    Depending on the container orchestration tool selected, configuration files instruct the tool how to network between containers, how to resolve dependent libraries, database and other connectivity, and where to store logs (a GFS file system, etc.). The orchestration tool first identifies a suitable host for the container based on a predefined set of specifications, and then schedules the deployment of containers into clusters to achieve optimum performance.

    Two common container orchestration techniques, or toolings, are (i) Kubernetes container orchestration and (ii) Docker container orchestration. Kubernetes enables container clustering via its container orchestration engine, whereas Docker container orchestration (also known as Docker Swarm) packages and runs applications as containers, pulls existing container images from a Docker repository or service registry, and deploys the containers on server or cloud environments.
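    In Kubernetes, the configuration files mentioned above take the form of declarative manifests. The sketch below writes a minimal, hypothetical Deployment manifest (names, labels, and image are illustrative), and the `kubectl apply` step is echoed so no cluster is required to run it.

```shell
#!/usr/bin/env bash
# Minimal (hypothetical) Kubernetes Deployment manifest: the replica count
# tells the orchestrator how many containers to keep running, and the image
# field tells it which image to pull from the registry.
cat > /tmp/commerce-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commerce-app
spec:
  replicas: 3                      # orchestrator maintains three running containers
  selector:
    matchLabels:
      app: commerce-app
  template:
    metadata:
      labels:
        app: commerce-app
    spec:
      containers:
      - name: commerce-app
        image: myregistry.azurecr.io/commerce-app:latest
        ports:
        - containerPort: 8443
EOF

# Apply the manifest (echoed here as a dry run)
echo "+ kubectl apply -f /tmp/commerce-app.yaml"
```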

    Need for Container Orchestration and Problems Solved 

    The gist of the need for container orchestration is to automate the configuration, scheduling, provisioning, and deployment of containers. It enables configuring applications to suit the container environment on which they are deployed, and scaling containers to efficiently balance application workloads across the infrastructure. Load balancing, traffic routing, and service discovery for containers, along with allocating resources between containers and clusters, are also achieved via effective container orchestration. Other key capabilities it handles are securing the interactions between containers and health monitoring. Beyond this list, container orchestration has evolved to handle many more features and to solve industry challenges in large enterprises.

  6. Kubernetes Cluster View for WebSphere Commerce

    This section illustrates the logical view of a Kubernetes cluster, using the WebSphere Commerce package as an example. It typically contains three logical partitions: the Kubernetes master node, Commerce Stage node(s), and a cluster of Commerce Live nodes. Depending on the commerce NFR parameters, the number of live nodes might vary from a couple of nodes to dozens of application-layer nodes. An individual node is constituted of logical groups of pods; for example, a Commerce Stage node might contain pods of web server containers, app server containers, store server containers, different search-layer containers, and vault and utility containers. These pods are simply logical groupings of containers performing duties of a similar nature in the overall platform architecture. This fine-grained separation of server architecture and functionality is done to achieve better utilization of the infrastructure through an efficient container orchestration mechanism. It also helps in spawning or maintaining the required infrastructure capacity at the click of a button, or as part of a scheduled CI/CD pipeline. Typical use cases are retail holidays and other commerce marketing and campaign activities, where the infrastructure capacity is scaled out and in based on transaction volume and resource utilization.
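    Scaling capacity out for a peak event and back in afterwards reduces, in Kubernetes terms, to adjusting replica counts. The deployment and namespace names below are hypothetical, and the commands are echoed via a dry-run helper so the sketch runs without a cluster.

```shell
#!/usr/bin/env bash
# Dry-run helper; redefine as run() { "$@"; } to execute against a real cluster.
run() { echo "+ $*"; }

run kubectl scale deployment commerce-app --replicas=12 -n live    # scale out for holiday peak
run kubectl scale deployment commerce-search --replicas=6 -n live
run kubectl scale deployment commerce-app --replicas=4 -n live     # scale back in afterwards
```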




  7. Commerce Deployment View using Container Orchestration

    This section covers the commerce deployment architecture using Azure Kubernetes as the container orchestration technique. Commerce pods are distributed across authoring and live clusters, each containing its own set of pods within its nodes: authoring nodes contain the authoring set of app, web, and search pods, whereas live nodes contain the live set of app, web, and search pods. Depending on whether the cluster is on the authoring or the live side, the search pods are named search-master, search-repeater, or search-slave in the namespace section of the container orchestration YAML file (for example, values.yaml). Similarly, the authoring side of the application cluster communicates with the authoring database, and the live side with the live database.
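    If the values.yaml files are consumed through Helm, the authoring/live split above can be sketched as one chart deployed twice with different values files per side. The chart name, release names, and values file names below are assumptions for illustration, and the commands are echoed as a dry run.

```shell
#!/usr/bin/env bash
# Dry-run helper; the chart and values file names are hypothetical.
run() { echo "+ $*"; }

# Same chart, different values per side: the values file selects the search
# pod roles (master/repeater vs. slave) and the target database.
run helm upgrade --install commerce-auth ./commerce-chart -n authoring -f values-authoring.yaml
run helm upgrade --install commerce-live ./commerce-chart -n live -f values-live.yaml
```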



  8. CICD Pipelines and tooling

    This section covers commonly used CI/CD pipeline tools such as Jenkins, IBM UrbanCode Deploy, Ansible, and Chef, and briefly explains the various plug-ins and associated tooling required to define an effective pipeline. Depending on the enterprise application landscape and packaging, the toolset needed for the CI/CD pipeline might vary, so identifying and defining the right toolset is an essential factor. Though a wide range of products and solutions is available in this area, below are commonly used CI/CD pipeline toolsets for reference.

    Jenkins and Pipelines 

    Jenkins provides an easy way to set up and configure a continuous integration / continuous delivery (CI/CD) environment for almost any combination of languages and source code repositories using Jenkins pipelines. Jenkins enables a faster and more robust way to integrate the entire chain of build, test, and deployment tools, with some assistance from added custom scripts for individual or custom steps.

    Jenkins Plug-ins

    Jenkins comes with a default set of plug-ins for almost any combination of programming languages, build and deploy tools such as Ant, Apache Maven, and Gradle, and source code repositories such as Git, TortoiseSVN, and Bitbucket. It also allows you to add additional plug-ins, or to integrate with custom plug-ins if required.

    Jenkins is just one of the toolsets for implementing and automating CI/CD pipelines; Ansible, IBM UrbanCode Deploy, and Chef are other commonly used CI/CD automation toolsets.

  9. CICD Journey and Best practices

    Adopting best practices along the CI/CD journey can enrich the overall experience and yield optimum results for an enterprise, while reducing the mistakes and misconceptions that can occur until the enterprise matures in this journey. It starts with selecting the right toolset for build and deployment and extends to the efficient use of the container orchestration toolset.


    1. Identify the best-fit build and deploy tools for the custom commerce packages used, or for the open-source platform used for the project or product implementation
    2. Define and implement the code commit, code build, and packaging pipeline effectively, considering an Agile delivery and implementation model
    3. Adopt efficient container orchestration and deployment tooling to improve the development team's efficiency and enable seamless code integration and container deployments
    4. Configure and customize the deployment template files to suit the custom commerce package, or the specific customer/enterprise environment and application landscape
    5. Define the development and operations roles and responsibilities carefully to avoid any process gaps
    6. Hook the automated test suites into the pipeline so that code quality and gateway checks improve
    7. Add any additional scripting to validate environment health and application component sanity, either via custom scripts or by utilizing cloud environment health check mechanisms (instance or system status checks and CloudWatch utilities)
