Archived | Configure traditional WebSphere session persistence on OpenShift

Archived content

Archive date: 2021-03-19

This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.


IBM Cloud Pak™ for Applications, built on the Red Hat® OpenShift® Container Platform, provides a long-term solution to help you transition between public, private, and hybrid clouds, and to create new business applications. While traditional IBM® WebSphere®, a key component of IBM Cloud Pak for Applications, isn’t a “built-for-the-cloud” runtime like WebSphere Liberty, it can still run in containers, and reap the benefits of consistency and reliability that containers provide.

During this traditional WebSphere operational modernization journey, application session management configuration plays a key role. By default, WebSphere places session objects in memory. However, the administrator has the option of enabling persistent session management, which instructs WebSphere to place session objects in a database (persistent store).
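The difference between the two modes can be sketched in a few lines of Python. This is an illustrative sketch only (not WebSphere code); the class names are invented for the example. It shows why an in-memory session disappears when its server instance is replaced, while a database-persisted session survives:

```python
# Illustrative sketch (not WebSphere code): in-memory vs. persisted sessions.

class InMemorySessions:
    """Session objects held in the server's own memory (the default)."""
    def __init__(self):
        self.store = {}

class DbSessions:
    """Session objects kept in a shared persistent store."""
    def __init__(self, database):
        self.database = database  # outlives any single server instance

# In-memory: a "pod" restart wipes the session.
pod = InMemorySessions()
pod.store["user1"] = {"counter": 5}
pod = InMemorySessions()          # restart: new pod, empty memory
print(pod.store.get("user1"))     # -> None, the session is gone

# Persisted: a replacement pod reads the same database.
database = {}
pod = DbSessions(database)
pod.database["user1"] = {"counter": 5}
pod = DbSessions(database)        # restart: new pod, same database
print(pod.database["user1"])      # -> {'counter': 5}
```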

But how do you configure session database persistence for an application on an IBM Cloud Pak for Applications traditional WebSphere container, where you usually deploy a stateless application?

The recommended approach is to refactor your application to be stateless, so that it scales cleanly and without side effects. However, a common request when a customer evaluates an xPaaS environment is to migrate an application without modifying it. Let’s see how to maintain session state by configuring session database persistence for an application on an IBM Cloud Pak for Applications traditional WebSphere container.

Configure and build

You can configure session database persistence using wsadmin scripts, as described in the IBM Knowledge Center product documentation.

The following code example is a sample wsadmin script that configures session database persistence using Db2:

Server = AdminConfig.getid('/Cell:' + AdminControl.getCell() + '/Node:' + AdminControl.getNode() + '/Server:server1')
print 'Get SessionManager Configuration'
SessMgr = AdminConfig.list('SessionManager', Server)
print 'Start - Enable Session Database Persistence'
AdminConfig.modify(SessMgr, '[[sessionPersistenceMode "DATABASE"]]')
SessDB = AdminConfig.list('SessionDatabasePersistence', SessMgr)
AdminConfig.modify(SessDB, '[[userId "db2inst1"] [password "db2inst1"] [tableSpaceName ""] [datasourceJNDIName "jdbc/Sessions"]]')
print 'End - Enable Session Database Persistence'
print 'Save Configuration'
AdminConfig.save()

We will use this wsadmin script while building a tWAS Docker image.

    FROM ibmcom/websphere-traditional:

    COPY ./session-mgmt/PASSWORD /tmp/PASSWORD

    # Db2 drivers
    COPY ./session-mgmt/db2/ /opt/IBM/db2drivers/

    # wsadmin script to configure session DB
    COPY ./session-mgmt/ /work/config/

    # Property file to install the sample session app
    COPY ./session-mgmt/app-install.props /work/config/app-install.props

    # Application binaries
    COPY ./session-mgmt/SessionSample-1.0.war /work/config/SessionSample-1.0.war

    RUN /work/
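Before wiring the build into a pipeline, the image can be built and smoke-tested locally. The image name and tag below are assumptions; substitute your own:

```shell
# Build the tWAS image from the Dockerfile above (assumed name/tag)
docker build -t twas-sessions:1.0 .

# Run it locally, exposing the HTTP and HTTPS application ports
docker run -d -p 9080:9080 -p 9443:9443 twas-sessions:1.0
```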

The following code is a snippet of the SessionServlet that creates and increments a counter attribute in session scope. If the session is lost for any reason, the counter falls back to zero; otherwise, it is incremented on every request to the servlet.

    HttpSession httpSession = request.getSession();

    Integer counter = (Integer) httpSession.getAttribute("counter");
    if (counter == null) {
        out.println("Session is empty");
        counter = Integer.valueOf(0);
    } else {
        counter = counter + 1;
        out.println("Counter = " + counter);
    }
    System.out.println("counter = " + counter);
    httpSession.setAttribute("counter", counter);

    out.println("<a href='SessionServlet'>Increment counter</a>");
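The counter logic can also be sketched in Python to make the behavior easy to follow. This is an illustrative mirror of the servlet, not the actual application code:

```python
def handle_request(session, out):
    """Mirror of the servlet's counter handling (illustrative sketch)."""
    counter = session.get("counter")
    if counter is None:
        out.append("Session is empty")   # first request: no session state yet
        counter = 0
    else:
        counter += 1                      # every later request increments
        out.append("Counter = %d" % counter)
    session["counter"] = counter          # write back to session scope
    return counter

session, out = {}, []
handle_request(session, out)              # first request: empty session -> 0
for _ in range(5):
    handle_request(session, out)          # five "Increment counter" clicks
print(session["counter"])                 # -> 5
```

As long as `session` survives (for tWAS, because it is persisted to the database), the counter keeps climbing; a fresh, empty `session` resets it to zero.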

Let’s build and deploy the session sample tWAS application with session DB configured using a Jenkins Pipeline.

On successful completion of the Pipeline, you should see that:

  1. An OpenShift build is created and run that pulls the tWAS image from Docker Hub
  2. The JDBC provider and DataSource are configured on tWAS
  3. Session DB persistence is configured
  4. The sample session application is installed
  5. The target tWAS application image is pushed to the Docker Registry on OCP
  6. A DeploymentConfig is created on OCP pointing to the target tWAS application image in the Docker Registry
  7. A Service is created on OCP that exposes the application ports 9080 and 9443
  8. A Route is created on OCP that enables us to access the application on a public address
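The objects the pipeline created can be inspected from the command line. The project name below is an assumption; substitute the project you deployed into:

```shell
# List the build, deployment, service, and route objects created by the pipeline
oc get buildconfig,deploymentconfig,service,route -n twas-sessions
```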


Once the Pipeline is executed successfully, perform the following steps to verify session database persistence:

  1. On the OCP Application Console, click the Overview section to find the Sessions Application Deployment.

  2. Click on Route to access the Sessions application.


    You should see the “Welcome to Session Servlet” page.


  3. Click on the link provided (“Click here”) to call the SessionServlet.

    On this page, you can find the details of the pod (bhlxm) that served the request. The SessionServlet gets the counter value from the HttpSession. Because this is the first invocation of the SessionServlet, the counter value is empty.


  4. Now, click the Increment counter link.

    The Counter is incremented to 1, and the session attributes are persisted in the database.


  5. Click the Increment counter link until the Counter = 5.

  6. Now, from the OCP Cluster Console, kill the pod by deleting it.


  7. Wait for OCP to create another pod and for the application to be back up and running.



  8. Now, click the Increment counter link again.

    You should see that the new pod (jz42r) continues to maintain the session and increments the counter to 6 by getting the session attributes from the database.


Sticky sessions, also known as session affinity, in OpenShift

Many enterprise applications are not yet cloud-ready, let alone designed for microservices. As a result, many of them require session stickiness to ensure that all traffic from a user’s session goes to the same pod, providing a consistent user experience. However, if the endpoint pod terminates, whether through a restart, scaling, or a configuration change, this statefulness can disappear.

In OpenShift, when a route has multiple endpoints, HAProxy distributes requests to the route among the endpoints based on the selected load-balancing strategy:

  • roundrobin
  • leastconn
  • source

By default, sticky sessions for passthrough routes are implemented using the source load-balancing strategy in OpenShift.
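The idea behind the `source` strategy can be sketched in a few lines of Python. This is an illustration of source-IP hashing, not HAProxy's actual hash function, and the endpoint names are invented for the example:

```python
# Illustrative sketch of source-IP balancing: hash the client's source IP
# to pick an endpoint, so the same client always reaches the same pod.
import hashlib

def pick_endpoint(client_ip, endpoints):
    digest = hashlib.sha256(client_ip.encode("utf-8")).hexdigest()
    return endpoints[int(digest, 16) % len(endpoints)]

endpoints = ["pod-a", "pod-b", "pod-c"]
chosen = pick_endpoint("10.0.0.7", endpoints)
# Repeated requests from the same IP always map to the same endpoint
print(all(pick_endpoint("10.0.0.7", endpoints) == chosen for _ in range(100)))
```

In a real cluster, the per-route strategy can be set with the `haproxy.router.openshift.io/balance` route annotation.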

When you need cookie-based session stickiness, OpenShift provides it out of the box.
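For edge-terminated or plain HTTP routes, the router implements stickiness with a cookie, and the cookie name can be customized through a route annotation. The route and cookie names below are assumptions:

```shell
# Customize the sticky-session cookie name on a route (names are assumed)
oc annotate route sessions-app router.openshift.io/cookie_name=TWAS_SESSION
```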

Let’s verify this using the same session sample application.

  1. Scale up the number of pods to 2.


  2. Click the Increment counter link multiple times.

    Notice that requests from the same browser session are routed to the same pod (jz42r) due to session affinity, even though another pod is available to serve them.



In this tutorial, you have learned how to configure session database persistence, and how to build and deploy an application on an IBM Cloud Pak for Applications traditional WebSphere container. You have also seen an example of how WebSphere session stickiness is maintained when you scale up pods in the OpenShift Container Platform.