Cloud-native development grows up

When you book an airline ticket, apply for a new passport, access your insurance documents or bank account, you’re typically relying on software built by enterprise developers in corporate labs and cities around the world.

Enterprise developers are busy people. Customers’ expectations for fast, mobile, and secure access to information are higher than they’ve ever been. We already accept that microservices and cloud-based solutions offer the only truly flexible, scalable future for the enterprise. Yet only 20% of enterprise software has moved to the cloud.

What is preventing more companies from moving to the cloud? Enterprise developers are often pressed for time and must learn on the job. Even when they have time, it can be difficult to know where to start with cloud-native development. The technology is constantly evolving, and opinions on the best tools and approaches vary even within small organizations.

In this blog post, we introduce you to new cloud-native products and open source projects from IBM that simplify your journey to the cloud.

Cloud Pak for Applications

Cloud Pak for Applications aims to clear some of the mystery around cloud-native development by:

  • Bundling best-in-class libraries and frameworks for developing secure, fast, and scalable solutions
  • Providing customizable and consistent access to frameworks approved by an organization

The image below shows the underlying technology that is included in Cloud Pak for Applications:

Cloud Pak Architecture

Cloud Pak for Applications contains a few big components, one of which is Kabanero – a collection of cloud-native tools and libraries that we think are essential for cloud-native development.

We’re introducing a new collection of code patterns, articles, and tutorials that gently introduce the concepts of Kabanero within Cloud Pak for Apps as a smart, disciplined, and consistent approach to creating cloud-native applications in the enterprise.

Because Kabanero is a core component of our cloud-development offering, let’s take a closer look at the underlying technology.

Kabanero development technologies

Kabanero is the open source foundational layer of Cloud Pak for Applications. Kabanero itself is made up of accepted, best-in-class cloud technologies, all of which are open source. You can see a graphical representation of many of the important technologies below:

Kabanero Architecture

One of the special ingredients in Kabanero is Appsody, which uses technology stacks and templates to create a disciplined and consistent approach to developing apps within an enterprise organization.

Our approach to creating developer resources around Kabanero and Cloud Pak for Apps is to focus on workflows using the cloud DevOps components, providing tutorials around them, along with code patterns that can be cloned and explored as reference models. In our first collection of developer resources, we’re sharing an Appsody code pattern that walks through the basics of creating an application with two microservices – presentation and business logic – as well as digging into approaches for using Appsody in your own projects.

Building with Appsody for consistent results

Appsody is an open source project that simplifies and controls cloud-native application development. Appsody’s primary component is a stack, which builds a pre-configured Docker image that developers can immediately use to create applications in a cloud environment. Appsody allows stack builders to decide which parts of the users’ resulting application images are fixed (a set of technology choices and configurations defined by the stack image) and which parts stack users can modify/extend (templates).

One way to think about Appsody is that it can give developers the advantages of a Platform as a Service (PaaS) environment (in terms of not having to worry about installing and configuring the underlying technology components), while allowing architects the flexibility to define those technology components using Docker images.

Appsody stacks

An Appsody stack represents a pre-configured set of technologies aimed at simplifying the building of a particular type of cloud-native application. This might include a particular environment (for example, node.js or python-flask), combined with integrated choices for monitoring, logging, and so on. Stacks are published in stack repositories, which can be either public or private to an enterprise. Developers can then use the Appsody CLI to pull in the appropriate stack for the application they are building. Kabanero contains all the tools for using and contributing to public stack repositories, as well as a set of curated stacks suitable for the enterprise.

Appsody goes even further than simplifying the use of pre-configured technologies. It enables developers to create and test applications within a local containerized environment from the start using Rapid Local Development Mode. After those initial tests are run, developers can then deploy the final application to cloud-based testing and production clusters. Developing in containers from the start reduces the likelihood of subtle problems being introduced when containerization is added late in the development process.

Appsody templates

Appsody stacks come with one or more templates. A template represents a starter application using that stack and comes ready to be run and deployed. Developers can modify the template to build out their application.

The following image shows how a developer uses Appsody to pull down and modify a stack, build it, and then deploy it to a remote Kubernetes cluster.

Appsody Architecture
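The flow in the image above maps to a handful of Appsody CLI commands. A minimal sketch, assuming you are using the public Node.js Express stack (the project directory name is illustrative):

```shell
# See which stacks are available in the configured repositories
appsody list

# Scaffold a new project from a stack template
mkdir my-service && cd my-service
appsody init nodejs-express

# Rapid Local Development Mode: run the app in a local container,
# restarting automatically as you edit the source
appsody run

# Build a deployable image from the stack plus your code,
# then deploy it to the Kubernetes cluster in your current context
appsody build
appsody deploy
```

Each command operates on the current project directory, so the stack's technology choices stay fixed while your template code evolves.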

The above flow shows manual deployment to a Kubernetes cluster. In more production-oriented environments, GitOps might trigger the build and deploy steps, and Tekton Pipelines would drive the deployment. Kabanero Collections, which is part of Cloud Pak for Applications, brings together Appsody stacks, GitOps, and Tekton Pipelines to provide an enterprise-ready solution for grown-up cloud-native application development and deployment.

Ready to start?

Now that you understand the technology that underlies IBM Cloud Pak for Applications, you’re ready to start exploring the content that we’ve created. We’ve selected two different paths to help you get started with Cloud Pak for Applications:

Anton McConville
Henry Nash
Brad Topol

EclipseCon EU: Let’s talk open source

EclipseCon EU: Let’s celebrate open!

EclipseCon EU is almost here – October 21-24 in Ludwigsburg, Germany. We’re excited to sponsor both the main event and the co-located OSGi Community 20th anniversary event this year, but mostly we’re looking forward to connecting with you around ways to accelerate your journey to the cloud.

From fun hands-on coding challenges to incredible speakers, we know it’s going to be a great event. We hope you’ll stop by the IBM Developer booth to connect.

Test your coding skills: Cloud-native workshop and QuickLabs

Workshop: Build-a-bike Tuesday, October 22 – 09:00 to 12:00 in Seminarraum 5
Liberty Bikes is a four player, elimination game built using the latest technologies of Jakarta EE 8 and MicroProfile 3.0. Come build your first (or 100th) microservice as you create an AI to compete in a battle royale against your fellow attendees. In this workshop, you will develop a complete microservice, leveraging a MicroProfile Rest Client to seamlessly integrate and communicate with an existing application. Can you become champion of the grid?

Code for Swag QuickLabs: We challenge you to use your coding skills for . . . some cool swag. In 15 minutes or less, with a few lines of code, explore some of the latest in Java, open source, frameworks, and more. You can then claim victory and a special prize.

Your open source journey to the cloud starts at the IBM booth

In addition to great speakers, here are a few of the topics we’re most excited to chat with you about:

Book signing: Wednesday, October 23 from 14:30 – 16:00
Meet the authors and get a free “Developing Open Cloud Native Microservices: Your Java Code in Action” ebook.

Code for Swag QuickLabs: Get your hands on the code and add to your swag collection with one or more of our labs featuring Kabanero, Open Liberty, Kubernetes and more.

Build cloud-native apps and microservices: Experience the power of the Open Liberty cloud runtime for building cloud native apps in a fun, interactive game, Liberty Bikes. Open Liberty provides fully compatible implementations of Jakarta EE and Eclipse MicroProfile.

Modernize existing apps for cloud and containers: Explore tooling, skills, and recommendations for getting the most out of your existing applications as you move to the cloud. Join IBMers who are actively working with customers on modernizing their apps using open technologies including Open Liberty, Eclipse OpenJ9, Jakarata EE, Eclipse MicroProfile, Spring, Reactive and more.

Kabanero: Find out about this new open source project that brings together Knative, Istio, and Tekton with new open projects Codewind, Appsody, and Razee into an end-to-end solution to architect, build, deploy, and manage the lifecycle of Kubernetes-based applications.

Hear from IBMers on topics covering app modernization, cloud-native development, new open source projects, machine learning, Eclipse projects, communities, and more.

Eclipsecon image

Session Date/Time | Session Title | Speaker Name
22 Oct: 09:00 – 12:00 | Build-A-Bike Workshop | Ryan Esch, Andrew Guibert
22 Oct: 14:30 – 15:05 | How Java9+ helps you React- Reactive Programming? | Jayashree S Kumar
22 Oct: 14:30 – 15:05 | Jakarta for dummEEs | Kevin Sutter
22 Oct: 14:30 – 15:05 | Intro to Eclipse Codewind – simplified app development for the cloud! | Tim deBoer
22 Oct: 17:00 – 17:35 | Mastering your Eclipse IDE – Java tooling, Tips & Tricks! | Noopur Gupta
23 Oct: 09:30 – 10:05 | Modern Development – How Containers are Changing Everything | Steve Poole
23 Oct: 09:30 – 10:05 | Make testing Enterprise Java more joyful | Sebastian Daschner
23 Oct: 10:15 – 10:50 | OSGi in Action: How we use OSGi to build Open Liberty | Alasdair Nottingham
23 Oct: 15:10 – 15:45 | Thirst-quenching streams for the reactive mind – A comparison of OSGi Push Stream API and implementations of Reactive Streams | Mary Grygleski
23 Oct: 15:10 – 15:45 | Evolve Java APIs and keep them compatible using API Tools | Vikas Chandra
23 Oct: 15:10 – 15:45 | Test Dockerization | Mamatha J V
23 Oct: 15:10 – 15:45 | OpenJ9 a Lean, Mean, Java Virtual Machine for the Cloud | Billy Korando
23 Oct: 16:15 – 16:50 | EGit Essentials, Tips and Tricks | Lakshmi Shanmugam
23 Oct: 16:15 – 16:50 | Migrating Beyond Java 8 | Dalia Abo Sheasha
23 Oct: 16:15 – 16:50 | Java EE, Jakarta EE, MicroProfile, Or Maybe All Of Them? | Sebastian Daschner
23 Oct: 16:15 – 16:50 | Promises in Java: Using Promises to Recover from Failure | BJ Hargrave
24 Oct: 10:15 – 10:50 | Java and Containers – Make it Awesome! | Dinakar Guniguntala
24 Oct: 13:00 – 13:35 | Streamline Integration Testing with Testcontainers | Andrew Guibert, Kevin Sutter
Neil Patterson

Flying Kubernetes webinar: Key concepts explained with drones

Kubernetes is one of the fastest-growing technologies in the industry, and it’s not hard to see why. It provides an isolated and secure app platform for managing containers, transforming both application development and operations for your organization.

But are you worried that Kubernetes is complex and difficult to learn?

To make learning Kubernetes concepts a little more fun, our team at IBM built a “Kubefly” demo to teach and explain core Kubernetes concepts by using a swarm of flying drones. The drones showcase concepts like pods, replica sets, deployments, and stateful sets by reacting to configuration changes on our Kubernetes cluster. For example, after a Kubernetes application is deployed, a few drones take off. Each one represents a pod in the deployment. If one of the Kubernetes pods is killed, the drone lands, and another takes its place, because Kubernetes uses a declarative model to always attempt to match the specified state.

Kubefly Kubernetes drones
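You can reproduce the self-healing behavior the drones illustrate with a few kubectl commands. A sketch, with illustrative names (kubectl create deployment labels the pods app=kubefly by default):

```shell
# Declare a desired state of three replicas (three drones take off)
kubectl create deployment kubefly --image=nginx
kubectl scale deployment kubefly --replicas=3

# Three pods are now running
kubectl get pods -l app=kubefly

# Kill one pod: its drone lands...
kubectl delete pod <name-of-one-kubefly-pod>

# ...and a replacement pod is created to match the declared state
kubectl get pods -l app=kubefly
```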

To see these drones in action, register for the upcoming webinar with Jason McGee, Chief Technology Officer for IBM Cloud, and Briana Frank, Director of Product Management for IBM Cloud. The webinar demonstrates Kubernetes concepts and introduces basic benefits of the Istio service mesh.

If you want to learn a bit more about the drones behind the project, see the Flying Kubernetes with the Kubefly project blog post and video from earlier this year.

Belinda Vennam

Microsurvival Part 4: On to Kubernetes

Note: This blog post is part of a series.

Hey Appy! I’m glad you’re back. Before we get started, I wanted to share with you a poem I’ve been working on.

Ops teams were on their knees.

Ops teams were in tears.

But Kubernetes came in sight.

Kubernetes became their guide.

So, what’d you think? Why are you so interested in the ceiling all of a sudden? Hmm, I’ll take that as a sign to continue working on my poetry writing skills.

Anyway, our last conversation covered container technologies. Now, I think it’s time to dive into Kubernetes, don’t you think?

What is Kubernetes?

Where do I begin? Let’s start with the meaning of Kubernetes: it’s a Greek word that means helmsman. Now, why do we need it? As applications like you grow and their number of components increases, the difficulty of configuring, managing, and running the whole system smoothly also increases. And because humans have always tried to automate difficult and repetitive tasks, Kubernetes was created.

Kubernetes is one of the many container orchestrators out there that run and manage containers. What does a container orchestrator do? It helps the operations team automatically monitor, scale, and reschedule containerized applications inside a cluster in the event of hardware failure. It enables containerized applications to run on any number of computer nodes as if all those nodes were a single, huge computer. That makes it a whole lot easier for both developers and operations teams to develop, deploy, and manage their applications. Your parents, Dev and Ops, would surely agree with this magic.

Kubernetes architecture

Next is the Kubernetes architecture. A Kubernetes cluster is a bunch of master nodes and worker nodes. A master node manages the worker nodes. You can have one master node, or more than one if you want to provide high availability. The master nodes provide many cluster-wide system management services for the worker nodes, and the worker nodes handle our workload. However, we won’t be interacting with the nodes much (not directly, at least). To set the desired state of the cluster, you create objects using the Kubernetes API. We can do that with kubectl, the command-line interface of Kubernetes.
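In practice, interacting with the cluster through kubectl looks something like this (a sketch; the file and object names are only examples):

```shell
# The master and worker nodes show up as cluster nodes
kubectl get nodes

# Declare desired state by creating objects through the API server
kubectl apply -f my-app.yaml

# Read back the current state the cluster is converging toward
kubectl get pods
kubectl describe deployment my-app
```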

Here, let me draw you a picture:


The master node consists of basically four things:

  1. etcd is a data store. The declarative model is stored in etcd as objects. For example, if we say we want five instances of a certain container, that request is stored in the data store.
  2. The Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and more.
  3. The Kubernetes controller manager watches for changes requested through the API server and attempts to move the current state of the cluster towards the desired state.
  4. The Kubernetes scheduler takes charge of scheduling pods on nodes. It needs to consider a lot of information, including resource requirements, hardware/software constraints, and many other things.

Each worker node has two main processes running on it:

  • kubelet is something like a node manager. The master node talks to the worker nodes through kubelet: the master node tells kubelet what to do, and then kubelet tells the pods what to do.
  • kube-proxy is a network proxy that reflects Kubernetes networking services on each node. When a request comes from outside the cluster, kube-proxy routes it to the specific pod needed, and the pod runs the request on the container.

Now this picture has more details, but you can see how everything works together:


We use Kubernetes API objects to describe how our cluster should be, what applications to run in it, which container images to use, and how many of them should be running.

Get to know the following basic Kubernetes objects:

  • Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. The containers of an application run inside these pods.
  • Pods are mortal; they are created and destroyed dynamically when scaling. For pods to communicate with each other, we need services. A service is an abstraction that defines a logical set of pods and how to access them.
  • A volume is an abstraction that solves two problems. The first problem is that all the files inside a container are lost when it crashes; the kubelet restarts the container, but it is a new container with a clean state. The second problem is that two containers running in the same pod often need to share files.
  • A namespace lets you create multiple virtual clusters (called namespaces) backed by the same physical cluster. You use them in huge clusters with many users belonging to multiple teams or projects.

Now, pay attention to the controllers, which build upon the basic objects to give us more control over the cluster and provide additional capabilities:

  • A ReplicaSet ensures that a set number of replica pods are running at any given time.
  • After you define a desired state in a Deployment object, the deployment controller changes the current state to the desired state at a controlled rate.
  • A DaemonSet ensures that all (or some) nodes run the specified pod. When more nodes are added, pods are added to them.
  • A Job creates one or more pods; after they complete successfully, the job is marked as complete.
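These controllers can be exercised from the command line. A hedged sketch (deployment, image, and job names are only examples, and the flags assume a reasonably recent kubectl):

```shell
# Deployment: declare the desired state...
kubectl create deployment web --image=nginx

# ...and let the ReplicaSet it creates keep five pods running
kubectl scale deployment web --replicas=5

# Roll out a new image at a controlled rate and watch it converge
kubectl set image deployment/web nginx=nginx:1.17
kubectl rollout status deployment/web

# Job: run a pod to completion; it is marked complete when it succeeds
kubectl create job hello --image=busybox -- echo "done"
kubectl get jobs
```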

Okay, I know that was a lot of information. But, guess what? This is far from enough to get a full idea of Kubernetes! However, it is enough to get you off to a good start. Don’t you agree, Appy?

Hey! Are you dozing off? Wow, maybe lectures aren’t your thing. Let’s try a hands-on approach.

Lucky for you, there are several labs that can help you understand the core concepts of Kubernetes that I just described. These labs only require you to have an IBM Cloud account. You can then create a free Kubernetes cluster to play with.

I hope you tell your parents, Dev and Ops, what you have learned so far from our meeting today and from the labs after you complete them. I am sure both of them will be happy to hear about it. Don’t forget to deliver my regards as well.

Have a great adventure, Appy! I know you’re off to a good start.

A previous version of this post was published on Medium.

Amro Moustafa

Microsurvival Part 3: Hello Docker

Note: This blog post is part of a series.

Hello, Developer! I am so glad you could join me today. I’m extremely happy that your child Appy told you about what we discussed, but even happier that you suggested meeting today. I know you want the best for Appy, to look good and get along with others. We’ve been talking about an appropriate and safe environment for Appy, especially as Appy continues to grow and mature.

So, you’re trying to work with Docker and need some tips, correct? Well, I am more than happy to help. It’s pretty easy. I see you have a Windows laptop like me, so you can follow along just fine!

First, let’s start by installing Docker. You will need to follow the instructions specified for your operating system. Make sure your version of Windows is compatible.

Now that you have Docker, you can run Docker commands using the Docker executable.

Because it’s common to start developing with a “Hello World!” program, let’s run a “Hello World!” container. Try running the command docker run busybox echo "Hello world" and you should get output similar to this:

> docker run busybox echo "Hello world"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
90e01955edcd: Pull complete
Digest: sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
Status: Downloaded newer image for busybox:latest
Hello world

Allow me to explain what running this command did for us. Docker first searched your local machine for the image we are trying to pull but couldn’t find it. The image was pulled from Docker Hub instead – Docker’s public registry of ready-made container images. The image we pulled is a BusyBox image, which combines tiny UNIX tools into a single executable. Docker then created an isolated container based on the image. Optionally, we specified which command to execute when running the container. So we downloaded and ran a full application without installing it or any of its dependencies, all in the same command. Fascinating, don’t you agree?

What’s a docker run command?

Now, let me elaborate a bit more on the docker run command. This command runs existing images or pulls images from the Docker Hub registry. These images are software packages that get updated often, so there is more than one version of each image. Docker allows multiple versions of an image with the same name, but each version must have a unique tag. If you run the docker run <image> command without a tag, Docker assumes you are looking for the latest version of the image, which has the latest tag. To specify the version of the image you want, simply add the tag: docker run <image>:<tag>.
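For example (the version tag shown is only illustrative; check Docker Hub for the tags an image actually publishes):

```shell
# No tag given: Docker assumes the latest tag
docker run busybox echo "Hello world"

# Same image, but pinned to a specific published version tag
docker run busybox:1.31 echo "Hello world"
```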

You might want to list the images using docker images to check the images created, their tags (or versions), creation dates, and sizes. After you run it, you should get an output similar to the following example:

> docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
busybox       latest   59788edf1f3e   8 weeks ago   1.15MB

You can also use the docker container list command to list the running containers. If you run it right now, you probably won’t get any containers listed, because the container is no longer running. But if you add the -a or --all flag, both running and stopped containers are displayed, in output similar to this example:

> docker container list -a
CONTAINER ID   IMAGE     COMMAND                CREATED             ...
47130c55f730   busybox   "echo 'Hello world'"   About an hour ago   ...

(Some of the details are omitted and replaced by ...)

Do you find the command docker container list a bit long? If so, there is an alternative command, docker ps, with the same function. You can optionally add the -a flag to show the stopped containers as well.

Since the container shows up as a stopped container, you can start it up again by using the docker start <container ID> command. And, you can stop a running container by using the command docker stop <container ID>.

Create a Docker image

Now that you know how to run a new container using an image from the Docker Hub registry, let’s make our own Docker image. The image we will create consists mainly of two things: the application you want to run and the Dockerfile that Docker reads to automatically build an image for our application. The Dockerfile is a document that contains all the commands a Docker user could call on the command line to assemble an image. Let’s first start with a simple Node.js application, and name it app.js. Feel free to customize the name if you’d like.

const http = require('http');
const os = require('os');

var server = http.createServer(function(req, res) {
  res.end("Hostname is " + os.hostname() + "\n");
});

server.listen(3000);
As you can see in this code sample, we are just starting an HTTP server on port 3000, which responds with “Hostname is (the hostname of the server host)” to every request. Make a directory, name it as you like, and save the app code inside it. Make sure no other files are present in that directory.

Now that we’ve created an application, it’s time to create our Dockerfile. Create a file called Dockerfile, copy and paste the content from the following code sample into that file, and then save it in the same directory as your app code.

FROM node:8
COPY app.js /app.js
CMD ["node", "app.js"]

Each statement in the Dockerfile has a meaning. FROM designates which parent image you are using as a base for the image you are building. It is always better to choose a proper base image. We could have written FROM ubuntu, but using a general-purpose image for running a Node application is unnecessary, because it increases the image overhead. In general, the smaller the better.

Instead, we used the specialized official Node runtime environment as a parent image. Another thing to note is that we specified the version with the tag FROM node:8 instead of using the default latest tag. Why? The latest tag can result in a different base image being used when a new version is released, and your build may break. I prefer to take this precaution.

We also used COPY <src> <dest> to copy new files or directories from <src> and add them to the file system of the container at the path <dest>. Another Dockerfile instruction with a similar function to COPY is ADD. However, COPY is preferred because it is simpler. You can use ADD for some unique functions, like downloading external resources or extracting .tar files into the image. You can explore that option further in the Docker documentation.

Lastly, you can use the CMD instruction to run the application contained by your image. The command, in this case, would be node app.js.

There are other instructions that can be included in a Dockerfile, and reviewing them briefly now could prove helpful later on. The RUN instruction, for example, lets you run commands to set up your application, and you can use it to install packages; an example is RUN npm install. We can expose a specific port to the world outside the container we are building by using EXPOSE <port>.
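Putting those instructions together, a slightly fuller Dockerfile for a Node application might look like the following. This is only a sketch – it assumes your project has a package.json, and it is not the Dockerfile used in this post:

```dockerfile
FROM node:8

# RUN sets up the application; install dependencies in their own layer
WORKDIR /app
COPY package.json /app/package.json
RUN npm install

# The app code changes most often, so copy it last
COPY app.js /app/app.js

# EXPOSE documents the port the server listens on
EXPOSE 3000
CMD ["node", "app.js"]
```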

Before writing all these commands, you should know a few essentials about Dockerfiles. Every command you write in the Dockerfile creates a layer, and each layer is cached and reused. Invalidating the cache of a single layer invalidates all the subsequent layers below it; for example, the cache is invalidated when a command changes. Something to note is that Docker keeps layers immutable. So if you add a file in one layer and remove it in the next one, the image still contains that file in the first layer; it’s just that the container no longer has access to it.

Keep two things in mind. First, the fewer layers in a Dockerfile, the better: to change an inner layer of a Docker image, Docker must rebuild all the layers above it first. Think about it like this: you’ll cry less if you have fewer layers to peel off an onion. Second, the most general and longest-running steps should come first in your Dockerfile (the inner layers), while the specific ones should come later (the outer layers).

Build an image from a Dockerfile

Now that you have a better understanding of the contents of the Dockerfile, let’s go ahead and build an image. First, make sure your path is inside the new directory that you made; running the ls command should show only two files: app.js and Dockerfile. You can build the image by using the docker build -t medium . command. We tag it medium by using the -t flag, and we target the current directory. (Note the dot at the end of the command.)

>docker build -t medium .
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM node:8
8: Pulling from library/node
54f7e8ac135a: Pull complete
d6341e30912f: Pull complete
087a57faf949: Pull complete
5d71636fb824: Pull complete
0c1db9598990: Pull complete
89669bc2deb2: Pull complete
3b96ee2ed0b3: Pull complete
df3df33f8e3c: Pull complete
Digest: sha256:dd2381fe1f68df03a058094097886cd96b24a47724ff5a588b90921f13e875b7
Status: Downloaded newer image for node:8
---> 3b7ecd51ffe5
Step 2/3 : COPY app.js /app.js
---> 63633b2cf6e7
Step 3/3 : CMD ["node", "app.js"]
---> Running in 9ced576fdb46
Removing intermediate container 9ced576fdb46
---> 91c37fa82fe5
Successfully built 91c37fa82fe5
Successfully tagged medium:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

You can use the run command again to run the built image, or you can use docker push to push it to a registry and pull it again on another computer from the registry using docker pull.
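A sketch of that push/pull flow (the registry namespace is a placeholder you would replace with your own):

```shell
# Tag the local image with a registry namespace
docker tag medium <your-registry-id>/medium:1.0

# Push it so other machines can reach it
docker push <your-registry-id>/medium:1.0

# On another computer: pull the image and run it
docker pull <your-registry-id>/medium:1.0
docker run <your-registry-id>/medium:1.0
```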

Congratulations! Now you know how to make a proper Dockerfile, build an image from that Dockerfile, and run it. These skills will help you send young Appy out into the world. If you’re feeling adventurous, you can check out the Docker docs and keep going. Good luck!

A previous version of this post was published on Medium.

Amro Moustafa

Use Watson APIs on OpenShift

Before we talk about how to use Watson APIs on OpenShift, let’s quickly define what they are.

  • Watson APIs: A set of artificial intelligence (AI) services, available on IBM Cloud, that have a REST API and SDKs for many popular languages. Watson Assistant and Watson Discovery are two examples.

  • OpenShift: Red Hat OpenShift is a hybrid-cloud, enterprise Kubernetes application platform. IBM Cloud now offers it as a hosted solution or an on-premises platform as a service (PaaS): Red Hat OpenShift on IBM Cloud. It is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux. You can read more about the History of Kubernetes, OpenShift, and IBM in a blog post by Anton McConville and Olaph Wagoner.

Now, let’s talk about how to combine the two. In our opinion, there are really two ways to use Watson APIs in an OpenShift environment.

  1. Containerizing your application with Source-to-Image (S2I) and calling the Watson APIs directly at the application layer
  2. Using Cloud Pak for Data add-ons for specific APIs (more on this option later)

Let’s dig into the first option.


What is S2I?

Source-to-Image is a framework for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. S2I comes with OpenShift, but it is also available as a stand-alone tool. Take a look at how simple it is to use S2I through the OpenShift console.

How do I use S2I for my Watson app?

Say you have a Node.js app, and you’d like to deploy it in a container running on OpenShift. Here’s what you do. (Our examples in this section use Red Hat OpenShift on IBM Cloud.)

  1. From the OpenShift catalog, select a runtime (for example, Node.js or Python) and point to a repository.

    add git repo

  2. Add configuration for the application, such as any Watson services API keys, as a Config Map.

    openshift config map

  3. Associate that Config Map with your app.

    openshift add config map to app

And you’re done! The containerized app is deployed and can now use any existing Watson service that is available through a REST API call.
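As an illustration, here is roughly what such a REST call looks like, using Watson Natural Language Understanding as an example. The service URL and API key come from your service credentials (shown here as placeholder environment variables), and the version date is illustrative:

```shell
# Ask Watson NLU for the sentiment of a short piece of text
curl -u "apikey:$WATSON_API_KEY" \
  "$WATSON_URL/v1/analyze?version=2019-07-12" \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this blog post!", "features": {"sentiment": {}}}'
```

Your containerized app makes the same kind of request from inside its pod, with the credentials supplied through the Config Map you created earlier.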

What are the benefits?

  • Minimal refactoring of code
  • Source-to-Image’s ease of use
  • Fastest way to get started


We’ve already added OpenShift Source-to-Image instructions for some of our most popular Watson code patterns.

A quick example

We also created a quick video example that demonstrates how to use the approach mentioned above.

Cloud Pak for Data

What is Cloud Pak for Data?

Cloud Pak for Data can be deployed on OpenShift and includes a lot of AI and data products from IBM. These products include, but are not limited to, Watson Studio, Watson Machine Learning, Db2 Warehouse, and Watson Assistant.

How do I use Cloud Pak for Data for my Watson app?

Using our previous example, say that you have a Node.js app running on-premises and behind a firewall. In just a few minutes, you can update the application to call Watson APIs running on your Cloud Pak for Data deployment.

  1. (Prerequisite) Install Cloud Pak for Data on-premises, preferably on OpenShift.

  2. Install the Watson API kit add-on, the Watson Assistant add-on, and the Watson Discovery add-on. The Watson API kit includes Watson Knowledge Studio, Watson Natural Language Understanding, Watson Speech to Text, and Watson Text to Speech.

  3. Launch the Watson API service that you want to use and generate a new API key.

  4. Update the application to use the new API key and REST endpoint.
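The steps above amount to a configuration change rather than a rewrite. As a sketch, assuming environment-variable names of our own choosing and an illustrative on-prem route, the app’s Watson configuration might be isolated like this:

```javascript
// Sketch: pointing an existing Watson client at Cloud Pak for Data.
// The variable names and the example route below are our own; your Cloud
// Pak for Data console shows the real endpoint for each provisioned service.
function watsonConfig(env) {
  return {
    // An on-prem route stays behind the firewall, so REST calls
    // never leave your network.
    url: env.WATSON_URL || 'https://cpd.example.internal/assistant/api',
    apiKey: env.WATSON_APIKEY || 'replace-me', // the key generated in step 3
  };
}

// Step 4 then amounts to redeploying with new values set, for example:
//   WATSON_URL=https://<cpd-route>/assistant/api WATSON_APIKEY=<new-key> node app.js
const config = watsonConfig(process.env);
```

Keeping the endpoint and key out of the code is what makes the on-prem move a configuration-level change.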

What are the benefits?

  • If running on-premises, REST calls never hit a public endpoint
  • Only light refactoring is needed, mostly at the configuration level


We’re still in the process of updating our content to work with Watson APIs on OpenShift, so here are a couple of references instead:

Thanks for reading our blog! Start the journey to containerizing your Watson applications by following our Sample using Watson Assistant or Sample using Watson Discovery. Or, if you’re interested in learning more about Cloud Pak for Data, check out this Overview of Cloud Pak for Data video.

Steve Martinelli
Scott D’Angelo

Oracle Code One: Let’s talk open source, Java, cloud, and . . . cool socks

Code One 2019 is almost here! Join us September 16-19 in San Francisco. We’re excited to be sponsors this year, but mostly we’re looking forward to connecting with developers around Java, open source, cloud modernization, and cloud-native app development.

From fun hands-on coding challenges to incredible speakers, we think it’s going to be a great event. We hope you’ll stop by booth 3101 to meet us.

Test your coding skills: Cloud-native workshop and QuickLabs

Workshop: Hands-on, open cloud-native development with Kabanero and Java [HOL6624]

Monday, 16 September, Room 3024A

Choose the time that works best for you! 12:30–2:30 PM, 2:45–4:45 PM, or 5:00–7:00 PM

Explore modern, cloud-native app development that uses Java and the latest open technologies in this deep-dive, hands-on workshop. The workshop lets you use Kabanero, a new open source project that integrates popular open source projects (Tekton, Kubernetes, Appsody, etc.) and delivers a simplified, seamless development experience.

Code for Socks QuickLabs: Booth 3101

We challenge you to use your coding skills for . . . cool socks. In 15 minutes or less, with a few lines of code, explore some of the latest in Java, open source, frameworks, and more. You can then claim victory and a special pair of socks.

Challenges include:

  • Cloud-native development with Kabanero
  • Getting started with Open Liberty
  • Akka and Kubernetes: a symbiotic love story
  • Intro to Kabanero and Spring

Come talk open source Java

Join us at Booth 3101 to talk about the newest cloud-native, Java open source projects at IBM, including:

  • Kabanero: Find out about this new open source project that brings together Knative, Istio, and Tekton with new open projects Codewind, Appsody, and Razee into an end-to-end solution to architect, build, deploy, and manage the lifecycle of Kubernetes-based applications.
  • Open Liberty: Experience the power of this cloud runtime for building cloud native apps in a fun, interactive game, Liberty Bikes. Open Liberty provides fully compatible implementations of Jakarta EE and Eclipse MicroProfile.
  • Cloud-native Java: Talk to IBMers who are actively contributing to open Java communities, including Open Liberty, Eclipse OpenJ9, Jakarta EE, Eclipse MicroProfile, Spring, Reactive, and more.

Socialize with other Java developers and get your books signed

Cloud-native Java networking evening: IBM is excited to co-sponsor the first networking evening at Code One. The event brings the Jakarta EE and Eclipse MicroProfile communities together for a time of celebration around the significant achievements of the past year.

Book signing: Meet the authors and get a free “Developing Open Cloud Native Microservices: Your Java Code in Action” ebook. Tuesday, September 17 from 2:30 – 4:00 PM.

IBM sessions at Code One

Hear from IBMers on topics covering cloud native development, new open source projects, machine learning, communities, and more.

Session Date/Time Session Title Speaker Name
16 Sept 12:30-2:30 PM Hands-on Lab: Open Cloud Native Development with Kabanero and Java [HOL6624] Steve Poole, Graham Charters
16 Sept 2:45-4:45 PM Hands-on Lab: Open Cloud Native Development with Kabanero and Java [HOL6624] Steve Poole, Graham Charters
16 Sept 5:00-7:00 PM Hands-on Lab: Open Cloud Native Development with Kabanero and Java [HOL6624] Steve Poole, Graham Charters
16 Sept 5:00 PM Seven Principles of Productive Software Developers [DEV2118] Sebastian Daschner
16 Sept 5:00 PM Creating a Cloud Native Microservice: Which Programming Model Should I Use? [DEV1573] Emily Jiang
16 Sept 5:00 PM FaaS Meets Java EE: Developing Cloud Native Applications at Speed [DEV3080] Chris Bailey
16 Sept 5:00 PM Ignite Session [IGN6313] Grace Jansen
16 Sept 6:00 PM Your Java Code Just Needs a Little Injection: It Won’t Hurt! [DEV3631] Gordon Hutchinson
17 Sept 8:45 AM Java Application Security the Hard Way: A Workshop for the Serious Developer [TUT2851] Steve Poole
17 Sept 12:30 PM Let’s Talk About Communities [BOF3992] Billy Korando
17 Sept 1:30 PM Beyond Jakarta EE 8 [DEV1391] Ian Robinson
17 Sept 1:30 PM Overcoming Obstacles: Using Next-Gen Tools to Streamline Your Move to the Cloud [DEV6686] Erin Schnabel
17 Sept 2:00 PM Machine Learning and Artificial Intelligence: Myths and Reality [DEV6005] James Weaver
18 Sept 11:30 AM Configuration: JSR 382 [DEV2207] Emily Jiang
18 Sept 11:30 AM Condy? NestMates? Constable? Understanding JDK11 and JDK12’s JVM Features [DEV3407] Dan Heidinga
18 Sept 11:30 AM Breaking Stereotypes [BOF3757] Mary Grygleski
18 Sept 11:30 AM Fast, Efficient Jakarta EE for the Cloud [DEV4576] Alasdair Nottingham
18 Sept 12:30 PM Hands-on Java EE with Docker and Kubernetes – BYOL [HOL1138] Ahmad Gohar
18 Sept 1:30 PM Bulletproof Java Enterprise Applications for the Hard Production Life [DEV2122] Sebastian Daschner
18 Sept 4:00 PM Fantastic Data Consistency Techniques and Where to Find Them [DEV3922] Gordon Hutchinson
18 Sept 6:00 PM Streamline Integration Testing with Testcontainers [DEV3744] Kevin Sutter, Andrew Guibert
18 Sept 6:00 PM Share What You Know, Become a Speaker, and Get Accepted at Events [DEV5993] Mary Grygleski
19 Sept 9:00 AM Eclipse MicroProfile: The Present and the Future [BOF2200] Emily Jiang
19 Sept 9:00 AM Modern Development: How Containers Are Changing Everything [DEV2849] Steve Poole, Andy Watson
19 Sept 9:00 AM You Have Nothing to Say? Let Us Help You! [BOF3734] Mary Grygleski
19 Sept 10:00 AM Striving for More-Productive Development Workflows [DEV2115] Sebastian Daschner
19 Sept 10:00 AM Reactive Microservices in Action [DEV4322] Emily Jiang
19 Sept 10:00 AM Stop Feeling Stuck! Design Your Career and Overcome the Plateau [DEV6010] Kevin Sutter, James Weaver
19 Sept 12:15 PM Data Visualization, Processing, and ML (on the JVM!) with Apache Zeppelin [DEV2237] Pratik Patel, Mo Haghighi
19 Sept 12:15 PM Making Java a First-Class Citizen with Machine Learning [DEV1306] Dan Heidinga
19 Sept 12:15 PM How to Get Along with HATEOS Without Letting the Bad Guys Steal Your Lunch [DEV2850] Steve Poole, Graham Charters
19 Sept 2:15 PM Microservices with Docker and Kubernetes: Best Practices for Java Developers [DEV2043] Ahmad Gohar
19 Sept 2:15 PM Migrating Beyond Java 8 [DEV2100] Dalia Abo Sheasha
19 Sept 2:15 PM Team Diversity the Successful Way [DEV6012] Mary Grygleski
Neil Patterson

Scaffold and deploy a scalable web application in an enterprise Kubernetes environment

Deploying your application to a container, or multiple containers, is just the first step. When a cloud-native system becomes more established, it’s even more important to manage, track, redeploy, and repair the software and architecture.

You can choose from various techniques that help platforms provision, test, deploy, scale, and run your containers efficiently across multiple hosts and operating environments, perform automatic health checks, and ensure high availability. Eventually, these approaches transform an app idea into an enterprise solution.

The code patterns, tutorials, videos, and articles on IBM Developer about Red Hat OpenShift on IBM Cloud™ are a good place to start exploring an enterprise Kubernetes environment whose worker nodes come preinstalled with the Red Hat OpenShift Container Platform orchestration software. With Red Hat OpenShift on IBM Cloud, you can use IBM Cloud Kubernetes Service for your cluster infrastructure and the OpenShift platform tools and catalog, which run on Red Hat Enterprise Linux, to deploy your apps.

As you explore the combined capabilities of Red Hat OpenShift on IBM Cloud, you’ll want to know how to scaffold a web application (Node.js with Express), run it locally in a Docker container, push the scaffolded code to a private Git repository, and then deploy it. You can follow the details in the Scalable web application on OpenShift tutorial in the Red Hat OpenShift on IBM Cloud documentation.

Consider a few tips: You can expose the app on an OpenShift route, which directs ingress traffic to applications deployed on the cluster; this is a simplified approach. You can bind a custom domain in OpenShift with one command, instead of defining a Kubernetes Ingress resource in YAML and applying it. You can also monitor the health of the environment and scale the application. For example, if your production app experiences an unexpected spike in traffic, the container platform automatically scales to handle the new workload.

You can check out the architecture diagram at the Scalable web application on OpenShift tutorial and then try it for yourself.

Vidyasagar S Machupalli