
Convert your app from Cloud Foundry with Docker and Kubernetes

Cloud Foundry, Docker, and Kubernetes are very popular options for cloud-native applications, but each approaches delivering an application in a slightly different way. In this article, we’ll examine how you can containerize a Cloud Foundry application and how to deploy the containerized application into Kubernetes.

1. Starting from Cloud Foundry

Cloud Foundry applications

When you develop a Cloud Foundry application, your code sits in a repository, and you initiate a deployment by using the Cloud Foundry CLI to package and deploy the application.

A Cloud Foundry application consists primarily of your code and a manifest, which defines the deployment parameters for your application. When you deploy an application to Cloud Foundry, the tools search through your directory (or a specified location), gather all of your application’s artifacts, and deploy based on the parameters in your manifest and the behavior of your buildpack.

This separation lets you keep your code apart from your deployment: when you initiate a cf push, the Cloud Foundry tools simply take your code, package it, and perform the deployment for you.

Let’s look at an example of a Cloud Foundry application. Start by setting up and running the Node.js get-started-node sample app as documented here: https://cloud.ibm.com/docs/runtimes/nodejs/getting-started.html#getting-started

[Image: the files and folders of the get-started-node sample]

This sample application can run on either Cloud Foundry or Docker. For now, we’ll focus on Cloud Foundry.

The main piece we need for Cloud Foundry is the manifest.yml (shown in the image of the files and folders above):

applications:
 - name: GetStartedNode
   random-route: true
   memory: 256M

This file tells Cloud Foundry how to structure the application: we have one application named GetStartedNode that runs with 256 MB of memory.

The code for the Node.js application is very simple. A package.json defines the main entry point as server.js:

"name": "get-started-node",
"main": "server.js",
"description": "An introduction to developing Node.js apps on the IBM Cloud platform",
"version": "0.1.1",

server.js is the main entry point; it controls the overall function of the web app and incorporates the Express framework, notably the view rendering.
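To give a feel for the shape of that file, here is a minimal sketch of the pattern server.js follows; the route and response below are illustrative, not copied from the sample:

// server.js (sketch): an Express app that binds to the port Cloud Foundry assigns
const express = require('express');
const cfenv = require('cfenv'); // parses the VCAP_* environment variables CF provides

const app = express();

// Illustrative route; the real sample renders a view from the views/ folder
app.get('/', (req, res) => {
  res.send('Hello from get-started-node');
});

// appEnv.port is the CF-assigned port in the cloud, or a sensible local default
const appEnv = cfenv.getAppEnv();
app.listen(appEnv.port, () => {
  console.log(`server starting on ${appEnv.url}`);
});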

You can deploy this application by using the IBM Cloud command line tools as documented here: https://cloud.ibm.com/docs/runtimes/nodejs/getting-started.html#deploy

Here is what the output should look like:

ibmcloud cf push
Invoking 'cf push'...

Pushing from manifest to org test@ibm.com / space dev as test@ibm.com...
Using manifest file get-started-node/manifest.yml
Getting app info...
Creating app with these attributes...
+ name:       GetStartedNode
  path:       /get-started-node
+ memory:     256M
  routes:
+   getstartednode-fluent-reedbuck.mybluemix.net

Creating app GetStartedNode...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...
 41.37 KiB / 41.37 KiB [=========================================================================================================================================================] 100.00% 1s

Waiting for API to complete processing files...

Staging app and tracing logs...
   Downloading sdk-for-nodejs_v3_26-20190313-1440...
   Downloading java_buildpack...
   Downloading staticfile_buildpack...
   Downloading ruby_buildpack...
   Downloading nodejs_buildpack...
   Downloaded nodejs_buildpack
   Downloading go_buildpack...
   Downloaded sdk-for-nodejs_v3_26-20190313-1440
   Downloading python_buildpack...
   Downloaded ruby_buildpack
   Downloading xpages_buildpack...
   Downloaded staticfile_buildpack
   Downloading php_buildpack...
   Downloaded java_buildpack
   Downloading liberty-for-java_v3_29-20190223-2128...
   Downloaded go_buildpack
   Downloading binary_buildpack...
   Downloaded xpages_buildpack
   Downloading liberty-for-java_v3_17_1-20180131-1532...
   Downloaded php_buildpack
   Downloading liberty_v3_14-20171013-1023...
   Downloaded liberty-for-java_v3_29-20190223-2128
   Downloading dotnet-core_v2_0-20180918-1356...
   Downloaded liberty_v3_14-20171013-1023
   Downloading dotnet-core_v2_1-20181205-1536...
   Downloaded liberty-for-java_v3_17_1-20180131-1532
   Downloading liberty-for-java_v3_28-20190127-1723...
   Downloaded binary_buildpack
   Downloading sdk-for-nodejs_v3_25_1-20190115-1637...
   Downloaded python_buildpack
   Downloading swift_buildpack_v2_0_17-20190212-2123...
   Downloaded dotnet-core_v2_0-20180918-1356
   Downloading swift_buildpack_v2_0_18-20190303-1915...
   Downloaded dotnet-core_v2_1-20181205-1536
   Downloading noop-buildpack...
   Downloaded liberty-for-java_v3_28-20190127-1723
   Downloading liberty-for-java...
   Downloaded sdk-for-nodejs_v3_25_1-20190115-1637
   Downloading sdk-for-nodejs...
   Downloaded swift_buildpack_v2_0_17-20190212-2123
   Downloading dotnet-core...
   Downloaded swift_buildpack_v2_0_18-20190303-1915
   Downloading swift_buildpack...
   Downloaded noop-buildpack
   Downloaded liberty-for-java
   Downloaded sdk-for-nodejs
   Downloaded dotnet-core
   Downloaded swift_buildpack
   Cell ab703cc7-20f2-4f33-b382-883396420cd6 creating container for instance cedd8532-6fee-42df-a0df-02efea9caf4e
   Cell ab703cc7-20f2-4f33-b382-883396420cd6 successfully created container for instance cedd8532-6fee-42df-a0df-02efea9caf4e
   Downloading app package...
   Downloaded app package (40.5K)
   -----> IBM SDK for Node.js Buildpack v3.26-20190313-1440
          Based on Cloud Foundry Node.js Buildpack v1.5.24
   -----> Creating runtime environment

          NPM_CONFIG_LOGLEVEL=error
          NPM_CONFIG_PRODUCTION=true
          NODE_ENV=production
          NODE_MODULES_CACHE=true
   -----> Installing binaries
          engines.node (package.json):  6.*
          engines.npm (package.json):   unspecified (use default)

          Resolving node version 6.* via 'node-version-resolver'
          Downloading and installing node 6.17.0...
          Using default npm version: 3.10.10
   -----> Restoring cache
          Skipping cache restore (new runtime signature)
   -----> Building dependencies
          Installing node modules (package.json)
          get-started-node@0.1.1 /tmp/app
          ├─┬ @cloudant/cloudant@3.0.2
          │ ├─┬ @types/request@2.48.1
          │ │ ├── @types/caseless@0.12.2
          │ │ ├── @types/form-data@2.2.1
          │ │ ├── @types/node@11.12.2
          │ │ └── @types/tough-cookie@2.3.5
          │ ├─┬ async@2.1.2
          │ │ └── lodash@4.17.11
          │ ├─┬ concat-stream@1.6.2
          │ │ ├── buffer-from@1.1.1
          │ │ ├── inherits@2.0.3
          │ │ ├─┬ readable-stream@2.3.6
          │ │ │ ├── core-util-is@1.0.2
          │ │ │ ├── isarray@1.0.0
          │ │ │ ├── process-nextick-args@2.0.0
          │ │ │ ├── string_decoder@1.1.1
          │ │ │ └── util-deprecate@1.0.2
          │ │ └── typedarray@0.0.6
          │ ├─┬ debug@3.2.6
          │ │ └── ms@2.1.1
          │ ├── lockfile@1.0.3
          │ ├─┬ nano@7.1.1
          │ │ ├─┬ cloudant-follow@0.18.1
          │ │ │ ├── browser-request@0.3.3
          │ │ │ └── debug@4.1.1
          │ │ ├─┬ debug@2.6.9
          │ │ │ └── ms@2.0.0
          │ │ ├── errs@0.3.2
          │ │ └── lodash.isempty@4.4.0
          │ ├─┬ request@2.88.0
          │ │ ├── aws-sign2@0.7.0
          │ │ ├── aws4@1.8.0
          │ │ ├── caseless@0.12.0
          │ │ ├─┬ combined-stream@1.0.7
          │ │ │ └── delayed-stream@1.0.0
          │ │ ├── extend@3.0.2
          │ │ ├── forever-agent@0.6.1
          │ │ ├─┬ form-data@2.3.3
          │ │ │ └── asynckit@0.4.0
          │ │ ├─┬ har-validator@5.1.3
          │ │ │ ├─┬ ajv@6.10.0
          │ │ │ │ ├── fast-deep-equal@2.0.1
          │ │ │ │ ├── fast-json-stable-stringify@2.0.0
          │ │ │ │ ├── json-schema-traverse@0.4.1
          │ │ │ │ └─┬ uri-js@4.2.2
          │ │ │ │   └── punycode@2.1.1
          │ │ │ └── har-schema@2.0.0
          │ │ ├─┬ http-signature@1.2.0
          │ │ │ ├── assert-plus@1.0.0
          │ │ │ ├─┬ jsprim@1.4.1
          │ │ │ │ ├── extsprintf@1.3.0
          │ │ │ │ ├── json-schema@0.2.3
          │ │ │ │ └── verror@1.10.0
          │ │ │ └─┬ sshpk@1.16.1
          │ │ │   ├── asn1@0.2.4
          │ │ │   ├── bcrypt-pbkdf@1.0.2
          │ │ │   ├── dashdash@1.14.1
          │ │ │   ├── ecc-jsbn@0.1.2
          │ │ │   ├── getpass@0.1.7
          │ │ │   ├── jsbn@0.1.1
          │ │ │   └── tweetnacl@0.14.5
          │ │ ├── is-typedarray@1.0.0
          │ │ ├── isstream@0.1.2
          │ │ ├── json-stringify-safe@5.0.1
          │ │ ├─┬ mime-types@2.1.22
          │ │ │ └── mime-db@1.38.0
          │ │ ├── oauth-sign@0.9.0
          │ │ ├── performance-now@2.1.0
          │ │ ├─┬ tough-cookie@2.4.3
          │ │ │ ├── psl@1.1.31
          │ │ │ └── punycode@1.4.1
          │ │ ├── tunnel-agent@0.6.0
          │ │ └── uuid@3.3.2
          │ └─┬ tmp@0.0.33
          │   └── os-tmpdir@1.0.2
          ├─┬ body-parser@1.18.3
          │ ├── bytes@3.0.0
          │ ├── content-type@1.0.4
          │ ├─┬ debug@2.6.9
          │ │ └── ms@2.0.0
          │ ├── depd@1.1.2
          │ ├─┬ http-errors@1.6.3
          │ │ └── statuses@1.5.0
          │ ├─┬ iconv-lite@0.4.23
          │ │ └── safer-buffer@2.1.2
          │ ├─┬ on-finished@2.3.0
          │ │ └── ee-first@1.1.1
          │ ├── qs@6.5.2
          │ ├─┬ raw-body@2.3.3
          │ │ └── unpipe@1.0.0
          │ └─┬ type-is@1.6.16
          │   └── media-typer@0.3.0
          ├─┬ cfenv@1.2.2
          │ ├─┬ js-yaml@3.13.0
          │ │ ├─┬ argparse@1.0.10
          │ │ │ └── sprintf-js@1.0.3
          │ │ └── esprima@4.0.1
          │ ├── ports@1.1.0
          │ └── underscore@1.9.1
          ├── dotenv@4.0.0
          └─┬ express@4.16.4
          ├─┬ accepts@1.3.5
          │ └── negotiator@0.6.1
          ├── array-flatten@1.1.1
          ├── content-disposition@0.5.2
          ├── cookie@0.3.1
          ├── cookie-signature@1.0.6
          ├─┬ debug@2.6.9
          │ └── ms@2.0.0
          ├── encodeurl@1.0.2
          ├── escape-html@1.0.3
          ├── etag@1.8.1
          ├─┬ finalhandler@1.1.1
          │ ├─┬ debug@2.6.9
          │ │ └── ms@2.0.0
          │ └── statuses@1.4.0
          ├── fresh@0.5.2
          ├── merge-descriptors@1.0.1
          ├── methods@1.1.2
          ├── parseurl@1.3.2
          ├── path-to-regexp@0.1.7
          ├─┬ proxy-addr@2.0.4
          │ ├── forwarded@0.1.2
          │ └── ipaddr.js@1.8.0
          ├── range-parser@1.2.0
          ├── safe-buffer@5.1.2
          ├─┬ send@0.16.2
          │ ├── debug@2.6.9
          │ ├── destroy@1.0.4
          │ ├── mime@1.4.1
          │ ├── ms@2.0.0
          │ └── statuses@1.4.0
          ├── serve-static@1.13.2
          ├── setprototypeof@1.1.0
          ├── statuses@1.4.0
          ├── utils-merge@1.0.1
          └── vary@1.1.2

   -----> Installing App Management
   Checking for Dynatrace credentials
   No Dynatrace Service Found (service with substring dynatrace not found in VCAP_SERVICES)
   -----> Caching build
          Clearing previous node cache
          Saving 2 cacheDirectories (default):
          - node_modules
          - bower_components (nothing to cache)
   -----> Build succeeded!
          ├── @cloudant/cloudant@3.0.2
          ├── body-parser@1.18.3
          ├── cfenv@1.2.2
          ├── dotenv@4.0.0
          └── express@4.16.4

   Exit status 0
   Uploading droplet, build artifacts cache...
   Uploading build artifacts cache...
   Uploading droplet...
   Uploaded build artifacts cache (2.5M)
   Uploaded droplet (19.5M)
   Uploading complete
   Cell ab703cc7-20f2-4f33-b382-883396420cd6 stopping instance cedd8532-6fee-42df-a0df-02efea9caf4e
   Cell ab703cc7-20f2-4f33-b382-883396420cd6 destroying container for instance cedd8532-6fee-42df-a0df-02efea9caf4e

Waiting for app to start...

name:              GetStartedNode
requested state:   started
routes:            getstartednode-fluent-reedbuck.mybluemix.net
last uploaded:     Sat 30 Mar 00:34:52 EDT 2019
stack:             cflinuxfs2
buildpacks:        SDK for Node.js(TM) (node.js-6.17.0, buildpack-v3.26-20190313-1440)

type:            web
instances:       1/1
memory usage:    256M
start command:   ./vendor/initial_startup.rb
     state     since                  cpu    memory          disk          details
#0   running   2019-03-30T04:35:15Z   0.0%   47.5M of 256M   77.9M of 1G

Let’s see if our app is running:

ibmcloud cf apps
Invoking 'cf apps'...

Getting apps in org mvelasc@us.ibm.com / space dev as mvelasc@us.ibm.com...
OK

name             requested state   instances   memory   disk   urls
GetStartedNode   started           1/1         256M     1G     getstartednode-fluent-reedbuck.mybluemix.net

Now let’s actually visit the running application at the listed URL, getstartednode-fluent-reedbuck.mybluemix.net:

[Image: the running application at the listed URL]

So we easily deployed a sample Cloud Foundry application that contains the code for the app (the Node dependencies and server.js).

Cloud Foundry buildpacks

In Cloud Foundry, the developer is typically concerned only with the application, not the runtime and dependencies; that responsibility falls on the buildpack. Cloud Foundry buildpacks are designed to handle the runtime and dependencies that your applications need, and buildpacks exist for all kinds of application runtimes.

As a developer, you can specify a buildpack when you perform a cf push, specify it as an attribute in your manifest, or rely on the default behavior: if you don’t specify a buildpack, the Cloud Foundry platform automatically picks one based on the contents of your application.

So in Cloud Foundry, the notion of runtime and dependencies is abstracted away from the actual application development.
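For example, you could pin the buildpack explicitly, either on the command line with cf push -b, or in the manifest. Here is a sketch of the manifest approach; the buildpack name below is illustrative and must match one listed by cf buildpacks in your environment:

applications:
 - name: GetStartedNode
   random-route: true
   memory: 256M
   buildpacks:
     - sdk-for-nodejs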

How Cloud Foundry is different from Docker

In Docker, that abstraction of runtimes and dependencies does not exist: the runtime and the application are combined. A Docker container encapsulates your application, its runtime, and its dependencies, and so does the definition of that container.

Each Docker container is built from a Dockerfile, a file that specifies how the container is built and the components it holds: the application, the runtime, and the dependencies. This is the big difference between Cloud Foundry and Docker applications: with Docker, a developer must be aware of the runtime and dependencies in the container, which Cloud Foundry used to handle automatically. However, this isn’t as bad as it seems. Containers should be fairly lightweight, and short of huge monolithic applications, a Docker container is usually relatively small and nimble.

Applying this to our Cloud Foundry application, let’s look at a Dockerfile that builds a Docker container around our application:

FROM node:6-alpine

ADD views /app/views
ADD package.json /app
ADD server.js /app

RUN cd /app; npm install

ENV NODE_ENV production
ENV PORT 8080
EXPOSE 8080

WORKDIR "/app"
CMD [ "npm", "start" ]

In this case, we take the same code base, since the application itself does not change; instead, we encapsulate the application in a Docker container. Specifically, the first line bases the container we’ll create on the official node image from Docker Hub.

The tag “6-alpine” selects a specific variant of the node image on Docker Hub to use as our base: Node.js 6 running on Alpine Linux.

The three ADD lines copy the files from our source directory into the container, loading our application files so that, once the image is built, a copy of the application lives within the container itself.

The RUN command installs the Node dependencies for our application at build time. This preloads all dependencies, so when an instance of our container is instantiated, it can start immediately without loading dependencies.

The ENV and EXPOSE lines set the port information for the container to respond to user requests. This is needed because a Docker container does not expose ports by default; it is up to the application developer to decide which ports should be exposed and opened for the application to provide service.

The WORKDIR line changes the context to the directory of our application so that we can run the CMD command on container startup. CMD is the command that executes as soon as the container starts; in our example, it starts the Node application.
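One small, optional addition alongside the Dockerfile is a .dockerignore file, which keeps local artifacts out of the build context that Docker sends to the daemon. A suggested sketch; the entries depend on your repository layout:

# .dockerignore (sketch)
node_modules
npm-debug.log
.git
*.md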

Now let’s create the container. First, you’ll need to do a docker login, as we are pulling an image from Docker Hub:

$docker login
Authenticating with existing credentials...
Stored credentials invalid or expired
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (test@ibm.com): test
Password: 
Login Succeeded

Now build the image by using the Dockerfile in our current directory, tagging it as we go. Note that each line in the Dockerfile represents a step in building the Docker image. When the build finishes, you will have created a Docker image dockerdemo with a tag of test1:

$docker build -t dockerdemo:test1 .
Sending build context to Docker daemon   12.7MB
Step 1/10 : FROM node:6-alpine
 ---> 7c9d8e1567b1
Step 2/10 : ADD views /app/views
 ---> cc32b8c96c8a
Step 3/10 : ADD package.json /app
 ---> 577d5c8eedf8
Step 4/10 : ADD server.js /app
 ---> 8d54bf8ac53b
Step 5/10 : RUN cd /app; npm install
 ---> Running in 7c8ee1dc7074
get-started-node@0.1.1 /app
+-- @cloudant/cloudant@3.0.2 
| +-- @types/request@2.48.1 
| | +-- @types/caseless@0.12.2 
| | +-- @types/form-data@2.2.1 
| | +-- @types/node@11.12.2 
| | `-- @types/tough-cookie@2.3.5 
| +-- async@2.1.2 
| | `-- lodash@4.17.11 
| +-- concat-stream@1.6.2 
| | +-- buffer-from@1.1.1 
| | +-- inherits@2.0.3 
| | +-- readable-stream@2.3.6 
| | | +-- core-util-is@1.0.2 
| | | +-- isarray@1.0.0 
| | | +-- process-nextick-args@2.0.0 
| | | +-- string_decoder@1.1.1 
| | | `-- util-deprecate@1.0.2 
| | `-- typedarray@0.0.6 
| +-- debug@3.2.6 
| | `-- ms@2.1.1 
| +-- lockfile@1.0.3 
| +-- nano@7.1.1 
| | +-- cloudant-follow@0.18.1 
| | | +-- browser-request@0.3.3 
| | | `-- debug@4.1.1 
| | +-- debug@2.6.9 
| | | `-- ms@2.0.0 
| | +-- errs@0.3.2 
| | `-- lodash.isempty@4.4.0 
| +-- request@2.88.0 
| | +-- aws-sign2@0.7.0 
| | +-- aws4@1.8.0 
| | +-- caseless@0.12.0 
| | +-- combined-stream@1.0.7 
| | | `-- delayed-stream@1.0.0 
| | +-- extend@3.0.2 
| | +-- forever-agent@0.6.1 
| | +-- form-data@2.3.3 
| | | `-- asynckit@0.4.0 
| | +-- har-validator@5.1.3 
| | | +-- ajv@6.10.0 
| | | | +-- fast-deep-equal@2.0.1 
| | | | +-- fast-json-stable-stringify@2.0.0 
| | | | +-- json-schema-traverse@0.4.1 
| | | | `-- uri-js@4.2.2 
| | | |   `-- punycode@2.1.1 
| | | `-- har-schema@2.0.0 
| | +-- http-signature@1.2.0 
| | | +-- assert-plus@1.0.0 
| | | +-- jsprim@1.4.1 
| | | | +-- extsprintf@1.3.0 
| | | | +-- json-schema@0.2.3 
| | | | `-- verror@1.10.0 
| | | `-- sshpk@1.16.1 
| | |   +-- asn1@0.2.4 
| | |   +-- bcrypt-pbkdf@1.0.2 
| | |   +-- dashdash@1.14.1 
| | |   +-- ecc-jsbn@0.1.2 
| | |   +-- getpass@0.1.7 
| | |   +-- jsbn@0.1.1 
| | |   `-- tweetnacl@0.14.5 
| | +-- is-typedarray@1.0.0 
| | +-- isstream@0.1.2 
| | +-- json-stringify-safe@5.0.1 
| | +-- mime-types@2.1.22 
| | | `-- mime-db@1.38.0 
| | +-- oauth-sign@0.9.0 
| | +-- performance-now@2.1.0 
| | +-- tough-cookie@2.4.3 
| | | +-- psl@1.1.31 
| | | `-- punycode@1.4.1 
| | +-- tunnel-agent@0.6.0 
| | `-- uuid@3.3.2 
| `-- tmp@0.0.33 
|   `-- os-tmpdir@1.0.2 
+-- body-parser@1.18.3 
| +-- bytes@3.0.0 
| +-- content-type@1.0.4 
| +-- debug@2.6.9 
| | `-- ms@2.0.0 
| +-- depd@1.1.2 
| +-- http-errors@1.6.3 
| | `-- statuses@1.5.0 
| +-- iconv-lite@0.4.23 
| | `-- safer-buffer@2.1.2 
| +-- on-finished@2.3.0 
| | `-- ee-first@1.1.1 
| +-- qs@6.5.2 
| +-- raw-body@2.3.3 
| | `-- unpipe@1.0.0 
| `-- type-is@1.6.16 
|   `-- media-typer@0.3.0 
+-- cfenv@1.2.2 
| +-- js-yaml@3.13.0 
| | +-- argparse@1.0.10 
| | | `-- sprintf-js@1.0.3 
| | `-- esprima@4.0.1 
| +-- ports@1.1.0 
| `-- underscore@1.9.1 
+-- dotenv@4.0.0 
`-- express@4.16.4 
  +-- accepts@1.3.5 
  | `-- negotiator@0.6.1 
  +-- array-flatten@1.1.1 
  +-- content-disposition@0.5.2 
  +-- cookie@0.3.1 
  +-- cookie-signature@1.0.6 
  +-- debug@2.6.9 
  | `-- ms@2.0.0 
  +-- encodeurl@1.0.2 
  +-- escape-html@1.0.3 
  +-- etag@1.8.1 
  +-- finalhandler@1.1.1 
  | +-- debug@2.6.9 
  | | `-- ms@2.0.0 
  | `-- statuses@1.4.0 
  +-- fresh@0.5.2 
  +-- merge-descriptors@1.0.1 
  +-- methods@1.1.2 
  +-- parseurl@1.3.2 
  +-- path-to-regexp@0.1.7 
  +-- proxy-addr@2.0.4 
  | +-- forwarded@0.1.2 
  | `-- ipaddr.js@1.8.0 
  +-- range-parser@1.2.0 
  +-- safe-buffer@5.1.2 
  +-- send@0.16.2 
  | +-- debug@2.6.9 
  | +-- destroy@1.0.4 
  | +-- mime@1.4.1 
  | +-- ms@2.0.0 
  | `-- statuses@1.4.0 
  +-- serve-static@1.13.2 
  +-- setprototypeof@1.1.0 
  +-- statuses@1.4.0 
  +-- utils-merge@1.0.1 
  `-- vary@1.1.2 

Removing intermediate container 7c8ee1dc7074
 ---> eda2c8116149
Step 6/10 : ENV NODE_ENV production
 ---> Running in c12c38466361
Removing intermediate container c12c38466361
 ---> 0313fd426ca2
Step 7/10 : ENV PORT 8080
 ---> Running in 3fde13e7f41a
Removing intermediate container 3fde13e7f41a
 ---> 32726bb919c1
Step 8/10 : EXPOSE 8080
 ---> Running in 0e8bd872b52b
Removing intermediate container 0e8bd872b52b
 ---> 8d36fa4ce87f
Step 9/10 : WORKDIR "/app"
 ---> Running in fb07bdc60f25
Removing intermediate container fb07bdc60f25
 ---> d9b86f6865af
Step 10/10 : CMD [ "npm", "start" ]
 ---> Running in a14c2a0f4cc4
Removing intermediate container a14c2a0f4cc4
 ---> a94691e4e594
Successfully built a94691e4e594
Successfully tagged dockerdemo:test1

We can see that the images are now available in Docker, including our new image and the node image that we used as our base:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
dockerdemo          test1               5a6307ac6e7e        32 seconds ago      79.9MB
node                6-alpine            7c9d8e1567b1        3 weeks ago         55.6MB

Now let’s run an instance of our new container:

$docker run -d -p 8080:8080 dockerdemo:test1
3cca2823db352b9e8931e3e4ce3882de1099473765f630d139c45d51e8ecbbe4
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
3cca2823db35        dockerdemo:test1    "npm start"         44 seconds ago      Up 43 seconds       0.0.0.0:8080->8080/tcp   hardcore_murdock
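Before opening a browser, you can sanity-check the container from the shell; assuming curl is installed, a request to the mapped port should return an HTTP 200:

$ curl -sI http://localhost:8080 | head -n 1
HTTP/1.1 200 OK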

Now let’s verify with our browser. You should see the following:

[Image: the sample app running locally, viewed in the browser]

2. On to Kubernetes

At this point, you should have a Docker container for your application. The next step in our Kubernetes journey is to deploy this container in a configuration that meets your needs. The key part is to determine what type of deployment and which attributes you’ll need to provide. But first, let’s understand what deploying to Kubernetes actually means.

What’s involved in deploying to Kubernetes?

To deploy to Kubernetes, you’ll need to push your Docker container, with your application in it, to a Docker registry. This can be a public registry, like Docker Hub, or a private Docker registry. Note that most Kubernetes distributions include a private Docker registry as part of their services.

Our goal in using Kubernetes here is to deploy a Kubernetes pod, which is the unit of encapsulation for one or more Docker containers. We define the parameters, attributes, and container information for this pod in a YAML file that we use with the Kubernetes CLI to create the pod. The container information is the Docker registry and container image name that Kubernetes pulls when it builds the pod.

The YAML file also defines the network resources that provide access to your pod. In Kubernetes, the pod only controls the availability of a set of containers; to access a container, you need to create a corresponding network resource that routes traffic to a container in the pod. Imagine you have four containers and a user tries to connect: the user needs to be directed to a container in the pod, and the network service tracks the list and location of containers, because containers are automatically restarted, or killed and recreated.

3. Tying it all together

Pushing to the Container Registry

Now let’s confirm that we still have the Docker image that we created:

$docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
dockerdemo          test1               5a6307ac6e7e        11 days ago         79.9MB

Now we’ll connect to a container registry and push this Docker image to it, because when we deploy to Kubernetes, we’ll need to refer to the registry from which to pull the container image. In this instance, we’ll use the IBM Cloud Container Registry as an example.

First, determine the Container Registry information:

ibmcloud cr namespace-add getnode
Adding namespace 'getnode'...

Successfully added namespace 'getnode'

OK
$ ibmcloud cr info 

Container Registry                us.icr.io   
Container Registry API endpoint   https://us.icr.io/api   

IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local

OK

Next we will log in to the Container Registry:

ibmcloud cr login
Logging in to 'registry.ng.bluemix.net'...
Logged in to 'registry.ng.bluemix.net'.

IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local

Logging in to 'us.icr.io'...
Logged in to 'us.icr.io'.

IBM Cloud Container Registry is adopting new icr.io domain names to align with the rebranding of IBM Cloud for a better user experience. The existing bluemix.net domain names are deprecated, but you can continue to use them for the time being, as an unsupported date will be announced later. For more information about registry domain names, see https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_overview#registry_regions_local

Now we can push our container image into the registry. First, we tag the image locally with the repository and tag metadata that we want the image to hold in the new registry; then we push it:

docker tag dockerdemo:test1  us.icr.io/getnode/dockerdemo:test1
docker push us.icr.io/getnode/dockerdemo:test1
The push refers to repository [us.icr.io/getnode/dockerdemo]
ef27c95b3262: Pushed 
52e3122c6402: Pushed 
76de4fb8c77b: Pushed 
2d0f88d2dd7b: Pushed 
d8b9f4ebf971: Pushed 
4a38d89e6259: Pushed 
d9ff549177a9: Pushed 
test1: digest: sha256:515cf2d0be96071f099f3c36c839039551ece4d91bc23e0d83d41ad68c57cafb size: 1786

Prerequisites for setup

This application uses Cloudant DB as a resource, so you’ll need a Kubernetes cluster and a Cloudant DB instance configured before you deploy; you can set these up by following the IBM Cloud documentation.
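The deployment manifest below reads the Cloudant connection URL from a Kubernetes secret named cloudant with a key of url. Here is a sketch of creating that secret; the URL value is a placeholder for your own Cloudant service credentials:

kubectl create secret generic cloudant \
  --from-literal=url='https://<username>:<password>@<your-cloudant-host>'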

Deploy to Kubernetes

When you’re ready to deploy to Kubernetes, your yaml file will define the following:

  • The Docker registry and container image information used to pull the container(s)
  • The network services to create
  • Attributes that control failure detection and self-healing
  • A Cloudant url secret describing the Cloudant DB resource this application uses

Now let’s return to our original Cloud Foundry application, which is now a Docker container. Our yaml file that defines the deployment to Kubernetes looks like this:

# Update the registry and namespace in the image value below to match your own
apiVersion: apps/v1
kind: Deployment
metadata:
  name: get-started-node
  labels:
    app: get-started-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: get-started-node
  template:
    metadata:
      labels:
        app: get-started-node
    spec:
      containers:
      - name: get-started-node
        image: us.icr.io/getnode/dockerdemo:test1
        ports:
        - containerPort: 8080
        imagePullPolicy: Always
        env:
        - name: CLOUDANT_URL
          valueFrom:
            secretKeyRef:
              name: cloudant
              key: url
              optional: true

With this file, we can now deploy our containerized application to Kubernetes:

kubectl create -f deployment.yaml
deployment.apps/get-started-node created

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
get-started-node-7b45c6cf8b-7zbnd   1/1     Running   0          33s
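You can also confirm that the application started cleanly by checking the pod’s logs, using the pod name from the output above; you should see npm start launching server.js:

kubectl logs get-started-node-7b45c6cf8b-7zbnd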

Now if we ask Kubernetes to describe the pod, we can see the information that we specified in the deployment yaml: the Container Registry image, the Cloudant secret, and the port number:

kubectl describe pod get-started-node-7b45c6cf8b-7zbnd
Name:               get-started-node-7b45c6cf8b-7zbnd
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               10.76.215.122/10.76.215.122
Start Time:         Thu, 11 Apr 2019 23:48:25 -0700
Labels:             app=get-started-node
                    pod-template-hash=7b45c6cf8b
Annotations:        kubernetes.io/psp: ibm-privileged-psp
Status:             Running
IP:                 172.30.151.135
Controlled By:      ReplicaSet/get-started-node-7b45c6cf8b
Containers:
  get-started-node:
    Container ID:   containerd://8e0c577fff5cd40462c0710e0deff0fd9db113aa71d6167434a318ab86663fc5
    Image:          us.icr.io/getnode/dockerdemo:test1
    Image ID:       us.icr.io/getnode/dockerdemo@sha256:515cf2d0be96071f099f3c36c839039551ece4d91bc23e0d83d41ad68c57cafb
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 11 Apr 2019 23:48:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      CLOUDANT_URL:  <set to the key 'url' in secret 'cloudant'>  Optional: true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q2t2d (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-q2t2d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q2t2d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                    Message
  ----    ------     ----  ----                    -------
  Normal  Scheduled  63s   default-scheduler       Successfully assigned default/get-started-node-7b45c6cf8b-7zbnd to 10.76.215.122
  Normal  Pulling    62s   kubelet, 10.76.215.122  pulling image "us.icr.io/getnode/dockerdemo:test1"
  Normal  Pulled     44s   kubelet, 10.76.215.122  Successfully pulled image "us.icr.io/getnode/dockerdemo:test1"
  Normal  Created    44s   kubelet, 10.76.215.122  Created container
  Normal  Started    44s   kubelet, 10.76.215.122  Started container

Now let’s create a network service to access the container:

kubectl expose deployment get-started-node --type NodePort --port 8080 --target-port 8080
service/get-started-node exposed
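The kubectl expose command is shorthand for creating a Service object. An equivalent YAML sketch, matching the labels from our deployment, would look like this:

apiVersion: v1
kind: Service
metadata:
  name: get-started-node
spec:
  type: NodePort
  selector:
    app: get-started-node
  ports:
  - port: 8080
    targetPort: 8080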

Examine the service to determine the exposed port. We can see that port 31936 maps to container port 8080, so 31936 will be our external port:

kubectl get service
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
get-started-node   NodePort    172.21.182.149   <none>        8080:31936/TCP   6m13s
kubernetes         ClusterIP   172.21.0.1       <none>        443/TCP          7h59m
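If you want to script this step instead of reading the table, you can extract the NodePort directly with kubectl’s JSONPath output (a sketch, assuming the service created above):

kubectl get service get-started-node -o jsonpath='{.spec.ports[0].nodePort}'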

Now let’s find the public IP of our worker node so that we can verify our app is running:

ibmcloud cs workers cftokube

Plugin version 0.2.99 is now available. To update run: ibmcloud plugin update container-service -r Bluemix

OK
ID                                                 Public IP        Private IP      Machine Type   State    Status   Zone    Version   
kube-hou02-pa23af55b0d7c84af5b7b942ed48b35717-w1   173.193.112.17   10.76.215.122   free           normal   Ready    hou02   1.12.7_1548   

Our full URL is http://173.193.112.17:31936, the worker node’s public IP plus the NodePort from our service:

[Images: the app running in Kubernetes, displaying the names stored in the database]

Conclusion

We’ve seen how to take an application from Cloud Foundry, build a Docker container, and then deploy it to Kubernetes. The whole process required no change to the application; most of our time was spent customizing the container and the Kubernetes environment that encapsulates it. Starting from Cloud Foundry does not have to entail a great amount of effort to port to a Docker container, and once a Docker container is generated, it is a trivial task to extend that to a Kubernetes environment.

What’s next?

So you built a container and deployed it to Kubernetes. Why not try an alternative way of starting out with Docker by going through the tutorial, “Containerization: Starting with Docker and IBM Cloud”? Or see if you can walk through our code patterns, “Use a Kubernetes cluster to deploy a Fabric network smart contract onto blockchain” and “Deploy and use a multi-framework deep learning platform on Kubernetes.”

Need some more basic Kubernetes tutorials? Then make sure to check out our beginner’s Learning Path to Kubernetes.

Marc Velasco