Manually creating resources to represent deployment targets has always been challenging, especially when there is a large number of middleware instances such as CICS regions and DB2 databases. You have to create tens, if not hundreds, of resources, set properties on each resource to identify the unique attributes of the target subsystem, come up with a structure to group them, and eventually review the model from time to time to make sure it stays up to date. The new z/OS Management Facility plug-in allows you to automatically discover software instances provisioned by z/OS Management Facility or by z/OS Provisioning Toolkit. A collection of resources is created in the resource tree to represent the software instances. The following picture shows an example of the discovery result.
Setup
Follow the setup instructions described in Part 1 to initialize the swarm manager and worker hosts. This basically involves installing Docker Engine 1.13 or higher on each host. You will also need to install a UCD agent on each of the swarm hosts, and the UCD agent processes must run as a user that can execute Docker commands. Finally, load the Docker plugin into your UCD server via Settings->Automation Plugins->Load Plugin.
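For example, here is a minimal sketch of granting Docker access to the agent user (the user name ucdagent is an assumption; substitute the user that runs your agent):

# Add the UCD agent user to the docker group so it can run Docker commands
sudo usermod -aG docker ucdagent
# Restart the agent (or log the user out and back in) for the change to take effect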
Getting Started
Download the example UCD application “DockerSwarmGettingStarted” and import it into your UCD server via Applications->Import Applications. Be sure to click the Import with Snapshots checkbox to import required component version artifacts.
Once the import is complete, open the DockerSwarmGettingStarted application in UCD and create a new environment. Add a Base Resource to the new environment and add the agents for each of the swarm hosts set up above to the Base Resource. Next, add the GettingStarted component to each of the agents. Click the Add Tag icon to add a resource tag named SwarmWorker to each of the GettingStarted components under agents associated with swarm worker nodes.
Click the Add Tag icon to add a resource tag named SwarmManager to the GettingStarted component under the agent associated with the swarm manager node. When you are done, your new environment should look similar to the following.
Now that we have the application environment setup, we will take a look at the application process named CreateDockerSwarmAndDeployStack which creates/initializes the swarm, adds worker nodes to the swarm, and then deploys the GettingStarted application to the swarm. The steps in this process are shown below.
Step 1 is an Install Component step that will execute the Initialize Swarm process from the GettingStarted component to create/initialize the docker swarm. It will only execute on the agent that has the SwarmManager resource tag. Step 2 is a For Each Tag step from the Utility Steps section of the design palette. For each component that has the SwarmWorker tag, step 3 is executed. This will execute the Join Swarm component process on each swarm worker host to join them to the docker swarm created in step 1. Finally, step 4 is executed on the swarm manager host to download the docker-compose file which defines the GettingStarted example application and run a ‘docker stack deploy’ command to deploy the application into the swarm.
Now we will take a closer look at the GettingStarted component processes that are invoked by the CreateDockerSwarmAndDeployStack application process. The Initialize Swarm process is displayed below.
The Initialize Swarm step from the Docker plugin will create/initialize a docker swarm manager. It will also create output properties (swarmWorkerToken, swarmManagerToken, swarmManagerAddress) that are used to join worker/manager nodes to the swarm. The next two steps in this process will save those values as application properties so they can be referenced when joining the worker nodes to the swarm.
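Under the covers, the behavior resembles the following Docker CLI commands (a sketch; the advertise address is a placeholder, and the plugin step performs this for you):

# On the manager host: create/initialize the swarm
docker swarm init --advertise-addr 192.168.99.100
# Print the tokens that the plugin exposes as output properties
docker swarm join-token -q worker
docker swarm join-token -q manager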
The Join Swarm process is displayed below.
The Docker plugin step Join Swarm will use the application properties named swarmManagerAddress and swarmManagerToken to add each worker node to the Docker swarm. The Add Inventory Status step is from the Utility Steps section of the palette. It is used to add an inventory entry for the worker node to the environment to indicate the swarm worker has been created.
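The equivalent Docker CLI command looks something like this (the token and address values are placeholders):

# On each worker host: join the swarm created on the manager
docker swarm join --token SWMTKN-1-xxxxxxxx 192.168.99.100:2377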
Finally, let’s take a look at the Deploy Stack component process which is shown below.
First we’ll use a Download Artifacts step from the Repositories section of the palette to download the docker-compose.yaml file that describes the GettingStarted application. Next we’ll invoke the Docker plugin step named Deploy Stack, specifying the downloaded docker-compose.yaml as input. This will launch the application into the docker swarm.
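A rough CLI equivalent of this step (the stack name getting-started is an assumption):

# On the manager host: deploy the stack defined by the compose file
docker stack deploy -c docker-compose.yaml getting-started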
Now that we have walked through the UrbanCode Deploy application and component processes, let’s launch a deployment into our UCD environment and access our GettingStarted application. Go to the DockerSwarmGettingStarted application, Environments tab, and click on the Request Process button for your environment.
Clear the Only Changed Versions checkbox, select CreateDockerSwarmAndDeployStack from the Process field pull-down. Click Choose Versions and select Latest Available from the Select for All pulldown. Click OK, and Submit.
Once your UCD deployment has completed successfully you can access the GettingStarted application on any of the manager or worker nodes in your swarm. Use a browser to access http://swarm-node:80 where swarm-node is the hostname or IP address of one of your swarm worker nodes or the manager node. You will see output similar to the following.
Each time you access the application the Visits count will increment by one. To see a visual representation of where the different containers of your application are running in your swarm you can access http://swarm-node:8080. You should see something like the following.
I hope this post has given you a taste of what you can do with IBM UrbanCode Deploy and Docker swarm mode. For more information about what you can do with UrbanCode Deploy and containers, check out our new container landing page and let us know what other topics you’d like to see.
IBM UrbanCode Deploy version 6.2.6 is available now and here are some of the updates and new features that we think you’ll find useful:
Core enhancements
Enforce snapshots – You can ensure that snapshots are used for specific environments. When you use this option, only snapshots can be deployed to the affected environments.
View role mappings across teams – Easily identify how user permissions are acquired. Determine whether user permissions are assigned directly or acquired by group membership.
Improved logging and auditing – New features include writing log files to long-term storage.
Improved license management – Track agent usage and identify high levels of use.
Cloud blueprint designer enhancements
New Heat resource types that you can use to create Azure load balancers, VM extensions, availability sets, virtual network peering, and more.
Support for ordering SoftLayer block storage from a blueprint.
Support for Aurora RDS, Application Load Balancers, and Elastic File System (EFS) in Heat templates.
Create Terraform documents in the blueprint designer, and numerous enhancements to the Terraform extensions (provider, provisioner).
New and updated plug-ins
Nexus and Artifactory provide improved support for Maven and NuGet.
SalesForce plug-in adds new steps to validate, test, and “quick deploy.”
ServiceNow plug-in now supports the Jakarta release.
Venafi is a new community plug-in for certificate management.
New capability to discover z/OS service instances.
When you have many cloud resources, or when your blueprint design server and clouds are not in the same region, communication between the server and clouds can slow down. This communication is faster when you cache information about flavors, regions, and images on the cloud discovery service.
Caching is enabled by default on the cloud discovery service. By default, the blueprint designer does not use this cached information; however, now you can enable it. To use the cached information, you configure settings in the cloud discovery service and enable the cache in the blueprint designer system settings. As a result, when you access a cloud provider for the first time, the cloud discovery service caches the resources from that cloud, based on the settings file.
UrbanCode Deploy makes the management and deployment of Kubernetes Helm charts simple. With UrbanCode Deploy, values in your Helm chart can be altered without editing text files by hand. UrbanCode Deploy’s audit tracking keeps a record of who deployed which version of a Helm chart where, and can be used to manage releases across environments. Access control and quality gates can be added to restrict who may deploy certain Helm charts. Charts can even be compared using UrbanCode Deploy, highlighting differences.
In this article, we will walk through the steps used to create the application deployment seen in the video above. This includes:
Creating multiple versions of a simple Helm chart
Designing an UrbanCode Deploy process to deploy the Helm chart to both IBM Cloud Platform (Bluemix) and IBM Cloud Private
Highlighting how values in a Helm chart may be replaced by an UrbanCode Deploy process
Verifying the deployments were successful
Prerequisites
An IBM Cloud Private environment
An IBM Cloud Platform (Bluemix) standard Kubernetes cluster (not a lite cluster)
The Bluemix CLI installed on your UrbanCode Deploy agent machine
The Kubernetes CLI installed on your UrbanCode Deploy agent machine
The Helm client installed on your UrbanCode Deploy agent machine (Note: for ease of use, install the version of Helm which is already being used in your IBM Cloud Private environment. See Setting Up IBM Cloud Private for more details)
UrbanCode Deploy 6.2.2 or later with the Kubernetes plug-in version 11 or later installed
The Helm Hello World App sample application should also be installed into your IBM UrbanCode Deploy environment. It can be found on GitHub here. Included with the application are the Helm Chart component template and a sample component named Helm Hello World.
Helm helps you manage Kubernetes applications: Helm charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish, so start using Helm and stop the copy-and-paste madness.
Please review the Helm site and documentation if you are unfamiliar with Helm.
First, we will create three versions of a Hello World container image. If you’ve already worked through the Kubernetes Blue-Green Deployments Working Example, you may reuse those container images. Alternatively, you may use container images created by the author and stored on Docker Hub; however, there is no guarantee that these will be preserved or maintained. If the author’s container images exist, they may be found here.
If you wish to create your own container images, follow the steps from the Kubernetes Blue-Green Deployments Working Example here.
Note this section of code in our container image’s app.py file:
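The snippet is not reproduced here, but based on the description below it looks something like this sketch (the exact file may differ):

import os
# Read the greeting name from the NAME environment variable,
# defaulting to "world" when it is not set
name = os.getenv("NAME", "world")
html = "<h3>Hello {0}!</h3>".format(name)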
Our web app will display the text Hello followed by the variable name. The variable name is set to the value of the system environment variable NAME. If there is no system environment variable NAME, a default value of world is used. When we create our chart, we will set the environment variable NAME to a value we will receive from UrbanCode Deploy.
Creating a Chart Directory Structure
Choose a directory where you want to store your chart. For example, /home/mra/charts.
On the command line, from your charts folder, run the following command to create a chart directory structure:
helm create mychart
This creates a folder named mychart. The mychart folder contains two folders and two files:
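The layout looks like this (the default files generated inside templates vary by Helm version):

mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/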
Creating Templates
If you worked through the Kubernetes Blue-Green Deployments Working Example, you’ll remember that our application was represented by two yaml files. One described the load balancer (loadbalancer.yaml), while the other described our webpage (webpage.yaml). We will use these files as the basis of our chart’s templates.
In the templates directory, you may find a default service.yaml file created for you. Delete that file, then create a new service.yaml file. For the contents of your service.yaml file, enter:
apiVersion: v1
kind: Service
metadata:
  name: my-load-balancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    color: blue
In the templates directory, you may find a default deployment.yaml file created for you. Delete that file, then create a new deployment.yaml file. For the contents of your deployment.yaml file, enter:
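The original file is not shown here; a sketch consistent with the description below might look like the following (the deployment name, replica count, and apiVersion are assumptions; adjust them for your cluster):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        color: blue
    spec:
      containers:
      - name: hello
        image: amatthew99/ucdbgdemo:v1
        ports:
        - containerPort: 80
        env:
        - name: NAME
          value: "{{ .Values.helloname }}"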
The image name is set to amatthew99/ucdbgdemo:v1. If you are not using the author’s container images, change amatthew99 to your Docker Hub ID.
Under env, note we are now creating an environment variable named NAME and setting it to a value we will get from the values.yaml file.
Updating the Values File
As our deployment.yaml file shows, our chart will set an environment variable NAME to a value specified in the values.yaml file. The NAME environment variable is used by our web application when displaying the greeting on our webpage. If our NAME environment variable is not set, a default value of world is used, so our greeting would appear as Hello world.
In your mychart directory, find the values.yaml file. Insert a line:
helloname: @@@hello.name@@@
For example, the top of your values.yaml might then look something like this (a sketch; the other defaults generated by helm create are omitted):
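replicaCount: 1
helloname: @@@hello.name@@@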
While this would seemingly change our greeting to Hello @@@hello.name@@@, the value @@@hello.name@@@ will be used as a token by UrbanCode Deploy, allowing us to manage the value in UrbanCode Deploy itself.
Updating the UrbanCode Deploy Component to Manage the Chart
There are different ways we can manage our chart versions in UrbanCode Deploy. One example would be pulling our chart from a source control system. For this demo, we will store the chart with the component directly as a component version artifact.
We start by moving our chart into a directory whose name will represent our chart version. Go to your mychart folder and create a folder named 1.0. Next, move the contents of your mychart folder into this folder.
Your mychart/1.0 directory should now contain your charts and templates directories and your Chart.yaml and values.yaml files.
In UrbanCode Deploy, go to your Component page and click on the Helm Hello World component.
Click on the Configuration tab
In the Source Configuration Type field, select File System (Versioned)
In the Base Path field, enter the location of your mychart folder (such as /home/mra/charts/mychart)
Under Default Version Type, select the Import new component versions using a single agent option
In the Agent for Version Imports field, select the agent that resides on the machine where your chart is stored.
Click the Save button
Next, we will import the chart as a component version artifact.
Click on your component’s Versions tab
Click the Import New Versions button.
Refresh until version 1.0 appears
Click on version 1.0 and verify that your chart appears:
Creating a Second Version of your Helm Chart
We now have version 1.0 of our helm chart stored in UrbanCode Deploy. Next, we will create version 2.0 of our chart.
Version 2.0 of our chart will simply use version v2 of our container image. In your mychart/1.0/templates folder, edit the deployment.yaml file, changing the version of our image from v1 to v2:
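The edited line should now read (substituting your own Docker Hub ID if you built your own images):

image: amatthew99/ucdbgdemo:v2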
Save your changes.
Rename your mychart/1.0 folder to mychart/2.0
In UrbanCode Deploy, go to the component you created. Click on its Versions tab, click on the Import New Versions button, and refresh until version 2.0 appears.
You now have two different versions of your chart stored as component version artifacts inside UrbanCode Deploy.
How would a user know what has changed between version 1.0 and 2.0 of our chart? UrbanCode Deploy includes a compare feature that will highlight differences between component versions. Click the Compare link for version 2.0 of our chart:
From the Version drop-down, select 1.0 as the version to compare version 2.0 against. Click the Submit button. We are alerted that there is a difference between our templates/deployment.yaml files:
Click the Compare link to generate a visual representation of the differences in the files.
We can see that version 2.0 of our chart is using version v2 of our container image, while version 1.0 of our chart is using version v1 of our container image.
Setting up IBM Cloud Private
There are two pieces to Helm. One is the Helm client, which you may have already installed as part of this guide’s prerequisites. The second part is the Helm server, called Tiller. Tiller is installed on each Kubernetes cluster which uses Helm.
An installation of IBM Cloud Private may be thought of as a single Kubernetes cluster. IBM Cloud Private comes with Tiller already installed. Therefore, there is no need to manually install Tiller on IBM Cloud Private.
You may run into conflicts if your Helm client is at a newer version than the Tiller instance you are connecting to. Therefore, it is suggested that the version of the Helm client you install match, or be older than, the version of Tiller running in your IBM Cloud Private environment.
To determine which version of Tiller IBM Cloud Private is using, in the IBM Cloud Private console go to Workloads, then Deployments. Find the deployment named tiller-deploy and click on it.
Scroll down to the Pods section, then click on the tiller-deploy pod.
Click on the Containers tab.
The Tiller version should be displayed in the IMAGE column:
In the example above, the version of Tiller being used by IBM Cloud Private is 2.6.0.
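You can also check the versions from the command line once your Kubernetes client is configured to talk to the cluster (covered in the next section):

helm version

The output reports both the Client and the Server (Tiller) SemVer values, so you can confirm they are compatible.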
Setting up the Kubernetes Config File for IBM Cloud Private
For our demo, we will update the default .kube/config file so it may be used with IBM Cloud Private.
First, verify you are working with the default .kube/config file. The $KUBECONFIG environment variable is used to override the default .kube/config file. Check to see if the $KUBECONFIG environment variable is set by running the echo $KUBECONFIG command. If it returns a value, set $KUBECONFIG to your .kube/config file. For example, export KUBECONFIG=/root/.kube/config
The simplest way to update the .kube/config file is to:
Go to the IBM Cloud Private console.
Click on the user name in the upper right corner
Click Configure Client
Copy the provided commands
Paste the provided commands into your terminal and press enter
This will work; however, the provided token used in the set-credentials command will eventually expire (usually after 12 hours), which is not ideal when building an automated process.
Fortunately, you may acquire a token from IBM Cloud Private that will not expire. For details, see Option 2 on this page of the IBM Cloud Private documentation.
Following the documentation, perform these steps:
Copy the Configure Client commands from the IBM Cloud Private dashboard and paste them into your client. After the series of commands run, you should see a message stating that the context has been switched to cfc
Run the command kubectl get secret --namespace=default, which will return something like this:
Note the entry with the name that begins with default-token- (in the example above it is default-token-wllrq) with the type set to kubernetes.io/service-account-token. We need to get the details of this service account token. Do so by running the following command, changing default-token-XXXXX to the name of your token.
kubectl get secret default-token-XXXXX --namespace=default -o yaml
Information about the service token is returned, including an entry titled token. This is the token we can use, which does not expire; however, it is base64 encoded. Decode the token with the command:
echo [token value] | base64 -d
The returned value is the token we want to use.
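If you prefer a single command, something like the following should also work (default-token-XXXXX is again the placeholder for your token name):

kubectl get secret default-token-XXXXX --namespace=default -o jsonpath='{.data.token}' | base64 -d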
Return to the IBM Cloud Private console and copy the Configure Client commands again. Paste them in a text editor, then update the set-credentials line to use the token we just decoded.
Copy and paste these commands in your terminal and press Enter
Your .kube/config file has been updated for use with IBM Cloud Private utilizing a service token which does not expire.
Setting up the IBM Cloud Platform (Bluemix) Kubernetes Cluster
Create a standard Kubernetes cluster in IBM Cloud Platform (Bluemix). For detailed instructions, see this section of the Kubernetes Blue-Green Deployments Working Example document.
The Kubernetes cluster does not contain Tiller by default, so we will need to install it.
From the terminal, log in to Bluemix.
Next, run the bx cs init command to initialize the Container Service plug-in.
Run the bx cs cluster-config [cluster name] command to download the configuration for your cluster.
The result of that command shows a line starting with export KUBECONFIG=. Copy that line, paste it in the terminal, and press enter to set the KUBECONFIG environment variable. Helm uses this environment variable to determine which cluster it is working with.
Now that Helm knows which Kubernetes cluster to use, we may install Tiller onto the cluster by running the command:
helm init
Tiller is now installed on your IBM Cloud Platform (Bluemix) Kubernetes Cluster.
Setting up the Kubernetes Config File for IBM Cloud Platform (Bluemix)
For IBM Cloud Private, we updated the default .kube/config file, allowing us to use that file when working with IBM Cloud Private. IBM Cloud Platform (Bluemix) works differently: it downloads a unique configuration file for each Kubernetes cluster.
By default, this file contains a token that expires after 30 days of inactivity.
For this demonstration, our process will login to Bluemix and download a new configuration file each time the process is run. This ensures the token will be valid.
It is possible to download a configuration file which does not expire, allowing you to avoid these additional steps. To do so, from the terminal, log in to Bluemix with the bx login command. Next, run the bx cs init command to initialize the Container Service plug-in. Finally, run the command:
bx cs cluster-config clusterName --admin
Setting up the UrbanCode Deploy Application
In UrbanCode Deploy, click on the Resources tab, then click the Create Top-Level Group button.
Enter a name for your top-level group, such as Helm Hello World Chart.
From the Actions button next to your Helm Hello World Chart group, select Add Group. Name this new group DEV and click the Save button.
From the Actions button next to your Helm Hello World Chart group, select Add Group. Name this new group PROD and click the Save button.
Add your agent to both your DEV and PROD groups.
Add your Helm Hello World component to both of your agents.
When finished, your resource tree should look something like this:
Click on the Applications tab in UrbanCode Deploy, then click on the Helm Hello World App application you previously imported. You should see something like this:
Click on the Dev – Bluemix environment. Click the Add Base Resources button. Check the box next to your DEV group, and click the OK button. You should now see something like this:
Go back to your Helm Hello World App application. Click on the Prod – IBM Cloud Private environment. Click the Add Base Resources button. Check the box next to your PROD group, and click the OK button.
Updating Environment Properties
Next, let’s update our environment properties for both environments.
Go to your Helm Hello World App application and click on the Dev – Bluemix environment. Click on the Configuration tab, then click Environment Properties.
You should see four environment properties:
cluster.name – This is the name of our Bluemix Kubernetes cluster. Update the value of this property to match the name of your cluster.
hello.name – This property is used by our web application to greet visitors. If we change the value to John Doe, our web page will display a Hello John Doe! greeting. We will discuss this value later.
kube.context – Name of the context in the Kubernetes config file to use. Bluemix uses the cluster name as the context name in its configuration file, so leave the value ${p.cluster.name}.
kubeconfig – Location and name of the Kubernetes config file to use. For our demonstration, the automated process will connect to Bluemix and download a new configuration file. The process will then update this value. For now, we may leave it blank.
Go to your Helm Hello World App application and click on the Prod – IBM Cloud Private environment. Click on the Configuration tab, then click Environment Properties.
You should see three environment properties:
hello.name – This property is used by our web application to greet visitors. If we change the value to John Doe, our web page will display a Hello John Doe! greeting. We will discuss this value later.
kube.context – Name of the context in the Kubernetes config file to use. The configure client commands we copied from IBM Cloud Private created a context named cfc, so the value here should be set to cfc.
kubeconfig – Location and name of the Kubernetes config file to use. For our demonstration, our IBM Cloud Private environment is using the default .kube/config file. Update this field to point to the location of this file if it is not stored at /root/.kube/config.
Notice that our IBM Cloud Private environment did not include a cluster.name property. This is because the cluster.name property is only used for IBM Cloud Platform (Bluemix) specific steps in our process.
Walkthrough of the Deployment Process
In this section, we will examine the Deploy Helm Chart component process, explain how it works, and make some required updates.
In UrbanCode Deploy, click on Components, click on your Helm Hello World component, click on the Processes tab, then click on the process named Deploy Helm Chart to view the process.
Clean Working Directory
This step simply deletes all files in the agent’s working directory to ensure we are starting with an empty directory.
Download Artifacts
This step downloads our Helm chart to the working directory. Note the Directory Offset property has been set to /helm_chart. This means our chart will be downloaded to a folder named helm_chart in the working directory. This ensures that, when the process runs helm commands, we point it to a directory that contains only the helm chart.
Replace Tokens
Recall we edited the values.yaml file of our chart to insert a variable named helloname with a value of @@@hello.name@@@. The Replace Tokens step is where UrbanCode Deploy will replace the token @@@hello.name@@@ with a value. Let’s take a closer look at the properties for this step:
The Includes Files field is set to **/*. This means all files in and under the working directory will be inspected for tokens. The Start Token Delimiter and End Token Delimiter fields are both set to @@@. UrbanCode Deploy will look for these characters to signify the presence of a token. The Property List field contains property names which will be used as tokens. We need to have a property named hello.name somewhere in this property list in order to have @@@hello.name@@@ replaced with a value.
So where did we set the hello.name property? It could be set in several different places or even provided in a properties file. For this demo, hello.name is set as an environment property. Let’s go back to our UrbanCode Deploy application Helm Hello World App. Click on the Dev – Bluemix environment, then click the Configuration tab, then click on Environment Properties.
You’ll see the hello.name property has been added here. Its value is set to yet another property, ${p:environment.name}. This property will resolve to the current UrbanCode Deploy environment’s name. In this case, the environment’s name is Dev – Bluemix. When we deploy our web application to Bluemix, we should expect to see the greeting Hello Dev – Bluemix.
The same property and same value have been set in our Prod – IBM Cloud Private environment. We should see the greeting Hello Prod – IBM Cloud Private when we deploy to our production environment.
Back to the Replace Tokens step, the Property List field includes the value ${p:environment/allProperties}, meaning all environment properties will be included in this list.
In summary, the Replace Tokens step will replace @@@hello.name@@@ in the values.yaml file with the value of our environment property hello.name.
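For example, assuming hello.name resolves to Dev – Bluemix, the line in values.yaml changes like this:

Before: helloname: @@@hello.name@@@
After: helloname: Dev – Bluemix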
Check Environment
Our process will now go down one of two paths depending on which environment we are deploying to. If we are running in our Dev – Bluemix environment, we will proceed to the Connect to Bluemix step. If we are running in our Prod – IBM Cloud Private environment, we will proceed to the Get Environment Properties step.
Connect to Bluemix
This step runs a shell script to connect to Bluemix. Edit the script of this step and update it with your credentials. If using an API key to connect to Bluemix, your script may look something like this:
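A minimal sketch, assuming you log in with an API key (the endpoint and key are placeholders):

#!/bin/bash
# Log in to Bluemix non-interactively using an API key
bx login -a https://api.ng.bluemix.net --apikey YOUR_API_KEY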
Get Kube Config File – IBM Cloud
This step runs the bx cs init command, followed by the bx cs cluster-config [cluster name] command, to download a Kubernetes config file for use with your Bluemix cluster. This step will generate an output property named ${p:Get Kube Config File – IBM Cloud/kubeconfig} containing the location and name of the downloaded Kubernetes config file.
Create Environment Property
This step updates the kubeconfig environment property’s value to the location and name of the Kubernetes config file downloaded in the previous step.
Get Environment Properties
Since the Create Environment Property step may have updated the value of the kubeconfig environment property, we must run the Get Environment Properties step to ensure we use updated environment properties going forward.
Use Context
This step sets the current context of the Kubernetes config file. Helm commands use the current context to determine which cluster they are applying commands to. Note that the Global Flags field (under Show Hidden Properties) specifies which Kubernetes config file to use.
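For the IBM Cloud Private environment, the step is roughly equivalent to running the following (using our kube.context and kubeconfig environment property values; for the Bluemix environment the context name is the cluster name):

kubectl config use-context cfc --kubeconfig=/root/.kube/config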
Helm Init
At this point of our process, our Kubernetes command line interface has been configured to work with either IBM Cloud Platform (Bluemix), or IBM Cloud Private. We now need to initialize Helm. The Helm Init step will do just that. Let’s look at the step’s properties (including the hidden properties).
By default, running the helm init command installs Tiller onto the Kubernetes cluster. Since IBM Cloud Private already has Tiller installed, and since we already installed Tiller onto our Bluemix cluster, we do not want to install Tiller again. To avoid doing so, we set the following flag when running the command:
--client-only
Also note the Kube Config File property. This field specifies which Kubernetes config file to use if not using the default. Here, we are setting it to our kubeconfig environment property (and do so for subsequent Helm steps).
Helm Does Release Exist
An installed instance of a chart is called a release. One chart may be installed multiple times on a cluster, resulting in multiple releases running on a single cluster, each based off the same chart. Each release may be managed independently. For example, if you have five releases of a chart running in a cluster, three of them may be upgraded to a new version of your chart, while two remain at an older version.
Each of our component resources we placed in the resource tree represents a release. In other words, our Helm Hello World component manages our chart, and each Helm Hello World component resource manages a release.
If we click on one of our Helm Hello World component resources, then click on the Configuration tab, we should see the following:
Note the role property named helm.release. This property was created by our Helm Chart component template. The value of the helm.release role property is currently empty because a release has not yet been created. Once a release is created, this property will be populated with the name of our release, allowing us to manage the release going forward.
Going back to our step, if you look at its properties, the Release property is set to our component resource’s helm.release property.
This step performs a simple check to determine if the release exists or not and sets an output property with that result.
Does Release Exist
This step takes the result of the Helm Does Release Exist step and sends us down one of two paths, depending on the result.
Helm Install
If the release does not exist, we run the helm install command to install the chart. Let’s look at the properties of this step.
While there are multiple ways to specify which chart to install, one way is to specify the directory which contains your chart files. That’s what we do here in the Chart property (remember the Download Artifacts step downloaded our chart to this directory).
Under hidden properties, the Server URL and Resource Id fields are used when UrbanCode Deploy updates the helm.release component resource role property. Their default values should be left as is.
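The step effectively runs something like the following (a sketch; the plugin also records the generated release name into helm.release):

helm install ./helm_chart

When no --name flag is supplied, Helm generates a release name for you.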
Helm Upgrade
If the release already exists, we want to perform an upgrade. This step is similar to the Helm Install step, but contains a property named Release. The Release property is set to ${p:helm.release} which resolves to the component resource role property.
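A rough CLI equivalent (the release name is illustrative):

helm upgrade pruning-kangaroo ./helm_chart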
Deployment and Verification
It’s time to deploy our chart!
Deploy Version 1.0 to Bluemix
Go to your Helm Hello World App application in UrbanCode Deploy
Click the Request Process button next to our Dev – Bluemix environment
Uncheck the Only Changed Versions checkbox
In the Process field, select Deploy Helm Chart.
Click the Choose Versions link
Under Versions to Deploy, click the Add… link, select 1.0, then click the OK button.
Click the Submit button to begin the deployment
Once the deployment completes, let’s verify the component resource role property was updated with the name of our release. In UrbanCode Deploy, go to the Resource page, then find your Helm Hello World component resource under your DEV group and click on it. Click on the Configuration tab and verify the helm.release property now has a value (by default, Helm generates a release name).
In the example above, the release is named pruning-kangaroo.
Let’s next verify our web application is running and the greeting it displays is correct.
From a terminal, follow these steps to work with your Bluemix Kubernetes cluster (a combined sketch appears after the list):
Log in with a bx login command
Run the bx cs init command to initialize the Container Service plug-in
Run the bx cs cluster-config [cluster name] command to download a Kubernetes config file to use with your cluster
Copy the returned export KUBECONFIG=… line and paste it in the terminal to set the KUBECONFIG environment variable
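A combined sketch of these steps (the endpoint, API key, cluster name, and config path are placeholders):

bx login -a https://api.ng.bluemix.net --apikey YOUR_API_KEY
bx cs init
bx cs cluster-config mycluster
# Paste the export line printed by the previous command, for example:
export KUBECONFIG=/home/user/.bluemix/plugins/container-service/clusters/mycluster/kube-config-xxx.yml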
Let’s now check to see if our release has truly been deployed. Run the command helm list and verify the expected release name is displayed.
To visit our web application, we need to know which port it is running on. Run the kubectl get services command.
You should see an entry named my-load-balancer. Under Ports, you should see that port 80 is mapped to a certain port (in the example above, it has been mapped to port 30828). Note your port number.
We next need to get the IP address of one of our cluster worker nodes. In the IBM Bluemix console, go to your cluster and click on Worker Nodes.
Note the public IP address of one of your worker nodes. Use this public IP address and the port number you noted earlier to build your URL. In the example above, the URL would be http://169.48.204.217:30828. Go to the URL in a browser and verify the greeting says Hello Dev – Bluemix and the version is listed as v1.
Deploy Version 1.0 to IBM Cloud Private
Version 1.0 of the chart looks good on Bluemix and is ready to be deployed to the IBM Cloud Private production environment.
Go to your Helm Hello World App application in UrbanCode Deploy.
Click the Request Process button next to our Prod – IBM Cloud Private environment
Uncheck the Only Changed Versions checkbox.
In the Process field, select Deploy Helm Chart
Click the Choose Versions link
Under Versions to Deploy, click the Add… link, select 1.0, then click the OK button.
Click the Submit button to begin the deployment
Let’s verify our application is running correctly on IBM Cloud Private. We first need to configure our Kubernetes client to work with IBM Cloud Private. From the IBM Cloud Private console, click on your user name in the upper right corner, click Configure Client, copy the series of commands, paste them into your terminal, and press Enter.
Run the helm list command to list your releases.
In the example above, the release name is washing-garfish.
Run the kubectl get services command to once again get the port number used by my-load-balancer.
In the example above, the port is 31406.
We need to once again get the IP address of a worker node. In the IBM Cloud Private console, go to Platform, then Nodes. Note the IP address of one of the worker nodes.
Use this to once again build your URL. In the example above, the URL could be http://9.42.23.45:31406.
Go to the URL in a browser and verify the greeting says Hello Prod – IBM Cloud Private and the version is v1.
Deploy Version 2.0 to Bluemix
We have made updates to our chart and want to deploy the latest version to Bluemix. Simply follow the same steps we used before when deploying to Bluemix, only choose version 2.0 of our chart.
When the deployment is complete, refresh our Bluemix URL and verify the version is now v2. It may take a few seconds for Kubernetes to update all your nodes.
Deploy Version 2.0 to IBM Cloud Private
Simply follow the steps we used previously to deploy to IBM Cloud Private, but deploy version 2.0 of our chart. Once the deployment is complete, refresh your IBM Cloud Private URL to verify version 2.0 has been deployed.
Conclusion
UrbanCode Deploy can make the management and deployment of Kubernetes Helm charts simple. With UrbanCode Deploy, values in your Helm chart can be altered without editing text files by hand. UrbanCode Deploy’s audit tracking keeps a record of who deployed which version of a Helm chart where. Access control and quality gates can be added to restrict who may deploy certain Helm charts. Charts can even be compared using UrbanCode Deploy, highlighting differences.
Hopefully, this document will inspire you to experiment deploying your Helm charts using UrbanCode Deploy. Comments are always appreciated below this article. Happy Helming!
With the new ServiceNow integration, Continuous Release can manage ServiceNow change requests. By adding ServiceNow-type tasks to a deployment plan, you can create ServiceNow change requests and manage them through their entire lifecycle.
A ServiceNow task can perform any of these actions:
Create actions create a change request and set any request properties. You can use the ID returned by ServiceNow with other tasks that affect the change request.
Wait actions respond to changes in a ServiceNow change request. For example, a task might wait for the Approval field to change to Approved.
Update actions can modify any request property, including state. For example, you might update a change request’s state to Closed.
A typical deployment plan might contain a task that creates a change request, another task that waits for activity from ServiceNow, and several others that update and close the change request.
A ServiceNow task can use the default types (normal, emergency, and standard) or custom types, such as Expedited. ServiceNow tasks also support Continuous Release properties. A single task can reference both Continuous Release and ServiceNow properties.
ServiceNow tasks work with Internet-accessible instances of ServiceNow, such as my_instance.service-now.com. ServiceNow tasks work with the Jakarta release of ServiceNow and reference the current API version.