Introduction

The next generation of Dedicated solutions

Every IBM Cloud Service deals with a customer’s intellectual property (IP) – data, code, AI models, configuration, metadata, etc. Services generally ingest forms of this IP, operate on it, transform it in a meaningful way, and store the data and/or the results. Enterprise customers rightfully need to understand the security and geo-locality aspects of these processes as their IP flows through a service instance.

The model adopted by the current IBM Cloud Dedicated offering to address these concerns is “fully managed and physically isolated in a target data-centre with Vyatta-based network isolation – including customer specific service control planes”.

Going forward, IBM Cloud Services will offer new isolation features for the deployment and operation of the service instances, such as encryption of data at rest and in motion, virtual or physical compute isolation, network isolation with customer specific subnets/endpoints, etc.

In short, as customers look to utilize the many Services and related plans that IBM Cloud offers, data isolation, privacy and security are key factors in the decision-making process. IBM Cloud Services are designed in a way that gives greater flexibility and control to the end-user when it comes to building ‘dedicated’ solutions: leveraging the same self-service portal as the one supporting the Public Cloud service offerings, the customer is able to create one or more dedicated environments each supporting a combination of Services that are created and integrated with each other. While creating each of the needed services for the ‘dedicated’ environment, the customer simply needs to elect the appropriate Service flavour (or plan) that will guarantee the necessary degree of data segregation, data encryption and performance.

Secure Perimeter

In order to create this isolated solution, an isolated ‘network perimeter’ must be established. We will refer to this ‘network perimeter’ as the “Secure Perimeter” or “SP”. These perimeters reside behind Vyatta gateway devices that the end user owns and manages within their IBM Cloud infrastructure account. To assist the customer with the creation and management of these SPs within their own IBM Cloud infrastructure account, IBM has developed supporting automation and documentation that the customer can leverage as a set of best practices.

Dedicated Solution patterns

With the SP created, the end user can now utilize the “Secure Perimeter Segment” or “SPS” to deploy applications that are secure behind their gateways. These applications can be deployed to communicate with other applications or services running behind the gateway or they can connect to, for example, IBM Services running on the public Cloud infrastructure or to another external endpoint. Once the ‘dedicated environment’ communication backbone is established, the next logical step is to start deploying Services and Customer applications on this.

By creating and integrating multiple IBM Services that support the ‘Dedicated’ criteria of data segregation, isolation, encryption etc., it is possible to establish ‘solution patterns’. This document captures the key patterns that can help the Customer to quickly and efficiently build their own end-to-end ‘dedicated solutions’, using the generally available Public IBM Cloud Console.

Ded-Pat-01 Pattern – “Dedicated Workload using Public and Dedicated Services, via Public connectivity”

Overview

This pattern demonstrates the usage of the following features:

In order to support this solution pattern, this blog documents the process of deploying a sample application – Bluechatter (git repository) – which runs within the SPS on an IBM Cloud Container Service cluster and connects to an IBM Compose for Redis database hosted on an IBM Compose Enterprise cluster. This example highlights how an IBM Service – Compose Enterprise – can be deployed to a private isolated cluster of dedicated physical machines and be accessed from an application running on a Kubernetes cluster that is itself behind an SPS. Additional services are currently available that can also be deployed either within the SPS or on dedicated hardware; refer to the Additional Services section below, which describes them.

The concepts of an “SP” and an “SPS” are shown below.


The SP comprises the Vyatta Gateway with additional firewall rules, and the SPS comprises a pair of public/private VLANs on the SP which are isolated from any other SPS.

With the SPS in place, an IBM Cloud Container Service cluster can be deployed on the SPS VLANs. The Kubernetes worker(s) belonging to the deployed cluster are now behind the SP and therefore isolated from the rest of the customer’s infrastructure. An application can be spun up on the IBM Cloud Container Service cluster, and this application can connect to any valid endpoint, be it within the SPS or external.

An option available to the customer application is to connect to a deployed IBM Service.

To help demonstrate the concepts being discussed here, let’s deploy an application to an IBM Cloud Container Service cluster, running on an SPS. This application will need a Compose for Redis Database to connect to. In this example the Compose for Redis Database will be deployed from the catalog to the IBM Compose Enterprise platform. Deploying the Compose for Redis Database from the catalog will provide us with the endpoint information the application requires.

The application itself – Bluechatter – is a simple chat/IRC type application for your browser which allows multiple users to chat when online at the same time.

The following prerequisites must be in place. Listed with each prerequisite is a link to the documentation for installing/configuring the prerequisite.

With the prerequisites in place:

  • An IBM Cloud Container Service cluster will be deployed
  • Compose Enterprise will be deployed and, once available, a Compose for Redis database will be deployed
  • The Bluechatter application will be configured and deployed to use the Compose for Redis Database

Prerequisite Verification

Before the deployment of the services and Bluechatter application can be performed, the prerequisites need to be verified.

IBM Cloud CLI Verification

To verify that the IBM Cloud CLI is installed, launch a terminal window and type:

bx login

This should prompt you to login to an API endpoint. Select the appropriate endpoint and login with your email address and password. If you need to login using an API key, use the following syntax:

bx login -a api.<region>.bluemix.net --apikey <api key>

Kubectl Verification

To verify that kubectl is installed type:

kubectl version --short

This should return the version information for the client. The server information is not returned because the IBM Cloud Container Service cluster has not been created yet.

Container Registry and Namespace Verification

To verify that the container registry plug-in is installed type:

bx plugin list

This should list the “container-registry” plug-in.

To verify a namespace is configured, type:

bx cr namespace-list

See the documentation here if you need to add a namespace.

Git Verification

To verify that git is installed type:

git --version

It should return the version of git.

Docker Verification

To verify that docker is installed and running, type:

docker ps

This should return a list of running containers, which should be empty unless a container has been deployed.

Deploy IBM Cloud Container Service Cluster

An IBM Cloud Container Service cluster can be deployed from either the IBM Cloud catalog or the IBM Cloud CLI. As we will be using VLANs created as part of the SPS, the catalog will be used to demonstrate the process, since it makes selecting the VLANs associated with the SPS more straightforward. The equivalent command line is shown for reference at the end of this section.

Login and navigate to the IBM Cloud catalog page and search for “kube” – click the “Containers in Kubernetes Clusters” panel. On the “Kubernetes Cluster” page, click “Create”. You must select the “Standard” offering, as the “Free” offering does not support the selection of datacenter location and VLANs, which are required to place the cluster within the SPS.

Give the cluster a unique name and ensure the correct datacenter is selected. Select the stable/default Kubernetes version, the preferred hardware isolation and the preferred machine type. Select the number of required worker nodes. For the purpose of this deployment, a single worker node is sufficient; if the cluster will be used for additional containers, size the number of workers accordingly. For the private and public VLANs, select the VLANs that were created and associated with the Vyatta during the creation of the SP. Select whether to encrypt the disk, then click “Create Cluster”. Wait for the cluster to complete provisioning and for the worker node(s) to report a status of “Ready”.

Note: The subnets that are created with the deployment of the IBM Cloud Container Service cluster must be added to the SP white-list. To do so, follow the section “Kubernetes Deployment” as documented in Setup a Secure Perimeter in IBM Cloud.

To work with the cluster from the command line:

List the cluster(s) you created:

bx cs clusters

Get the context file for the cluster so you can work with the cluster via kubectl:

bx cs cluster-config <cluster name>

Export the cluster context for use with kubectl:

export KUBECONFIG=<path to config>

Note: The cluster-config command’s output will display the complete export command to run.

To create an IBM Cloud Container Service cluster from the CLI similar to the cluster created above, use the following syntax:
bx cs cluster-create --name <name> --location <datacenter> --public-vlan <public VLAN ID> --private-vlan <private VLAN ID> --workers 1 --machine-type u2c.2x4 --hardware dedicated

Note: the Secure Perimeter also supports clusters with a private-only interface. The public access required to create these clusters is provided via the Vyattas.

To create an IBM Cloud Container Service cluster from the CLI that has only a private interface, use the following syntax:

bx cs cluster-create --name <name> --location <datacenter> --private-vlan <private VLAN ID> --workers 1 --machine-type u2c.2x4 --hardware dedicated

 

To determine the VLAN ID that corresponds to the SPS VLANs use:

bx cs vlans <location>
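The command lists the VLAN IDs that map to the SPS VLANs. The private VLAN ID can be picked out of a captured copy of the output with a little shell; a sketch (the IDs, VLAN numbers and router names below are hypothetical, and the live column layout may differ slightly):

```shell
# Sample `bx cs vlans <location>` output captured into a variable
# (all values hypothetical). The Name column is empty here, so the
# Type value ("private"/"public") lands in field 3.
sample='ID        Name   Number   Type      Router
2234947          1764     private   bcr01a.dal10
2234945          1296     public    fcr01a.dal10'

# Print the ID (first field) of the private VLAN row.
private_vlan_id=$(echo "$sample" | awk '$3 == "private" {print $1}')
echo "$private_vlan_id"
```

In practice, pipe the live `bx cs vlans <location>` output instead of the captured sample.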

Implementation

Deploy ‘Compose for Redis Database’ on Compose Enterprise

As mentioned in the overview section above, the Compose Enterprise service will be used to host the Redis Database. First the Compose Enterprise service needs to be ordered and deployed. Once that service is available the Redis Database can be deployed into it. The Compose Enterprise Service provides a private isolated cluster of dedicated physical machines for IBM Cloud users to optionally provision their Compose databases into. This provides the security and isolation required by enterprise compliance and uses dedicated networking to ensure the performance of the deployed databases. After the cluster is online, any space within this organization may deploy a ‘Compose for IBM Cloud’ database into it.

Deploy Compose Enterprise

To deploy the Compose Enterprise service, navigate to IBM Cloud → Catalog → Search for “Compose Enterprise” and click the Compose Enterprise panel.
In the Compose Enterprise order form, specify a Service name and select the correct region, organization and space. Add the contact details as required. Optionally provide a name for the provisioned cluster that the service will be deployed into. Clicking “Create” triggers the order process, which can take a number of days to complete. Once the Compose Enterprise cluster is available, a Compose for Redis database can be deployed.

Bluechatter deployment

The Bluechatter application is available from the following git repository: https://github.com/IBM-Cloud/bluechatter
This application can be deployed following the steps included in the README, but that would deploy both the application and a Redis database service within the same Kubernetes cluster. With some minor modifications to the code, the Compose for Redis database that has been deployed to the IBM Compose Enterprise cluster can be used instead. This demonstrates how an application running within the SPS can communicate with a service that is external to the SPS.

First, let’s clone the git repo and change directories to the bluechatter directory:

git clone https://github.com/IBM-Cloud/bluechatter.git
cd bluechatter

 

Update the manifest.yaml file:
Change “redis-chatter” to the “Service name” recorded above for the Compose for Redis Database. This change needs to be done in two places.
Save the file.
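For reference, the edited fragment of manifest.yaml might then resemble the following sketch (the service name compose-redis-bluechatter, label and plan are hypothetical examples – use the Service name you recorded when ordering the database):

```yaml
# manifest.yaml fragment (sketch): the bound service name appears in
# two places – the declared-services block and the app's services list.
declared-services:
  compose-redis-bluechatter:
    label: compose-for-redis
    plan: Enterprise
applications:
- name: bluechatter
  services:
  - compose-redis-bluechatter
```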

Build the docker image and push it to your namespace (if you have not already authenticated to the registry, run bx cr login first):

docker build -t registry.ng.bluemix.net/<namespace>/bluechatter_app:latest .
docker push registry.ng.bluemix.net/<namespace>/bluechatter_app:latest
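The image reference follows the pattern registry/namespace/repository:tag; capturing it in a variable avoids typos between the build and push steps (the namespace value below is hypothetical):

```shell
# Compose the image reference once and reuse it for build and push
# (the namespace value is a hypothetical placeholder).
REGISTRY=registry.ng.bluemix.net
NAMESPACE=mynamespace
IMAGE="$REGISTRY/$NAMESPACE/bluechatter_app:latest"
echo "$IMAGE"

# Then: docker build -t "$IMAGE" . && docker push "$IMAGE"
```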

Automated deployment using Terraform

The following software prerequisites must be in place. Listed with each prerequisite is a link to the documentation for installing/configuring the prerequisite.

  • terraform – Install Terraform
  • python Version 2.7.10 – Install Python
    • The following python modules are required
      • requests==2.18.4 -> pip install requests==2.18.4; pip install 'requests[security]'
      • SoftLayer==5.4.2 -> pip install SoftLayer==5.4.2
      • pyOpenSSL==17.5.0 -> pip install pyOpenSSL==17.5.0
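The same pins can be kept in a requirements.txt file and installed in one step with pip install -r requirements.txt:

```text
requests[security]==2.18.4
SoftLayer==5.4.2
pyOpenSSL==17.5.0
```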

Install version 0.8.0 of ibm-cloud-provider:

mkdir -p /home/terraform_providers/terraform-provider-ibm

cd /home/terraform_providers/terraform-provider-ibm

 

Depending on whether you are using macOS or Linux, download the correct binary:

LINUX:

wget https://github.com/IBM-Cloud/terraform-provider-ibm/releases/download/v0.8.0/linux_amd64.zip

unzip linux_amd64.zip

MAC:

wget https://github.com/IBM-Cloud/terraform-provider-ibm/releases/download/v0.8.0/darwin_amd64.zip

unzip darwin_amd64.zip

 

Add the following line to the Terraform CLI configuration file:

vi $HOME/.terraformrc

providers {
    ibm = "/home/terraform_providers/terraform-provider-ibm/terraform-provider-ibm"
}

 

To deploy Bluechatter using Terraform, pull down the GitHub repository for Secure Perimeter:

mkdir -p $HOME/SecurePerimeter
cd $HOME/SecurePerimeter
git clone https://github.com/IBM/secure-perimeter.git
cd secure-perimeter/BlueChatter_demo

 

Init terraform to verify everything is working and ready:

$ terraform init

  Initializing provider plugins...
  - Checking for available provider plugins on https://releases.hashicorp.com...
  - Downloading plugin for provider "kubernetes" (1.1.0)...

  The following providers do not have any version constraints in configuration,
  so the latest version was installed.

  To prevent automatic upgrades to new major versions that may contain breaking
  changes, it is recommended to add version = "..." constraints to the
  corresponding provider blocks in configuration, with the constraint strings
  suggested below.

  * provider.kubernetes: version = "~> 1.1"

  Terraform has been successfully initialized!

  You may now begin working with Terraform. Try running "terraform plan" to see
  any changes that are required for your infrastructure. All Terraform commands
  should now work.

  If you ever set or change modules or backend configuration for Terraform,
  rerun this command to reinitialize your working directory. If you forget, other
  commands will detect it and remind you to do so if necessary.

 

The file variables.tf is the configuration file for this deployment. To configure the deployment, either populate variables.tf with default values or run `terraform apply` and fill in the parameters when prompted.
The Terraform deployment takes eight parameters:

  • bluemix_api_key – The IBM Cloud platform API key.
  • docker_image – The URL of the Bluechatter docker image that was built earlier in this guide.
  • kube_cluster_name_id – The name or ID of the Kubernetes cluster to deploy the Bluechatter pod to.
  • compose_cluster_id – The ID of the Compose Enterprise cluster. You can get this by navigating to the Dashboard –> Data & Analytics –> the service name you gave your Compose Enterprise deployment –> Instance Administration API –> Cluster ID.
  • expose_on_port – The node port used to access Bluechatter; defaults to 30089.
  • org – The org to deploy the Compose for Redis service into.
  • space – The org space.
  • region – The region where your Compose and Kubernetes clusters are deployed.
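Rather than answering prompts on each run, the same eight values can be supplied in a terraform.tfvars file alongside variables.tf; a sketch (every value below is a hypothetical placeholder):

```hcl
# terraform.tfvars (sketch; all values are placeholders)
bluemix_api_key      = "xxxxxxxxxxxxxxxx"
docker_image         = "registry.ng.bluemix.net/mynamespace/bluechatter_app:latest"
kube_cluster_name_id = "sps-cluster"
compose_cluster_id   = "5abc1234de"
expose_on_port       = 30089
org                  = "myorg"
space                = "dev"
region               = "us-south"
```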

Run the Terraform apply process:

terraform apply

 

Enter the required parameters and enter ‘yes’. Terraform will begin provisioning resources to stand up Bluechatter, including deploying a Compose for Redis instance and automatically linking that service to the Bluechatter pod.

Once the process is finished Terraform should return an output similar to the following:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

bluechatter_url = 169.48.108.115:30089

 

bluechatter_url is the worker IP and node port of the Bluechatter instance. Enter this into your browser and you should be met by the welcome page of the Bluechatter app.

You can also verify the various resources have been created by running the following commands through the CLI:

Verify the Compose for Redis instance has been created:

$ bx cf services
Invoking 'cf services'...

name                                service               plan         bound apps   last operation
bluechatter_redis                   compose-for-redis     Standard                  create succeeded

 

Verify the Bluechatter pod has been created:

$ kubectl get pods
NAME                READY     STATUS             RESTARTS   AGE
bluechatter-hp2h7   1/1       Running            0          2m

 

Verify a node port has been created:

$ kubectl get services
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)           AGE
web       NodePort       172.21.99.117   <none>          80:30089/TCP      2m

 

Manual deployment

This section covers the manual deployment of Bluechatter without using Terraform.

Deploy Compose for Redis Database

Navigate to IBM Cloud → Catalog → Search for “Redis” and select the “Compose for Redis” panel. Scroll down to the end of the order form and select “Enterprise” from the “Pricing Plans”.  Doing this enables the “Select Compose Enterprise cluster for deployment” drop-down and allows you to select your Enterprise cluster. Select your Enterprise cluster and update the “Service name” as required. Click “Create”. Record the “Service Name” as this will be used in the “manifest.yaml” file shortly.

Wait for the provisioning to complete.

We now have an instance of Compose for Redis running on IBM Compose Enterprise and we have the connection details that the Bluechatter application will require. Next, we will configure and deploy the Bluechatter application.

Update the kubernetes.yml file:
Delete the first two blocks of code entirely; these blocks define a Redis deployment and a Redis service (roughly 35 lines in total). The file will then start with the following code block:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web

Next, update the image to use your namespace:
image: registry.ng.bluemix.net/<namespace>/bluechatter_app
Save the file.
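After those edits, the top of kubernetes.yml should resemble the following sketch (only the fields discussed in this guide are shown; the real file contains more, and <namespace> remains a placeholder):

```yaml
# kubernetes.yml (fragment): Redis deployment/service blocks removed,
# image pointed at your registry namespace (<namespace> is a placeholder).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
      - name: web
        image: registry.ng.bluemix.net/<namespace>/bluechatter_app
```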

Finally, deploy the application:
kubectl create -f kubernetes.yml

Wait a few minutes for your application to be deployed. You can monitor the progress using:
kubectl get pods

This will show the status of the web application deploying. Proceed once the status is “Running”.

When the application has deployed successfully, retrieve the public IP of your cluster workers:
bx cs workers <your-cluster>

Retrieve the port assigned to your application:
kubectl get services

The output should be similar to:

NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
web       NodePort       172.21.99.117   <none>          80:30089/TCP   2m

You should be able to access the application via:
http://worker-public-ip-address:port
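The node port can be pulled out of the kubectl output with a little shell; a sketch run against a captured sample line (in practice, pipe the live `kubectl get services` output; the service name web matches the output above):

```shell
# Parse the NodePort (e.g. 30089) out of "80:30089/TCP" in the PORT(S)
# column of the `web` service line. The sample mirrors the output above.
sample='web       NodePort       172.21.99.117   <none>          80:30089/TCP   2m'
port=$(echo "$sample" | awk '$1 == "web" {split($5, a, /[:\/]/); print a[2]}')
echo "http://worker-public-ip-address:$port"
```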

Congratulations on deploying an application within an SPS that communicates with the public endpoint of a service deployed on the IBM Compose Enterprise service.

Additional Services

There are a number of additional services that can be deployed, either within the SPS or on dedicated hardware.
These services currently include:

Service | Isolation | More Information
Bare metal server | Dedicated Hardware | Getting started with Bare Metal Servers
Compose Enterprise | Dedicated Cluster | About Compose Enterprise
Cloudant | Dedicated Hardware | Getting started with Cloudant
Log Analysis | Public Multi-tenant service | Getting started with Log Analysis
Monitoring | Public Multi-tenant service | Getting started with IBM Cloud Monitoring
Public Virtual Server | Public Multi-tenant service with SPS Network Isolation | Getting started with Virtual Servers
Dedicated Virtual Server | Single-tenant service with SPS Network Isolation ¹ | Getting started with Virtual Servers
DB2 (formerly DashDB) | Dedicated Instance/Dedicated Hardware | Getting started with DB2

Notes: ¹ Requires a Dedicated Host to run the dedicated VSI on – see Provisioning dedicated hosts and instances.

IBM Cloud Dedicated customer

If you are an IBM Cloud Dedicated customer and want to use Kubernetes applications integrated with Cloud Foundry applications and services in your Dedicated environment, see Kubernetes and Cloud Foundry integration in IBM Cloud Dedicated.
