Cloud-native development grows up

When you book an airline ticket, apply for a new passport, or access your insurance documents or bank account, you’re typically relying on software built by enterprise developers in corporate labs and cities around the world.

Enterprise developers are busy people. Customers’ expectations are higher than they’ve ever been for fast, mobile, and secure access to information. We already accept that microservices and cloud-based solutions offer the only truly flexible, scalable future for the enterprise. Yet only 20% of enterprise software has moved to the cloud.

What is preventing more companies from moving to the cloud? Enterprise developers are often pressed for time and must learn on the job. Even if they have time, it can be difficult to know where to start with cloud-native development. The technology is constantly evolving, and opinions on the best tools and approaches vary even within small organizations.

In this blog post, we introduce you to new cloud-native products and open source projects from IBM that simplify your journey to the cloud.

Cloud Pak for Applications

Cloud Pak for Applications aims to clear some of the mystery around cloud-native development by:

  • Bundling best-in-class libraries and frameworks for developing secure, fast, and scalable solutions
  • Providing customizable, consistent access to frameworks approved by an organization

The image below shows the underlying technology that is included in Cloud Pak for Applications:

Cloud Pak Architecture

Cloud Pak for Applications contains a few big components, one of which is Kabanero – a collection of cloud-native tools and libraries that we think are essential for cloud-native development.

We’re introducing a new collection of code patterns, articles, and tutorials that gently introduce the concepts of Kabanero within Cloud Pak for Apps as a smart, disciplined, and consistent approach to creating cloud-native applications in the enterprise.

Because Kabanero is a core component of our cloud-development offering, let’s take a closer look at the underlying technology.

Kabanero development technologies

Kabanero is the open source foundational layer of Cloud Pak for Applications. Kabanero itself is made up of accepted, best-in-class cloud technologies, all of which are open source. You can see a graphical representation of many of the important technologies below:

Kabanero Architecture

One of the special ingredients in Kabanero is Appsody, which uses technology stacks and templates to create a disciplined and consistent approach to developing apps within an enterprise organization.

Our approach to creating developer resources around Kabanero and Cloud Pak for Apps is to focus on workflows that use the cloud DevOps components, providing tutorials around them and code patterns that can be cloned and explored as reference models. In our first collection of developer resources, we’re sharing an Appsody code pattern that walks through the basics of creating an application with two microservices – presentation and business logic – as well as digging into approaches for using Appsody in your own projects.

Building with Appsody for consistent results

Appsody is an open source project that simplifies and controls cloud-native application development. Appsody’s primary component is a stack, which builds a pre-configured Docker image that developers can immediately use to create applications in a cloud environment. Appsody allows stack builders to decide which parts of the users’ resulting application images are fixed (a set of technology choices and configurations defined by the stack image) and which parts stack users can modify/extend (templates).

One way to think about Appsody is that it can give developers the advantages of a Platform as a Service (PaaS) environment (in terms of not having to worry about installing and configuring the underlying technology components), while allowing architects the flexibility to define those technology components using Docker images.

Appsody stacks

An Appsody stack represents a pre-configured set of technologies aimed at simplifying the building of a particular type of cloud-native application. This might include a particular runtime environment (for example, Node.js, or perhaps Python with Flask), combined with integrated choices for monitoring, logging, and so on. Stacks are published in stack repositories, which can be either public or private to an enterprise. Developers can then use the Appsody CLI to pull in the appropriate stack for the application they are building. Kabanero contains all the tools for using and contributing to public stack repositories, as well as a set of curated stacks suitable for the enterprise.
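As a sketch of what that looks like from the command line (assuming the public Appsody stack repository and its Node.js Express stack; the project name is just an example):

$ appsody repo list             # see which stack repositories you can pull from
$ appsody list                  # see the stacks available in those repositories
$ mkdir my-service && cd my-service
$ appsody init nodejs-express   # pull the stack and lay down its starter template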

Appsody goes even further than simplifying the use of pre-configured technologies. It enables developers to create and test applications within a local containerized environment from the start using Rapid Local Development Mode. After those initial tests are run, developers can then deploy the final application to cloud-based testing and production clusters. Developing in containers from the start reduces the likelihood of subtle problems being introduced when containerization is added late in the development process.
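Continuing the hypothetical project above, that workflow might look like this:

$ appsody run     # build and run the app in a local container; code changes are reflected immediately
$ appsody test    # run the stack's tests inside the container
$ appsody deploy  # build a production image and deploy it to your Kubernetes cluster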

Appsody templates

Appsody stacks come with one or more templates. A template represents a starter application using that stack and comes ready to be run and deployed. Developers can modify the template to build out their application.

The following image shows how a developer uses Appsody to pull down and modify a stack, build it, and then deploy it to a remote Kubernetes cluster.

Appsody Architecture

The above flow shows a manual deployment to a Kubernetes cluster. In more production-oriented environments, GitOps might trigger the build and deploy steps, and Tekton Pipelines would drive the deployment. Kabanero Collections, which is part of Cloud Pak for Applications, brings together Appsody stacks, GitOps, and Tekton Pipelines to provide an enterprise-ready solution for grown-up cloud-native application development and deployment.

Ready to start?

Now that you understand the technology that underlies IBM Cloud Pak for Applications, you’re ready to start exploring the content that we’ve created. We’ve selected two different paths to help you get started with Cloud Pak for Applications:

Anton McConville
Henry Nash
Brad Topol

Applications are moving to the cloud. It’s time for developer tools to move, too

The cloud developer landscape is changing rapidly. Every day, there are new tools, new patterns, new technologies, and new frameworks for developers to learn and use. In cloud-native development, architectural patterns like microservices require developers to rethink how they build applications. Testing environments are more complex. Meeting requirements for consistency in production environments, and even the basic setup and configuration of developer environments, can be time-consuming. Developers need better tools to keep up with this quickly changing landscape.

That’s why we’ve joined a new working group at the Eclipse Foundation — the Eclipse Cloud Development Tools Working Group — whose goal is to accelerate the creation of those cloud-based developer tools. This is a vendor-neutral working group with members from a broad set of companies who work together to define standards for creating and using cloud-based developer tools.

We are working together to:

  • Define de facto standards to drive broad adoption of cloud IDEs and container-based development tools
  • Enable an ecosystem for extenders and developer tool providers via these standards
  • Integrate with key enablers for cloud native development, CI, and test automation

Why do standards matter?

While standards may sound counter to rapid innovation, they are key enablers of extensibility and interoperability. De facto standards are emerging for cloud-based tools in workspace definitions, extensions for language support, tracing, and debugging. Our working group focuses on getting developers to adopt these standards, which in turn will make cloud-based developer tools interoperable with other cloud technologies. I believe that once we establish standards for cloud development tools, a marketplace ecosystem for extensions will follow, which in turn benefits users and our customers.

Cloud native is a new way for developers to think

Developers are always trying to develop applications faster. Cloud-native tools, running in the cloud, will give developers new capabilities that exploit cloud capabilities from the very start of the development process. In turn, this lets developers test, build, monitor, and deploy applications faster in an environment that mirrors their production systems. This high-fidelity development environment will boost productivity, so developers can focus on their work and innovate faster.

Some use cases where I can see how cloud-native developer tools will speed and improve development include:

  • Simpler setup and installation of development dependencies
  • Accessible, easy-to-use tools for A/B testing, always-on monitoring, and testing experimental aspects of development
  • Browser-based development to lower the barriers of entry for developers working in the cloud

It is hard to overstate how much this will improve the way developers get started and quickly create, test, monitor, and deploy applications.

An example of cloud-native tools that we’ll champion in this group

One of the Eclipse projects that I’m excited to see championed through this new working group is Eclipse Codewind. This tool is an IDE extension that bundles performance and monitoring tools and enables you to develop in containers within your own IDE. You can make changes to your apps using the extension and instantly see how those changes perform in your development cluster. Tools like Codewind will help you develop better-performing, less error-prone applications faster than ever.

The working group is just getting started, and there are a lot of great things we are going to accomplish. The participants come from leading companies, and their developers work on many exciting projects at Eclipse, so working together on standards will benefit all of our companies.

Get involved

If you are interested in promoting interoperable tools that run in the cloud, standards that allow those tools to be extended into any cloud, and an ecosystem to support the adoption of those standards and cloud-native hosted tools, view our Charter and ECD Working Group Participation Agreement (WGPA), or join the ECD Tools mailing list.

If you’re a developer who wants to enhance cloud-native development tools, check out the projects at the Eclipse Foundation. I’d say that for cloud tools, as well as other projects, there are a bunch of great projects doing innovative things in open source at Eclipse. It’s a great way to work, and a great group of developers driving key innovations.

John Duimovich

Istio 1.3 is out: Here’s what it means for you

Companies with large, monolithic applications are increasingly breaking these unwieldy apps into smaller, containerized microservices. Microservices are popular because they offer agility, speed, and flexibility, but they can be complex, which can be a hurdle for adoption. And having multiple microservices, rather than a single monolith, can increase the attack surface of the app.

Istio gives control back to app developers and mesh operators. Specifically, Istio is an open source service mesh platform that ensures that microservices are connecting to each other in a prescribed way while handling failures. With Istio, it’s easier to observe what is happening across an entire network of microservices, secure communication between services, and ensure that policies are enforced.

A new release of Istio, version 1.3, makes using the service mesh platform even easier.

Istio 1.3 improves usability

Because Istio has so many features, it was more complex than other open source service mesh projects I’ve tried. If we were going to accomplish our goal of making Istio the preferred service mesh implementation, we had to make it easier for developers to use.

Specifically, we had to simplify the process for developers to move microservices to the Istio service mesh, regardless of whether they wanted to leverage security, traffic management, or telemetry first. We created a User Experience Work Group that engaged with the community to improve Istio’s user experience. Through community collaboration across many work groups and the Envoy community, I’m excited to see these changes in Istio 1.3:

  • All inbound traffic is captured by default. There is no need to declare containerPort in your Kubernetes deployment YAML to indicate the inbound ports you want your Envoy sidecar to capture.
  • A single add-to-mesh command in the CLI adds existing services to Istio mesh regardless of whether the service runs in Kubernetes or a virtual machine.
  • A describe command allows developers to check whether a pod and its associated services meet Istio’s requirements, and to view any Istio-associated configuration.
  • Automatic protocol detection is implemented and enabled by default for outbound traffic, but disabled for inbound traffic to allow us to stabilize this feature. In v1.3, you still need to modify your Kubernetes service YAML to name the service port (or prefix its name) with the protocol, but I expect this requirement to be eliminated in a future release.

Refer to Istio 1.3’s release blog and release notes for more details about the release.

Istio 1.3 in action

A little over a year ago, I tried to move the popular Kubernetes guestbook example into the Istio mesh. It took a few days because I didn’t follow the documentation closely and discovered the proper documentation only after I finished. Injecting a sidecar didn’t cause me any problems; I was tripped up by the list of requirements for pods and services.

Using this same example, let’s see how Istio 1.3 simplifies the process.

I had already deployed the upstream guestbook sample in my Kubernetes cluster on IBM Cloud (1.14.6) by using the guestbook-all-in-one.yaml file. I uncommented line 108 to use type: LoadBalancer for the front-end service:

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-69859f6796-4nj7p      1/1     Running   0          49s
frontend-69859f6796-772sw      1/1     Running   0          49s
frontend-69859f6796-n67w7      1/1     Running   0          49s
redis-master-596696dd4-ckcj4   1/1     Running   0          49s
redis-slave-96685cfdb-8cfm2    1/1     Running   0          49s
redis-slave-96685cfdb-hwpxq    1/1     Running   0          49s

I used the quick start guide to install Istio 1.3. The community is making progress in reducing the number of custom resource definitions (CRDs), and we are down to 23. We will continue to reduce the number of CRDs based on which Istio features users install.

Once installed, I used the add-to-mesh command to add the frontend service to the Istio mesh:

$ istioctl x add-to-mesh service frontend
deployment frontend.default updated successfully with Istio sidecar injected.
Next Step: Add related labels to the deployment to align with Istio's requirement: https://istio.io/docs/setup/kubernetes/additional-setup/requirements/

As a result, my frontend pods had the sidecar injected and running:

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-69577cb555-9th62      2/2     Running   0          25m
frontend-69577cb555-pctrg      2/2     Running   0          25m
frontend-69577cb555-rvjk4      2/2     Running   0          25m
redis-master-596696dd4-dzf29   1/1     Running   0          26m
redis-slave-96685cfdb-2h789    1/1     Running   0          26m
redis-slave-96685cfdb-7sp6p    1/1     Running   0          26m

I described one of the frontend pods to see if the pod and the associated frontend service met the requirements for Istio:

$ istioctl x describe pod frontend-69577cb555-9th62
Pod: frontend-69577cb555-9th62
   Pod Ports: 80 (php-redis), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: frontend
   Port:  80/UnsupportedProtocol
   80 is named "" which does not follow Istio conventions
Pilot reports that pod is PERMISSIVE (enforces HTTP/mTLS) and clients speak HTTP

The output clearly told me what was missing! I needed to name the port for its protocol, which is HTTP in this case. The version label for telemetry is optional for me because the frontend service has only one version.

I edited the frontend service using kubectl.

$ kubectl edit service frontend

I added a line to name the service port, using name: http, and saved the change:

…
  ports:
  - nodePort: 30167
    port: 80
    protocol: TCP
    targetPort: 80
    name: http
…
service/frontend edited

I repeated the same command – istioctl x add-to-mesh service redis-master and istioctl x add-to-mesh service redis-slave – to add redis-master and redis-slave to the mesh. For Redis, I just used the default TCP protocol, so there was no need to name the port.

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-b8595c9f-gp7vx        2/2     Running   0          51m
frontend-b8595c9f-pr5nb        2/2     Running   0          51m
frontend-b8595c9f-q7fx9        2/2     Running   0          50m
redis-master-5589dc575-bqmtb   2/2     Running   0          51m
redis-slave-546f8d974c-gq4sn   2/2     Running   0          50m
redis-slave-546f8d974c-qjjwl   2/2     Running   0          50m

I added the frontend, redis-master, and redis-slave services to the mesh! I visited the guestbook app using the load balancer IP:

$ export GUESTBOOK_IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl $GUESTBOOK_IP

I used the istioctl dashboard grafana command to launch Grafana, which served up the guestbook service metrics.

Grafana metrics

From there, I used istioctl dashboard jaeger to launch Jaeger, which gave me the guestbook traces for each guestbook request. Note that they are individual trace spans, not correlated. I expected the individual trace spans because I would need to propagate the trace headers for multiple trace spans to be tied to each individual request.

Jaeger traces

After that, I used istioctl dashboard kiali to launch Kiali, which allowed me to visualize the guestbook app.

Kiali service graph

I was able to observe the microservices at the app layer with Grafana, Jaeger, or Kiali, without needing to modify the service code or rebuild the guestbook container images! Depending on your business needs, you can secure your microservices, build resilience into them, or shift traffic as you develop newer versions of them.

I believe the community is committed to making Istio easier to use as developers move their microservices into or out of the mesh. Live-editing the service will not be necessary once the intelligent protocol detection feature is enabled by default for inbound traffic, which should happen soon.

Istio’s open source strength

Istio’s open architecture and ecosystem combine to make the technology effective. There is no vendor lock-in limiting the types of services you can use with it. Plus, Istio can run in any cloud model — public, private, on-premises, or a hybrid cloud model.

Istio was founded by IBM, Google, and Lyft in 2017, and a thriving community has grown around it to include other partners, such as Cisco, Red Hat, Pivotal, Tigera, Tetrate, and more. With over 400 contributors from over 300 companies at the time of this writing, the Istio project benefits from an active, diverse community and skill set.

Istio and IBM

IBM has been involved with Istio since before it was released to the public, donating our Amalgam8 project into Istio. As the second-largest contributor to the open project, IBMers now sit on the Istio Steering Committee and the Technical Oversight Committee, and co-lead the Environment, Build/Test Release, Performance and Scalability, User Experience, and Docs workgroups.

Why all the interest in Istio? Development teams looking to scale quickly need tools that free them up to innovate and that simplify how they build and deploy apps across environments – and that’s why IBM continues to invest in Istio.

Multiple cloud providers offer a managed Istio experience to simplify the install and maintenance of the Istio control plane. For example, IBM Cloud has a managed Istio offering that enables you to install Istio with a single action.

We know that our clients need to modernize their apps and app infrastructures, and we believe that Istio is a critical technology to help them do this safely, securely, and — with the new changes in the 1.3 release — easily.

Get involved with Istio

Because Istio is open source, we rely on an active community of developers to help improve, secure, and scale the tech. Here are a few ways for you to get involved:

Lin Sun is a Senior Technical Staff Member and Master Inventor at IBM. She is a maintainer on the Istio project and also serves on the Istio Steering Committee and Technical Oversight Committee.

Lin Sun

Microsurvival Part 4: On to Kubernetes

Note: This blog post is part of a series.

Hey Appy! I’m glad you’re back. Before we get started, I wanted to share with you a poem I’ve been working on.

Ops teams were on their knees.

Ops teams were in tears.

But Kubernetes came in sight.

Kubernetes became their guide.

So, what’d you think? Why are you so interested in the ceiling all of a sudden? Hmm, I’ll take that as a sign to continue working on my poetry writing skills.

Anyway, our last conversation covered container technologies. Now, I think it’s time to dive into Kubernetes, don’t you think?

What is Kubernetes?

Where do I begin? Let’s start with the meaning of Kubernetes: it’s a Greek word that means helmsman. Now, why do we need it? As applications like you grow and the number of components increases, the difficulty of configuring, managing, and running the whole system smoothly also increases. And since humans have always tried to automate difficult, repetitive tasks, Kubernetes was created.

Kubernetes is one of the many container orchestrators out there that run and manage containers. What does a container orchestrator do? It helps the operations team automatically monitor, scale, and reschedule containerized applications inside a cluster in the event of hardware failure. It enables containerized applications to run on any number of computer nodes as if all those nodes were a single, huge computer. That makes it a whole lot easier for both developers and operations teams to develop, deploy, and manage their applications. Your parents, Dev and Ops, would surely agree with this magic.

Kubernetes architecture

Next is the Kubernetes architecture. A Kubernetes cluster is a bunch of master nodes and worker nodes. A master node manages the worker nodes. You can have one master node, or more than one if you want to provide high availability. The master nodes provide many cluster-wide system management services for the worker nodes, and the worker nodes handle our workload. However, we won’t be interacting with the worker nodes a lot (not directly, at least). To set your desired state for the cluster, you create objects using the Kubernetes API. We can use kubectl, the Kubernetes command-line interface, to do that.
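For example, a typical kubectl session might look like this (my-app.yaml is a stand-in for whatever object definitions you write):

$ kubectl get nodes              # list the nodes in the cluster
$ kubectl apply -f my-app.yaml   # hand your desired state to the API server
$ kubectl get pods               # watch the cluster converge on that state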

Here, let me draw you a picture:

image

The master node consists of basically four things:

  1. etcd is a data store. The declarative model is stored in etcd as objects. For example, if we say we want five instances of a certain container, that request is stored in the data store.
  2. Kubernetes controller manager watches the changes requested through the API server and attempts to move the current state of the cluster towards the desired state.
  3. Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and more.
  4. Kubernetes scheduler takes charge of scheduling pods on nodes. It needs to consider a lot of information, including resource requirements, hardware/software constraints, and many other things.

Each worker node has two main processes running on it:

  • kubelet is something like a node manager. The master node talks to the worker nodes through kubelet. The master node tells kubelet what to do, and then kubelet tells the Pods what to do.
  • kube-proxy is a network proxy that reflects Kubernetes networking services on each node. When a request comes from outside of the cluster, the kube-proxy routes that request to the specific pod needed, and the pod runs the request on the container.

Now this picture has more details, but you can see how everything works together:

image

We use Kubernetes API objects to describe how our cluster should be, what applications to run in it, which container images to use, and how many of them should be running.

Get to know the following basic Kubernetes objects:

  • Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. The containers of an application run inside these pods.
  • Services solve a communication problem: pods are mortal, created and destroyed dynamically when scaling, so for pods to reach each other reliably we need services. A service is an abstraction that defines a logical set of pods and how to access them.
  • Volumes are an abstraction that solves two problems. The first is that all the files inside a container are lost when it crashes; the kubelet restarts the container, but it is a new container with a clean state. The second is that two containers running in the same pod often need to share files.
  • Namespaces let you create multiple virtual clusters (called namespaces) backed by the same physical cluster. You use them in huge clusters with many users belonging to multiple teams or projects.

Now, pay attention to the controllers, which build upon the basic objects to give us more control over the cluster and provide additional capabilities:

  • A ReplicaSet ensures that a set number of replica pods is running at any given time.
  • A Deployment lets you define a desired state, and the deployment controller changes the current state to the desired state at a controlled rate (sketched below).
  • A DaemonSet ensures that all (or some) nodes run the specified pod. When more nodes are added, pods are added to them.
  • A Job creates one or more pods, and once they complete successfully, the Job is marked as complete.
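To make that concrete, here’s a minimal sketch of a deployment object that asks for the five instances from the earlier etcd example (the my-app name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                        # the desired state: five pod replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: yourname/my-app:1.0   # placeholder image
        ports:
        - containerPort: 3000

You would hand this to the cluster with kubectl apply -f my-app.yaml, and the controllers take care of the rest.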

Okay, I know that was a lot of information. But, guess what? This is far from enough to get a full idea of Kubernetes! However, it is enough to get you off to a good start. Don’t you agree, Appy?

Hey! Are you dozing off? Wow, maybe lectures aren’t your thing. Let’s try a hands-on approach.

Lucky for you, there are several labs that can help you understand the core concepts of Kubernetes that I just described. These labs only require you to have an IBM Cloud account. You can then create a free Kubernetes cluster to play with.

I hope you tell your parents, Dev and Ops, what you have learned so far from our meeting today and from the labs after you complete them. I am sure both of them will be happy to hear about it. Don’t forget to deliver my regards as well.

Have a great adventure, Appy! I know you’re off to a good start.



A previous version of this post was published on Medium.

Amro Moustafa

Microsurvival Part 3: Hello Docker

Note: This blog post is part of a series.

Hello, Developer! I am so glad you could join me today. I’m extremely happy that your child Appy told you about what we discussed, but even happier that you suggested meeting today. I know you want the best for Appy, to look good and get along with others. We’ve been talking about an appropriate and safe environment for Appy, especially as Appy continues to grow and mature.

So, you’re trying to work with Docker and need some tips, correct? Well, I am more than happy to help. It’s pretty easy. I see you have a Windows laptop like me, so you can follow along just fine!

First, let’s start by installing Docker. You will need to follow the instructions specified for your operating system. Make sure your version of Windows is compatible.

Now that you have Docker, you can run Docker commands using the Docker executable.

Because it’s common to start developing with a “Hello World!” program, let’s run a “Hello World!” container. Try running the command docker run busybox echo "Hello world", and you should see similar output:

> docker run busybox echo "Hello world"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
90e01955edcd: Pull complete
Digest: sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
Status: Downloaded newer image for busybox:latest
Hello world

Allow me to explain what running this command did for us. Docker first searched for the image on your local machine but couldn’t find it, so it pulled the image from Docker Hub, Docker’s public registry of ready-made container images. The image we pulled is a BusyBox image, which combines tiny UNIX tools into a single executable. Docker then created an isolated container based on the image and, because we specified one, executed the echo command inside it. We downloaded and ran a full application without installing it or any of its dependencies, all with one command. Fascinating, don’t you agree?

What’s a docker run command?

Now, let me elaborate a bit more on the docker run command. This command runs existing images or pulls images from the Docker Hub registry. These images are software packages that get updated often, so there is more than one version of each image. Docker allows multiple versions of an image with the same name, but each version must have a unique tag. If you run docker run <image> without a tag, Docker assumes you are looking for the latest version of the image – the one with the latest tag. To specify a particular version, simply add the tag: docker run <image>:<tag>.
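For example (1.31 is just an illustrative tag; check the image’s page on Docker Hub for the tags that actually exist):

> docker run busybox echo "Hello world"        # implicitly runs busybox:latest
> docker run busybox:1.31 echo "Hello world"   # pins the 1.31 version of the image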

You might want to list your images using docker images to check the images created, their tags (or versions), creation dates, and sizes. After you run it, you should see output similar to the following example:

> docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
busybox       latest   59788edf1f3e   8 weeks ago   1.15MB

You can also use the docker container list command to list the running containers. If you run it right now, you probably won’t get any containers listed, because the container is no longer running. But if you add the -a or --all flag, both running and stopped containers are displayed, in output similar to this example:

> docker container list -a
CONTAINER ID   IMAGE     COMMAND                CREATED             ...
47130c55f730   busybox   "echo 'Hello world'"   About an hour ago   ...

(Some of the details are omitted and replaced by ...)

Do you find the command docker container list a bit long? If so, there is an alternative command, docker ps, with the same function. You can optionally add the -a flag to show the stopped containers as well.

Since the container shows up as a stopped container, you can start it up again by using the docker start <container ID> command. And, you can stop a running container by using the command docker stop <container ID>.

Create a Docker image

Now that you know how to run a new container using an image from the Docker Hub registry, let’s make our own Docker image. The image we create will consist mainly of two things: the application you want to run and the Dockerfile that Docker reads to automatically build an image for that application. The Dockerfile is a document that contains all the commands a user could call on the command line to assemble an image. Let’s start with a simple Node.js application, and name it app.js. Feel free to customize the name if you’d like.

const http = require('http');
const os = require('os');

const server = http.createServer(function(req, res) {
  res.end("Hostname is " + os.hostname() + "\n");
});

server.listen(3000);

As you can see in this code sample, we are just starting an HTTP server on port 3000 that responds with “Hostname is (the hostname of the server host)” to every request. Make a directory, name it as you like, and save the app code inside it. Make sure no other files are present in that directory.

Now that we’ve created an application, it’s time to create our Dockerfile. Create a file called Dockerfile, copy and paste the content from the following code sample into that file, and then save it in the same directory as your app code.

FROM node:8
COPY app.js /app.js
CMD ["node", "app.js"]

Each instruction in the Dockerfile has a meaning. FROM designates which parent image you are using as a base for the image you are building. It is always better to choose a proper base image. We could have written FROM ubuntu, but using a general-purpose image for running a Node application is unnecessary, because it increases the image overhead. In general, the smaller the better.

Instead, we used the specialized official Node runtime environment as a parent image. Another thing to note is that we specified the version with the tag FROM node:8 instead of using the default latest tag. Why? The latest tag results in a different base image being used when a new version is released, and your build may break. I prefer to take this precaution.

We also used COPY <src> <dest>, which copies new files or directories from <src> and adds them to the file system of the container at the path <dest>. Another Dockerfile instruction with a similar function to COPY is ADD. However, COPY is preferred because it is simpler. You can use ADD for some unique functions, like downloading external resources or extracting .tar files into the image. You can explore that option further by checking out the Docker documentation.

Lastly, you can use the CMD instruction to specify the command that runs the application contained in your image. The command, in this case, is node app.js.

There are other instructions that can be included in the Dockerfile, and reviewing them briefly now could prove helpful later on. The RUN instruction, for example, lets you run commands to set up your application, such as installing packages; an example is RUN npm install. And you can expose a specific port to the world outside the container by using EXPOSE <port>.

Before writing all these commands, you should know something essential about Dockerfiles: every command you write in the Dockerfile creates a layer, and each layer is cached and reused. Invalidating the cache of a single layer invalidates all the layers that come after it; for example, changing a command in the Dockerfile invalidates the cache from that point onward. Something to note is that Docker likes to keep layers immutable. So, if you add a file in one layer and remove it in the next, the image still contains that file in the first layer; it’s just that the container no longer has access to it.

Keep two things in mind. First, the fewer layers in a Dockerfile, the better, because to change the inner layers in a Docker image, Docker must first remove all the layers above them. Think about it like this: you’ll cry less if you have fewer layers to peel off an onion. Second, the most general and longest-running steps should come first in your Dockerfile (the inner layers), while the specific, frequently changing ones should come later (the outer layers), as the sketch below shows.
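Putting the RUN and EXPOSE instructions and this layer-ordering advice together, a hypothetical Dockerfile for a Node application with dependencies might look like the following. (It assumes a package.json file exists; our simple app.js doesn’t need one.)

FROM node:8                # general and rarely changes: the innermost layer
WORKDIR /app
COPY package.json .        # copy the dependency list first...
RUN npm install            # ...so this slow layer stays cached until package.json changes
COPY app.js .              # application code changes often: an outer layer
EXPOSE 3000                # document the port the app listens on
CMD ["node", "app.js"]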

Build an image from a Dockerfile

Now that you have a better understanding of the contents of the Dockerfile, let’s go ahead and build an image. First, make sure your path is inside the new directory that you made; running the ls command should show you only two files: app.js and Dockerfile. You can build the image by using the docker build -t medium . command. We tag it medium by using the -t flag, and we target the current directory. (Note the dot at the end of the following command.)

> docker build -t medium .
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM node:8
8: Pulling from library/node
54f7e8ac135a: Pull complete
d6341e30912f: Pull complete
087a57faf949: Pull complete
5d71636fb824: Pull complete
0c1db9598990: Pull complete
89669bc2deb2: Pull complete
3b96ee2ed0b3: Pull complete
df3df33f8e3c: Pull complete
Digest: sha256:dd2381fe1f68df03a058094097886cd96b24a47724ff5a588b90921f13e875b7
Status: Downloaded newer image for node:8
---> 3b7ecd51ffe5
Step 2/3 : COPY app.js /app.js
---> 63633b2cf6e7
Step 3/3 : CMD ["node", "app.js"]
---> Running in 9ced576fdb46
Removing intermediate container 9ced576fdb46
---> 91c37fa82fe5
Successfully built 91c37fa82fe5
Successfully tagged medium:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

You can use the run command again to run the built image, or you can use docker push to push it to a registry and docker pull to pull it again on another computer.
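For example (yourname stands in for your own Docker Hub namespace):

> docker run -p 3000:3000 medium      # run the image, mapping container port 3000 to the host
> docker tag medium yourname/medium   # name the image under your registry namespace
> docker push yourname/medium         # upload it to Docker Hub
> docker pull yourname/medium         # fetch it again on any other machine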

Congratulations! Now you know how to make a proper Dockerfile, build an image from that Dockerfile, and run it. These skills will help you send young Appy out into the world. If you’re feeling adventurous, you can check out the Docker docs and keep going. Good luck!



A previous version of this post was published on Medium.

Amro Moustafa

Get your enterprise apps ready for the cloud

APIs, microservices, and containers are becoming standard for running enterprise applications in the cloud, but many companies have existing monolithic applications running in their data centers. It’s difficult to figure out what to move, where to move it, and when. This September, IBM is hosting two free conferences that will include a range of developer-focused talks and labs about refactoring or moving your enterprise Java applications to containers with Kubernetes in the cloud:

Here is a selection of what you can learn…

Best practices for writing microservices (Emily Jiang)

If you create microservices, you might be wondering whether there are best practices. Yes: the Twelve-Factor App is the widely adopted methodology. It aims to clarify the boundary between application and infrastructure; minimize divergence between development and production; and enable your microservices to scale up or down without significant changes to tooling, architecture, or development practices.

But the Twelve-Factor App methodology defines only the theory; it does not prescribe an implementation. Emily will demonstrate how MicroProfile and Kubernetes can implement the 12 factors.

For example, one of the 12 factors is to externalize the configuration of microservices. MicroProfile Config enables you to externalize configuration so that when you change the configuration of a microservice, you don’t need to repackage it. Similarly, MicroProfile Config can help with port binding (another of the 12 factors) so that microservices can communicate with each other when deployed to the cloud. You can specify new port numbers in a Kubernetes ConfigMap, and MicroProfile Config gives the correct information to the deployed microservices.
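For instance, a minimal sketch of such a ConfigMap might look like this (the names and port value are illustrative; in practice you’d surface the value to the microservice as an environment variable or mounted file that MicroProfile Config reads):

apiVersion: v1
kind: ConfigMap
metadata:
  name: system-config     # illustrative name
data:
  system.port: "9080"     # the port the client microservice should use to reach the system service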

To find out more, come to Emily’s talk “On-stage hacking: Build 12-Factor microservices in an hour” on Sep. 24, at 11am, at the Application Modernization Technology Conference, in which she will create two microservices and deploy them to Minikube, and demonstrate the 12 factors. She will also present “A modern & smart way to build cloud-native microservices” on Sep. 9, at 3:45pm, at the European Application, Platform & Modernization Hursley Summit.

Build cloud-native applications with Eclipse Codewind (Tim Deboer)

As enterprises move toward microservices and cloud-native development, there are a host of new technologies and skills that developers need to learn. Eclipse Codewind helps bridge this gap by providing support for building cloud-native applications directly in the developer’s IDE of choice. There’s no learning curve because the tools work just like the local development tools that they’re used to using. Codewind supports Visual Studio Code (VS Code) and Eclipse, and also provides a fully hosted development environment through the Eclipse Che IDE.

Codewind provides integrated support for Appsody and Kabanero collections, which enable developers to rapidly create applications in several languages based on predefined stacks that meet corporate standards. These applications always run within containers, so you know exactly how they’ll behave in production. But the behaviors that developers expect haven’t changed: code changes take immediate effect, and debugging and console output work as usual. Further along in the lifecycle, Codewind includes comprehensive tools for performance testing and benchmarking, and OpenAPI code generation for defining well-behaved REST interfaces.

To find out more, come to Tim’s talk “IBM Cloud Pak for Applications: Introducing Eclipse Codewind” on Sep. 9, at 4:40pm, at the European Application, Platform & Modernization Hursley Summit, or on Sep. 24, at 4:10pm, at the Application Modernization Technology Conference.

Get the latest on Jakarta EE 8 (Kevin Sutter)

We did it! As of 8:44pm on August 26, the ballot for the Jakarta EE 8 Full Platform Specification was submitted for approval! This milestone was the culmination of months of work by many people throughout the community to prepare the specifications, APIs, Javadocs, and TCKs for the Jakarta EE 8 release.

Over the next couple of weeks, all of these specification ballots will conclude, and the various artifacts will be promoted to their respective homes for public consumption. The goal is to have all of Jakarta EE 8 ready to be announced at the JakartaOne Livestream conference on Sep 10, and monitoring the progress of this effort is easy.

If you want to learn more about Jakarta EE (past, present, and future), come to Kevin’s “Jakarta for DummEEs!” session on Sep. 24, at 3:10pm, at the Application Modernization Technology Conference.

Register

Learn more, or register now:

The events are free, but seating is limited. We look forward to meeting you!

Laura Cowen

Build cloud-native apps faster for Kubernetes with Kabanero, a new open source project from IBM

As companies modernize their infrastructure and adopt a hybrid cloud strategy, they’re increasingly turning to Kubernetes and containers. Choosing the right technology for building cloud-native apps and gaining the knowledge you need to effectively adopt Kubernetes is difficult. On top of that, enabling architects, developers, and operations to work together easily, while having their individual requirements met, is an additional challenge when moving to the cloud.

To lower the barrier of entry for developers to use Kubernetes and to bring together different disciplines, IBM created new open source projects that make it faster and easier for you to develop and deploy applications for Kubernetes.

Today at OSCON 2019, we are excited to announce the creation of three new open source projects — Kabanero, Appsody, and Codewind — that developers can use to build cloud-native apps faster for Kubernetes.

Kabanero: Create Kubernetes applications with the skills you have

Kabanero enables developers, architects, and operations to work together, faster. In a single solution, architects and operations can include their company’s standards for aspects like security and build pipelines into a customized stack that developers use. Kabanero gives enterprises the control they need for areas related to governance and compliance, while also meeting the developers’ need for agility and speed.

Kabanero brings together open source projects Knative, Istio, and Tekton, with new open projects Codewind, Appsody, and Razee into an end-to-end solution for you to architect, build, deploy, and manage the lifecycle of Kubernetes-based applications.

Kabanero takes the guesswork out of Kubernetes and DevOps. With Kabanero, you don’t need to spend time mastering DevOps practices and Kubernetes infrastructure topics like networking, ingress, and security. Instead, Kabanero integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain. Our pre-built deployments to Kubernetes and Knative (using Operators and Helm charts) are built on best practices, so developers can spend more time developing scalable applications and less time understanding infrastructure.

Appsody: Cloud-native application stacks and tools

Appsody is an open source project that simplifies the creation of cloud-native applications in containers. With Appsody, a developer can create a microservice that meets their organization’s standards and requirements in minutes.

Appsody gives you pre-configured stacks and templates for a growing set of popular open source runtimes and frameworks, providing a foundation on which to build applications for Kubernetes and Knative deployments. This allows developers to focus on their code, reducing the learning curve for cloud-native development and enabling rapid development for these cloud-native applications.

You can customize Appsody stacks to meet your specific development requirements and to control and configure the included technologies. If you customize a stack, you have a single point of control from which you can roll out changes to all applications built from it.

Kabanero incorporates Appsody stacks and templates into its overarching framework.

Codewind: IDE integration for cloud-native development

IBM made the first major contribution to Codewind, a new open source project managed by the Eclipse Foundation.

Codewind provides extensions to popular integrated development environments (IDEs) like VS Code, Eclipse, and Eclipse Che (with more planned), so you can use the workflow and IDE you already know to build applications in containers. Essentially, Codewind enables you to develop in containers without knowing you are developing in containers.

With Codewind, you can rapidly iterate, debug, and performance test apps inside containers, just like when they run in production. Codewind supports multiple project template types and embraces a community of choices. Kabanero and Appsody will use Codewind to provide an integrated IDE experience.

Razee: Multi-cluster continuous delivery tooling for Kubernetes

In addition, we recently announced Razee, which provides multi-cluster continuous delivery tooling for Kubernetes. This project focuses on managing Kubernetes at scale and is another open source technology that Kabanero will use for progressing applications through development, test, and production clusters.

Why Kabanero?

There is nothing like Kabanero in the market today. While there are open source projects that address individual aspects of what Kabanero addresses, no other open source project provides an integrated experience from the creation of a containerized cloud-native application through its production lifecycle on Kubernetes.

By using Kabanero, your development team can build applications that are ready to be deployed onto Kubernetes without first becoming experts in containers and Kubernetes. This lowers the barrier of entry for developers as their organization moves from legacy infrastructure to more modern infrastructure on their journey to cloud.

Get involved

Check out each project’s GitHub repo to learn more, try them out, and get involved in their communities. We’d love to work with you to make it easier to build and scale containerized applications.

Nate Ziemann

Microsurvival Part 2: Divide and containerize

Note: If you didn’t already read part one, go there first for the beginning of young Appy’s story.

You know Appy, I was always fascinated by the term “Divide and Conquer” (or divide et impera if you like fancy talk). It is such a great idea that it is used in politics and in computer science. You don’t see these two fields mentioned in the same sentence too often, do you? Well, the concept of breaking up big headaches into smaller headaches can apply to a lot of things, whether it be armies, factions, algorithms, or Hawaiian pizza.

Last time we talked, I mentioned how you were monolithic while growing up, and you then were divided into processes that were put in containers and became easier to manage. Anyway, do you remember where we stopped? How container isolation is possible?

Mechanisms to make isolation happen

I see… Well, container isolation is easy, but you need to pay attention. Container isolation of processes is possible thanks to two mechanisms: Linux namespaces and Linux control groups (cgroups). I want to start with Linux namespaces, but before that, I want to be sure about something. You do have some knowledge of Linux systems, right? Wow, that is a lot of coughing… I guess that’s a “no” then. I will just mention the relevant information.

Typically, a Linux system has one single namespace, and all the resources belong to that namespace, including file systems, network interfaces, process IDs, and user IDs. Now if we run one of your processes, we run it inside one of these namespaces. The process is only able to see the resources inside the same namespace. Easy, right?

Now it can get a bit complex, because we have different kinds of namespaces like:

  • Mount (mnt)
  • Process ID (pid)
  • Network (net)
  • Inter-process communication (ipc)
  • UTS
  • User ID (user)

Each of these namespaces isolates a specific group of resources, so a process belongs to one namespace of each kind. Your parents will probably tell you more about what kinds of resources they isolate and how, but I’ll give you a small example: if we give each of your processes a different UTS namespace, it will be as if those processes are running on different machines, because they see different local host names!
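If you’re curious, you can try this on a Linux machine with the unshare tool from util-linux (a quick sketch; the second command prints whatever your machine’s real hostname happens to be):

$ sudo unshare --uts sh -c 'hostname appy; hostname'   # new UTS namespace: the change is local to it
appy
$ hostname                                             # the host's own name is untouched
my-real-host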

How cool is that? Yeah, I know you want to learn more about them, but for now, I think this is enough to give you an idea of how they would isolate processes running in containers.

Okay, Appy, now to complete the container isolation, we need to limit the amount of system resources that each container can consume. This is where cgroups, a Linux kernel feature, come into play. Cgroups limit a process’s resource usage, whether CPU, memory, or network bandwidth. A process can’t use more than the configured amount, so it cannot hog other processes’ resources.

How about it? I told you that it’s going to be easy to understand container technologies. They have been around for some time now.

Enter Docker

Containers are not new, but they became more famous when Docker was introduced. Docker simplified the whole process of packaging an application with all its libraries, dependencies, and the whole OS file system that the application runs on – all in a small package that can be moved to any machine running Docker to provision the application.

Well, not any machine. There are some limitations. For example, if we containerize one of your applications built for x86 architecture, we can’t expect a machine with an ARM architecture to run that application just because it also runs Docker. We might need a virtual machine to solve that problem.

Hmm… We still have some time before I head out to work, but I will keep it short and tell you about the main Docker concepts for now: images, registries, and containers. Images are where we package one of your applications along with its environment and other metadata. We can build an image and run it on the same computer, or we can push – upload – it to a registry. Registries are like repositories that let us store our Docker images and easily share them with other people or computers, who can then pull – download – the image from the registry. Docker containers are just normal containers, but based on a Docker image, and they run on the host running Docker. Of course, a container is isolated from the other containers – or processes – and from the host machine.

Here’s a picture I made that shows the Docker image, the registry, and the container:

Docker image, registry, and container architecture diagram

Until next time

Okay, I really need to leave now, but let me know what your parents think of this. I will talk to you about Kubernetes and an example that your parents can try out on the IBM Cloud Kubernetes Service sometime later. Until then, stay stable!


A previous version of this post was published on Medium.

Amro Moustafa