Microservice Builder, which was released in June, is an end-to-end user experience for rapidly developing and deploying containerized Java microservices. We’ve made a number of improvements to the scalability and the flexibility of the Microservice Builder pipeline since the March beta.

The pipeline is Jenkins, running in a customized Docker image. The beta version had the GitHub integration we wanted, but it lacked support for distributed builds and shipped with fixed versions of Maven, Docker, and kubectl. The solution to both the scalability and the flexibility problems was to make better use of Kubernetes.

A Jenkins installation often needs to run multiple builds at once. A basic installation has a pool of executors to which it delegates individual builds. These executors run as threads inside the main Jenkins process and can consume a lot of memory and CPU. A single Jenkins instance can suffer with either ‘too few’ or ‘too many’ executors: there’s no correct number. Instead, Jenkins should be scaled using its master/slave support: a single master runs no builds itself but delegates to one or more slave nodes running as separate processes. Microservice Builder’s Jenkins runs in a Docker container on Kubernetes, so how best should a master/slave Jenkins topology run in that environment?

It turns out that there’s a Kubernetes plugin for Jenkins with an answer: each job runs in a new slave, dynamically provisioned by Kubernetes, and each slave lasts only as long as the job it’s running. The plugin isn’t simple to configure, but there’s a Helm chart that handles this so well that we rebased the Microservice Builder pipeline onto that chart.

The Jenkins Kubernetes plugin gives us more than just scalability: it provides a way to remove all the extra utilities like Maven, Docker, and kubectl from the master Jenkins image. Each slave is a Kubernetes pod, a collection of multiple Docker containers. The plugin provides a syntax for running build steps in different containers, and it handles the work of mounting the build directory into each of them. Our standard pipeline comprises a number of steps:

  1. Check the code out with Git.
  2. Build the code with mvn clean package.
  3. Run docker build to put the packaged output into a container.
  4. Use kubectl apply to deploy the result to Kubernetes.
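The steps above can be sketched using the Kubernetes plugin’s podTemplate and container syntax. This is an illustrative sketch, not our actual pipeline code: the label, the slave container images, and the manifest path are assumptions chosen for the example.

```groovy
// Hypothetical sketch: one pod per build, with a container per tool.
// The images and paths below are placeholders, not Microservice Builder defaults.
podTemplate(label: 'msb-build', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.0-jdk-8', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl', ttyEnabled: true, command: 'cat')
  ],
  volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {

  node('msb-build') {
    checkout scm                                  // 1. Check the code out with Git
    container('maven') {
      sh 'mvn clean package'                      // 2. Build the code
    }
    container('docker') {
      sh 'docker build -t microservice-vote .'    // 3. Containerize the packaged output
    }
    container('kubectl') {
      sh 'kubectl apply -f manifests/'            // 4. Deploy the result to Kubernetes
    }
  }
}
```

The build workspace is shared across the containers in the pod, which is what lets the docker step see Maven’s packaged output.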

The Docker images required for the Maven, Docker, and kubectl steps are all specified in the Jenkinsfile. They are conveniently separated from the master image and can be defined on a per-Git-repository basis. The final step was to extract all the new code into a shared library, allowing us to compress our standard Jenkinsfile to:

@Library('MicroserviceBuilder') _
microserviceBuilderPipeline {
  image = 'microservice-vote'
}

You can provide additional parameters to override the default images. For example, you could switch the default Maven image from Java 8 to Java 7 by adding the following line to the Jenkinsfile:

mavenImage = 'maven:3.5.0-jdk-7'
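Putting the two together, a complete Jenkinsfile that uses the shared library and overrides the Maven image looks like this:

```groovy
// Standard Microservice Builder Jenkinsfile with one image override.
@Library('MicroserviceBuilder') _
microserviceBuilderPipeline {
  image = 'microservice-vote'
  mavenImage = 'maven:3.5.0-jdk-7'  // build with Java 7 instead of the default Java 8
}
```

Everything else in the pipeline — checkout, build, docker build, and kubectl apply — continues to use the library’s defaults.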

The default Jenkinsfile specifies that microserviceBuilderPipeline.groovy should be obtained from the MicroserviceBuilder library. We’ve made this a parameter on our pipeline Helm chart. By default, the library points to the microservicebuilder.lib repo. The Groovy is in the vars/ directory, but the entire library can be replaced using the Pipeline.Template.RepositoryUrl parameter on our chart. For more details, see the Knowledge Center. Customers who need to run builds in environments disconnected from the Internet can, for example, use this to fork our library into their in-house source control.
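As an alternative to the chart parameter, a Jenkinsfile can also load a fork of the library directly using the standard Jenkins shared-library retriever syntax. This is a sketch only; the fork URL below is a hypothetical in-house repository, not one of ours.

```groovy
// Hypothetical: load a forked copy of the library from in-house source control
// instead of the default configured in the Helm chart.
library identifier: 'MicroserviceBuilder@master',
  retriever: modernSCM([$class: 'GitSCMSource',
    remote: 'https://git.example.com/ourteam/microservicebuilder.lib.git'])

microserviceBuilderPipeline {
  image = 'microservice-vote'
}
```

This keeps the pipeline definition working even when builds cannot reach the public Internet.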

Overall, the Jenkins Kubernetes plugin and its associated Helm chart have helped us make the Microservice Builder pipeline scale with Kubernetes and stay flexible enough to handle a broad range of customer scenarios. Let us know what you think in the comments below!
