Applications are moving to the cloud. It’s time for developer tools to move, too

The cloud developer landscape is changing rapidly. Every day, there are new tools, new patterns, new technologies, and new frameworks for developers to learn and use. In cloud-native development, architectural patterns like microservices require developers to rethink how they build applications. Testing environments are more complex. Meeting consistency requirements for production environments, and even the basic setup and configuration of developer environments, can be time-consuming. Developers need better tools to keep up with this quickly changing landscape.

That’s why we’ve joined a new working group at the Eclipse Foundation — the Eclipse Cloud Development Tools Working Group — whose goal is to accelerate the creation of those cloud-based developer tools. This is a vendor-neutral working group with members from a broad set of companies who work together to define standards for creating and using cloud-based developer tools.

We are working together to:

  • Define de facto standards to drive broad adoption of cloud IDEs and container-based development tools
  • Enable an ecosystem for extenders and developer tool providers via these standards
  • Integrate with key enablers for cloud native development, CI, and test automation

Why do standards matter?

While standards may sound counter to rapid innovation, they are key enablers of extensibility and interoperability. There are de facto standards emerging for cloud-based tools in workspace definitions, extensions for language support, tracing, and debugging. Our working group focuses on getting developers to adopt these standards. In turn, this will make cloud-based developer tools interoperable with other cloud technologies. I believe that once we establish cloud development tools standards, it will enable a marketplace ecosystem for extensions, which in turn benefits users and our customers.

Cloud native is a new way for developers to think

Developers are always trying to develop applications faster. Cloud-native tools, running in the cloud, will give developers new capabilities that exploit cloud services from the very start of the development process. In turn, this lets developers test, build, monitor, and deploy applications faster in an environment that mirrors their production systems. This high-fidelity development environment will boost productivity, so developers can focus on their work and innovate faster.

Some use cases where I can see how cloud-native developer tools will speed and improve development include:

  • Simpler setup and installation of development dependencies
  • Accessible, easy-to-use tools for A/B testing, always-on monitoring, and testing experimental aspects of development
  • Browser-based development to lower the barriers of entry for developers working in the cloud

It is hard to overstate how much this will improve the way developers get started and quickly create, test, monitor, and deploy applications.

An example of cloud-native tools that we’ll champion in this group

One of the Eclipse projects that I’m excited to see championed through this new working group is Eclipse Codewind. This tool is an IDE extension that bundles performance and monitoring tools and enables you to develop in containers within your own IDE. You can make changes to your apps using the simple extension and instantly see how those changes perform in your development cluster. Tools like Codewind will help you develop better-performing, less error-prone applications faster than ever.

The working group is just getting started, and there are a lot of great things we are going to accomplish. The participants are from leading companies, and their developers work on many exciting projects at Eclipse, so working together on standards will benefit all of our companies.

Get involved

If you are interested in promoting interoperable tools that run in the cloud, standards that allow those tools to be extended into any cloud, and an ecosystem that supports the adoption of those standards and cloud-native hosted tools, view our Charter and ECD Working Group Participation Agreement (WPGA), or join the ECD Tools mailing list.

If you’re a developer who wants to enhance cloud-native development tools, check out the projects at the Eclipse Foundation. For cloud tools as well as other areas, there are a bunch of great projects doing innovative things in open source at Eclipse. It’s a great way to work, and a great group of developers driving key innovations.

John Duimovich

Welcome to the IBM Sterling open developer platform

As you work across supply chains, it becomes immediately obvious that no two supply chains are the same. They consist of a system of systems that span value-added network (VAN) services, purpose-built applications, BPMs and RPAs, and myriad data formats with unique data mappings. So, what happens when you are charged with integrating across these unique supply chains to form a network of networks; when data, access, and processes are disjointed; and participants want to keep everything within their four walls?

This is the challenge for developers and system integrators as we enter the era of the multi-enterprise business network. Arrangements between trading partners now need to be made in a way that unlocks data to the right parties, in a controlled way, so that supply chains can provide end-to-end visibility and corrective actions can be taken before disruption occurs. To craft these unique, self-correcting supply chains, developers need an open platform of purpose-built services. They need access to the right data and AI to solve problems. They need the right tools and technologies to help customize and configure a solution, and even reach outside of the supply chain to solve issues connected with other parts of the business. And the platform must be able to take advantage of all the systems and processes they have in place today, while helping them bridge to future technologies.

That is the promise of the IBM Sterling supply chain open platform suite, unveiled today. Here at the IBM Sterling developer hub, you’ll discover:

  • A set of composable and extensible, purpose-built business services that are accessible through various surface areas, such as APIs, graphs, and event systems.
  • Extensible AI, through orchestrated business agents, where you can teach an AI agent how to reason and take action against your unique set of networks and applications that form your supply chain.
  • Foundational services that allow you to manage and govern access to your supply chain.

But it doesn’t stop at IBM Sterling. With an open platform, you can bring in other IBM technologies, such as IBM Cloud Pak™ for Integration to reach any number of systems, including IoT and weather signals, and data for crafting track and trace solutions. Use IBM Cloud Pak for Data to build your own AI pipelines from the IBM Sterling data. And incorporate IBM Blockchain technology to build secure and transparent solutions for trading partners. Coupled with Red Hat® OpenShift®, you can run your supply chain virtually anywhere you choose.

Need to interconnect with other technologies that aren’t IBM? No problem. The IBM Sterling platform is truly open. Your data and insights are yours, so you should be able to use them throughout your business as you see fit.

In the coming months, you will see the IBM Sterling developer hub grow with activity. This area is unique in that you can access code, patterns, articles, and tools that span technologies and industries, so you can build and customize your supply-chain solutions. Check it out and continue to come back as it evolves and expands over time.

Stephen Kenna

Developing for the edge

With the advent of 5G and the evolution of Internet of Things systems, we are seeing an explosion in use cases of edge computing. But what is edge computing? How can edge computing be beneficial for developers? What are the challenges that developers face?

In this blog post, I recount a conversation that I had with Dennis Lauwers, Distinguished Engineer, Hybrid Cloud Europe, and Eric Cattoir, Client Tech Professional for IBM Cloud in Benelux.

What is edge computing?

Eric: “Edge computing is a kind of real-time computing. This means that you’re processing your data right at the time that it’s being collected by your device. You don’t send the data first via the cloud but instead process it on the device itself. Devices have more and more compute power, which makes it possible to process the data locally…at the edge.”

Dennis: “Plenty of use cases benefit from edge computing. For example, think about face recognition at border controls. This task involves a massive amount of data with thousands of people who cross borders every hour. It would take too much time to first send the data to the cloud to process it. When you analyze the data right on the device, there’s no latency. And, the data that you want to back up can be safely stored in the cloud.”

What challenges do developers face when coding for the edge?

Eric: “The programming itself is in line with traditional development. You use the same languages and you go through all the familiar DevOps phases. The challenge? That’s in the diversity of the devices. Often the processor technology on IoT devices is different from what you’re using on your PC. And the processors probably even differ between devices. How do you manage this? As a developer, you are being asked to write consistent and secure code that can be seamlessly copied to all devices.”

How are you helping developers with this challenge?

Dennis: “We help to enable developers on edge computing. You first build and test your code locally, and after it’s all working fine, you distribute it to your other devices. This involves building a multi-cluster environment. In this way, you’re also prepared when there’s a new device to onboard: in just a few clicks, it’s operational.”

If developers would like to know more, where do you suggest they start?

Eric: “For a general introduction to edge computing, you can read the blog post ‘What’s edge computing and how can it transform your business?’. Or, you can watch a video of Rob High, IBM Fellow and CTO, talking about the basic concepts and key use cases for edge computing.”

If you would like to experiment with the IBM Edge Computing offering, Ryan Anderson wrote an extensive blog on design patterns and recipes related to edge computing.

Will you be speaking about edge computing at Devoxx Belgium?

Dennis: “Yes, that’s right. In our session, we will look at how you can set up a Kubernetes-based DevOps solution for developing these complex applications that consist of components that run on a mixture of central cloud systems and edge devices. We will also show how you can manage an environment with a large number of edge devices and control aspects like security and integrity.

The use case will show how you can develop applications using some basic hardware (Raspberry Pi computers or other ARM-based computing devices) like running visual recognition on the edge in real time on a multitude of devices. We will be leveraging the open source Horizon software.”

Are you coming to Devoxx Belgium? Join our booth for a quick lab and get your limited edition IBM Developer swag!

Stephanie Cleijpool

A brief history of Kubernetes, OpenShift, and IBM

The recent introduction of Red Hat® OpenShift® as a choice on IBM Cloud sparked my curiosity about its origins and why it is so popular with developers. Many of the developers I sat beside at talks or bumped into at lunch at a recent KubeCon mentioned how they used OpenShift. I heard from developers at financial institutions running analytics on transactions and at retailers creating new experiences for their customers.

OpenShift is a hybrid-cloud, enterprise Kubernetes application platform. IBM Cloud now offers it as a hosted solution or an on-premises platform as a service (PaaS). It is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux.

With the growth of cloud computing, OpenShift became one of the most popular development and deployment platforms, earning respect on its merits. As cloud development becomes more “normal” for us, it is interesting to consider where OpenShift fits as another tool in the toolbox for creating the right solution. It might mix with legacy on-premises software, cloud functions, Cloud Foundry, or bare metal options.

In this blog post, my colleague Olaph Wagoner and I step back in time to understand where OpenShift came from, and we look forward to where it might be going in the world of enterprise application development with Kubernetes.

The following graphic shows a timeline of OpenShift, IBM, and Kubernetes:

OpenShift, IBM, and Kubernetes timeline

Early OpenShift: 2011-2013

OpenShift was first launched in 2011 and relied on Linux containers to deploy and run user applications, as Joe Fernandes describes in Why Red Hat Chose Kubernetes for OpenShift.

OpenShift V1 and V2 used Red Hat’s own platform-specific container runtime environment and container orchestration engine as the foundation.

However, the story of OpenShift began sometime before its launch. Some of the origins of OpenShift come from the acquisition of Makara, announced in November of 2010. That acquisition provided software as an abstraction layer on top of systems and included runtime environments for PHP and Java applications, Tomcat or JBoss application servers, and Apache web servers.

Early OpenShift used “gears,” a proprietary container technology that provided the isolation on OpenShift nodes. The gear metaphor fit what was contained: something capable of producing work without tearing down the entire mechanism. An individual gear was associated with a user. To make templates out of those gears, OpenShift used cartridges, technology that came from the Makara acquisition.

OpenShift itself was not open source until 2012. In June 2013, V2 went public, with changes to the cartridge format.

Docker changes everything

Docker started as a project at a company called dotCloud and was made available as open source in March 2013. It popularized containers with elegant tools that let people build container images and carry their existing skills over to the platform.

Red Hat was an early adopter of Docker, announcing a collaboration in September 2013. IBM forged its own strategic partnership with Docker in December 2014. Docker is one of the essential container technologies that multiple IBM engineers have been contributing code to since the early days of the project.

Kubernetes

Kubernetes surfaced from work at Google in 2014, and became the standard way of managing containers.

Although originally designed by Google, it is now an open source project maintained by the Cloud Native Computing Foundation (CNCF), with significant open source contributions from Red Hat and IBM.

According to kubernetes.io, Kubernetes aims to provide “a system for automating deployment, scaling, and operations of application containers” across clusters of hosts. It works with a range of container tools, including Docker.

With containers, you can move into modular application design where a database is independent, and you can scale applications without scaling your machines.

Kubernetes is another open source project that IBM was an early contributor to. The following graphic shows the percentage of IBM’s contributions to Docker, Kubernetes, and Istio in the context of the top five organizations contributing to each of those container-related projects. It highlights the importance of container technology for IBM, as well as some of the volume of its open source work.

Some of IBM's contributions to open source container technology

OpenShift V3.0: open and standard

Red Hat announced an intent to use Docker in OpenShift V3 in August 2014. Under the covers, the jump from V2 to V3 was quite substantial. OpenShift went from using gears and cartridges to containers and images. To orchestrate those images, V3 introduced using Kubernetes.

The developer world was warming to the attraction of Kubernetes too, for some of the following reasons (a minimal client sketch follows the list):

  • Kubernetes pods allow you to deploy one or multiple containers as a single atomic unit.

  • Services can access a group of pods at a fixed address and can link those services together using integrated IP and DNS-based service discovery.

  • Replication controllers ensure that the desired number of pods is always running and use labels to identify pods and other Kubernetes objects.

  • A powerful networking model enables managing containers across multiple hosts.

  • The ability to orchestrate storage allows you to run both stateless and stateful services in containers.

  • Simplified orchestration models allow applications to get up and running quickly, without the need for complex two-tier schedulers.

  • An architecture that understood the differing needs of developers and operators and took both sets of requirements into consideration, eliminating the need to compromise either of these important functions.
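
To make those concepts concrete, here is a minimal sketch, in TypeScript, of creating a replicated Deployment and a Service with the official Kubernetes JavaScript client. The names (hello-api, the image tag) are hypothetical, and the positional call style shown here matches older releases of @kubernetes/client-node, so treat it as an illustration rather than drop-in code.

```typescript
// Sketch: a Deployment that keeps three identical pods running, plus a
// Service that gives them one stable, DNS-discoverable address.
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // reads the same kubeconfig that kubectl uses

const apps = kc.makeApiClient(k8s.AppsV1Api);
const core = kc.makeApiClient(k8s.CoreV1Api);

async function deploy(): Promise<void> {
  // Replication: Kubernetes restores any of the three pods that fail.
  await apps.createNamespacedDeployment('default', {
    metadata: { name: 'hello-api' }, // hypothetical app name
    spec: {
      replicas: 3,
      selector: { matchLabels: { app: 'hello-api' } },
      template: {
        metadata: { labels: { app: 'hello-api' } },
        spec: {
          containers: [
            { name: 'hello-api', image: 'hello-api:1.0', ports: [{ containerPort: 8080 }] },
          ],
        },
      },
    },
  });

  // Service discovery: other pods in the namespace can now reach the
  // group at http://hello-api, whichever pod ends up answering.
  await core.createNamespacedService('default', {
    metadata: { name: 'hello-api' },
    spec: { selector: { app: 'hello-api' }, ports: [{ port: 80, targetPort: 8080 }] },
  });
}

deploy().catch((err) => console.error(err));
```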

OpenShift introduced powerful user interfaces for rapidly creating and deploying apps with Source-to-Image and pipeline technologies. These layers on top of Kubernetes simplify development and draw in new developer audiences.

IBM was already committing code to the key open source components OpenShift is built on. The following graphic shows a timeline of OpenShift with Kubernetes:

OpenShift and Kubernetes timeline

OpenShift V4.0 and the future

Red Hat clearly proved to be at the forefront of container technology, second only to Google in contributions to CNCF projects. Another recent Red Hat accomplishment worth mentioning is the acquisition of CoreOS in January of 2018. The CoreOS flagship product was a lightweight Linux operating system designed to run containerized applications, and Red Hat is making it available in OpenShift V4 as “Red Hat Enterprise Linux CoreOS”.

And that’s just one of many exciting developments coming in V4. As shown in the previous timeline graphic, OpenShift Service Mesh will combine the service mesh capabilities of Istio with the tracing of Jaeger and the visualization power of Kiali. Knative serverless capabilities are included, as well as Kubernetes operators to facilitate the automation of application management.

The paths join up here, too. IBM is a big contributor of open source code to Istio, Knative, and Tekton. These technologies are the pathways of container-based enterprise development in the coming decade.

OpenShift V4.0 has only recently been announced. And Red Hat OpenShift on IBM Cloud™ is a new collaboration that combines Red Hat OpenShift and IBM Cloud Kubernetes Service. For other highlights, review the previous timeline graphic.

Some conclusions

Researching the origins and history of OpenShift was interesting. Using OpenShift as a lens makes it clear that, in terms of software development, this decade really is the decade of the container.

It is impressive how much energy, focus, and drive Red Hat put into creating a compelling container platform, layer by layer, progressing the same technologies that IBM has shown interest in and dedicated engineering resources to over the past decade.

We’re looking forward to learning and building with all of these cloud technologies in the years ahead.

Anton McConville
Olaph Wagoner

Developer relations down the stack

IBM recently closed the acquisition of Red Hat for $34 billion, underscoring the huge and growing importance of hybrid cloud infrastructure. My colleague Marek Sadowski has become a subject matter expert in containers, Kubernetes, and server-side Swift, although he started out as a full stack developer advocate, a robotics startup founder, and an entrepreneur.

Marek Sadowski presenting

Marek has 20 years of enterprise consulting experience throughout the United States, Europe, Japan, the Middle East, and Africa. During his time at NASA, he pioneered research on virtual reality goggles for a system to control robots on Mars. After founding a robotics startup, Marek came to work at IBM. I talked to him about his experience in DevOps advocacy.

Marek Sadowski presenting in a classroom

Q: One of your focus areas in developer relations (DevRel) is containers. How is advocating for a DevOps technology different from advocating for an API or application?

Good question. When working with containers, engineers think more in terms of the plumbing and ideas of DevOps and the ease of expanding your infrastructure footprint. In contrast, when you talk about APIs, you try to make application development the center of gravity for the discussion.

When discussing APIs with developers, you talk about how one could consume the API in a robust way. Let’s take the IBM Watson API as an example. Our team will talk about how you can create and run SDKs for developers to consume APIs in their own language, such as Swift (for mobile) or Java (for enterprise). You look at the consumer of your API and discuss how you can produce the API, protect yourself, and do the billing.

Getting back to containers, you speak more about plumbing of the cloud when discussing container technology. How do you manage containers? Expand them? Manage their workloads? Deliver and test new versions?

It quickly becomes apparent that these are two separate concepts. Containerization deals with how your back end is working and proper maintenance of your application, which attracts people from a DevOps background. When you talk about APIs, that’s a completely different story. Your thought paradigm changes to be the point of view of the consumer. How does the consumer find the API? How can developers consume the API?

I speak at conferences on both subject areas. I’ve found that people who develop applications are more interested in the look, feel, and developer experience of the application. Whereas, with containers, it’s more about the back end, load balancing, and seeing issues from a system administrator’s perspective.

Q: Many people are familiar with DevRel with a focus on software engineers, but DevOps is a different community entirely. How do you focus on that community?

There is a division. Everybody is interested in new things like Kubernetes and Docker, but not too many want to perfect their skills to the point that it’s their daily job. Many developers want to know how to spin up a container and a service inside the container, put it on their resume, and be done with it. Developers may be interested because it’s fashionable or it’s a buzzword. However, you can find a lot of people who are running services in containers and have specific questions: sysadmins who want to monitor containers and ensure security, load balancing, and other aspects of administration. It’s a completely different audience from developers who consume APIs and create a cool web application. They are two different communities, and you have to give each community different content.

For example, in a hackathon, it’s very difficult to create large deployments in containers. It’s about an optimization of development and operations more than application coding.

Marek Sadowski with other IBMers

Q: How have you had to change your approach to DevRel when moving to DevOps advocacy?

Previously, when I ran workshops focused on application developers, they usually had a few goals: understand our API, consume data from API endpoints, and create a simple “Hello World!” type of application. Developers in these workshops asked questions about high-level ways of architecting applications, for example with Watson, in mobile applications or web applications, or as a chain of processes.

By contrast, when I speak about DevOps and containers, developers in the audience want to spin up the services, see how they scale up and down, investigate how the services behave when something is failing, and learn how to ameliorate security issues. It’s a completely different approach. They are not interested in building something new; they want to perfect their approach to deployment.

Here’s an analogy I can give to people new to this field. It’s like inviting a painter and a plumber to a party. They both do similar things, yet the painter wants to make a painting that you can hang on the wall, and the plumber will rarely speak about the type of piping he’s using inside your walls. Both are doing something in your house, but the painter is thinking about the people they will attract and the paint (our APIs) to ensure a pleasant viewing experience. Whereas, the plumber just wants to get the job done and never touch it again. The plumber wants to make changes as rarely as possible and focus on stability, while the painter wants to create more new paintings. They have different approaches based on their different goals.

Q: You also give talks on Swift, specifically on the server side. Most people know Swift from the iOS development side, so why is it useful on the server? How do you get developers to think of it as a server language?

Server-side Swift is a relatively new development. I compare the current state of server-side Swift to where Java was 24 years ago. In 1996, I started writing a server-side application using Java. It was a novel concept at that time! The same thing is happening now with Swift, as developers are moving the Swift language to the server. There are a lot of reasons why. One of the simplest is that you write in the same language on the server as you do for your mobile app, and in that way you can use the same data constructs, thought processes, and personnel resources on both systems. You don’t need different systems or frameworks to talk to the database or the cloud.

Every mobile app nowadays asks you to connect to the internet for AI, messaging, and social media. Even simple games allow you to exchange information or have a conversation with people all over the world. If your app and back end are written in one language like Swift, it makes these data exchanges simple and transparent.

Some people say Swift is a fashionable language to learn. Just as you have the option to write apps in Java or JavaScript, you can also write them in Swift. Apple made Swift open source, similar to the way Sun Microsystems opened up Java. You can now write applications in the cloud or on any platform. For example, OpenWhisk allows you to write event-based Swift functions in the cloud without any DevOps code.
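
To illustrate how little ceremony such a function involves, here is a minimal sketch of an OpenWhisk action, written in TypeScript since OpenWhisk also runs Node.js functions; the Swift version has the same shape, a single main function that takes a dictionary of parameters and returns one. The greeting payload is a made-up example.

```typescript
// A complete OpenWhisk action: the platform calls main() with the event's
// parameters and serializes the returned object. No server code needed.
interface Params {
  name?: string; // hypothetical input parameter
}

export function main(params: Params): { greeting: string } {
  return { greeting: `Hello, ${params.name ?? 'world'}!` };
}
```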

With Swift, developers are attracted to the beauty of the language and the ability to write one language from mobile to cloud to make your application better and easier to maintain. You can enjoy writing in your language of choice and expand the capabilities of the environment you love. If you are an iOS developer, maybe you can become a full-stack developer. Developers love the story that they can become something more and participate in the full stack development process.

Marek Sadowski at a meetup

Q: How did you get into developer relations?

I had just come to the United States from Poland as the founder of a startup, and the purpose of the move was to expand my company. They say that 99% of startups don’t succeed right away, and founders often need to bootstrap while in an existing job. I was told that working in the cloud is the key factor in a lot of industries, but I had little exposure to those technologies. On the other hand, I had built up skills talking to investors, and as an entrepreneur, I was able to understand what was important to startups. I also had a robust background in Java development and different IT technologies; I had a career as an architect supporting banks and other EMEA enterprises as a Java professional, demonstrating systems to customers.

There was an opening for a mobile-first developer advocate, and despite having no mobile or cloud experience, I convinced the interviewer that I was the perfect candidate due to my ease of speaking with developers and presenting technical subjects in an accessible manner. I enjoy explaining complex topics in a simple way through demos and example projects.

My hiring manager asked me to build a small mobile app as an employment test, which connected to IBM Cloud to exchange information between the user and a back end. I enjoyed the task and found I was good at it! After two years, I migrated to more cloud technologies and more IBM APIs. Eventually, I became interested in Kubernetes and containers, and realized containers are a field with amazing growth potential.

I must say, the thing that attracted me the most to DevRel was the opportunity to learn and convey new technologies to developers out there, and use my talent for explaining complex things in a straightforward manner.

Marek Sadowski snowboarding

To get in touch with Marek, feel free to reach out on any of the channels listed on his IBM Developer profile page or see him speak at an upcoming IBM Developer SF Meetup.

Dave Nugent

Kubernetes, you’ve reignited my passion for microservices!

Since 2014, microservices have expanded to become a widely adopted, powerful application architecture. The growth of cloud platforms has played a major role in microservices adoption, since applications that are built to run on those platforms follow a set of cloud-native guidelines. Although there’s no single set of rules that define cloud-native, a pattern is emerging, one that relies heavily on microservices.

However, this growth is not without its challenges. Although microservices have proven to be an effective way to develop and manage applications in production, they come with certain drawbacks. In this post, I’ll outline some modern application architectures, show the advantages and disadvantages of microservices, and describe some community-driven solutions that tackle those disadvantages.

Microservices in modern application architectures

Although modern apps encompass many different architectures, I see two ends of the spectrum: traditional on-prem apps and cloud-native apps.

Traditional applications

In the past, when a company set out to develop an application, they would purchase hardware and then use a platform standard, like Java EE, to create the application to reside on that hardware. Java EE is quite powerful; it was used to create complex and efficient – but also clunky and monolithic – applications. To this day, these applications continue to chug along and power a large number of the back ends for applications we still use. However, this approach is rapidly losing traction in favor of cloud-native applications.

Cloud-native applications

On the other end of the spectrum lie cloud-native applications, which are often built using a microservices architecture. This new era of app development enables developers to choose the right language for the problem at hand. For example, developers might want to use Node.js to handle IO-intensive operations through APIs but continue to use Java to handle large numeric computations. This approach separates requirements and enables cross-functional teams to build each part of the stack using the preferred technology. With separate microservices making up each part of an application, scaling becomes much easier and more efficient.

So why doesn’t everyone just use microservices to build new cloud-native applications? Because it’s incredibly difficult to refactor an existing traditional application to become microservice-based and truly cloud-native.

The solution: Hybrid applications

The solution comes in the form of hybrid architectures. Instead of following a lengthy process to rewrite the whole stack, you can continue to use the established traditional application while breaking off pieces to take advantage of cloud-native concepts.

Let’s imagine a scenario: a traditional application is running into delays whenever the UI is accessed because the service that handles database access is becoming a bottleneck. Instead of scaling out the entire application, the developers choose to migrate the database access code into a new app based on Node.js, which performs well at handling large numbers of API calls asynchronously. They host this application in a public cloud and open secure access to the on-prem database. Finally, they scale up this individual service whenever they anticipate a high load on the application, allowing them to save big on server expenses. These developers are now effectively working on a hybrid application stack.
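
As a rough sketch of what that broken-out service might look like, here is a small Node.js app in TypeScript using Express. The route, table, and an on-prem PostgreSQL database reached through DATABASE_URL are hypothetical details, not a prescribed design.

```typescript
// Sketch of the database-access microservice from the scenario above:
// a thin, stateless API in front of the on-prem database, so it can be
// scaled out independently of the rest of the monolith.
import express from 'express';
import { Pool } from 'pg'; // assuming a PostgreSQL database on-prem

const db = new Pool({ connectionString: process.env.DATABASE_URL });
const app = express();

// The UI calls this endpoint instead of the monolith's database layer.
app.get('/api/orders/:id', async (req, res) => {
  try {
    const { rows } = await db.query('SELECT * FROM orders WHERE id = $1', [req.params.id]);
    if (rows.length > 0) res.json(rows[0]);
    else res.sendStatus(404);
  } catch {
    res.status(500).json({ error: 'database unavailable' });
  }
});

app.listen(8080, () => console.log('order service listening on 8080'));
```

Because the service keeps no state of its own, more copies can be run behind a load balancer during anticipated peaks and scaled back afterward.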

Eventually, more and more pieces are broken out of the monolith into their own microservices. This makes it easier for the team to develop future projects in a truly cloud-native fashion. For example, if a new mobile application needs to be developed and all the required components have already been refactored into microservices, they can proceed using a cloud-native approach.

A love/hate relationship

I love the microservices approach for a lot of reasons, a few of which I’ve just outlined. The major advantages that come with microservices include:

Choice

You can choose the right tool for the job:

  • Node.js for simple API servers and asynchronous logic
  • Java for computing large numbers or maintaining type-safe data
  • Go, Python, Ruby, etc. for your specific needs

Then you can implement a non-restrictive technology stack — application-to-application communication is universal when using APIs or message queues.

Complexity

Breaking complexity apart allows you to:

  • Assign responsibility for each microservice to individuals or teams
  • Have agile development with cross-functional teams addressing each microservice:
    • Teams are composed of developers, testers, and DevOps engineers
  • Track failures easily since components are separate and easily identifiable

Scaling

  • Separate components enable effective, cost-efficient scaling of individual pieces to respond to load

Deployments

  • Reduces risk of failure when pushing changes to production because each microservice can be redeployed without having to push the entire stack
  • Allows for team-specific development cadences — different teams can deploy at different rates, whether weekly, bi-weekly, or monthly

These are all sound reasons to love microservices, but these same categories are also the reason why I started to hate working with microservices, especially in production. Each of these advantages comes with an unfortunate set of corresponding issues. Let me explain:

Choice

  • Each additional language that your stack uses comes with a different technology stack (for example, NPM for Node.js, Maven for Java, etc.)
  • There are custom build processes and test infrastructure for each microservice

Complexity

  • As microservices grow, operations can become a nightmare
  • Each microservice (especially with different languages) has different memory, processing, and storage requirements
  • CI workflows for each microservice are different, and redeploying the full stack can be complex (particularly when moving to new geographies)

Scaling

  • There are many scaling policies to manage because each piece of the stack might have different scaling rules (CPU usage, garbage collection management, API calls per second, and so on)
  • Multiple load balancers are required because each microservice is scaled individually but still needs consistent communication with other services
  • Logging and analytics streams from multiple microservices must be managed in a consistent way

Deployments

  • There are multiple CI/CD pipelines to manage for each microservice
  • Each team requires build expertise since requirements differ greatly
  • Changes that span multiple microservices need to have coordinated deployments so upgrades can happen without causing downtime

These issues honestly made me start to rethink why I loved microservices in the first place. The approach was supposed to make things easier, but my team and I were spending so much time on operations and fiddling with the various technology stacks that I was starting to think we should have stuck with a single language and platform.

Community-driven solutions

I wasn’t the only one running into these issues; the community identified these same problems and started working towards solutions. Let’s talk about the solutions that are available today to tackle the issues in each category.

Tackling choice

Although language choice seems like an obvious advantage, having to deal with the nuances of different languages can be a lot to manage. Luckily, there’s a really straightforward solution to this: Docker.

Docker enables developers to neatly “box up” their applications into Docker containers. A Docker container comes with everything you need to run a microservice — not just the code and runtime but the system libraries as well. This enables you to put all the nuances and complexities of an application within the container. When it comes to execution, a Docker container behaves in a standardized way, no matter what language powers the actual source. Basically, a Docker container for a Java app can be dealt with in a similar fashion as a Docker container for a Node.js app. No more issues with custom build processes; operations engineers can now work cross-functionally across the teams managing each microservice.

Tackling complexity and scaling

This is a big one: Docker was around for quite some time before a production-capable orchestration solution with first-class Docker support was adopted. But it finally happened with Kubernetes.


Kubernetes provides the tools for automating deployment, scaling, and managing Docker containers. It addresses various requirements that tie closely to the microservice downfalls I outlined earlier. Although there are too many features to cover in this post, a few key ones include (a manifest sketch follows the list):


  • Load balancers for internal and external facing microservices
  • Easy-to-use DNS management for microservice-to-microservice communication, particularly through API calls
  • Auto health management with restart and retry policies for deployment and unhandled failures
  • Rolling and blue-green deployments to ensure high availability
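
As a sketch of what the health-management and rolling-deployment bullets look like in practice, here is a Deployment written as a TypeScript object literal in the shape Kubernetes expects (the service name, image, and probe paths are hypothetical); it could be passed to a Kubernetes client or rendered to YAML for kubectl.

```typescript
// Health management plus rolling updates, expressed as a Deployment spec.
const webDeployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'web' }, // hypothetical microservice
  spec: {
    replicas: 4,
    strategy: {
      type: 'RollingUpdate',
      // Replace pods one at a time so capacity never drops below 3 of 4.
      rollingUpdate: { maxUnavailable: 1, maxSurge: 1 },
    },
    selector: { matchLabels: { app: 'web' } },
    template: {
      metadata: { labels: { app: 'web' } },
      spec: {
        containers: [
          {
            name: 'web',
            image: 'web:2.0',
            ports: [{ containerPort: 8080 }],
            // Restart the container when this check fails...
            livenessProbe: { httpGet: { path: '/healthz', port: 8080 } },
            // ...and hold traffic back until this one succeeds.
            readinessProbe: { httpGet: { path: '/ready', port: 8080 } },
          },
        ],
      },
    },
  },
};

export default webDeployment;
```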

Kubernetes is effectively tackling almost all the issues with microservices that were troubling me and my team. There’s just one problem left — and although Kubernetes provides the tools for automating deployment, it’s not always easy to do.

Tackling deployments

The final piece of the puzzle comes with Helm, a tool for managing preconfigured Kubernetes-based deployments. Although Helm calls itself the “Kubernetes Package Manager,” it shouldn’t be mistaken for something like NPM, the Node.js package manager.

Helm provides “charts” that tell Kubernetes how to deploy a set of containers, along with features to enable rollbacks, provide repeatable app installs, and simplify updates to a running Kubernetes cluster. What Helm doesn’t provide is a registry for hosting your Docker images; it deals only with the charts themselves. There is a community chart repository, but it’s quite easy to set up a private repository for hosting your own Helm charts as well. The charts are configured to tell a Kubernetes cluster which image registry to pull from.

Putting the pieces together

With their powers combined, Kubernetes, Docker, and Helm have tackled all the issues that I ran into over the years with microservices, without sacrificing the advantages. Although it seems like we’re adding even more technology to the stack, once you’ve configured these tools, they’ll greatly streamline your team’s development and management workflow.

Once you’ve deployed a microservices web application to Kubernetes, you can expect it to look something like this:

kubernetes architecture diagram

A user accesses the web application through an Ingress load balancer provided by Kubernetes. The request gets routed to the web application served by Node.js, which uses two other microservices on the back end: a Java MicroProfile service and a Python service.
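
A minimal sketch of that Node.js web tier follows, assuming Node 18+ for the built-in fetch and hypothetical Service names for the Java and Python back ends; inside the cluster, each Kubernetes Service name resolves through the integrated DNS.

```typescript
// The web tier fans out to two back-end microservices and aggregates
// their answers. Hostnames are Kubernetes Service names, resolved by
// the cluster's internal DNS; names and ports are hypothetical.
import express from 'express';

const app = express();

app.get('/dashboard', async (_req, res) => {
  try {
    const [orders, scores] = await Promise.all([
      fetch('http://orders-java:9080/orders').then((r) => r.json()),
      fetch('http://scores-python:5000/scores').then((r) => r.json()),
    ]);
    res.json({ orders, scores });
  } catch {
    res.status(502).json({ error: 'a back-end service is unavailable' });
  }
});

app.listen(3000, () => console.log('web tier on 3000'));
```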

This is just one example of how a microservice-based web application might look within a Kubernetes cluster. To get started with deploying an application to Kubernetes, check out these resources:

In the next part of this blog series, I’ll tackle how Istio helps developers manage their microservices in production. I hope you’ve enjoyed reading about my experience with microservices. I’d love to hear about your own experiences — feel free to reach out to me directly on Twitter at @Sai_Vennam if you have any questions or comments.

Sai Vennam

IBM joins the GraphQL Foundation to push for open source adoption

IBM is excited to be a founding member of the GraphQL Foundation, which is hosted by the Linux Foundation. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. It was open sourced in 2015.

We have been following the development of GraphQL over the last few years and have started embracing this new technology in our products – Supply Chain and IBM Cloud, to name just two. We use GraphQL successfully to build IBM Cloud consoles, which need to query various back ends and catalogs to retrieve available services, subscriptions, or service instances for display in dashboards.
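
To illustrate why GraphQL suits that kind of dashboard, here is a sketch of a single console-style query in TypeScript; the endpoint and schema are invented for illustration, but the shape, asking for exactly the fields the dashboard needs in one round trip, is the point.

```typescript
// One GraphQL request replaces several REST calls to different catalogs:
// the client names the fields it wants and nothing more.
const query = `
  query Dashboard($region: String!) {
    services(region: $region) {
      name
      plan
      instances { id status }
    }
  }`;

async function loadDashboard(): Promise<unknown> {
  const res = await fetch('https://api.example.com/graphql', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { region: 'us-south' } }),
  });
  const { data } = await res.json();
  return data.services;
}

loadDashboard().then(console.log).catch(console.error);
```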

GraphQL provides an outstanding experience for developers, which fosters their ability to innovate. We see great potential for GraphQL to serve highly diverse consumer requirements, which is a common challenge for public APIs. We also think that GraphQL can play an essential role within organizations to consolidate access to data that is increasingly distributed across microservices.

At IBM Research, we recently open sourced OASGraph, a library that processes a Swagger or OpenAPI Specification (OAS) defining REST endpoints and automatically produces a GraphQL interface around that API, ready to be used. We are receiving great community support, and we are now starting to support generating GraphQL interfaces from multiple OAS.
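
A sketch of that flow follows, assuming the oasgraph package exposes createGraphQlSchema and pairing it with the express-graphql middleware; both names should be checked against each project’s README, since these APIs evolve.

```typescript
// Wrap an existing REST API (described by an OpenAPI/Swagger document)
// in a GraphQL endpoint. The OAS file path is hypothetical.
import express from 'express';
import graphqlHTTP from 'express-graphql'; // older versions default-export the middleware
import { createGraphQlSchema } from 'oasgraph';
import * as fs from 'fs';

async function main(): Promise<void> {
  const oas = JSON.parse(fs.readFileSync('./my-api.openapi.json', 'utf8'));

  // Translate the REST description into a ready-to-serve GraphQL schema.
  const { schema } = await createGraphQlSchema(oas);

  const app = express();
  app.use('/graphql', graphqlHTTP({ schema, graphiql: true }));
  app.listen(3001, () => console.log('GraphQL wrapper listening on /graphql'));
}

main().catch(console.error);
```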

Going forward, we see a number of opportunities to make GraphQL enterprise-ready. In particular, we have been working on GraphQL API management (read our recent blog post). GraphQL queries may hide a lot of complexity and even pose threats to back-end systems if they are ill-structured or excessive in their resource demands. A GraphQL API may also be part of a subscription and call for rate limits, which are likely to be substantially different from the usual REST rate limits of calls per unit of time.

As a member of this new foundation, we look forward to working with the community to increase language support for GraphQL, particularly in the area of GraphQL validation for C++, and to evolve the thinking around GraphQL API Management.

Erik Wittern is a GraphQL enthusiast, one of the maintainers of OASGraph, and will represent IBM on the GraphQL Foundation technical board.

Erik Wittern

Lessons from Think 2019: co-creating to change the world

After attending a cross-section of the wide range of sessions and events at the Think 2019 conference in San Francisco last week, I came home thinking about a common theme: co-creation.

How you develop, not just what you develop

Ginni Rometty, IBM Chairman, President, and CEO, kicked off the conference on Feb. 12 with the chairman’s address Building Cognitive Enterprises. One of the topics she highlighted was “chapter 2 of the cloud,” which centers around hybrid cloud and a multi-cloud, open, secure, and consistently managed environment.

“I have to tell you a couple of things we’ve learned. On one hand, the ‘what’ you’re doing is really important, but I think you’re going to find the ‘how’ is almost more important,” Rometty said. “This is going to be an era of co-creation.”

Jim Whitehurst, CEO of Red Hat, was one of the guests who joined Rometty for that keynote talk, and his observations focused on a similar theme. “I think that one of the most extraordinary things that’s happened over the last decade is this growth of what I’ll say is user-driven innovation,” he said.

“We see that happening at scale where companies like Facebook and Twitter and LinkedIn and Google and others have large staffs of technology people that look to solve their own problems in massive data centers, and the byproduct of that is some really interesting code in open source.” He said Red Hat recognizes that “the byproduct is a phenomenal thing, but it was never built with end use in mind.”

“Kubernetes is phenomenal. People are talking about it, but the way it was originally built, and until recently you couldn’t run a stateful application, which is 99.9 percent of all applications out there,” Whitehurst said. “So we worked together to drive into the roadmaps of Kubernetes the ability to run those applications.”

You can watch the discussion here.

We’re all in it together

At the “Shaping the Cloud Native Future” session on Feb. 13, Abby Kearns, executive director of the Cloud Foundry Foundation, encouraged companies evaluating the future not to think of transformation as a caterpillar turning into a butterfly, which seems to happen magically. Instead, she said to think of a whitewater kayaker using mastery to navigate waters that are constantly changing.

“I’m deeply passionate about open source, where a majority of the innovation is happening today,” she said. She encouraged developers to get involved by contributing to open source that they use, because “it does not become great without you.”

She pointed to the history of Cloud Foundry and recommended Michael Maximilien’s blog post on the topic. And she urged developers to “build for the future” and be responsive as they navigate through a quickly changing, iterating, and evolving environment.

“Look at the cloud native landscape,” Kearns said. “We’re all in it together. We’re all focusing on the horizon and all trying to stay afloat.”

Think 2019 in San Francisco

Even a panel that focused more on HR efforts than coding followed the theme of collaboration for better results. The Intersectionality, Marketplace Strategy, and the Future of Inclusion panel discussion on Feb. 14 covered how teams that actively include various perspectives and backgrounds can create optimal outcomes together.

“Everyone has a role to play,” said David Galloreese, Chief HR officer for Wells Fargo.

“We all have levels of privilege that we can use to support other people’s voices that have not been heard,” said Jennifer Brown, consultant and author of “Inclusion: Diversity, the New Workplace, and the Will to Change.”

The powerful panel discussion that Rometty led on Feb. 14, Open Source: The Cornerstone to Innovation and Future for Enterprise, included both the business benefits and technology benefits of open source, and centered on co-creation.

“You can’t get this pace of innovation without a collective development effort,” said Jim Zemlin, Executive Director of the Linux Foundation.

Kearns also participated in the panel. “If you contribute the time, you get to dictate the future of the open source project,” she said. “Without contribution, the future of that technology is at risk.”

For more insights from this panel, see Think 2019 recap: Open source leaders answer top questions.

Real-world coding examples

As the cloud computing editor for IBM Developer, I sought out ways that developers are co-creating in serverless and cloud areas.

In his tech talk Developers Reclaim Their Time with Serverless, Carlos Santana, IBM Senior Technical Staff Member and Architect for IBM Cloud Functions, showed a serverless-computing stack with an Orchestrator built on Kubernetes, the next layer of Containers built on Knative, and the top layer of Functions built on OpenWhisk. A service mesh, built on Istio, is also part of the architecture.

Julian Friedman, an Open Source Development Lead at IBM, is the project lead for Cloud Foundry’s low-level container engine (“Garden”) and the Eirini project, which allows Kubernetes to be used as the container scheduler in Cloud Foundry. Friedman held a tech talk, “Cube Your Enthusiasm: Explore bringing Cloud Foundry and Kubernetes together with Eirini,” where he described several approaches to bringing both open source projects together for developing on the cloud.

Friedman described the power of Kubernetes but alluded to the Spider-Man comic books by adding, “With great power comes great responsibility.” And not all developers want to spend all their time working with Kubernetes, he said. “Instead, I think we should focus on the developer experience.”

Carl Swanson, Product Manager for IBM Cloud Foundation Services, and Gili Mendel, an IBM Senior Technical Staff Member, described a similar advantage of Cloud Foundry in their “Build Secure Cloud-Native Solutions Rapidly Using IBM Cloud Foundry Enterprise Environment” Think Tank session.

“As a developer, I don’t have to deal with the specifics,” Mendel said. He and Swanson described the way that Cloud Foundry Enterprise Environment is built on both the Cloud Foundry and Kubernetes open source projects. They compared using Cloud Foundry Enterprise Environment, instead of working directly with Kubernetes, to driving a car instead of building it. “Today I don’t carry a tool box in the back of my car anymore,” Mendel said.

For more details about the Eirini project, check out a previous blog post from Friedman. For more information about Cloud Foundry, see the Cloud Foundry page on IBM Developer and Cloud Foundry Enterprise Environment.

You have the power to change the world

The power of co-creation can help communities both recover from disasters and plan to be more resilient for future disasters.

At Think 2019, I met Pedro Cruz, who created DroneAid as part of a Puerto Rico Call for Code 2018 hackathon. This year he is collaborating with the Call for Code 2018 winning team, Project OWL, to incorporate his drone code into that project, which provides an offline communication infrastructure to connect first responders with people who need help.

One of the Project OWL team members, Bryan Knouse, joined Chelsea Clinton, vice chair of the Clinton Foundation, and IBM Senior Vice President Bob Lord for the Igniting the next generation of innovation to change the world discussion on Feb. 14.

“We look at open source as a distribution strategy,” Knouse said. “We can open our source to any developer in the world – all 20 million of them.”

Think 2019 in San Francisco

Clinton described the Clinton Global Initiative University, which brings together young people from around the world and connects them with experts to discuss and develop innovative solutions to pressing global challenges. A new partnership with IBM Code and Response™ commits to inspiring university students to develop solutions for disaster response and resilience challenges.

“To fight this global challenge, we need a global effort, and that’s diversity from the start,” Knouse said.

IBM Senior Technical Staff Member and Master Inventor Daniel Krook talked about putting past solutions from challenges into action in the “You Have the Power to Change the World: Code and Response” session on Feb. 13.

“When we started with Call for Code, we always wanted a long-term sustainable way to implement solutions,” Krook said. “Now the IBM Corporate Service Corps is already engaged with Code and Response.”

There are opportunities all around us to co-create. You can check out the Code and Response page and consider joining the Call for Code 2019 challenge. If you are interested in contributing content to IBM Developer about serverless or Cloud Foundry, or other cloud areas, contact me.

Amy Reinholds