A brief history of Kubernetes, OpenShift, and IBM

The recent introduction of Red Hat® OpenShift® as a choice on IBM Cloud sparked my curiosity about its origins and why it is so popular with developers. Many of the developers I sat beside at talks or bumped into at lunch at a recent KubeCon conference mentioned how they used OpenShift. I heard from developers at financial institutions running analytics on transactions and at retailers creating new experiences for their customers.

OpenShift is a hybrid-cloud, enterprise Kubernetes application platform. IBM Cloud now offers it as a hosted solution or an on-premises platform as a service (PaaS). It is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux.

With the growth of cloud computing, OpenShift became one of the most popular development and deployment platforms, earning respect on its merits. As cloud development becomes more “normal” for us, it is interesting to consider where OpenShift fits, as another tool in the toolbox for creating the right solution. It might mix with legacy on-premises software, cloud functions, Cloud Foundry, or bare metal options.

In this blog post, my colleague Olaph Wagoner and I step back in time to understand where OpenShift came from, and we look forward to where it might be going in the world of enterprise application development with Kubernetes.

The following graphic shows a timeline of OpenShift, IBM, and Kubernetes:

OpenShift, IBM, and Kubernetes timeline

Early OpenShift: 2011-2013

OpenShift was first launched in 2011 and, as Joe Fernandes describes in Why Red Hat Chose Kubernetes for OpenShift, it relied on Linux containers to deploy and run user applications.

OpenShift V1 and V2 used Red Hat’s own platform-specific container runtime environment and container orchestration engine as the foundation.

However, the story of OpenShift began sometime before its launch. Some of the origins of OpenShift come from the acquisition of Makara, announced in November of 2010. That acquisition provided software as an abstraction layer on top of systems and included runtime environments for PHP and Java applications, Tomcat or JBoss application servers, and Apache web servers.

Early OpenShift used “gears”, a proprietary type of container technology, so OpenShift nodes already included a form of containerization. The gear metaphor was based on what was contained: an isolated unit capable of producing work without tearing down the entire mechanism, with each gear associated with a user. To make templates out of those gears, OpenShift used cartridges, a technology that came with the Makara acquisition.

OpenShift itself was not open source until 2012. In June 2013, V2 went public, with changes to the cartridge format.

Docker changes everything

Docker started as a project at a company called dotCloud and was made available as open source in March 2013. It popularized containers with elegant tools that let people build images and carry their existing skills over to the platform.

Red Hat was an early adopter of Docker, announcing a collaboration in September 2013. IBM forged its own strategic partnership with Docker in December 2014. Docker is one of the essential container technologies, and multiple IBM engineers have been contributing code to it since the early days of the project.

Kubernetes

Kubernetes surfaced from work at Google in 2014, and became the standard way of managing containers.

Although originally designed by Google, it is now an open source project maintained by the Cloud Native Computing Foundation (CNCF), with significant open source contributions from Red Hat and IBM.

According to kubernetes.io, Kubernetes aims to provide “a system for automating deployment, scaling, and operations of application containers” across clusters of hosts. It works with a range of container tools, including Docker.

With containers, you can move into modular application design where a database is independent, and you can scale applications without scaling your machines.
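
For example, scaling an application becomes a one-line operation against the cluster rather than a request for new machines. A minimal sketch, assuming a deployment named web already exists in the cluster:

kubectl scale deployment web --replicas=5

Kubernetes schedules the additional replicas onto the existing nodes, so the application scales without the underlying machines having to.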

Kubernetes is another open source project to which IBM was an early contributor. The following graphic shows IBM’s contributions to Docker, Kubernetes, and Istio as a percentage among the top five contributing organizations for each of those container-related projects. It highlights the importance of container technology to IBM, as well as the volume of its open source work.

Some of IBM's contributions to open source container technology

OpenShift V3.0: open and standard

Red Hat announced an intent to use Docker in OpenShift V3 in August 2014. Under the covers, the jump from V2 to V3 was quite substantial. OpenShift went from using gears and cartridges to containers and images. To orchestrate those images, V3 introduced using Kubernetes.

The developer world was warming to the attraction of Kubernetes too, for some of the following reasons:

  • Kubernetes pods allow you to deploy one or multiple containers as a single atomic unit.

  • Services can access a group of pods at a fixed address and can link those services together using integrated IP and DNS-based service discovery.

  • Replication controllers ensure that the desired number of pods is always running and use labels to identify pods and other Kubernetes objects.

  • A powerful networking model enables managing containers across multiple hosts.

  • The ability to orchestrate storage allows you to run both stateless and stateful services in containers.

  • Simplified orchestration models allow applications to get up and running quickly, without the need for complex two-tier schedulers.

  • An architecture that recognized the needs of developers and operators are different and took both sets of requirements into consideration, eliminating the need to compromise either of these important functions.
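
To make a couple of these concepts concrete, here is a minimal sketch of a pod and a service in Kubernetes YAML. The names, labels, and image are placeholder values for illustration, not taken from any particular OpenShift example:

apiVersion: v1
kind: Pod
metadata:
  name: hello-web
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:stable        # placeholder container image
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello                 # routes to pods carrying this label
  ports:
  - port: 80
    targetPort: 80

A replication controller (or, today, a Deployment) would then keep the desired number of these pods running, using the same label selector.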

OpenShift introduced powerful user interfaces for rapidly creating and deploying apps with Source-To-Image and pipeline technologies. These layers on top of Kubernetes simplify the experience and draw in new developer audiences.
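
As a rough sketch of what that looks like in practice, assuming the oc CLI and a sample Node.js repository (the repository URL and app name here are illustrative), Source-To-Image lets a developer go from source code to a running, routable app in two commands:

oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=my-node-app
oc expose service/my-node-app

The builder image turns the source into a container image, and the platform wires up the deployment, service, and route automatically.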

IBM was already committing code to the key open source components OpenShift is built on. The following graphic shows a timeline of OpenShift with Kubernetes:

OpenShift and Kubernetes timeline

OpenShift V4.0 and the future

Red Hat has clearly proved to be at the forefront of container technology, second only to Google in contributions to CNCF projects. Another recent accomplishment of Red Hat worth mentioning is the acquisition of CoreOS in January of 2018. The CoreOS flagship product was a lightweight Linux operating system designed to run containerized applications, and Red Hat is making it available in V4 of OpenShift as “Red Hat Enterprise Linux CoreOS”.

And that’s just one of many exciting developments coming in V4. As shown in the previous timeline graphic, OpenShift Service Mesh will combine the traffic management and monitoring capabilities of Istio with the tracing and visualization of Jaeger and Kiali. Knative serverless capabilities are included, as well as Kubernetes Operators to facilitate the automation of application management.

The paths join up here, also. IBM is a big contributor of open source code to Istio, Knative, and Tekton. These technologies are the pathways of container-based, enterprise development in the coming decade.

OpenShift V4.0 has only recently been announced. And Red Hat OpenShift on IBM Cloud™ is a new collaboration that combines Red Hat OpenShift and IBM Cloud Kubernetes Service. For other highlights, review the previous timeline graphic.

Some conclusions

Researching the origins and history of OpenShift was interesting. Viewed through the lens of OpenShift, it becomes clear that, in terms of software development, this decade really is the decade of the container.

It is impressive how much energy, focus, and drive Red Hat has put into creating a compelling container platform, layer by layer, advancing the same technologies that IBM has shown interest in, and dedicated engineering resources to, over the past decade.

We’re looking forward to learning and building with all of these cloud technologies in the years ahead.

Anton McConville
Olaph Wagoner

IBM Cloud Hyper Protect Services: Protect your organization from internal AND external threats

As a developer, you probably understand how important data security is — and this holds true whether you are a founder of the next great tech startup or part of a large enterprise team. Barely a month goes by without a high-profile story in the news about a data breach at a major company, and those are just the ones that were discovered and worth reporting on. Regardless of company size, data protection is as relevant now as it’s ever been. More recently, we’ve even heard of creative compromises where organizations believed their sensitive data was secure, but since they didn’t secure data they didn’t believe was sensitive, they found themselves vulnerable to attack. Even beyond specific incidents, many organizations are bound by compliance requirements (such as PCI DSS, GDPR, and HIPAA). As more and more countries evaluate and implement their own requirements, conversations around keeping sensitive data secure continue to evolve. Why rely on policy when you can rely on technology?

This is where a solution that provides data-at-rest and data-in-flight protection can help developers easily build applications with highly sensitive data. To meet this need, IBM Cloud offers a suite of services collectively known as IBM Cloud Hyper Protect Services, which are powered by LinuxONE. These services give users complete authority over sensitive data and associated workloads (even cloud admins have no access!) while providing unmatched scale and performance; this allows customers to build mission-critical applications that require a quick time to market and rapid expansion.

IBM Cloud Hyper Protect Services consists of four services, a combination of PaaS and SaaS, which are covered briefly in this video. The services are:

  • Hyper Protect Crypto Services
  • Hyper Protect DBaaS
  • Hyper Protect Virtual Servers
  • Hyper Protect Containers

Hyper Protect Crypto Services

Hardware-driven cryptography allows you to manage and keep your own keys for cloud data encryption, protected by a dedicated hardware security module (HSM) that meets FIPS 140-2 Level 4 certification — the only one in the industry that meets such standards! This means that even if an attacker has physical access to the data center where your cloud service resides, tamper-resistant hardware keeps your data protected.

Learn more about Hyper Protect Crypto Services by visiting the IBM Cloud documentation.

Hyper Protect DBaaS

Taking Database as a Service (DBaaS) one step further to add encryption, we offer two solutions: Hyper Protect DBaaS for PostgreSQL and Hyper Protect DBaaS for MongoDB EA.

Learn more about Hyper Protect DBaaS by visiting the IBM Cloud documentation.

You can also check out the IBM Developer tutorial Protect cloud-based data with an encrypted database.

Hyper Protect Virtual Servers

Provide your SSH key and you’re up and running with a secured Virtual Server, backed by isolation powered by the IBM LinuxONE Enterprise Server.

Learn more about Hyper Protect Virtual Servers by visiting the IBM Cloud documentation.

Hyper Protect Containers

Containers are incredibly popular today, and Hyper Protect Containers can help you build, test, and deploy a secure microservices-driven environment.

Under the hood

Looking for more technical details on how Hyper Protect works under the hood? Watch this tech talk by Chris Poole to learn precisely how the infrastructure is configured to provide maximum security, including how encryption and isolation are used to secure your data.

Conclusion

With more and more data being collected, stored, and shared, it is incumbent upon developers and data scientists to create technical solutions that automate data protection and security, especially individual-level, personally identifiable information. When your data is collected, stored, and shared, how would you like it to be managed? We assume you want the highest standards possible. That is how we want our data managed! Managing data security by policy alone is no longer good enough. As the world continues to become even more data driven, the future is all about data, innovation, and technical solutions like IBM Cloud Hyper Protect Services.

If you don’t have an IBM Cloud account yet, you can get started with these services today by visiting the IBM Cloud registration page.

Elizabeth K. Joseph
Melissa Sassi

Create a solution in response to wildfires while keeping sensitive data safe

Our planet has entered an era in which natural disasters and humanitarian crises are inevitable occurrences on every continent. In 2018 alone, major disasters included the deadliest wildfire in California’s modern history, Hurricane Michael in Florida, tsunamis in Indonesia, and severe flooding in Nigeria, coming on the heels of Hurricane Maria’s devastation of Puerto Rico in 2017.

The complexity of community needs and the coordination of layers of emergency response simply cannot be addressed with a single solution, but rather with a portfolio of integrated solutions. These combined solutions can work together to identify disasters and victims, respond to requests for help, cope with the aftermath of a disaster, and finally begin infrastructure and community recovery. Without a vision of integrated solutions combining smart technologies and artificial intelligence (AI), disaster responders will face disjointed individual solutions that collectively may not address the grand challenge. In other words, the whole may be less than the sum of its parts. The question posed by this year’s Call for Code global challenge is how to combine smart technology, the Internet of Things (IoT), and AI into a portfolio that builds capacity and unifies disparate information sources, all while protecting sensitive data and keeping information secure.

Using smart technology to leverage big data

In the context of disaster recovery, smart technologies enable the leveraging of big data, which often involves sensitive data. All smart technologies rely on integrating newly generated and mapped data with existing data. For instance, after a wildfire is under control, the need emerges for rapid mapping of affected areas and the extent of damage, often at individual household levels. AI and machine learning (ML) can bridge this knowledge gap by coalescing data from autonomous mapping drones with textual, visual, and geoinformation gathered from active or passive crowd-sourcing. The challenge is to develop technologies for rapid mapping while keeping sensitive data protected in the cloud. Different data gathering techniques yield heterogeneous data, but smart technologies use AI and ML to assess reliability and quantify uncertainties, without putting sensitive data at risk.

Applying smart technology to identify wildfire triggers

Before the disastrous Camp Fire, Noah Diffenbaugh, professor of earth science at Stanford University, predicted an increase in wildfire risks due to trends toward higher temperatures and a drier climate (The Independent, United Kingdom, July 31, 2018). In the case of wildfires, the question is whether we can proactively detect a nascent wildfire trigger before it is out of control. Smart technologies, drones, machine vision, AI, and ML integrate and frame data to proactively identify hazards. It is also possible to integrate the data obtained from these technologies with human activity data, such as social media information, aerial drone detection, and video feeds. With development, AI, ML, and data analytics can map potential hazards to enable proactive mitigation, all while keeping personally identifiable data protected.

See what one of the runner-up teams did with wildfires and machine learning from last year’s Call for Code challenge.

Smart tech to coordinate efforts of government, NGO, and volunteer responders

Responders may be professionals or volunteers; employees of governmental agencies or humanitarian organizations; civilians or members of the military. A disaster triggers aid from multiple organizations, but often with suboptimal coordination among efforts and stakeholder groups. In this complex layering of stakeholder groups, personal and smart technologies facilitate coordination of logistics by sorting out “who does what.” Smart platforms must also be protected during such turmoil. Smart technologies can serve as active hubs during the emergency and in its aftermath, allowing first responders to log in and learn of the most critical needs, whether evacuation assistance, household supplies, water, or road clearing. Automated network notifications have significantly improved coordination of international humanitarian crises – all powered by cloud technologies, such as those offered by IBM. Development of AI- and ML-based systems could aid in identifying responders and resources and mapping them to actual needs on the ground, all while supplying technical solutions for coordinating this information in real time. Imagine the power of such technological advances and how significantly coordination could be improved.

Reduce inequalities in aid to vulnerable populations

In addition to the complexities outlined above, there is another factor to consider. Vulnerable populations are often overlooked by high-end solutions, and natural disasters often leave vulnerable populations even more exposed. The elderly, disabled, or those with very low incomes are less likely to have easy access to technologies, such as mobile phones or a connection to the Internet, which can hinder how smart technologies help them during a chaotic time. According to a recent UN report and a journal publication from Christina Muñoz and Eric Tate, people in these vulnerable categories also did not receive an equal distribution of aid, whether financial or otherwise. For such populations, technology has the potential to serve as an empowering force that reduces social inequality, all while still securing sensitive data. For instance, by monitoring and analyzing social media, responders can detect signs of distress among the elderly, low-income households, and disabled persons. Or, where access to technology is not readily available, technologies like Project Owl’s “ducks” allow anyone to log into the OWL emergency network and provide feedback on their status.

Technologies for disaster relief should be human-centric, not technology-centric, with solutions focused on identifying and prioritizing efforts to lessen hardships on distressed populations, while never neglecting the need to protect personally identifiable information. Technology has the power to narrow the gap between the satisfied and unmet needs of these population segments.

As disaster responses increasingly rely on emerging technologies, it is essential to remember that the core of the response is data, often the most sensitive data available at the individual level. Foremost in the advancement of disaster relief technology must be the protection of that data. Losing critical data during an event would exacerbate vulnerabilities and degrade the disaster response. With proper data security and encryption, smart technologies enable a more intelligent disaster response: responders can react more quickly and rapidly map disaster scenarios.

IBM Cloud Hyper Protect Services

IBM Cloud Hyper Protect Services can help you create and implement highly secure AI, ML, and data analytics solutions that empower developers and data scientists like you to build cloud applications using a portfolio of cloud services powered by IBM LinuxONE.

Learn how IBM Cloud Hyper Protect Services keep sensitive data safe.

Enter your submission

Are you interested in addressing natural disasters through technology? IBM is a founding partner in Call for Code, a global coding competition that asks software developers, data scientists, and technologists to build sustainable solutions that address natural disaster preparedness, response, and recovery.

With submissions now open, Call for Code 2019 is asking you to accept the challenge to innovate like Ali Mostafavidarani, Ph.D.

Ali Mostafavi is an Assistant Professor in the Zachry Department of Civil Engineering at Texas A&M University, where he supervises the Urban Resilience, Networks, and Informatics Lab. His research focuses on analyzing, modeling, and improving network dynamics in the nexus of humans, disasters, and the built environment to foster convergence knowledge of resilient communities, and on integrating human and machine intelligence for smart disaster response through artificial intelligence. His review of how social media provides critical information during a disaster touches on how to effectively acquire disaster situational awareness information, support self-organized peer-to-peer help activities, and enable organizations to hear from the public. He has received various awards and honors, such as the NSF CAREER Award and an Early-Career Research Fellowship from the National Academies Gulf Research Program.

This year’s challenge is specifically focused on healthcare, access to medical health records, the vulnerable, and more. Read the CTO’s letter to developers to understand this year’s focus.

Answer the call and start building today.

Further reading

For more information, visit developer.ibm.com/callforcode.

Ali Mostafavidarani, Ph.D.
Melissa Sassi

Troubleshooting the cert-manager service for Kubernetes

What is cert-manager?

Before we start troubleshooting issues, we need to discuss the software that we’re using. Cert-manager is the successor to the kube-lego project, and it handles provisioning of TLS certificates for Kubernetes. Basically, it takes away the manual work of requesting, configuring, and installing a cert. Instead of working directly with NGINX, we describe what we want configured, and the rest is taken care of automatically through ingress resources and the ingress controller. Cert-manager introduces new Kubernetes resource types that can be used to configure certificates: Certificates and Issuers. There are two kinds of issuers, ClusterIssuer and Issuer, which have different scopes. A ClusterIssuer manages certificates for the entire cluster, whereas an Issuer controls only a single namespace; in this example we are using an Issuer.
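
As a rough sketch of those resource types, here is approximately what an Issuer and a Certificate looked like with the v1alpha1 API that cert-manager used at the time. The issuer name, email address, account key secret, and HTTP-01 solver configuration below are illustrative assumptions, and field names differ between cert-manager releases, so check the documentation for your version:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod              # example name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # replace with your email
    privateKeySecretRef:
      name: letsencrypt-account-key   # secret that stores the ACME account key
    http01: {}                        # enable the HTTP-01 solver
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: lp-mpetason-com
spec:
  secretName: lp-mpetason-com-tls1    # secret where the issued cert is stored
  issuerRef:
    name: letsencrypt-prod
    kind: Issuer
  dnsNames:
  - lp.mpetason.com
  acme:
    config:
    - http01:
        ingress: INGRESS_NAME         # existing ingress resource to reuse for the challenge
      domains:
      - lp.mpetason.com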

For a more detailed overview of cert-manager, check out their GitHub project page: https://github.com/jetstack/cert-manager

Where are the examples running?

For this blog’s troubleshooting demo, I’m using the IBM Cloud Kubernetes Service (IKS), IBM’s managed Kubernetes offering. It provides a solid, current version of Kubernetes, which makes it great for testing deployments and new features or projects that extend K8s. For the purposes of this blog, I am using a paid-tier version of IKS simply because the free tier doesn’t allow us to use ports 80/443. The free tier is also limited in that we can’t use load balancers. (For full details and pricing plans, visit https://cloud.ibm.com/kubernetes/catalog/cluster.)

The new cert-manager project also supports more ingress controllers than kube-lego, which was limited in that respect. The biggest difference that I can see between kube-lego and cert-manager is how the ingress resources are configured: with kube-lego, there would be at least two ingress resources per domain, which would break certain ingress controllers that were not expecting more than one resource per DNS record.

Setup

The application was deployed by using HTTP validation. The documentation includes example services and applications that you can use if you want to try this out yourself. The troubleshooting that follows assumes the steps in the documentation have already been completed.

Troubleshooting

Most of the common issues that we see with this setup come from slow DNS resolution. If you configure an A record for your domain around the same time as the deployment, you might run into issues when Let’s Encrypt attempts to verify the domain. If the domain is not resolving yet, then we can assume that the challenge file is not reachable. Verify that you can resolve the DNS record before you attempt to set this up.

Great, but what does that mean and why do I care? We need resolution to work because Let’s Encrypt is going to issue a challenge to make sure that the domain actually exists and that you control it. Basically, there’s a challenge file that needs to exist at a specific location and be served on port 80. If it exists, then Let’s Encrypt proceeds. If the DNS is not configured correctly, or has not propagated yet, then Let’s Encrypt is unable to resolve the domain and will also fail to find the challenge file.

Since we are using IKS, we’ll already be set up with an ingress controller and ingress resource by default. When setting up DNS, we want to use the IP address that is associated with the ingress controller and load balancer service that was configured. Where can we find this valuable information? It’s going to be in the kube-system namespace.

kubectl get svc -n kube-system |grep -i "public"

We see output similar to:

public-crf3df42c3c8a142c8a3e0ee73ed4e58e2-alb1   LoadBalancer   172.21.39.18     169.61.23.142   80:31337/TCP,443:31615/TCP   106d

We need to pull out the public IP (in this case it’s 169.61.23.142) and use it to set up an A record for the hostname we are using. The great part about having an ingress controller that’s already configured on the cluster is that we can manage multiple domains. In this demo, I set up multiple domains to point to the same IP address and then used ingress resources + cert-manager + the ingress controller to route traffic based on the hostname. When the DNS record finally resolves, you can move along and attempt a deployment.
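
For example, a quick lookup with the host command (dig works too) should return the ALB IP once the record has propagated:

host lp.mpetason.com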

lp.mpetason.com has address 169.61.23.142

First, we need to see which ingress resources were created with the command below.

kubectl get ingress

Note: If we are checking in a different namespace, then we need to append -n NAMESPACE_NAME.

Find the name of the resource that was recently created and then describe it.

kubectl describe ingress INGRESS_NAME

Check for valuable information in Events. Normally, we’ll see something like “failed to apply ingress resource” in the message field, and if we check the “Reason” field we’ll actually get a useful error message. This is great for sysadmins and developers since it means that they get useful information without having to look at log files on an actual server.

Events:
  Type     Reason             Age   From                                                             Message
  ----     ------             ----  ----                                                             -------
  Warning  TLSSecretNotFound  3s    public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-bl976  Failed to apply ingress resource.
  Warning  TLSSecretNotFound  3s    public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-25nhq  Failed to apply ingress resource.

After we figure out that the TLS Secret might be missing, we need to see what the expected resource is named. We name the secret in our ingress resource, so let’s check there first.

kubectl get ingress INGRESS_NAME -o yaml

We use the output option to specify YAML so we can read the configuration. You can also use describe instead of get with -o yaml; we’ll just see the output in a different format. In our case, the secret name is lp-mpetason-com-tls1.

tls:
  - hosts:
    - lp.mpetason.com
    secretName: lp-mpetason-com-tls1

Check the configured secrets to see whether the secret exists, or whether it has a different name for some reason. If we are having trouble with our deployment, the secret may not have been created; for it to be created, the Issuer and the Certificate both need to finish being configured.

kubectl get issuer
kubectl describe issuer ISSUER_NAME

We should be able to find the error message in Events. Most of the error messages about the Issuer are related to the ACME endpoint. Other issues can come up; however, I haven’t seen them often enough to help troubleshoot them – yet. For the most part, you can try to resolve the issues you see in the Event info or Status.

If our issuer is working without issues, we will see something like the following:

Status:
  Acme:
    Uri:  https://acme-v01.api.letsencrypt.org/acme/reg/<NUMBERS>
  Conditions:
    Last Transition Time:  2018-06-14T18:12:24Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:                    <none>

As of this post, we should probably use acme-v02 instead. If you run into errors about the version, go ahead and change it in your Issuer.
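
In practice, that means pointing the ACME server URL in the Issuer spec at the v02 directory, along these lines (assuming the v1alpha1 layout sketched earlier):

spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory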

Next, we need to take a look at the cert and see what the status is.

kubectl get cert
kubectl describe cert CERT_NAME

Here we can run into a few other issues, such as rate limiting if we have tried to register too many certificates in a short period.

Normally, if the Issuer is working and DNS is resolving, we should be able to get a cert. After we confirm that we have a cert by describing it, we’ll need to take a look at secrets to verify that the corresponding secret was created.

kubectl get secret

If the secret exists, we can go back over to the ingress resource to see if the ingress controller was able to load our cert.

  Warning  TLSSecretNotFound  26m   public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-bl976  Failed to apply ingress resource.
  Warning  TLSSecretNotFound  26m   public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-25nhq  Failed to apply ingress resource.
  Normal   Success            11s   public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-25nhq  Successfully applied ingress resource.
  Normal   Success            11s   public-cr0ba8157fd1a6454ca7ba3125b9b44ff6-alb1-5895555f68-bl976  Successfully applied ingress resource.

Success! Now we can hit the site and verify that https:// works.
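
A quick way to confirm from the command line (assuming curl and the hostname used in this demo) is to request the site over HTTPS and check the certificate details in the verbose output:

curl -vI https://lp.mpetason.com

The TLS handshake section of the output should show the certificate chain, including the Let’s Encrypt issuer.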

Mike Peterson

Cloud, data, and AI for the greater good

Technology drives significant and rapid change. Computers allow the digitization of information, the internet enables the transmission of that information at the speed of light, and artificial intelligence is finding connections between disparate information. Such impact comes with awesome responsibility, and developers around the world shoulder that responsibility as they build next-gen applications.

Cloud technology has made it easier to spin up infrastructure, rather than going through hours of manual installation and configuration steps. But with sensitive data stored in the cloud, new struggles emerge: unauthorized access has become more prominent, and the staggering cost of data breaches is very real. The solution is simple in concept but difficult in execution: encrypt all data.

This is why developers turn to solutions like Hyper Protect, which build encryption around their applications. By using one of these services, developers can devote their attention to the application itself, relying on built-in encryption at rest and in flight. Best of all, it is available in the cloud for easy deployment. This is a perfect way to combine enterprise-grade encryption, cloud, data, and artificial intelligence to aid community and individual health and well-being.

For more details, the video Ensuring your customers’ data privacy with applications secured on IBM Z provides a great overview of this problem and its solution.

Some related blogs and tutorials:

For more information about using APIs exposed on IBM Z, check out:

For more information about running a Linux workload on IBM LinuxONE, read:

Matthew Cousens

Protect sensitive data when disaster strikes and every second counts

Ensuring sensitive data is secure is top of mind for everyone, particularly those who work with sensitive health data. Hyper Protect cloud services built on IBM LinuxONE take security to the next level. The DBaaS service brings inherent data encryption both at rest and in flight without any application changes, and unlike other DBaaS cloud services, it ensures that you are the only one with access to your data. The Crypto service allows you to have complete control of encryption key management where cloud admins have no access to the keys.

In the aftermath of a disaster, you don’t have time to think about security and scalability — you need to build them into your application from the start! With hyper-secure data at rest, you can enable first responders and relief organizations to safely and securely collect and use personal data about those impacted. This also ensures that the application will scale and perform when you need it most, during unplanned usage peaks. Plus, it maximizes performance and throughput when seconds count.

Applications deployed in the aftermath of a disaster need to be reliable and trusted, and should scale and perform as the situation demands. You can easily use IBM’s Hyper Protect DBaaS as the backend data store to keep sensitive data fully protected and secure. This unique DBaaS allows data to be stored in a highly secured enterprise cloud service, and is a perfect match for workloads with sensitive data. It allows you to retain your data in a fully encrypted client database without the need for specialized skills.

Hyper Protect DBaaS gives you the ability to provision, manage, maintain, and monitor multiple database types like MongoDB through standardized APIs. It also protects against threats of data breach and data manipulation by leveraging LinuxONE pervasive encryption, scalability, performance, and IBM Secure Service Container technology behind the scenes. Developers who use Hyper Protect DBaaS as their backend data store can ensure that individual health and community well-being data will not be further compromised by a data breach.

Hyper Protect DBaaS is easy to use, fully secure, and can be up and running in a matter of minutes. Protect the sensitive data you are collecting, without requiring specialized security skills, by using Hyper Protect DBaaS. To start quickly and easily, see our how-to, “Quickly create a hyper-secure database.”

Rebecca Gott

Develop secure, scalable apps to respond effectively to public health emergencies

Because of the failed rains in 2016 and 2017, more than 3.1 million people in East Africa are facing severe food insecurity. Over 388,000 children under the age of five are at risk of dying from a lethal combination of severe malnourishment and deadly diseases. More than a million people have been displaced and many of them have ended up in internally displaced person (IDP) camps around cities and towns. Many of these camps are difficult to reach because of distance, conflict, or political barriers.

Non-profit organizations such as Oxfam are working in the most affected areas providing support in the areas of food security, water, sanitation, and protection. Innovative scalable mobile-based technology solutions can play an important role in the current response, but they require high security and scalability. Applications deployed in these affected areas need to be reliable and trusted, and should scale and perform as the situations demand.

To build mobile-based apps that utilize innovative back-ends to address natural disasters such as drought, developers should use open, cutting-edge hardware and software that provides the highest levels of security. Open source is now the dominant method for creating cloud-native software, with Docker at the center of most container-based innovations. This developer code pattern demonstrates how a modern back-end development ecosystem is ideal for situations like this, and why emphasis on security, maturity, and high performance is so important to achieving reliability, scalability, and trust in natural disaster preparedness and relief.

Learn more and get started.

Mohammad Abdirashid

Speed access to financial assistance for community well-being

Giving people access to their funds to ensure community well-being before and after disaster strikes is critical to survival and recovery. You can connect and integrate your applications to banking enterprise IT infrastructures and systems using APIs to validate credit ratings, get balances, access customer information, and more. Ninety-two of the world’s top 100 banks rely on IBM Z to securely handle enormous numbers of transactions each day. Developers can leverage APIs to create applications that use vast amounts of mainframe data without requiring mainframe skills.

Utilize the code pattern Create financial applications using APIs on mainframe to emulate calling an enterprise banking system for customer information, bank account information, credit ratings, and more. These APIs access a banking system that uses IBM CICS, IBM Db2, IBM Machine Learning, and more. They are the kinds of applications that many financial institutions run today to process credit card, ATM, and other transactions that require high I/O throughput, the highest level of security, and high availability — applications where minutes can mean millions.

Take your application to the next level by enabling it to help increase efficiency and effectiveness of programs that provide relief and critical needs to disaster-stricken areas!

Meredith Stowell