A developer’s journey from attending a Call for Code hackathon to open sourcing drone tech as one of Code and Response’s first projects

On September 20, 2017, Hurricane Maria struck my home, Puerto Rico. After surviving the record-breaking Category 5 storm and being personally affected by its aftermath, I decided to make it my mission to create technology that could help mitigate the impact hurricanes have on our island.

Inspired by Call for Code

Can you imagine trying to plan relief efforts for more than three million people? People in rural areas, including a community in Humacao, Puerto Rico, suffered the most. The people in this community were frustrated that help was promised but never came, so they came together and painted “water” and “food” on the ground as an SOS, in the hope that helicopters and planes would see their message. For me, it was sad and frustrating to see how different reality was outside the metro area. Lives were at risk.

Fast-forward to August 2018. Less than a year after the hurricane hit, I attended the Call for Code Puerto Rico Hackathon in Bayamón, Puerto Rico. I was intrigued by this global challenge that asks developers to create sustainable solutions to help communities prepare for, respond to, and recover from natural disasters.

The SOS messages after the hurricane inspired me to develop DroneAid, a tool that uses visual recognition to detect and count SOS icons on the ground from drone video streams overhead, and then automatically plots the emergency needs captured on video on a map for first responders. I thought drones could be the perfect solution for rapidly assessing damage from the air, capturing images that could then be processed by AI computer vision systems. At first, I considered using OCR (optical character recognition) to detect letters. The problem with that approach is that everyone’s handwriting is different, and making it work across languages would be very complex.

After a few hours of coding, I pivoted and decided to simplify the visual recognition to work with a standard set of icons. These icons could be drawn with spray paint, chalk, or even placed on mats. Drones could detect those icons and communicate a community’s specific needs for food, water, and medicine to first responders. I coded the first iteration of DroneAid at that hackathon and won first place. This achievement pushed me to keep going. In fact, I joined IBM as a full-time developer advocate.

DroneAid is so much more than a piece of throwaway code from a hackathon. It’s evolved into an open source project that I am excited to announce today. I’m thrilled that IBM is committed to applying our solution through Code and Response, the company’s unique $25 million program dedicated to the creation and deployment of solutions powered by open source technology to tackle the world’s biggest challenges.

Open sourcing DroneAid through Code and Response

DroneAid leverages a subset of standardized icons released by the United Nations. These symbols can either be provided in a disaster preparedness kit ahead of time or recreated manually with materials someone may have on hand. A drone can survey an area for these icons placed on the ground by individuals, families, or communities to indicate various needs. As DroneAid detects and counts these images, they are plotted on a map in a web dashboard. This information is then used to prioritize the response of local authorities or organizations that can provide help.

From a technical point of view, that means a visual recognition AI model is trained on the standardized icons so that it can detect them in a variety of conditions (e.g., distorted, faded, or in low-light conditions). IBM’s Cloud Annotations tool makes it straightforward to train the model using IBM Cloud Object Storage. The model is applied to a live stream of images coming from the drone as it surveys the area. Each video frame is analyzed to see whether any icons are present; if they are, their locations are captured and they are counted. Finally, this information is plotted on a map indicating the location and number of people in need.
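To make that pipeline concrete, here is a minimal sketch of the per-frame loop in TypeScript with TensorFlow.js. The model path and the detection post-processing are placeholders for illustration, not the actual DroneAid code; the real implementation lives in the project repository.

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical model location; the real project ships its own trained model.
const MODEL_URL = '/model/model.json';

async function detectIcons(video: HTMLVideoElement) {
  const model = await tf.loadGraphModel(MODEL_URL);
  const counts = new Map<string, number>();

  setInterval(async () => {
    // Grab the current video frame as a tensor and add a batch dimension.
    const frame = tf.tidy(() => tf.browser.fromPixels(video).expandDims(0));
    const output = await model.executeAsync(frame); // raw detection tensors

    // Decoding boxes/classes/scores is model-specific; sketched here only:
    //   for each detection above a confidence threshold:
    //     counts.set(label, (counts.get(label) ?? 0) + 1);
    //     ...then push { label, lat, lon } to the map dashboard.

    tf.dispose(frame);
    tf.dispose(output);
  }, 1000); // analyze roughly one frame per second
}
```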

The system can be run locally by following the steps in the source code repository, starting with a simple Tello drone example. Any drone that can capture a video stream can be used, since the machine learning model leverages TensorFlow.js in the browser: we capture the stream from any drone and apply inference to that stream. This architecture can then be applied to larger drones, different visual recognition types, and additional alerting systems.
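As a hedged illustration of that first Tello step: the Tello exposes a simple UDP command interface, so a few lines (sketched here in TypeScript for Node.js; the repository contains the actual setup instructions) are enough to put the drone into SDK mode and start its video stream.

```typescript
import dgram from 'node:dgram';

// Per the Tello SDK, the drone listens for commands on 192.168.10.1:8889
// and, once 'streamon' is sent, publishes H.264 video to udp://0.0.0.0:11111.
const sock = dgram.createSocket('udp4');
const send = (cmd: string) =>
  sock.send(cmd, 8889, '192.168.10.1', (err) => {
    if (err) console.error(`failed to send "${cmd}":`, err);
  });

send('command');  // enter SDK mode
send('streamon'); // start the video stream for the browser to consume
```

From there, the stream can be decoded (for example, relayed over a WebSocket into the browser) and fed to the detection loop sketched above.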

Architecture diagram for DroneAid

Calling all developers to collaborate in the DroneAid open source community

It’s been quite a journey so far, and I feel like we’re just getting started. Let’s unite to reduce loss of life, get victims what they need in a timely manner, and lessen the overall effects a natural disaster has on a community.

Our team decided to open source DroneAid because I feel it’s important to make this technology available to as many people as possible. The standardized icon approach can be used around the world in many natural disaster scenarios (e.g., hurricanes, tsunamis, earthquakes, and wildfires), and having developers continually contribute to training the software can help increase its effectiveness and expand how the symbols can be used together. We built the foundation for developers to create new applications, and we envision using this technology to deploy and control a fleet of drones as soon as a natural disaster hits.

Now that you understand how DroneAid can be applied, join us and contribute here: https://github.com/code-and-response/droneaid

Pedro Cruz

Developing for the edge

With the advent of 5G and the evolution of Internet of Things systems, we are seeing an explosion of edge computing use cases. But what is edge computing? How can it benefit developers? What challenges do developers face?

In this blog post, I recount a conversation I had with Dennis Lauwers, Distinguished Engineer, Hybrid Cloud Europe, and Eric Cattoir, Client Tech Professional for IBM Cloud in Benelux.

What is edge computing?

Eric: “Edge computing is a kind of real-time computing: you process data right at the moment it’s collected by your device. Instead of sending the data to the cloud first, you process it on the device itself. Devices have more and more compute power, which makes it possible to process the data locally…at the edge.”

Dennis: “Plenty of use cases benefit from edge computing. For example, think about face recognition at border controls. This task involves a massive amount of data, with thousands of people crossing borders every hour. It would take too much time to first send the data to the cloud for processing. When you analyze the data right on the device, there’s virtually no latency. And the data that you want to back up can still be safely stored in the cloud.”

What challenges do developers face when coding for the edge?

Eric: “The programming itself is in line with traditional development: you use the same languages and go through all the familiar DevOps phases. The challenge is in the diversity of the devices. The processor technology on IoT devices often differs from what you’re using on your PC, and the processors probably even differ between devices. How do you manage this? As a developer, you are asked to write consistent, secure code that can be deployed seamlessly to all devices.”

How are you helping developers with this challenge?

Dennis: “We help enable developers for edge computing. You first build and test your code locally, and once it’s all working, you distribute it to your other devices. This involves building a multi-cluster environment. That way, you’re also prepared when there’s a new device to onboard: in just a few clicks, it’s operational.”

Where do you suggest developers start if they’d like to know more?

Eric: “For a general introduction to edge computing, you can read the blog post “What’s edge computing and how can it transform your business?”. Or you can watch this video of Rob High, IBM Fellow and CTO, talking about the basic concepts and key use cases for edge computing.”

If you would like to experiment with the IBM Edge Computing offering, Ryan Anderson wrote an extensive blog post on design patterns and recipes related to edge computing.

Will you be speaking about edge computing at Devoxx Belgium?

Dennis: “Yes, that’s right. In our session, we will look at how you can set up a Kubernetes-based DevOps solution for developing these complex applications, whose components run on a mixture of central cloud systems and edge devices. We will also show how you can manage an environment with a large number of edge devices and control aspects like security and integrity.

“The use case will show how you can develop applications, like running visual recognition on the edge in real time on a multitude of devices, using some basic hardware (Raspberry Pi computers or other ARM-based computing devices). We will be leveraging the open source Horizon software.”

Are you coming to Devoxx Belgium? Stop by our booth for a quick lab and get your limited-edition IBM Developer swag!

Stephanie Cleijpool

Introducing Node-RED 1.0

Six years after it was originally open sourced by IBM, we’re excited to see Node-RED reach the major milestone of its 1.0 release. This release reflects the maturity of the Node-RED project, whose community has continued to go from strength to strength, with over 2 million downloads, more than 2,200 third-party add-on nodes available, and more and more companies adopting it as part of their own products and services.

What is Node-RED?

Node-RED is a low-code programming environment for event-driven applications. It uses flow-based programming to let you draw a visual representation of how messages should flow through the application.

It’s ideally suited to run on devices such as the Raspberry Pi for creating IoT solutions, as well as in the cloud for any event-driven workload, such as providing REST APIs and integrations between systems.

Node-RED embodies a “low-code” style of application development, where developers can quickly create meaningful applications without having to write reams of code. The term “low code” was coined by Forrester Research in a report published in 2014, but it clearly describes a style of development that goes back further than that.

Three key benefits of low-code application development, all of which are seen first-hand with Node-RED, are:

  • It reduces the time taken to create a working application.

  • It is accessible to a wide range of developers and non-developers alike.

  • Its visual nature lets users see the structure of their application at a glance.

You can find out some more about the background and philosophy of Node-RED’s low-code approach to application development in this previous blog post.

What does 1.0 bring?

This release brings a number of useful feature enhancements that you can read about on the nodered.org blog. In this post, I want to highlight some of the bigger changes.

While the emphasis is on stability, the Node-RED project has taken the opportunity of a major version change to make some updates that weren’t suitable for smaller maintenance releases.

Asynchronous by default

For end users, the main change is that flows are now fully asynchronous, which allows for fairer handling of messages across multiple flows. It also unlocks a number of exciting features further along the roadmap, including the ability to pause and debug flows as you would with a traditional code debugger.

It is possible that some existing flows were written to take advantage of the sometimes-synchronous, sometimes-asynchronous nature of the previous runtime, so this change does have the potential to affect existing flow behavior.

The Node-RED project has done a lot of work to minimize any potential impact and has written a number of blog posts to help users understand the changes: “Making flows asynchronous” and “Cloning messages in a flow”.
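To illustrate what this looks like from inside a Function node, here is a small hedged sketch (Function nodes are written in JavaScript). The `node.done()` call is the 1.0 addition that lets the runtime track when an asynchronously handled message is finished; the flow logic itself is invented for the example.

```javascript
// Sketch of a Function node body under Node-RED 1.0 semantics.
// Even a synchronous-looking `return msg` is now delivered to the next
// node asynchronously, so messages on other flows get a fair turn.
const count = (context.get('count') || 0) + 1;
context.set('count', count);

// Asynchronous completion: send the message later, then signal completion.
setTimeout(() => {
    msg.payload = { count };
    node.send(msg);
    node.done(); // new in 1.0: tells the runtime this node has finished
}, 100);

return; // nothing returned synchronously
```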

Overhauled CSS

The Node-RED editor had CSS class names dating back to the very first days of its development six years ago. It had evolved over time without much consistency, which made it hard to produce custom themes or to embed the editor in another page without a lot of tedious work.

With this release, the editor’s entire CSS has been completely overhauled to ensure consistency and ease of use. The Node-RED project has also provided tooling to help produce custom themes, and a community-contributed dark theme is already available.

Docker images

The Node-RED Docker images are a popular way of using the project. However, they were built on base images that are no longer maintained. Among other issues, this meant that for a while there was no image suitable for the Raspberry Pi with the current 10.x version of the Node.js runtime.

Thanks to the community, the Docker images have been completely redesigned, with proper multi-architecture images now available.

New look for the Node-RED Flow Library

The Node-RED Flow Library is where all third-party contributed nodes are listed. It’s also a place where users can share useful flows that they have created. With more than 2,200 contributed nodes and over 1,000 flows, there’s a lot of great stuff in the library. The challenge is often finding what you’re looking for.

To coincide with the 1.0 release, the Flow Library has had a makeover and gained a new feature: the ability for users to create and share collections of nodes and flows. Collections are a way to bring some order and curation to the library. For example, there is a collection of extra nodes for the Node-RED Dashboard project.

Getting started with Node-RED

If the 1.0 release of Node-RED has piqued your interest, you have a number of ways to find out more. You can follow the Node-RED project documentation to install it on your local machine or on a device like a Raspberry Pi. Alternatively, you will find Node-RED in the IBM Cloud catalog as one of the example starter applications.

You can also find many more articles, tutorials, and code patterns featuring Node-RED on IBM Developer.

Nick O’Leary

Live coding event: Using Node-RED and AI to analyze social media

Natural disasters and social media

With social media so prevalent today, people all over the world can connect instantly online through various platforms. And though social media has its negative aspects, it has proved helpful after natural disasters: people can mark themselves as safe, connect with others to let them know where they are, and raise awareness and funds for charitable donations.

When building your Call for Code submission, it is obviously useful to know important weather data. But consider also how today’s technology around social networks and other forms of multimedia communication could be improved.

Mark your calendar for a live coding tutorial on May 14

With less than two months until Call for Code submissions close on July 29, we have an exciting live coding event coming up next week around social media. In this live tutorial, John Walicki (STSM and CTO of Global Developer Advocacy) will demonstrate how Node-RED flows paired with AI can analyze social media during and after a natural disaster. Walicki will walk you through the steps to create a sample application that uses Node-RED, Watson, and Twitter to look at how survivors of a natural disaster use social media to post their sentiments and pictures of locations and damage, and how they can get connected to first responders.

After a disaster, these methods could be useful to further understand how natural disasters affect mental health and whether social media helped first responders locate survivors. Images shared on social media by individuals could also prove useful in determining specific types of terrain or infrastructure impacted by natural disasters.
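The tutorial builds this as a Node-RED flow, but the core analysis step can be sketched in a few lines against the Watson Node.js SDK. Everything below (credentials, sample tweet, chosen features) is an illustrative assumption rather than the exact flow Walicki demonstrates:

```typescript
import NaturalLanguageUnderstandingV1 from 'ibm-watson/natural-language-understanding/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

// Credentials come from your IBM Cloud service instance (placeholders here).
const nlu = new NaturalLanguageUnderstandingV1({
  version: '2019-07-12',
  authenticator: new IamAuthenticator({ apikey: process.env.NLU_APIKEY! }),
  serviceUrl: process.env.NLU_URL!,
});

// Score the sentiment and keywords of a tweet captured from the Twitter stream.
async function scoreTweet(text: string) {
  const { result } = await nlu.analyze({
    text,
    features: { sentiment: {}, keywords: { limit: 3 } },
  });
  return result.sentiment?.document; // e.g. { score: -0.7, label: 'negative' }
}

scoreTweet('Roads flooded near the bridge. We need water and supplies.')
  .then((sentiment) => console.log('sentiment:', sentiment));
```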

The livestream is over now, but you can still catch the replay here! And make sure to bookmark IBM Developer’s Twitch channel to catch future streams!


IBM Developer Staff

Building Call for Code Apps with IoT and Node-RED

Welcome to the Call for Code Technology Mini-Series, where each part digs into one of the six core technology focus areas within Call for Code. You’ll learn about that technology, how to best utilize it on IBM Cloud, and where to find the best resources to fuel your innovation. First things first: If you haven’t already, accept the Call for Code challenge and join our community.

In Part 1, I’m going to talk about IoT and Node-RED, and I’ll explain how those two technologies can be easily tied together on IBM Cloud using the Watson™ IoT Platform.

IoT explained

The Internet of Things (IoT) is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes, and environments. Those “connected” things are used to gather information, send information back, or both. IoT allows businesses and people to be more connected to the world around them and to do more meaningful, higher-level work. Simply put, it means taking all the things in the world and connecting them to the internet. If you are new to IoT, check out this article by Callum McClelland in IoT For All that explains the basics of IoT and why it’s important: “What is IoT? – A Simple Explanation of the Internet of Things.”

A key way that IoT can help mitigate natural disasters is through sensor data. Collecting and analyzing this data can allow communities to take corrective or preventative action automatically. One of my favorite examples from the Call for Code 2018 Global Competition was Project Lali, which used IoT to measure temperature data in areas vulnerable to wildfires, send that data to IBM Cloud, and run the sensor data through various Watson services, like Watson Machine Learning and Watson Studio, to predict fire intensity and behavior.

IoT devices can come in many shapes and sizes, starting with smaller development boards like a Raspberry Pi, Particle Photon, or an Onion Omega. Lots of mainstream consumer electronic devices that provide daily utility to users are also IoT devices, such as Amazon Echo, Google Home, Nest thermostats, and Ring doorbells. Even the car you drive and your smart refrigerator at home that tracks food inventory and alerts you when you’re running low on milk are — you guessed it — IoT devices!

Get started with IoT

Whether you’re a professional software developer or a beginner just starting out, one of the easiest ways to work with IoT devices is Node-RED. Node-RED is a flow-based visual programming editor that lets you wire nodes together into flows. These flows give you all the power of traditional programming, but within a simple, easy-to-use interface. The best thing about Node-RED is that it runs perfectly on IBM Cloud and is included in one of our starter kits.

If you don’t already have an IBM Cloud account, the first step is signing up for one, which takes less than 2 minutes. Just be sure to use a valid email address, as you will have to confirm it before you can create any services. Once you’re signed in, continue reading to see our featured example.

The tools you need

While many IoT devices can work with the Watson IoT Platform, I’m going to focus specifically on the Onion Omega series of devices in this blog post. If you’re not familiar with the Onion series of devices, definitely check them out. For less than $50, you can get an Omega2+, a dock, and an OLED screen, and have a ton of fun with that device. The best resource for getting started with the Omega comes from none other than John Walicki, the CTO of IoT Developer Advocacy. He has written a complete guide to using Onion devices with Node-RED on IBM Cloud. Follow this detailed guide for a great introduction to IoT device setup and usage, service creation in IBM Cloud, and a complete Node-RED solution.

Looking for more IoT ideas? Check out our Code and Response™ IoT code patterns. Code patterns are one-stop-shop open source solutions, written by IBMers, that include detailed information, architectural flow diagrams, complete instructions, and a GitHub link to the code used in the patterns. Want to build some more cool things on IBM Cloud not centered around IoT? Check out our huge section of other code patterns.

Using the Watson IoT Platform without a device

Don’t have access to the Onion device or other IoT devices? No worries! You can still use the Watson IoT Platform’s simulated devices, which can generate data for use within the platform.
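If you’d rather script a simulated device yourself, the gist is plain MQTT: publish JSON events using the platform’s topic conventions. Here’s a rough TypeScript sketch; the organization ID, device type/ID, and token are placeholders, and the connection details reflect the platform’s documented defaults as I recall them:

```typescript
import mqtt from 'mqtt';

// Watson IoT Platform connection conventions (placeholder credentials).
const ORG = 'abc123'; // your 6-character organization ID
const client = mqtt.connect(
  `mqtts://${ORG}.messaging.internetofthings.ibmcloud.com:8883`,
  {
    clientId: `d:${ORG}:simulated:sensor-01`, // d:<org>:<deviceType>:<deviceId>
    username: 'use-token-auth',
    password: process.env.DEVICE_TOKEN, // the device's auth token
  }
);

client.on('connect', () => {
  // Publish a fake temperature reading every 5 seconds as a device event.
  setInterval(() => {
    const payload = JSON.stringify({ d: { temp: 20 + Math.random() * 10 } });
    client.publish('iot-2/evt/status/fmt/json', payload);
  }, 5000);
});
```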

We’ve just covered what IoT and IoT devices are, the power and ease of use that Node-RED provides, how simple the Watson IoT Platform is to work with, and how all of those pieces weave together in a hands-on exercise.

I hope you’ve found this blog post useful. I’ll be back soon with Part 2, where we’ll talk about artificial intelligence and what Watson can do for you.

In the meantime, follow my work in GitHub.

Derek Teay

Keep sensitive data safe and withstand the rigors of natural disasters

I recently learned about an IoT device from Think Anew, a Florida-based HealthTech company at the forefront of healthcare innovation. Think Anew’s BOOMBOX device empowers nursing home and long-term care staff to prepare for and recover from floods, hurricanes, and other natural disasters.

Don Glidewell, CEO of Think Anew, and his team of tech-for-good advocates, including Stacey Yoakum, Vice President of Health Informatics, focus on disaster preparedness, resilience, and response within the healthcare industry.

Panama City, Florida, was nearly destroyed by Hurricane Michael last October. The St. Andrews Bay Skilled Nursing and Rehabilitation Center was one of many facilities impacted by the loss of connectivity, water, and electricity. With 70 percent of communications down in the panhandle, its staff lost access to clinical records and vital operational software. St. Andrews Bay staff partnered with the Think Anew team to deploy BOOMBOX, its Multi-Link CarrierSat and micro-grid enabled IoT device, restoring access to the center’s sensitive data and productivity tools.

St. Andrews Bay was the only location in the area with a working internet connection, which allowed staff to accurately dispense vital medication to its residents, maintain its strict data privacy protocols, manage electronic records, and handle Medicare billing while other facilities relocated their residents. Technology served as the lifeline that ensured the safety of nearly 100 elderly residents.

“The horrid conditions left by Hurricane Michael created significant challenges for many Florida residents, but none more so than seniors in skilled care facilities going without everyday necessities and at risk for medical care issues due to the lack of communications,” said Florida State Senator Dennis Baxley, who sits on the State of Florida Senate Healthcare Committee.

Tech leaders like Glidewell are driven by passion and personal stories. Glidewell’s superpower is empowering those who serve, a drive that stems from his mother’s lifelong career as a caregiver in a nursing home; his understanding of the struggles of long-term care staff comes from her experience. As a first responder during Hurricane Katrina in 2005, Glidewell saw firsthand the significant pressure nurses were under to keep patients safe and accurately medicated, all while maintaining patient confidentiality and privacy.

With sensitive data now frequently stored in the cloud, the struggles that St. Andrews Bay experienced during the hurricane are ones that other facilities will face during natural disasters.

The tech used to support relief teams during and after a disaster must be scalable, resilient, and secure. Seconds count and quick action saves lives. It’s not enough to respond to a disaster; healthcare facilities need to be prepared before disasters occur, including malicious attempts to exploit the situation.

IBM Cloud Hyper Protect Services can help you build IoT innovations that withstand the rigors of disaster, allowing disaster response and recovery developers and data scientists like you to build secure cloud applications using a portfolio of cloud services powered by IBM LinuxONE.

Learn how IBM Cloud Hyper Protect Services keep sensitive data safe.

Are you interested in addressing natural disasters through technology? IBM is a founding partner in Call for Code, a global coding competition that asks software developers, data scientists, and technologists to build sustainable solutions that address natural disaster preparedness, response, and recovery.

With submissions now open, Call for Code 2019 is asking you to accept the challenge and innovate like Don Glidewell. This year’s challenge is specifically focused on healthcare, including access to medical records and the needs of vulnerable populations. Read about the focus in the CTO’s letter to developers.

Extreme weather is here to stay, and the need for innovation is at an all-time high. What will you create? For more information, visit developer.ibm.com/callforcode.

Melissa Sassi

How open source software is eating the world

Since the advent of modern computing, the ability of passionate programmers to collaborate on open source code has led to some of the most important software breakthroughs. Today, open source software is more important than ever. In this timeline of major open source milestones, you’ll find that many innovations – such as web browsers, databases, and smartphones – are built on decades of open source contributions.

Timeline of open source through the decades

Timeline infographic of major open source milestones through the decades

IBM Developer Staff

A recap of IBM Developer UnConference Europe

On January 24, 2019, we welcomed almost 100 developers in Zurich for a day filled with labs, breakout sessions, and walk-in demos. The content of our IBM Developer UnConference reflected the major discussions facing our community: managing multi-cloud environments (both public and private), Kubernetes, IoT, and analytics.

What’s an IBM UnConference?

An UnConference aims to be the antithesis of a conference: it typically avoids ticket prices and sponsored sessions, and focuses on the informal transfer of knowledge and information within the community.

The day began with a warm welcome from Youri Boehler, who introduced our guests and newcomers to the Developer UnConference, our exchange and learning platform for developers, non-developers, data scientists, managers, and anyone interested in topics such as analytics, AI, big data, blockchain, cloud computing, data science, DevOps, hacking, machine learning, open source, quantum computing, research, and other emerging technologies. In addition, we had plenty of IBM labs, and we were happy to host partners such as Hortonworks, Hilscher, and Red Hat.


UnConference labs

From Slack bots with Watson Assistant to private cloud on Kubernetes, there was something for everyone. Not surprisingly, the most popular labs were around IoT and edge analytics. Why? Firstly, edge analytics is a really hot topic within IoT. And secondly, our speaker, Romeo Kienzler, has such a unique and engaging way of giving labs and talks that we definitely needed more chairs! Since there were so many interesting topics to discuss, and so much information to share, Romeo extended his lab and started explaining the basics of encryption. And speaking of encryption, we also have to mention the talk from our IBM Research colleague Vadim Lyubashevsky, who addressed the question of how to apply encryption in the era of quantum computers. How do we achieve quantum-safe cryptography that allows for secure communication? Vadim emphasized that you do not need quantum to defend against quantum. The key takeaway: Don’t worry, be happy!


We offered additional sessions on a wide range of topics, including the lifecycle of data science, digital business automation for multi-cloud, and agile integration architecture for containerisation. Talks explored PaaS from a business perspective: apps are prolific and application releases are becoming increasingly frequent, so we discussed how to adapt to this pace and embrace acceleration. Tips included fostering an open culture, automating as much as you can, and choosing the right platform. The lab from Hortonworks presented end-to-end production pipelines on Kafka with a scalable microservices architecture.


Over the course of the UnConference, we built up a holistic view of multi-cloud environments. Our guest lab from Red Hat explored DevOps with OpenShift, focusing on building services with Spring Boot and Java EE. We also held a panel discussion on multi-cloud management, which is vital for business agility and for reducing technology redundancy, with IBMers Georg Ember and Thomas Müller sharing further insight. At the end of the day, we turned our attention to a topic that concerns us all: our environment. IBMer Geraldine Lüdi was joined by two climate experts, Kai Landwehr from myclimate and Lia Flury from Climeworks, who spoke about the impacts of CO2, the kinds of projects their organizations are investing in, and the difference we can make as individuals.


Have you ever been to a Developer UnConference?

If you missed this UnConference, don’t worry. We are thrilled to host our next IBM Developer UnConference in Switzerland on June 20, 2019. Please mark your calendar, and we hope to see you there!

Miriam Oglesby