LoopBack earns the ‘Best in API Middleware’ award

LoopBack won the 2019 API Award for the “Best in API Middleware” category. LoopBack is a highly extensible, open source Node.js framework based on Express that enables you to quickly create dynamic end-to-end REST APIs and connect to backend systems such as databases and SOAP or REST services.

The 2019 API Awards celebrate technical innovation, adoption, and reception in the API and microservices industries, as well as use by a global developer community. The 2019 API Awards will be presented at the 2019 API Awards Ceremony during the first day of API World 2019 (Oct 8-10, 2019, San Jose Convention Center), the world’s largest API and microservices conference and expo, now in its 8th year with over 3,500 attendees.


The 2019 API Awards received hundreds of nominations, and the Advisory Board to the API Awards selected LoopBack based on three criteria:

  • attracting notable attention and awareness in the API industry
  • general regard and use by the developer and engineering community
  • being a leader in its sector for innovation

“IBM is a shining example of the API technologies now empowering developers and engineers to build upon the backbone of the multi-trillion-dollar market for API-driven products and services. Today’s cloud-based software and hardware increasingly run on an open ecosystem of API-centric architecture, and IBM’s win here at the 2019 API Awards is evidence of their leading role in the growth of the API Economy,” said Jonathan Pasky, executive producer and co-founder of DevNetwork, producer of API World and the 2019 API Awards.

The LoopBack team is thrilled that all of their hard work on LoopBack is being recognized by the larger Node.js community.

“We’re thrilled and honored to receive the Best in API Middleware 2019 award from API World,” said Raymond Feng, co-creator and architect for LoopBack. “It’s indeed a great recognition and validation of the LoopBack framework, team and community.”

Six and a half years ago, the team created LoopBack at StrongLoop with the goal to help fellow developers kick off their API journey with the ideal Node.js platform. With the support of the fantastic Node.js ecosystem, the team built on top of open source modules such as Express and made it incredibly simple to create REST APIs out of existing datasources and services.

The StrongLoop team’s bet on open APIs and Node.js was right. The project and community have grown significantly.

The StrongLoop team joined the IBM API Connect team in 2015 to better position LoopBack as a strategic open source project. LoopBack 4 is the second generation of the framework. Version 4 incorporates what the team has learned, along with new standards and technologies such as TypeScript, OpenAPI, GraphQL, and cloud-native microservices, to build a foundation for even greater flexibility, extensibility, composability, and scalability.

“More and more features are shipped and being built by us and the community. The LoopBack team strives to bring you best practices and tools. We love GitHub stars. It’s simply rewarding to create something valuable for the open source community!” says Feng.

Read the original announcement by API:World.

Next steps

You can help shape the future of LoopBack with your support and engagement. Work with us to make LoopBack even better and more meaningful for your API creation.

Call for Code: The Weather Company and you

Each year, the number of extreme weather events and natural disasters seems to set a new record, and their frequency continues to increase. The impact they have on people and the amount of damage they cause are escalating.

Severe and devastating weather is not going away. It is only going to get worse, according to the National Climate Assessment. Last year, extreme weather events affected over 60 million people, according to the Centre for Research on the Epidemiology of Disasters.


Call for Code

Call for Code is a challenge to the world’s developers. It aims to get developers thinking about natural disasters and driving positive change by creating solutions that lessen their impact. Developers are called upon to use their skills to help save lives.

Call for Code submissions can tackle different aspects of disaster response, such as:

  • Improving preparedness before an event.
  • Offering relief during severe weather.
  • Supporting recovery after natural disasters.

Last year’s winning team, Project Owl, developed an offline communication network that connects disaster victims and first responders once connectivity is lost.

The Weather Company

We cannot control the weather, but we can better understand it. In doing so, we can use data and analytics to get ahead of an occurrence and reduce the impact severe weather can have on our communities. With the right knowledge, we can better predict, prepare for, and recover from extreme weather and natural disaster events!

Solving problems where weather is involved, or more specifically any application that takes into account weather conditions, means we need access to the weather data itself.

Global weather data access is back

Once again, The Weather Company® is opening its Weather Data APIs to developers who are participating in the Call for Code Global Challenge. It is free of charge for developers to use as they’re building their submissions.

Over 25 of The Weather Company’s critical and popular API endpoints are available. The endpoints span multiple weather data packages:

  • Core Weather Data Package: Includes the most essential weather APIs, such as current conditions, forecasts, and radar and satellite data.
  • Enhanced Current Conditions Weather Data Package: Includes one of the highest-resolution weather observation networks in the field, based on over 200,000 personal weather stations (PWSs).
  • Enhanced Forecast Weather Data Package: Built by using our leading-edge model forecasting; expertise from 200 meteorologists and related scientists; and our network of observations, radar, and satellite assimilation and modelling capabilities. The package delivers the most accurate forecasts at 500-square-meter resolution globally.
  • Severe Weather Data Package: Includes forecasted, real-time, and trailing estimates of severe weather data. Weather conditions, such as hail, lightning, severe wind, tornadoes, and more are in this package.

Developers will be able to quickly add weather data, such as severe weather alerts, tropical storm forecasts, power disruption index, and more into their innovative and potentially life-saving solutions.

Getting started with the Weather APIs

To access and use the Weather APIs, you must first register for an API Key. Registration is free and will be available while the Call for Code Challenge is taking place. After you submit your registration form, you will receive your API key at the email address that you registered with.

You can review the API documentation for detailed information on each of the available endpoints. The documentation can also help you determine which endpoints are right for your application.

Using the API requires you to make a GET request to the appropriate endpoint. The request must include the API key and any parameters required by the endpoint. For example, getting the short-range “Fifteen Minute Forecast” for Raleigh, NC (latitude: 35.843686, longitude: -78.78548) would look something like this (using curl):

curl -X GET "https://api.weather.com/v1/geocode/35.843686/-78.78548/forecast/fifteenminute.json?units=m&language=en-US&apiKey=WXYZ" -H  "accept: application/json" -H  "Accept-Encoding: gzip"

Where WXYZ is the API key you received from registration.
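For developers building their submissions in Python, the same request can be sketched with just the standard library. The coordinates and parameters mirror the curl example above, and WXYZ remains a placeholder for your own key:

```python
import urllib.parse

API_KEY = "WXYZ"  # placeholder: substitute the key you received at registration
LAT, LON = "35.843686", "-78.78548"  # Raleigh, NC

# Build the query string for the Fifteen Minute Forecast endpoint
params = urllib.parse.urlencode(
    {"units": "m", "language": "en-US", "apiKey": API_KEY}
)
url = (
    f"https://api.weather.com/v1/geocode/{LAT}/{LON}"
    f"/forecast/fifteenminute.json?{params}"
)
print(url)  # inspect the full request URL before sending

# To actually send the request (requires network access and a valid key):
# import json, urllib.request
# req = urllib.request.Request(url, headers={"Accept": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     forecast = json.loads(resp.read())
```

The commented-out lines show the actual GET request; keeping them separate lets you verify the URL and key handling first.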

Learning by doing

GitHub repositories with sample applications that use the API endpoints are available online for you to gain inspiration from and see the endpoints in action. The repositories also provide a starting point for creating an application. The sample code demonstrates how you can query and process responses from several of the available endpoints.

Node.js and Python implementations of the sample application are available on GitHub. You can run the sample code to see how to incorporate the endpoints into an application, or examine the code to learn how you could go about consuming any of the other available endpoints. Instructions on how to obtain and run the sample code are in both repositories.

For a quick look at how easy it is, here are the basic steps to run the samples locally with Node.js or Python:

  1. Clone the desired GitHub repository (Node.js and Python).
  2. Create the WEATHER_API_KEY environment variable and set it to your API key.
  3. Install dependencies using npm (for Node.js) or pip (for Python).
  4. Launch app.js (for Node.js) or app.py (for Python).

Here’s how to deploy the samples to IBM Cloud by using the IBM Cloud Developer Tools:

  1. Clone the desired GitHub repository (Node.js and Python).
  2. Update the WEATHER_API_KEY variable in the manifest.yml.
  3. Push the application/repository to IBM Cloud using ibmcloud cf push.
  4. Start the IBM Cloud application.

(Don’t have an IBM Cloud account yet? Register here for your free account.)

The README files for both sample code implementations include more detailed steps.

How you and other developers can make a difference

Have a solution for a more effective severe weather preparedness system? Want to mitigate health concerns in the aftermath of a natural disaster? Whatever your idea may be, join the cause.

Interested in building a solution that can have an immediate and lasting impact? It is not too late for you to accept the challenge. The submission deadline for Call for Code 2019 is July 29, 2019.

You can make a difference!

Raj Singh contributed to this blog.

IBM joins the GraphQL Foundation to push for open source adoption

IBM is excited to be a founding member of the GraphQL Foundation, which is hosted by the Linux Foundation. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. It was open sourced in 2015.

We have been following the development of GraphQL in the last few years and started embracing this new technology in our products – Supply Chain and IBM Cloud, to name just two. We use GraphQL successfully to build IBM Cloud consoles, which need to query various backends and catalogs to retrieve available services, subscriptions, or service instances for display in dashboards.

GraphQL provides an outstanding experience for developers, which fosters their ability to innovate. We see great potential for GraphQL to serve highly diverse consumer requirements, which is a common challenge for public APIs. We also think that GraphQL can play an essential role within organizations to consolidate access to data that is increasingly distributed across microservices.

At IBM Research, we recently open sourced OASGraph, a library that processes a Swagger or OpenAPI Specification (OAS) defining REST endpoints and automatically produces a GraphQL interface around that API, ready to be used. We are receiving great community support, and we are now starting to support generating GraphQL interfaces from multiple OAS.

Going forward, we see a number of opportunities to make GraphQL enterprise-ready. In particular, we have been working on GraphQL API Management (read our recent blog post). GraphQL queries may hide a lot of complexity and even pose threats to backend systems if they are ill-structured or excessive in resource demands. A GraphQL API may also be part of a subscription and call for rate limits, which are likely to be substantially different from the usual REST rate limits of calls per unit of time.

As a member of this new foundation, we look forward to working with the community to increase language support for GraphQL, particularly in the area of GraphQL validation for C++, and to evolve the thinking around GraphQL API Management.

Erik Wittern is a GraphQL enthusiast and one of the maintainers of OASGraph, and he will represent IBM on the GraphQL Foundation technical board.


Test APIs to create successful blockchains

It’s exciting to develop with blockchain technology, but you need to consider an important challenge. After a block is created, you can only append to the chain and add more data. You can’t remove or modify existing blocks.

When you are creating APIs that interact with a blockchain, watch out for errors that could trigger automated calls across a large number of transactions. A mistake in your API could be very costly. If you work through a permissioned structure like Hyperledger Fabric, key stakeholders can overwrite transactions if needed. However, that approach takes time so that the process remains secure.

The best thing to do is prevent problems with your APIs before you implement them.

Arashdeep Salh, the Founder of AMA Advisory Group, and a current student at the University of Toronto, captures this concern well in a blog post where he explains how APIs and API testing help create successful blockchain implementations. Read it here: What do blockchain and API testing have in common?

Then, the next time you have APIs that you need to test, try out the free API Connect Test and Monitor.

Guide to feeding your hunger with APIs

If you’re in a new city, there’s nothing worse than being hungry and not knowing which restaurants are nearby so you can figure out where to eat.

In a series of blog posts, I show you a no-code way to use API Connect Test and Monitor to create this list and get it up and running. What’s API Connect Test and Monitor? Glad you asked. It’s a no-code, no-charge (that means FREE) alternative to Postman.

In Part 1, Appetizer: How to Never Go Hungry with APIs, I cover the prerequisites, introduce you to the Zomato API, and show you how to make your first API call with no code required. Specifically, you use Zomato’s Cities API to pinpoint the city that you’re searching for information about.

And, yes, we skip the main course to get to the sweet stuff faster.

In Part 2, Dessert: How to Never Go Hungry with APIs, you learn how to generate an integration test and chain together a series of APIs that Zomato offers, not just the Cities API. If you follow along with both blog posts, you’ll have a handy list of dim sum restaurants to eat at in Toronto.

My journey to creating my first web application

I’m a details person, so I wanted a way to log the total number of days I worked, when I worked remotely, and when I took holiday or vacation days. I created a command-line Python application that saved data to a CSV file and used it to log my days. While this worked for my needs, my manager suggested that I modify my application and scale it to work on the cloud as a web application. This blog post details how I researched, planned and implemented this web application.

The planning phase: Ask the right questions

Before I could containerize the application using Docker and deploy the app to the cloud, I needed to create a working, local version of my application. Since Python is the foundation of this application, I wanted to build a web API layer on top of it. I began to research how Flask, a micro web framework, works and how I could make it work with my command-line application.
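As a rough illustration of that idea (not the actual application code), a command-line function that read the work log can become the body of a Flask route. The route name and data below are hypothetical:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory stand-in for the CSV file the CLI version used
work_log = {"remote": 4, "office": 16, "vacation": 1}

@app.route("/days", methods=["GET"])
def get_days():
    # The logic that previously printed the CSV contents now
    # returns the same data as a JSON response
    return jsonify(work_log)

if __name__ == "__main__":
    app.run(port=5000)
```

With this in place, `curl http://localhost:5000/days` replaces running the command-line tool.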

As I began to tweak parts of my existing application to use with Flask, it prompted some questions:

  1. Using CSV to store data is not appropriate on the cloud. How will I store data on the cloud?
  2. What API endpoints should I create? How should a user be able to interact with this application?

With these questions in hand, I decided to meet with my manager to get another opinion and see what insight he could offer.

Choosing the right data storage

For data storage, my manager suggested I look into Cloudant, MongoDB, and MySQL and explore which would best support what I was trying to achieve. I quickly removed Cloudant from the running because I couldn’t find any documentation showing that it had Python support.

With my choices down to MongoDB and MySQL, I had to determine whether a relational database was something that was necessary for this application. The data related to the number of remote days worked helped me make my decision.

The way I stored the data in the CSV file included three pieces of data:

  • the total number of days that I worked remotely
  • each location where I worked
  • the number of days worked remotely in each location

Since the locations would differ for each user, I couldn’t come up with a schema in a relational database to handle all locations appropriately. This is where I saw the value and flexibility of a non-relational database. With that, I decided to use MongoDB. And with no prior experience using MongoDB, I began my research into learning how to use and incorporate it with my application.
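To see why a flexible document model helped here, consider two hypothetical work-log documents (the field names are illustrative, not the application’s actual schema). Each user carries a different set of locations, yet the same code handles both:

```python
# Hypothetical MongoDB-style documents: the set of locations varies
# per user, which is awkward to express as fixed relational columns.
user_a = {
    "user": "alice",
    "remote_days_total": 12,
    "locations": {"Toronto": 8, "Ottawa": 4},  # two locations
}
user_b = {
    "user": "bob",
    "remote_days_total": 3,
    "locations": {"Markham": 3},               # just one
}

def remote_total(doc):
    # The same code works for any number of locations, with no
    # schema migration needed when a new location appears.
    return sum(doc["locations"].values())

for doc in (user_a, user_b):
    print(doc["user"], remote_total(doc))
```

In MongoDB, each of these dicts would simply be inserted as a document in a collection.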

Broadening the scope of my work

While I know how I use this application, I asked myself how other users might use it, so I added space to record office days worked and sick days used.

I began to write an API.md file that outlined the API endpoints my application would have and the responses from these API calls. All of the endpoints were for one user to interact with the application, similarly to how I use the application. When I showed my manager my progress, he suggested that I expand and scale the application so that multiple users on a team could use it. The best way to do that would be through a cloud-based application.

I went back and reviewed my application, database structure, and API.md file and tried to see how I could expand and scale what I have worked on to include this capability. I then ran into this problem: Each user should only be able to access data associated with themselves. How can I make the application secure and capable of achieving this?

Creating user account information

In order to limit a user’s access to only his or her data, I decided that users should have accounts for logging into the application. To achieve this, I created another database with user account information and updated the API.md file with how I wanted to modify the application to handle this.

I then began researching how I could incorporate logging in and user sessions. I first looked into sessions from Flask. I ran into some blockers with getting sessions to hold user data so the program would know the user was logged in. I switched gears and decided to incorporate session info in MongoDB which ended up working as intended. With the application now having the capability to handle multiple users, a new question now came to light: I can easily create new users in MongoDB, but how can I easily allow others to create new users?

I have the luxury of creating new users myself in MongoDB, but actual users of this cloud-based application won’t have this luxury. In fact, when an instance of this application is created, there won’t be any users at all. I needed to figure out how to handle initializing this application and onboarding of new users. I decided to create an endpoint to allow for creating new users.

Adding security and encryption

Once I finished the user endpoints, I moved on to adding a layer of security by encrypting the passwords when storing them in the database. After including this layer, I was satisfied with the application for the time being, so I started the process of moving the application to Docker containers and Kubernetes.
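The post doesn’t show the exact approach used, but a common way to store passwords safely in Python is to derive a salted hash with the standard library’s PBKDF2 support. This sketch is illustrative, not the application’s actual code:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative count; tune for your hardware

def hash_password(password, salt=None):
    """Return (salt, key) derived with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, key):
    # Recompute with the stored salt; compare in constant time
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("s3cret")
```

Only the salt and derived key would be stored in the user database, never the plaintext password.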

Next steps after deployment

After successfully containerizing the application in Docker, adding the containers to Kubernetes, and moving the application to IBM Cloud, I finally had my local application running on the cloud. It was definitely a rewarding feeling making successful API calls to the cloud.

Now that I had a working version of my application on the cloud, I started asking myself questions about next steps:

  1. Now that this application is running successfully, how should I test this application? What kind of a test bed should I create?
  2. How can I improve this application? What new features should I add?
  3. This application currently returns JSON objects. What kind of a web GUI should I build on top of it?

Testing the application

For testing this application, I decided to create Python unit tests that exercise each API endpoint. I researched how to automate the test bed, and eventually I was able to automate the initialization of a test client for the Flask application and a Docker container for the Mongo database. I then had a working automated test bed.

A tweak to the file structure

Once I had my application and test bed where I wanted them, I asked for feedback so I could improve my application. One person suggested that I reorganize the application file structure so each file would have its own “microservice” rather than having all the files in one main file. This suggestion made sense, because typically applications are not just in one file and are spread out. This change also makes the code less intimidating and easier to follow and understand. In addition to making the application easier to understand, I also included a Swagger file for those who use this application to know how the APIs are supposed to work.

Designing the GUI

Once my restructuring was completed and the application was running where it was supposed to, I started to design the GUI. I wanted to implement design thinking principles into the process of creating the GUI.

I went to my manager to help me with this process for a couple reasons: He has more experience than me with UI/UX design, and I wanted someone else’s opinion and perspective on the UI and functionality of the application. During our design thinking sessions, we both realized that this application had a lot of potential and there were so many directions where we could take this application. At the end of the day, we both realized that we needed to focus on the original goal, which was to get a working GUI for the existing functionality. As a result, we then designed wire frames for the GUI.

After creating wire frames, it was time to bring the GUI to life. I researched different JavaScript frameworks that I could use to create this GUI. I compared Angular, React, and Vue. Ultimately, I decided to use React because it is popular in the industry and seemed straightforward to learn. Once I learned React and finished the GUI, I deployed it alongside my backend application on IBM Cloud.

Incorporating feedback

With all parts of the application now successfully running on the IBM Cloud, I demoed the application to my manager and colleagues. The working GUI helped them better see and understand the application and my work. One suggestion was to modify my database schema to allow individual entries associated with a specific date in the work log, rather than increasing yearly totals for each day type. This would allow for more flexibility and better logging of work data, and it would support a calendar component on the front end. I decided it would be best to modify the database schema to incorporate these changes, and I then made all of the resulting changes to the rest of the application.

User testing

With all of the updates now working as intended, I conducted usability testing with some users and recorded their feedback. The feedback covered the design, the functionality, and ideas the users had for future improvements and additions.

With all of this feedback in hand, I made improvements to the GUI and its functionality. Having some time to spare, I also implemented a few of the smaller ideas that users recommended, including Slack integration.


Throughout this process I not only had to learn how to use Flask, MongoDB, Kubernetes, React, and other components of making and structuring an application, but also how to get all of the different technologies to work together. This was definitely a rewarding experience, and I hope you enjoyed reading about it and want to use this application!

7 ways to create secure APIs from the start

There’s a lot to think about when you’re creating an API: you have to understand what your users need, decide which datastore (or datastores!) is the right fit, create documentation (with an OpenAPI spec, of course), set up automated testing … the list goes on!

One of the most important ways you can create a successful API is to design it with security in mind from the start. In this blog post, I share 7 things you can do to ensure the APIs you are designing are secure.

1. Separate your security layer from your business logic

One of the most important things you can do is separate your security layer from your business logic. Think of your security as the frosting on your API cake: it covers it completely and exists between each layer as well. If your security is lumpy, baked into different endpoints or the underlying functions, it’s easy to overlook (and leave vulnerable) important parts of your API. Using an API gateway (such as IBM API Connect or Kong) is a straightforward way to create a security layer for your API.

2. Authenticate your users

This might seem simple, but you need to properly authenticate users before they can make an API call. Once a user is authenticated, check their permissions profile to ensure they’re authorized to make that call.
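As a minimal sketch of that two-step flow (the user store, token, and permission names here are hypothetical):

```python
# Hypothetical in-memory user store; a real API would back this with
# a directory service, identity provider, or database.
USERS = {"alice": {"token": "tok-123", "permissions": {"read", "write"}}}

def authenticate(token):
    """Step 1: who is calling? Returns the username or None."""
    for name, record in USERS.items():
        if record["token"] == token:
            return name
    return None

def authorize(user, permission):
    """Step 2: is this caller allowed to do that?"""
    return user is not None and permission in USERS[user]["permissions"]

caller = authenticate("tok-123")
print(caller, authorize(caller, "write"))
```

Keeping the two steps distinct means a valid identity alone never grants access; every call still passes through a permission check.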

3. Validate your parameters

An often-overlooked line of defense is parameter validation. Validating your parameters, using strict whitelists when you can, is key to stopping a lot of attacks. Accept only the values you allow, and nothing else. Most languages have robust libraries or modules for creating validation rules (check out Validate.js for JavaScript).
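Here is a small whitelist-validation sketch in Python; the allowed values and the language pattern are assumptions chosen for illustration:

```python
import re

# Whitelist validation: accept only values you explicitly allow.
ALLOWED_UNITS = {"m", "e", "h"}                    # hypothetical unit codes
LANG_PATTERN = re.compile(r"^[a-z]{2}-[A-Z]{2}$")  # e.g. en-US

def validate_params(params):
    """Return a list of validation errors; empty means the input is acceptable."""
    errors = []
    if params.get("units") not in ALLOWED_UNITS:
        errors.append("units must be one of: " + ", ".join(sorted(ALLOWED_UNITS)))
    if not LANG_PATTERN.match(params.get("language", "")):
        errors.append("language must look like 'en-US'")
    return errors

print(validate_params({"units": "m", "language": "en-US"}))
```

Anything not explicitly on the list is rejected, which is far safer than trying to enumerate bad inputs.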

4. Use trusted, proven security algorithms

While you’re checking out validation libraries, remember that using proven, peer-reviewed standards is a pillar of secure coding. Implementing your own encryption, authentication, or communication protocols can (and often does) lead to disastrous outcomes (not to mention being a waste of resources).

5. Store internal data securely

Validation helps for external data, but you should also make sure important internal data (especially sensitive code, keys, or passwords) is always stored properly. Don’t rely on security by obscurity: hidden secrets will be discovered. Similarly, make sure your debug and test code is removed in production. Attackers love to find debugging endpoints, special calls used for troubleshooting, or other backdoors, and they will exploit them. What would happen if someone with bad intent read all your source code? What else could they do or access?

6. Use simple calls

Even legitimate access to your API can cause significant trouble if calls are too powerful. Make your calls as weak as possible—in a good way!—by limiting their functionality to the bare minimum required to get the job done. RemoveFile() is a much safer call than callOSCommand(). Can someone DoS your application by sending huge inputs that overflow your memory or disk, or by requesting a file export or a database backup a few hundred times in a row (a real-life example!)?

7. Implement Transport Layer Security (TLS)

Remember that authentication and authorization aren’t secure if your communications aren’t secure. Use TLS for all communications and block downgrades to unsecured communications (or older, vulnerable versions of SSL/TLS). Consider implementing certificate pinning to prevent MitM attacks.
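In Python, for example, you can build a client-side TLS context with the standard ssl module that refuses old, vulnerable protocol versions:

```python
import ssl

# Default context enables certificate verification and hostname checking
context = ssl.create_default_context()

# Block downgrades to SSLv3 and TLS 1.0/1.1 by requiring TLS 1.2 or newer
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version, context.verify_mode, context.check_hostname)
```

Passing this context to your HTTP client ensures every connection meets the floor you set, rather than whatever the peer happens to offer.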

These are just a few of the things to think about when you’re designing security for your API. In addition, you should always spend some time threat modeling. What does it mean to “threat model”? Think like an attacker, and imagine all the ways they could use your API to hurt your application and the system it is running on.

APIs are the plumbing of modern software development: you can keep yours running smoothly and safely by building in API security at every step of your development process.

Special thanks to Erin McKean for helping me write this blog post.