Node.js 13 just landed and Node 12 is promoted to LTS

Node.js 13 was released today, and Node.js 12 was promoted to Long Term Support (LTS). In this blog post, learn what’s new in 13 and why you should start thinking about migrating to Node.js 12.

Quick review of the LTS process

Every October, we see the results of the Node.js release process. Users can expect and plan for a new current release every April and October, with the latest even-numbered release being promoted to LTS in October.

Our customers seem to like the predictable timetable for quality releases. The adoption curves reflect that usage shifts towards the next LTS release over time, so we’re excited to see that people appreciate this routine. For September, the download numbers by platform were as follows:

  • 8.x (April 2017 – LTS): 12,777,685
  • 10.x (April 2018 – LTS): 17,773,295
  • 12.x (April 2019 – Current): 3,916,577

This confirms that people are shifting towards the most recent LTS versions, along with early adoption of the next LTS version.

At IBM, our strategy is to focus on aspects we believe are important to our customers, including:

  • Stable and predictable releases
  • Platform support
  • Security
  • Diagnostics
  • Performance
  • Code quality and safety net
  • Key features

The new Node.js releases continue to deliver in these areas. Let’s look at some of the most interesting aspects of the Node.js 12 and 13 releases.

Node.js 12 promoted to LTS and what it means for you

With the promotion of version 12 to LTS, it’s now suitable for use in production, and you should consider how it fits into your migration plan. Node.js 12.x brings with it improved diagnostics, security, and performance. For a reminder of what’s included, check out the Node.js 12 release announcement.

What’s in Node.js 13 and why use it?

Node.js 13 is the next current release. Although it won’t be promoted to LTS and we don’t recommend its use in production, it’s still a good idea to test your applications and build your packages on this version periodically. Doing so ensures that you can experiment with the latest features — and when version 14 comes out, there will be less for you to catch up on in terms of what’s new.

A couple of the key features and changes in version 13 that I’d like to call out:

  • Updated version of V8 (version 7.8) – A new version of the V8 JavaScript engine brings improved performance and new language features.
  • Full ICU is enabled by default – Node.js now supports a larger set of languages by default. Curious about language support in Node? Watch this video where Steven Loomis and I talk about internationalization.

The Node.js approach to releases and backporting changes means that, many times, you don’t need to wait for a new major release to access new features. In fact, when a new release is announced, there might not be as much that is truly “new”.

I still think that it’s a good time to call out some of the notable features that became available or were promoted to “stable” in the timeframe leading up to that release. For version 13, that includes:

  • The Workers API is now stable. It is stable in both 12 and 13 (that was not the case when 12.x was first released).

New features in action

Worker API example

Let’s start by trying out the Workers API. The following is a simple example of using it:

const workers = require('worker_threads');

if (workers.isMainThread) {
  // Main thread: spawn a worker running this same file, passing it 41.
  const worker = new workers.Worker(__filename, {
    workerData: 41
  });
  worker.on('message', (response) => {
    console.log(response);
  });
} else {
  // Worker thread: increment the value it was given and send it back.
  workers.parentPort.postMessage(workers.workerData + 1);
}

Running the example:

bash-4.2$ node test.js
42

This example uses a single file for both the parent and the child, which may be familiar if you have used fork. That is accomplished by passing __filename when creating the Worker. While convenient, this isn’t necessary: you can provide a different file that contains the code for the worker.

The example starts by checking whether the code is running on the main thread with isMainThread. If so, it starts up the worker, passing it the number 41, and then waits for a response from the worker. The wait for the response is asynchronous, so the main thread can continue working without blocking.

When the worker starts it takes the value passed (41), increments it by 1 and sends a response message with the result. All of this computation takes place on the worker thread instead of the Node.js main event loop thread. In this case, the computation does not take long. But, if you have something more complex to do — for example, doing some kind of complex pattern matching or otherwise computationally intensive work — it could take much longer to execute.

Once the event triggered by the message sent by the worker is processed on the main thread, it simply prints out the result.

This is a very simple example to show how easily you can move work off the main thread. The Worker API provides different ways that data can be exchanged between the main thread and the worker. The key thing to understand is that the main thread and each worker have their own environment, including the object heap, so you can’t simply reference objects across the two. Instead, you need to share data through one of the APIs provided.

I’ll also note that it’s important to use workers in the right places. Node.js does a great job handling concurrent requests without workers, so, most often, you should only introduce them where they help you avoid blocking the main event loop.

Full ICU by default

The next feature I’d like to show is full ICU by default. Node.js has supported full ICU for some time, but there were extra steps you had to take to enable the data required for the full set of languages. In Node.js 13, full ICU support is bundled by default.

What this means is that the following code will return the requested values instead of defaulting to English:

console.log(new Intl.DateTimeFormat('es',{month:'long'}).format(new Date(9E8)));
console.log(new Date(0).toLocaleString("el",{month:"long"}));
console.log(new Date(157177E7).toLocaleString("zh",{year:"numeric",day:"numeric",month:"long"}));

where we are working in the Spanish, Greek, and Chinese locales (es = Spanish, el = Greek, zh = Chinese).

So, on Node.js 13, it would look like:

bash-4.2$ node test2.js
enero
Δεκεμβρίου
2019年10月22日

Whereas, Node.js 12 returns:

bash-4.2$ node test2.js
January
December
October 22, 2019

Including full ICU by default makes it easier for our customers, who operate across 170 different countries, to run and support applications in their native locales.
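If your code also has to run on Node.js versions that may not bundle the full ICU data, a small runtime probe can tell you which behavior you will get. This is a sketch along the lines of the check suggested in the Node.js internationalization documentation, reusing the Spanish month test from the example above:

```javascript
// Returns true when full ICU data is available: with English-only data,
// formatting a January date in Spanish falls back to "January".
function hasFullICU() {
  try {
    const january = new Date(9e8); // 11 January 1970 UTC
    const spanish = new Intl.DateTimeFormat('es', { month: 'long' });
    return spanish.format(january) === 'enero';
  } catch (err) {
    return false; // Intl support is missing entirely
  }
}

console.log(hasFullICU());
```

On Node.js 13 (or any build with full ICU) this prints true; on a small-ICU build it prints false, and you could then load the data via the full-icu package or the NODE_ICU_DATA environment variable.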

Thanks to our great team

In closing I’d like to thank the release team including Michaël Zasso and IBMer Bethany Griggs for all of their hard work in getting 13.x out and 12.x promoted to LTS. Bethany was the releaser for 13.x and Michaël handled the promotion of 12.x to LTS. I’d also like to thank the supporting cast from the build working group and, of course, the individual contributors as well.

How you can contribute to Node.js

Committing code and participating in the work of the Node.js organization are not the only ways to contribute to the continued success of Node.js. As a Node.js package developer or end user, I’d ask that you help out by testing early and often and providing feedback to the Node.js project.

The regular cycle of majors released as Current every 6 months (April and October) provides a good opportunity to test out releases in advance so that when a release is promoted to LTS, there are no surprises. We need your help in doing that testing. Once a release is in LTS, I ask that you consider testing your applications and packages regularly on the LTS versions in order to ensure a good experience for end users migrating to those versions.

Learn more

If you’d like to read more about this release, check out the blog post announcing the Node.js 13 release.


Michael Dawson

IBM Cloud Pak for Applications eases digital transformation for developers

The IBM Cloud Pak for Applications is the first of five Cloud Paks that IBM released in response to our clients’ need for tighter integration across IBM’s portfolio to enable your journey to cloud. Cloud Pak for Applications is a secure, complete, containerized set of capabilities for developers, solution architects, and operators that addresses the entire application lifecycle: architecture, development, management, and DevOps.

Most organizations still need to move 80% of their infrastructure to the cloud. To accelerate this modernization journey, Cloud Pak for Apps connects the agility and DevOps of cloud-based modern application development to the functionality and dependability of existing Java-based workloads. By leveraging Red Hat OpenShift, the IBM Cloud Pak for Applications provides a hybrid, multicloud foundation built on open standards, freeing workloads and data to run anywhere.

To help developers get started, we’re excited to announce the Cloud Pak for Applications Developer Hub, written by developers, for developers. This hub offers everything you need to migrate existing Java-based applications, as well as to build new and modify existing cloud-native applications.

Access step-by-step tutorials, code patterns, community, and ongoing education and support from IBM’s subject matter experts in application migration and modernization.

We’ve also created learning paths where we curated lists of content based on whether you just want introductory content, are a developer, or are a solution architect. Each path begins with introductory material, and continues through intermediate, advanced, and expert-level sets of content.

The best part is, you can — and should — start now! The technology is rapidly evolving, so we’ll continuously update the learning paths to keep you up to date and ease your transformation journey.

Willie Tejada

Scaffold and deploy a scalable web application in an enterprise Kubernetes environment

Deploying your application to a container, or multiple containers, is just the first step. When a cloud-native system becomes more established, it’s even more important to manage, track, redeploy, and repair the software and architecture.

You can choose from various techniques to help platforms provision, test, deploy, scale, and run your containers efficiently across multiple hosts and operating environments, to perform automatic health checks, and to ensure high availability. Eventually, these approaches transform an app idea into an enterprise solution.

The code patterns, tutorials, videos, and articles on IBM Developer about Red Hat OpenShift on IBM Cloud™ are a good place to start considering ways to use an enterprise Kubernetes environment with worker nodes that come installed with the Red Hat OpenShift on IBM Cloud Container Platform orchestration software. With Red Hat OpenShift on IBM Cloud, you can use IBM Cloud Kubernetes Service for your cluster infrastructure environment and the OpenShift platform tools and catalog that run on Red Hat Enterprise Linux for deploying your apps.

As you move forward in exploring how to work with combined Red Hat OpenShift on IBM Cloud capabilities, you will want to know how to scaffold a web application (both Node.js and Express), run it locally in a Docker container, push the scaffolded code to a private Git repository, and then deploy it. You can follow the details in the Scalable web application on OpenShift tutorial in the Red Hat OpenShift on IBM Cloud documentation.

Consider a few tips: You can expose the app on an OpenShift route, which directs ingress traffic to applications deployed on the cluster; this is a simplified approach. You can bind a custom domain in OpenShift with one command, instead of defining an Ingress Kubernetes service in YAML and applying it. Also, you can monitor the health of the environment and scale the application. For example, if your production app experiences an unexpected spike in traffic, the container platform automatically scales to handle the new workload.

You can check out the architecture diagram at the Scalable web application on OpenShift tutorial and then try it for yourself.

Vidyasagar S Machupalli

Introduction to Eclipse Codewind: Build high-quality cloud-native applications faster

Eclipse Codewind is an open source project that makes it easier for developers to create cloud-native applications within their favorite IDE. Codewind initially supports Visual Studio Code, Eclipse IDE and Eclipse Che. We’re working on adding support for additional editors in the coming months.

Easy to get started

Once you’ve installed Codewind, you can use common templates to quickly start using popular frameworks including Express (Node.js), Spring Boot (Java), Open Liberty (Java), and Kitura (SwiftLang). If you want to develop in other runtimes and frameworks, you can do that as well! Codewind enables you to bring your own templates to expand support to meet your own needs.

Containerized from the start

When you’re creating an application, Codewind immediately syncs and builds your application within its own container, pulling in application dependencies as appropriate. The best part? You don’t have to leave your editor to use dependent tools.

Auto-rebuild capabilities ensure that changes you make to your application are immediately reflected in your container, which results in quick feedback on your code changes. Applications that you build using Codewind come with health endpoints and metrics so that you can make sure your microservices are responding like you expect them to.

In addition, Codewind’s built-in performance tooling generates load on your microservice endpoint. This enables you to watch the metrics to compare changes between application levels and to identify hot spots that indicate potential application bottlenecks.

Kabanero connection

Codewind is used within Kabanero, an open source project that brings together foundational open source technologies into a modern microservices-based framework. Kabanero uses Codewind to provide an integrated IDE experience.

See Codewind in action

Tim deBoer introduces Eclipse Codewind, an open source Eclipse project that extends common IDEs and assists rapid microservice application development with containers.

Start your Codewind journey

Andy Watson

Introduction to Appsody: Developing containerized applications for the cloud just got easier

Appsody is an open source project that includes a set of tools and capabilities you can use to build cloud-native applications.

Using a powerful, intuitive CLI, you can develop applications in a continuous, containerized run, test, and debug environment and then build and deploy to Kubernetes.

A core component of Appsody is a set of pre-configured stacks and templates for a growing set of popular open source runtimes and frameworks, including Node.js, Eclipse Microprofile, Quarkus, Spring Boot, and more. These stacks act as a foundation on which to build applications for Kubernetes and Knative deployments.

Appsody Stacks support a range of development capabilities, from basic packaging of applications in a best-practice container image, to creating serverless Cloud Functions using domain specific APIs and libraries for building REST APIs. The stacks include cloud-native capabilities such as liveness and readiness checks, along with metrics and observability.

You can customize Appsody stacks to meet your specific development requirements and to control and configure the included technologies. If you customize a stack, you have a single point of control from which you can roll out those changes to all applications built from them.

See Appsody in action

The following video shows an overview of the Appsody CLI and workflow, using the Node.js Express stack to create, run, debug, test, build, and deploy a cloud-native Express.js application.

Get started

Ready to get started? Follow us on Medium, where we have a set of tutorials that shows you how to use Appsody. We’re constantly adding new content to this account. Alternatively, you can check out our Quick Start guide and build a Node.js app with Express.

Contribute to the project

We believe that the best place for this project to grow is in the open, and we welcome your involvement and contributions to the project and community. Check out our code of conduct to see how to work with us.

If you want to contribute to the project but don’t know where to start, please come chat with us in Slack. We’re happy to steer you in the right direction.

Currently we’re working with the owners of frameworks to make even more stacks available to developers. We’ve doubled the number of available stacks in the first month since we launched.

Kabanero connection

Appsody is used within Kabanero, an open source project that brings together foundational open source technologies into a modern microservices-based framework. Kabanero incorporates the Appsody stacks and templates into its overarching framework.

Continue your Appsody journey

David Harris

LoopBack earns the ‘Best in API Middleware’ award

LoopBack won the 2019 API Award for the “Best in API Middleware” category. LoopBack is a highly extensible, open source Node.js framework based on Express that enables you to quickly create dynamic end-to-end REST APIs and connect to backend systems such as databases and SOAP or REST services.

The 2019 API Awards celebrate technical innovation, adoption, and reception in the API and microservices industries and use by a global developer community. The awards will be presented at the 2019 API Awards Ceremony during the first day of API World 2019 (October 8-10, 2019, San Jose Convention Center), the world’s largest API and microservices conference and expo, now in its 8th year with over 3,500 attendees.


The 2019 API Awards received hundreds of nominations, and the Advisory Board to the API Awards selected LoopBack based on three criteria:

  • attracting notable attention and awareness in the API industry
  • general regard and use by the developer and engineering community
  • being a leader in its sector for innovation

“IBM is a shining example of the API technologies now empowering developers and engineers to build upon the backbone of the multi-trillion-dollar market for API-driven products and services. Today’s cloud-based software and hardware increasingly runs on an open ecosystem of API-centric architecture, and IBM’s win here at the 2019 API Awards is evidence of their leading role in the growth of the API Economy,” said Jonathan Pasky, executive producer and co-founder of DevNetwork, producer of API World and the 2019 API Awards.

The LoopBack team is thrilled that all of their hard work on LoopBack is being recognized by the larger Node.js community.

“We’re thrilled and honored to receive the Best in API Middleware 2019 award from API World,” said Raymond Feng, co-creator and architect for LoopBack. “It’s indeed a great recognition and validation of the LoopBack framework, team and community.”

Six and a half years ago, the team created LoopBack at StrongLoop with the goal of helping fellow developers kick off their API journey with the ideal Node.js platform. With the support of the fantastic Node.js ecosystem, the team built on top of open source modules such as Express and made it incredibly simple to create REST APIs out of existing datasources and services.

The StrongLoop team’s bet on open APIs and Node.js was right. The project and community have grown significantly.

The StrongLoop team joined the IBM API Connect team in 2015 to better position LoopBack as a strategic open source project. LoopBack 4 is the second generation of the framework. Version 4 incorporates what the team has learned, with new standards and technologies such as TypeScript, OpenAPI, GraphQL, and cloud-native microservices, to build a foundation for even greater flexibility, extensibility, composability, and scalability.

“More and more features are shipped and being built by us and the community. The LoopBack team strives to bring best practices and tools. We love GitHub stars. It’s simply rewarding to create something valuable for the open source community!” says Feng.

Read the original announcement by API:World.

Next steps

You can help shape the future of LoopBack with your support and engagement. Work with us to make LoopBack even better and more meaningful for your API creation.

IBM Developer staff

Where are my new models for NLP? They’re here!

You may have been wondering what the team at CODAIT has been up to. Perhaps you have been wondering, “Where are my new models on the IBM Model Asset eXchange?” Well, the team has been pretty busy with our newly announced Data Asset eXchange, a trusted source for curated open datasets that will integrate with IBM Cloud and AI services. In fact, we’re still hard at work behind the scenes on Data Asset eXchange, so stay tuned!

1, 2, 3 … NLP!

But don’t worry, we have put together a few model morsels for the MAX community. Recently, deep learning for natural language processing (NLP) has emerged as a rapidly advancing area of machine learning research. MAX already has a few models in key areas of NLP (e.g., Text Sentiment Classifier, Named Entity Tagger, Review Text Generator, and Word Embedding Generator).

Today, we’re pleased to announce a new batch of models for natural language processing tasks:

  • Toxic Comment Classifier – This model detects whether a piece of text (typically a user comment on the Internet) contains various types of toxic content. Like our sentiment classifier, it is based on the state-of-the-art BERT architecture. Possible use case: Automated moderation of user comments on website articles or posts.
  • Text Summarizer – Using this model, a summary can be generated for a given piece of text. Possible use case: Generating automated summaries or headlines for news articles.
  • Chinese Phonetic Similarity Estimator – This model is able to estimate the phonetic distance between Chinese words and get similar sounding candidate words. Possible use case: Spellcheck functionality on social media.

The (code)pen is mightier than the sword

MAX pens on CodePen

To make it even simpler to experiment with what MAX models are capable of, we’ve created a collection of pens on CodePen that illustrate how to send image, audio, or video data to the model-serving microservices and how to visualize the prediction results using our new MAX visualization NPM package.

Coming soon

Visit the Model Asset eXchange and check out the latest models and enhancements. As always, we welcome your comments and suggestions that help us improve and better serve the ML/DL community.

And watch this space for exciting developments to come soon with the Model Asset eXchange, the Data Asset eXchange, and the powerful combination of the two!

CODAIT Team

OpenAPI-to-GraphQL version 1.0.0 released

We are happy to announce the release of version 1.0.0 of our open source library OpenAPI-to-GraphQL (originally published as “OASGraph” back in September 2018). OpenAPI-to-GraphQL allows you to leverage your existing REST API portfolio to build easy-to-use GraphQL interfaces.

Background

OpenAPI-to-GraphQL, as its name suggests, generates GraphQL wrappers for existing Web APIs described in the OpenAPI specification or Swagger. Thus, it makes it extremely easy and fast to generate GraphQL APIs, but also provides advanced configuration options to fine-tune them if need be.

In contrast to other libraries, OpenAPI-to-GraphQL relies on data definitions to generate easy-to-use GraphQL interfaces, it sanitizes and de-sanitizes parts of APIs incompatible with GraphQL, and it makes use of OpenAPI 3.0.0 features like links to generate more usable interfaces.

On the one hand, OpenAPI-to-GraphQL can be used as a command line interface (CLI) to get started instantly. On the other hand, it can be used as a library, allowing you to integrate it with your backend code. See this video for an introductory demonstration.

The new version

Since its original release, we have received feedback, issues, and pull requests from users both within IBM and externally. Collectively, they helped our library to be more broadly applicable, provide more features, and be more robust. Some highlights of the new version 1.0.0 include:

  • Multi-OAS support allows you to create a single GraphQL interface that is backed by multiple existing APIs. Inter-OAS links mean that the OpenAPI specifications can be used to define data links between these APIs.
  • A new option allows you to provide custom resolver functions, giving you full control to implement custom logic required to resolve parts of GraphQL queries (for example, to deal with unique authentication requirements, to access other systems apart from an API, to implement caching etc.).
  • Improved error handling through error extensions and more consistent warnings make it easier to understand if something goes wrong.
  • Improvements to the code base and development process (e.g., now enforcing consistent code style using Prettier), to make it easier for others to contribute to OpenAPI-to-GraphQL.

We are excited about the future of OpenAPI-to-GraphQL! As always, feel free to try out the library, reach out to us on Twitter, or open an issue.

Erik Wittern is a research staff member at the Thomas J. Watson Research Center who studies both the provision and consumption of web APIs from a developer’s perspective.

Erik Wittern