Node.js memory management in container environments

When running a Node.js application inside a Docker container, traditional memory parameter tuning does not always work as expected. In this blog, we discuss why memory parameter tuning doesn't always work for containerized Node.js applications and offer recommendations and best practices you can follow when you work with Node.js applications in container environments.

Summary of recommendations

When Node.js applications are running within containers with memory limits set (using the --memory option for docker or any other flags with your orchestration system), use the --max-old-space-size option to ensure that Node knows its limit and that the set value is smaller than the container limit.

When Node.js applications are running within containers, specify the memory capacity in accordance with the peak active memory usage of the application (if container memory can be adjusted).
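
For example, a container started with a 512 MB limit could run the application with a heap cap comfortably below that limit (the 450 MB figure and the app.js name here are only illustrative; a rule of thumb for picking the margin is given at the end of this blog):

docker run --memory 512m --interactive --tty <imagename> bash
$ node --max-old-space-size=450 app.js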

Let’s explore in more detail.

Docker memory limit

By default, a container has no resource constraints and can use as much of the available memory resource as the host’s operating system (OS) allows. The docker run command has command line options to set limits on how much memory or CPU a container can use.

The docker run command looks like this: docker run --memory <x><y> --interactive --tty <imagename> bash

where

  • x is the amount of memory in y units
  • y can be b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)

For example: docker run --memory 1000000b --interactive --tty <imagename> bash sets the container memory limit to 1,000,000 bytes.

To check the memory limits in bytes inside a container, issue the following command:

cat /sys/fs/cgroup/memory/memory.limit_in_bytes

So, keeping this container limit in mind, let's explore how the container behaves when --max-old-space-size comes into play.
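
If you want a launch script to derive the heap cap from the container limit automatically, one possible sketch is the following (it assumes the cgroup v1 path shown above; the file name limit-from-cgroup.js and the 40 MB headroom are illustrative, not from the original blog):

cat limit-from-cgroup.js

const fs = require('fs');
// Read the container memory limit in bytes (cgroup v1 path).
const limitBytes = Number(fs.readFileSync('/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8'));
// Leave some headroom for non-heap memory, then convert to MB for --max-old-space-size.
const headroomMB = 40;
console.log(Math.floor(limitBytes / (1024 * 1024)) - headroomMB);

A wrapper script could then start the application with node --max-old-space-size=$(node limit-from-cgroup.js) app.js.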

“Old space” is the common heap section of V8’s managed heap (i.e. where the JavaScript objects live), and the --max-old-space-size flag controls its maximum size. For more information, please refer to About --max-old-space-size.
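
To see the effect of the flag, you can inspect V8's configured heap limit with the built-in v8 module. The reported value is at least the flag value and typically a bit larger, because it also covers the other heap spaces; exact numbers vary by Node.js version:

$ node --max-old-space-size=100 -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / (1024 * 1024)))"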

In general, when an application uses more memory than the container memory, the application terminates.

The following sample application pushes records at an interval of 10 milliseconds. This fast interval makes the heap grow without bounds, simulating a memory leak.

cat test-fatal-error.js

'use strict';
// Keep a reference to every record so nothing can be garbage collected.
const list = [];
// Allocate a new record every 10 milliseconds, so the heap grows without bound.
setInterval(() => {
        const record = new MyRecord();
        list.push(record);
}, 10);
// Each record holds three strings of 30 million characters (roughly 30 MB apiece).
function MyRecord() {
        const x = 'hii';
        this.name = x.repeat(10000000);
        this.id = x.repeat(10000000);
        this.account = x.repeat(10000000);
}
// Report memory usage (in bytes) every 100 milliseconds.
setInterval(() => {
        console.log(process.memoryUsage());
}, 100);

All of the sample programs illustrated in this blog are available in a Docker image that I pushed into Docker Hub. You may pull that image and run the programs as well. Use docker pull ravali1906/dockermemory to get the image.

Alternatively, you can dockerize the application, build the image, and run the image with a memory limit as shown below:

docker run --memory 512m --interactive --tty ravali1906/dockermemory bash

where ravali1906/dockermemory is the image name.

Next, inside the container, run the application with a heap limit greater than the container limit:

$ node --max_old_space_size=1024 test-fatal-error.js

{ rss: 550498304,
  heapTotal: 1090719744,
  heapUsed: 1030627104,
  external: 8272 }

Killed

where

  • --max_old_space_size takes a value in megabytes
  • process.memoryUsage() gives results in bytes (a quick conversion to megabytes is shown below)
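
For example, a tiny helper (not part of the sample application) can log those values in megabytes so they line up with --max-old-space-size:

const toMB = (bytes) => Math.round(bytes / (1024 * 1024));
setInterval(() => {
        const { rss, heapTotal, heapUsed } = process.memoryUsage();
        // Log rss, heapTotal, and heapUsed in MB for easier comparison with the heap flag.
        console.log({ rss: toMB(rss), heapTotal: toMB(heapTotal), heapUsed: toMB(heapUsed) });
}, 100);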

The application is terminated at some point when its memory usage crosses a certain threshold. But what is that threshold? And what are the constraints? Let's examine them.

Expected behavior with --max-old-space-size within container constraints

By default, Node.js (up to 11.x) uses a maximum heap size of 700 MB on 32-bit platforms and 1400 MB on 64-bit platforms. For current defaults, see the reference mentioned at the end of this blog.
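
To check the effective default on your own Node.js build, you can print V8's heap_size_limit without passing any flags (the same v8 call as above; the exact value depends on your Node.js version and platform):

$ node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / (1024 * 1024)))"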

So in theory, when --max-old-space-size is set to a value greater than the container memory limit, the expectation is that the application will be terminated by the OOM killer once the heap grows past the container limit.

In reality, this may not happen.

Actual behavior with --max-old-space-size within container constraints

Not all the memory that is specified by --max-old-space-size is made available to the program upfront.

Instead, the JavaScript heap is made to grow incrementally, in response to increasing demand.

The actual memory consumed by the application (in the form of objects in the JavaScript heap) is reported in the heapUsed field of the process.memoryUsage() API.
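
You can observe this lazy growth directly: even with a large --max-old-space-size, a freshly started process reports only a small heapTotal (exact numbers vary by Node.js version):

$ node --max-old-space-size=2048 -e "console.log(process.memoryUsage())"

The heapTotal printed is only a few megabytes, nowhere near the 2 GB cap, and it grows only as the application allocates objects.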

So, the modified expectation now is that if the actual heap size (the resident object size) crosses the OOM-killer threshold (the --memory limit on the container), the container terminates the application.

In reality, this also may not happen.

When I profile a memory-intensive Node.js application under a container-constrained environment, I see two patterns:

  1. The OOM-KILLER kicks in much later, when the heapTotal and heapUsed values are well above the container constraints.
  2. The OOM-KILLER does not come into action at all!

Node.js behavior in container environment: Explanation

The important metric that the container keeps track of in a hosted application is resident set size (RSS).

It represents the portion of the application's virtual memory, and more precisely of its allocated memory, that is currently resident (active) in physical memory.

Not all memory allocated by an application may be active. This is because "allocated" memory is not necessarily backed by physical pages until the process actually starts using it. In addition, in response to memory demands from other processes, the operating system may swap out inactive or dormant parts of the application's memory to make room for the processes that need it, and swap them back in later when the application touches them again.

RSS reflects the amount of the application's memory that is currently resident and active in physical memory.

Proof

Example 1. An application that allocates a buffer without using it

The following example, buffer_example.js, allocates a buffer of the requested size (in MB) and prints both the buffer length and the process's rss:

cat buffer_example.js

// Allocate a buffer of the requested size (in MB) without writing to it.
const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024)
// Print the buffer size and the resident set size, both in MB.
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

Run the image with a Docker memory limit of 1 GB:

docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash

Now, run the application. You’ll see the following:

$ node buffer_example.js 2000
2000
16

The application is not terminated even though the requested buffer is larger than the container limit. This is because the allocated memory is never actually used: the rss value stays very low and never exceeds the container memory limit.
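
To see that rss tracks only the pages that are actually touched, you could fill just part of the buffer. The following variant (buffer_example_partial.js is a hypothetical addition, not part of the original image) writes to only the first 100 MB:

cat buffer_example_partial.js

// Allocate a buffer of the requested size (in MB), but write to only the first 100 MB of it.
const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024)
buf.fill('x', 0, 100 * 1024 * 1024)
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

Running node buffer_example_partial.js 2000 should report an rss in the region of 100 MB plus the process baseline, because only the touched pages are resident.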

Example 2. An application where the buffer is being filled with data

The following application is the same, except that the buffer is now filled with data:

cat buffer_example_fill.js

// Allocate a buffer of the requested size (in MB) and fill every byte with 'x'.
const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024, 'x')
// Print the buffer size and the resident set size, both in MB.
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

Run the image with a Docker memory limit of 1 GB:

docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash

Now, run the application:

$ node buffer_example_fill.js 2000
2000
984

Even here the application is not getting terminated! Why? When the active memory reaches the limit set on the container, and there is room in the swap space, the kernel pushes some older memory pages out to swap and makes that physical memory available to the same process. By default, Docker allows the container an amount of swap equal to the limit set through the --memory flag. With this arrangement, the process effectively has 2 GB at its disposal: 1 GB of physical memory and 1 GB of swap. In short, because part of its own memory is swapped out, the rss stays within the container-enforced limit and the application is able to survive.
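
If you want to confirm this from inside the container, on cgroup v1 hosts with swap accounting enabled you can compare the memory limit with the combined memory-plus-swap limit (the second file exists only when swap accounting is turned on):

cat /sys/fs/cgroup/memory/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes

With only --memory 1024m set, the second value should be roughly twice the first, which accounts for the extra swap the process in this example survives on.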

Example 3. An application where the buffer is being filled with data but the container is prevented from using swap

The application is the same buffer_example_fill.js from Example 2; this time, the docker run options below prevent the container from using swap.

Run the image with the docker memory limit, swap limit, and swappiness set as follows:

docker run --memory 1024m --memory-swap=1024m --memory-swappiness=0 --interactive --tty ravali1906/dockermemory bash

Now, run the application:

$ node buffer_example_fill.js 2000
Killed

Notice the Killed message? When the value of --memory-swap equals the value of --memory, the container is not allowed to use any additional swap space. In addition, by default the host kernel can swap out a percentage of the anonymous pages used by a container, so pass 0 to the --memory-swappiness flag to disable that as well. With no swapping occurring inside the container, rss exceeds the container limit, and the kernel kills the process at the right time.

Summary and recommendations

When you run your Node.js application with --max-old-space-size set to more than the container limit, it may seem like Node.js is not ‘respecting’ the container-enforced limit. But as you saw in the examples above, the reason is that the application may simply not be using the full extent of the JavaScript heap set through the flag.

Keep in mind that you can't always expect your application to behave the same way when it uses more memory than is available in the container. Why? Because the active memory (rss) of a process is influenced by a number of factors that are outside the control of your application and can be highly load- and environment-dependent: the workload itself, the level of concurrency in the system, the operating system scheduler, the garbage collection rate, and so on. These factors can also change from run to run.

Recommendation for Node.js heap size (when you can control it, but can’t control the container size)

  • Run an empty Node.js app inside a container, and measure static rss usage (I got it around 20 MB for Node.js v10.x).
  • As Node.js maintains other pertinent memory regions in the heap (such as new_space, code_space, and so on), account for an extra 20 MB assuming their default configurations. If you change their defaults, adjust this value accordingly.
  • Subtract this total (about 40 MB) from the memory available in the container. What's left should be a reasonably safe value for the JavaScript heap's old space size (see the worked example after this list).
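
For example, for a container started with docker run --memory 512m, that rule of thumb leaves roughly 512 - 40 = 472 MB for the old space, so you might start the application as follows (app.js and the exact margin are illustrative):

$ node --max-old-space-size=470 app.js

Rounding down a little further does no harm; the goal is simply to keep the heap cap safely below the container limit.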

Recommendation for container size (when you can control it, but not Node.js memory usage)

  • Run your application under a workload that covers its peak scenarios.
  • Measure the growth in the rss field. I used the top command as well as the process.memoryUsage() API to observe that growth (a sketch for tracking it follows this list).
  • If no other active processes are expected in the container, use this value as the container limit, adding 10% or more to be on the safe side.
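
As a simple way to capture that growth, you could drop a snippet like the following into the application during a load test (the interval and output format are just illustrative):

// Track the peak rss observed while the load test runs, reporting in MB.
let peakRss = 0;
setInterval(() => {
        const rss = process.memoryUsage().rss;
        if (rss > peakRss) peakRss = rss;
        console.log('rss: ' + Math.round(rss / (1024 * 1024)) + ' MB, peak: ' + Math.round(peakRss / (1024 * 1024)) + ' MB');
}, 1000);

The highest peak value you see across peak-load runs, plus the 10% (or more) margin, is a reasonable starting point for the container's --memory setting.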

Footnote

Node.js 12.x addresses some of these inconsistencies by keeping the default heap size well within the container's available memory when running in container environments. However, for non-default heap settings (max_old_space_size), the behavior illustrated above still holds true, and you should take care when choosing your memory tunables. In addition, knowing the default limits will help you tune better, as the defaults are conservative.

For more information, refer to Configuring default heap dumps.

Ravali Yatham