The llnode plugin allows you to investigate Node.js problems using lldb. The gencore npm package exists to generate core dumps in situations where other tools may not work, and to simplify collecting core dumps in a way that makes them suitable to move to another system for problem investigation.

In order to use llnode you need to obtain a core dump. Typically (on Linux) a core dump can be created in one of two ways:

  • Your program crashes, usually in C/C++ code, and the operating system creates a core dump.
  • You deliberately create a core dump by running gcore or attaching a debugger and saving a core dump.

If your program doesn’t crash and you want to create a core dump to use with llnode, the obvious approach is to use gcore to take a core dump from a running process. Unfortunately, especially in production environments, that may not work. In some environments (for example, docker images) you may be unable to use gcore. Firstly, gcore might not be installed in the docker image. Secondly, it uses a system call, ptrace(), to attach to and take control of the target process. For security reasons ptrace() is restricted by default on current levels of Ubuntu and is generally blocked inside images running in docker.

The gencore module works around this problem by using fork() to create a child process which then crashes deliberately to create a core dump. The readme on the gencore page has more details about what it does. It also does some extra work to gather up the executable and libraries loaded by your Node.js application and package them up as a tar.gz file so they can be moved to another machine for diagnosis.

Once you have installed the gencore npm package and required it in your application, you can use it by calling gencore.createCore() or gencore.collectCore(). The createCore() function simply creates a core dump; the collectCore() function creates a core dump and compresses it into a tar.gz file along with the Node.js executable and any libraries the process had loaded, making it simple to move core dumps to other systems for analysis outside of your production environment.
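Because core dumps are large, you will usually want to make sure only one dump is in flight at a time. The following sketch wraps a collector with that guard; the (error, name) callback shape matches gencore's collectCore() as used later in this article, but makeDumper() and the stand-in collector are hypothetical helpers added so the pattern can be run without gencore installed:

```javascript
// Sketch: serialise dump requests so concurrent callers share one core dump.
// The collector function is injected; in a real application you would pass
// require('gencore').collectCore instead of the stand-in used below.
function makeDumper(collectCore) {
  let state = null; // null = no dump yet; otherwise 'in progress' or a file name
  return function dump(callback) {
    if (state === null) {
      state = 'in progress';
      collectCore((error, name) => {
        state = error ? null : name; // reset on failure so a retry is possible
        callback(error, state);
      });
    } else {
      // A dump is already running or finished; report the current state.
      process.nextTick(() => callback(null, state));
    }
  };
}

// Usage with a stand-in collector (replace with require('gencore').collectCore):
const dump = makeDumper(cb => setImmediate(() => cb(null, 'core_test.tar.gz')));
dump((err, name) => console.log(name)); // logs core_test.tar.gz
```

The HTTP server example below applies the same guard inline, using a shared variable instead of a wrapper function.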

As a side effect of creating the core dump by forking the main process, the gencore module also minimises one of the problems with using gcore against the running process. Because gcore attaches to the running process and writes all the memory that process is using to disk, the process pauses completely while gcore is attached. With gencore the process does not pause while the core dump is created: fork() returns as soon as possible, your Node.js application carries on, and gencore does the rest of its work to create the tar.gz asynchronously, like any other Node.js code would. createCore() and collectCore() invoke a callback when complete, passing the name of the file they created.

The main disadvantage of the gencore method of creating a core dump is that when a process is forked, all of its memory is copied but only the calling thread is cloned. All the other threads in the original process will be missing from the core dump. In the case of Node.js, however, this is not such a large problem: all the JavaScript work Node.js does happens on one thread, and that is the thread that invokes gencore, so the most interesting thread will always be in the core dump. Threads created by libuv, V8 or native npm modules will be missing. Since a core dump is a copy of all the memory in use by a process, the core file will also contain every object on the JavaScript heap, and these can be accessed using the llnode plugin.

Example – Create a core dump with gencore

This example shows how you can create a core dump with gencore, open it in lldb and use llnode to retrieve the stack:

const gencore = require('gencore');
const http = require('http');

const port = 1338;
const ip = '';

let core = null;

let server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  // Core dumps are large. We don't want to create one on every page refresh.
  if (core == null) {
    core = 'progress...';
    gencore.collectCore((error, name) => {
      core = name;
      console.log(`Core dump created at: ${core}`);
    });
  }
  res.end(`Core dump: ${core}\n`);
}).listen(port, ip);

console.log(`Server running at http://${ip}:${port}/`);

Output from curl

$ curl
Core dump: progress...
$ curl
Core dump: progress...
$ curl
Core dump: progress...
$ curl
Core dump: progress...
$ curl
Core dump: core_20170621.101338.2348.001.tar.gz

You can see from the output that if you run this and trigger a core dump by requesting the page, the core dump is generated asynchronously. Repeated HTTP requests return the “progress...” message until the callback passed to collectCore() is invoked once the core has been written and compressed. The HTTP requests sent while that is happening are still being handled by Node.js.

Example – Open a core dump created with gencore

If you take the core dump created above and copy it somewhere else (another Linux machine, or a Mac you use for development – lldb can open Linux core dumps on macOS), you can extract the core dump and use it with lldb.

The tar.gz file contains the core dump in its top-level directory, with the executable and libraries that were loaded by your process beneath it. The executable and libraries sit under the same paths as they had on the machine where the file was created, but rooted at the top-level directory of the tar.gz rather than at the root directory.

Collecting the executable and libraries is important because when you invoke lldb with a core dump you must give it the exact same node executable that was running your program. Using an executable from the same major or minor release but different patch level or one you built from the exact same source tree at a different time will not work correctly. It needs to be the exact same binary file. The debugger requires symbols and addresses to match exactly between the program code and the core dump.

If you haven’t already, install llnode. To install it globally:

$ npm -g install llnode

or install it under node_modules in the current directory:

$ npm install llnode

The llnode npm installer will display a message giving you the location of the llnode plugin and the command to run to load it into lldb:

llnode plugin installed, load in lldb with:
(lldb) plugin load /path/to/dest/node_modules/llnode/llnode.dylib
or copy plugin to lldb system plugin directory, see

Once lldb has started, run the command as printed to load the plugin.

Example output:

$ tar -xf core_20170620.143848.3966.001.tar.gz
$ lldb core_20170620.143848.3966.001/path/to/extracted/node.js/bin/node -c core_20170620.143848.3966.001/core
(lldb) target create "core_20170620.143848.3966.001/path/to/extracted/node.js/bin/node" --core "core_20170620.143848.3966.001/core"
Core file '/path/to/cwd/core_20170620.143848.3966.001/core' (x86_64) was loaded.
(lldb) plugin load /path/to/dest/node_modules/llnode/llnode.dylib
(lldb) v8 bt
 * thread #1: tid = 3973, 0x00007fa8134184ff node, name = 'node', stop reason = signal SIGSEGV
    frame #4: 0x00007fa813998422 node`v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) + 290
    frame #5: 0x00007fa8139f63d2 node`v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::(anonymous namespace)::BuiltinArguments<(v8::internal::BuiltinExtraArguments)1>) + 402
    frame #6: 0x00007fa8139f6a4e node`v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) + 398
    frame #7: 0x000022ca4e4092a7 <exit>
    frame #8: 0x000022ca4e5bbd75 collectCore(this=0x0000233620bf5de9:<Object: Object>, 0x000029b40af0deb1:<function: gencore.collectCore at testscripts/corehttp.js:13:25>) at node_modules/gencore/index.js:84:21 fn=0x0000233620bf5c11
    frame #9: 0x000022ca4e5b2a9c (anonymous)(this=0x00003dab8f2dd109:<Object: Server>, 0x000029b40af08f09:<Object: IncomingMessage>, 0x000029b40af0ace1:<Object: ServerResponse>) at testscripts/corehttp.js:8:41 fn=0x00003dab8f2dd0a1
    frame #10: 0x000022ca4e521493 emitTwo(this=0x00001c1c9b604381:<undefined>, 0x00003dab8f2dd0a1:<function: (anonymous) at testscripts/corehttp.js:8:41>, 0x00001c1c9b6043c1:<true>, 0x00003dab8f2dd109:<Object: Server>, 0x000029b40af08f09:<Object: IncomingMessage>, 0x000029b40af0ace1:<Object: ServerResponse>) at events.js:104:17 fn=0x00000a3e4d43d571
    frame #11: 0x000022ca4e520efb emit(this=0x00003dab8f2dd109:<Object: Server>, 0x0000233620b29a31:<String: "request">) at events.js:136:44 fn=0x000003db3a1eaf49
    frame #12: 0x000022ca4e409895 <adaptor>
    frame #13: 0x000022ca4e5b08bf parserOnIncoming(this=0x000029b40af07641:<Object: HTTPParser>, 0x000029b40af08f09:<Object: IncomingMessage>, 0x00001c1c9b6043c1:<true>) at _http_server.js:460:28 fn=0x000029b40af068b9
    frame #14: 0x000022ca4e5ae3b8 parserOnHeadersComplete(this=0x000029b40af07641:<Object: HTTPParser>, <Smi: 1>, <Smi: 1>, 0x000029b40af08cc1:<Array: length=6>, <Smi: 1>, 0x0000233620b25d69:<String: "/">, 0x00001c1c9b604381:<undefined>, 0x00001c1c9b604381:<undefined>, 0x00001c1c9b604271:<false>, 0x00001c1c9b6043c1:<true>) at _http_common.js:45:33 fn=0x00003dab8f29e6d1
    frame #15: 0x000022ca4e43b7c3 <internal>
    frame #16: 0x000022ca4e42508f <entry>
    frame #17: 0x00007fa813c5ee44 node`v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) + 196
    frame #18: 0x00007fa81397ff49 node`v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 313
    frame #19: 0x00007fa81398e101 node`v8::Function::Call(v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 65
    frame #20: 0x00007fa814095e89 node`node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) + 329
    frame #21: 0x00007fa8140d9277 node`node::Parser::on_headers_complete(http_parser*) + 1095
    frame #22: 0x00007fa81430c41b node`http_parser_execute + 1803
    frame #23: 0x00007fa8140d85bd node`node::Parser::OnReadImpl(long, uv_buf_t const*, uv_handle_type, void*) + 157
    frame #24: 0x00007fa8140f0e9c node`node::StreamWrap::OnRead(uv_stream_s*, long, uv_buf_t const*) + 124
    frame #25: 0x00007fa81432815f node`uv__read(stream=<unavailable>) at stream.c:1192
    frame #26: 0x00007fa8143287d0 node`uv__stream_io(loop=<unavailable>, w=<unavailable>, events=<unavailable>) at stream.c:1259
    frame #27: 0x00007fa81432e1b0 node`uv__io_poll(loop=<unavailable>, timeout=<unavailable>) at linux-core.c:380
    frame #28: 0x00007fa81431e6c6 node`uv_run(loop=<unavailable>, mode=<unavailable>) at core.c:354
    frame #29: 0x00007fa8140b1be0 node`node::Start(int, char**) + 1264

You can see that we’ve got a full stack combining internal Node.js and V8 code with JavaScript code – without needing to stop the Node.js application!

From here you can continue to use llnode as you normally would: finding out which version of node you were really running, investigating memory leaks, or simply exploring the core dump to understand your application.
