
Intuition Engineering and the Chrome Heap Profile

Published Jul 12, 2017 · Last updated Jan 07, 2018

A Tale of JavaScript Performance, Part 1

The purpose of this article series is to chronicle my exploration into JavaScript performance through my ongoing work creating a visualization tool for Chrome memory profiles. We’ll start with how the project came about and then take a deep dive into the file format that Chrome uses to represent a memory heap.

Intuition engineering

A few months ago, I had the privilege of listening to a fantastic talk by Casey Rosenthal on the subject of intuition engineering. Intuition engineering is the concept that humans are much better at understanding information that can be consumed without needing to think about it. Things like sights, sounds, and smells can be processed in a fraction of the time that it takes to read a block of text or interpret a table.

How quickly can you get a sense of what is happening on an application server from this:


AWS CloudWatch log viewer

Versus this:


New Relic Error Analytics dashboard

Clearly, having a visual representation of data at the proper level of abstraction makes it much easier to understand what is actually going on. Casey says that once you have been exposed to that abstraction for a while, you will develop an intuitive sense of what is “normal”. Variations from this baseline are immediately apparent and allow us to leverage our natural instinct for pattern recognition to give us insights into what might cause a change in behavior.

This makes intuition engineering an excellent platform for designing diagnostic interfaces, particularly ones that show data over time. By providing an interface that allows you to develop an intuitive baseline, you enable your users to quickly make deductions about divergences from the baseline.

A project is born

After learning about intuition engineering, I was itching to try it on a personal project. I was working on some JavaScript memory heap profiling at the time, attempting to diagnose the cause of an app’s large memory footprint and track down some memory leaks.

After spending a few days staring at this interface:


Chrome Memory Profile viewer

I thought that maybe I could leverage this idea of intuition engineering to build a better view of a heap snapshot. Just by presenting the data in a visual way we could get some quick insight into where our memory was being allocated and what was not getting cleaned up during GC runs. It might be hard to develop an intuitive baseline across applications, but within an application over time it should certainly be possible to come to a visual understanding that would let us quickly identify major events or problem areas.

Decoding the heap profile format

If I wanted to represent a heap profile visually, my first step needed to be to parse the exported Chrome heap profile format into something that I could visualize. The heap profile format is cleverly compressed JSON. It looks something like this:


Excerpt from a .heapprofile file
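For readers who can't see the screenshot, here is a heavily abbreviated sketch of the structure. The field names come from real snapshot metadata; the numeric values are invented for illustration:

```javascript
// Abbreviated sketch of a .heapprofile file. Values are illustrative.
const heapProfile = {
  snapshot: {
    meta: {
      // Each node is serialized as 6 consecutive numbers in `nodes`:
      node_fields: ["type", "name", "id", "self_size", "edge_count", "trace_node_id"],
      // Each edge is serialized as 3 consecutive numbers in `edges`:
      edge_fields: ["type", "name_or_index", "to_node"],
    },
    node_count: 2,
    edge_count: 3,
  },
  nodes: [
    3, 0, 1, 64, 2, 0, // node 0: name strings[0], self size 64, 2 edges
    3, 1, 2, 16, 1, 0, // node 1: name strings[1], self size 16, 1 edge
  ],
  edges: [
    2, 2, 6, // from node 0: property named strings[2], to node at index 6
    3, 0, 0, // from node 0: internal edge to node at index 0
    2, 2, 0, // from node 1: property edge to node at index 0
  ],
  strings: ["Window", "leakyString", "cache"],
};
```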

These files are very large, frequently hundreds of megabytes. The format itself took quite some time for me to puzzle through, but I eventually figured it out through a lot of trial and error and poking around the Chrome devtools.

Memory is represented as a graph, with nodes representing memory allocations and edges representing references to those memory locations. A sample consists of a timestamp and the last assigned edge during that period. Each line of the file represents one object of its type. The fields of the object are noted in the metadata as node_fields.

A node's outgoing edges are determined by its edge_count field. Starting with the first node, edges are assigned to nodes incrementally: the first node, with an edge_count of 8, owns the first 8 edges; the second node, with an edge_count of 17, owns the next 17 edges; and so on.
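That incremental assignment can be sketched in a few lines (a sketch only — assume the flat arrays have already been parsed into per-node objects; names like edgeCount are invented for illustration):

```javascript
// Assign each node its slice of the edge list based on edge_count.
// A running cursor walks the edges exactly once.
function assignEdges(nodes, edges) {
  let cursor = 0;
  for (const node of nodes) {
    node.edges = edges.slice(cursor, cursor + node.edgeCount);
    cursor += node.edgeCount;
  }
  return nodes;
}

const nodes = [{ edgeCount: 2 }, { edgeCount: 1 }];
const edges = ["e0", "e1", "e2"];
assignEdges(nodes, edges);
// nodes[0].edges → ["e0", "e1"]; nodes[1].edges → ["e2"]
```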

An edge references its target node not by ID but by the index of the node's first field within the flat nodes array. Similarly, time samples are marked by the index of the last edge assigned within the sample. It took me quite some time to figure this out, but the benefit is clear — it creates a unique integer key for fast array lookup. There is no need to parse the entire graph into a data structure just to look up a single edge or node.
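Concretely, resolving an edge's target is just indexing into the flat array (a sketch; the field order follows the node_fields metadata, and the accessor names are my own):

```javascript
const NODE_FIELD_COUNT = 6; // node_fields.length from the snapshot metadata

// Resolve a node from the flat `nodes` array given an edge's to_node value,
// which is the index of the target node's first field.
function nodeAt(nodes, toNode) {
  return {
    ordinal: toNode / NODE_FIELD_COUNT, // which node this is, 0-based
    type: nodes[toNode],
    name: nodes[toNode + 1],
    id: nodes[toNode + 2],
    selfSize: nodes[toNode + 3],
    edgeCount: nodes[toNode + 4],
  };
}

// Two nodes flattened into one array; to_node === 6 points at the second.
const flatNodes = [3, 0, 1, 64, 2, 0, 3, 1, 2, 16, 1, 0];
const target = nodeAt(flatNodes, 6);
// target.ordinal → 1, target.selfSize → 16
```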

About nodes

There are three statistics of particular importance for any given node:

  1. Self size  — The size of the memory contained solely by the node itself. Very large in-memory objects will have large self sizes, like long strings, dictionaries, and functions with large bodies. Finding objects with unusually large self sizes is often a good first step to reducing the memory footprint of an app.
  2. Retained size  — The object’s self size plus the sizes of all of the objects that would be freed should the object be deleted. In math terms, any nodes that this node dominates (more on that in a second) are added to its retained size. Objects with very large retained sizes will free up a lot of memory if they are cleaned up, even if they do not have a particularly large self size.
  3. The identity of its retainers  — Any edges to a node give information for what is holding it in memory. This is most important to know once you have identified a set of nodes that are presenting an issue as it indicates what is keeping those nodes in memory. Retainers can also give you useful clues as to the true nature of a poorly-labeled node.

Finding self size and retainers is trivial — self size is given in the node definition, and by constructing the graph you can easily find a node’s retainers. Calculating a node’s retained size, however, is not so simple. To calculate it, we must first have the list of all nodes the node dominates.
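Finding retainers amounts to inverting the outgoing-edge lists. A sketch, assuming nodes already carry resolved edges with a toNodeOrdinal property (an invented name for illustration):

```javascript
// Build a retainers map: for each node, which node ordinals hold edges to it.
function buildRetainers(nodes) {
  const retainers = nodes.map(() => []);
  nodes.forEach((node, ordinal) => {
    for (const edge of node.edges) {
      retainers[edge.toNodeOrdinal].push(ordinal);
    }
  });
  return retainers;
}

const graph = [
  { edges: [{ toNodeOrdinal: 1 }, { toNodeOrdinal: 2 }] }, // node 0 → 1, 2
  { edges: [{ toNodeOrdinal: 2 }] },                       // node 1 → 2
  { edges: [] },                                           // node 2 (leaf)
];
// buildRetainers(graph) → [[], [0], [0, 1]]
```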

Building a dominator tree

A node is dominated by a node N if every path from the root to that node passes through N. In memory terms, this means that if N’s reference to the node were removed, the node would become unreachable and be garbage collected on the next pass. In order to generate a list of dominated nodes from our graph, we need to create something called a dominator tree.
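The payoff is that once the dominator tree exists, retained sizes fall out of a single post-order traversal: a node's retained size is its self size plus the retained sizes of everything it immediately dominates. A sketch, with an invented tree shape:

```javascript
// Post-order sum over a dominator tree: each node's retained size is its
// self size plus the retained sizes of the nodes it immediately dominates.
function computeRetainedSize(node) {
  node.retainedSize = node.selfSize +
    node.children.reduce((sum, child) => sum + computeRetainedSize(child), 0);
  return node.retainedSize;
}

const dominatorTree = {
  selfSize: 10,
  children: [
    { selfSize: 30, children: [] },
    { selfSize: 5, children: [{ selfSize: 20, children: [] }] },
  ],
};
// computeRetainedSize(dominatorTree) → 10 + 30 + (5 + 20) = 65
```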

There is a handy-dandy algorithm for constructing such a tree, known as the Lengauer-Tarjan algorithm. It’s fairly technical, and I was a little hesitant to roll up my sleeves and implement it in JavaScript myself. I did a little thinking and a little digging, and realized that there was a fantastic open-source implementation of the algorithm right in front of me.

Three cheers for OSS

I knew that the Chrome devtools were constructing a dominator tree because they were kind enough to flash it briefly as one of the progress messages while loading a heap snapshot from file. A quick jaunt into the Chromium source led me to HeapSnapshot.js and friends. I realized — why should I be building my own parser for this file format if Chrome has one open-sourced and ready for production use?

I’m happy to report that it worked flawlessly. I was able to extract the HeapSnapshot module and spin it up in a browser. I had to shim a few things, but the devtools modules are designed to hang off a global object, which makes them extremely easy to extract — just provide your own, correctly named global, ensure they have any utilities that they need, and they will do the rest.

A little jury-rigging later and I was able to feed it a heap profile file and get a fully inflated representation of the heap at the end. I had my data, primed and ready for whatever visualization method I wanted to throw at it!
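The shimming pattern can be sketched generically. This is not the actual Chromium API — the real module and global names have changed across Chromium versions — it just shows the shape of handing a module the global it expects:

```javascript
// Devtools-style modules attach their exports to a global object that exists
// at load time. Supplying our own "global" lets the module run anywhere.
// All names below are placeholders, not real Chromium identifiers.
const shim = {};

// Imagine this IIFE is the extracted module source, evaluated with our shim:
(function (global) {
  global.SnapshotParser = {
    parse(text) {
      return JSON.parse(text);
    },
  };
})(shim);

// The module is now usable outside devtools:
const parsed = shim.SnapshotParser.parse('{"nodes": []}');
// parsed.nodes → []
```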

Up next — Part 2: Choosing a visualization method

Comments
Qi Fan
7 years ago

I find it pretty difficult to use the heap snapshot, since all DOM elements are inter-connected through the tree/mesh built by jQuery and AngularJS.

It would be great if we could filter out this “background noise,” either by pre-processing or as part of the visualization.