
JavaScript Frameworks, Performance Comparison

Published Sep 28, 2018. Last updated Sep 29, 2018.

Read the full article, with all the result tables, on Medium.

I thought it would be fun to use Stefan Krause’s benchmark tool to compare the performance of the most well-known front-end frameworks and UI libraries.

Please note, this post is simply an account of my observations from running benchmarks with the aforementioned tool on my laptop. You can use this post as a reference, but you MUST always run your OWN benchmarks and make decisions based on YOUR project requirements. You should also look at other benchmarks, such as real-world app benchmarks.
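
If you want a quick sanity check of your own, a hand-rolled micro-benchmark only takes a few lines. Below is a minimal sketch (my own illustration, not the official benchmark harness) that times a bulk DOM insertion with `performance.now()`; you can paste it into the browser console:

```javascript
// Minimal DOM-manipulation micro-benchmark (illustrative sketch only;
// not how the official benchmark tool measures things).
function benchmarkCreateRows(rowCount = 1000) {
  const container = document.createElement('div');
  document.body.appendChild(container);

  const start = performance.now();
  for (let i = 0; i < rowCount; i++) {
    const row = document.createElement('div');
    row.textContent = `row ${i}`;
    container.appendChild(row);
  }
  const elapsed = performance.now() - start;

  container.remove(); // clean up after the run
  return elapsed;     // milliseconds
}

console.log(`create 1000 rows: ${benchmarkCreateRows().toFixed(1)} ms`);
```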

Now, with that out of the way, grab a cup of coffee and enjoy the show.

Arena

All the benchmarks were executed on my MacBook Pro, with the following specs:

  • MacBook Pro (Retina, 15-inch, Mid 2015)
  • Processor: 2.2 GHz Intel Core i7
  • Memory: 16 GB 1600 MHz DDR3
  • Graphics: Intel Iris Pro 1536 MB
  • Browser: Google Chrome, Version 69.0.3497.100

Teams

In our benchmark competition we have two teams: the frameworks team and the UI libraries team. In the frameworks team (Team 1) we have:

  • Angular v6.1.0
  • Elm v0.19.0
  • Choo v6.13.0
  • AngularJS v1.7.4
  • Aurelia v1.3.0
  • Marionette v4.0.0 (no jQuery)
  • Mithril v1.1.1
  • Ember v3.0.0

In the UI libraries team (Team 2) we have:

  • React v16.5.2
  • Vue v2.5.17
  • Preact v8.3.1
  • Inferno v5.6.1
  • Svelte v2.13.5
  • Bobril v8.11.2
  • Redom v3.13.1
  • Maquette v3.3.0

The Battles

In this ultimate championship, the contestants will first compete within their own teams. Next, to make things more interesting, the winners of each team are going to compete against each other. And finally, the top performers will compete against the all-time champion, PlainJS, a.k.a. VanillaJS.

Also, it’s worth mentioning that each team is going to compete in the following categories (a rough sketch of how to probe each one by hand follows the list):

  • DOM Manipulation
  • Startup Time
  • Memory Allocation
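
For reference, all three categories can be approximated with browser APIs. The sketch below is my own assumption about how you could measure them by hand, not how the benchmark tool works. DOM manipulation can be timed with `performance.now()` as in the earlier sketch; note that `performance.memory` is a non-standard, Chrome-only API:

```javascript
// Startup time: read the Navigation Timing API
// (values are milliseconds since navigation start).
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log(`DOM content loaded: ${nav.domContentLoadedEventEnd.toFixed(0)} ms`);
  console.log(`full page load:     ${nav.loadEventEnd.toFixed(0)} ms`);
}

// Memory allocation: non-standard, Chrome-only API.
if (performance.memory) {
  const usedMB = performance.memory.usedJSHeapSize / (1024 * 1024);
  console.log(`JS heap used: ${usedMB.toFixed(1)} MB`);
}
```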

To make the results more reliable, each benchmark is run three times and the standard deviation of each result is recorded. Moreover, at the end of each team’s battles, I will present a table summarizing the results for all the frameworks and libraries against the winner of the team.
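
For completeness, here is a small sketch (again my own illustration, with made-up timings) of how the mean and sample standard deviation over three runs can be computed:

```javascript
// Mean and sample standard deviation over repeated benchmark runs
// (illustration only; the benchmark tool does its own aggregation).
function stats(runs) {
  const mean = runs.reduce((sum, x) => sum + x, 0) / runs.length;
  const variance =
    runs.reduce((sum, x) => sum + (x - mean) ** 2, 0) / (runs.length - 1);
  return { mean, stdDev: Math.sqrt(variance) };
}

// Hypothetical timings (ms) from three runs of the same benchmark.
const { mean, stdDev } = stats([183.2, 179.8, 185.1]);
console.log(`${mean.toFixed(1)} ms ± ${stdDev.toFixed(1)}`);
```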

At the end, for fun, I’ll also provide relative results for the most popular frameworks and libraries: Angular, Inferno, Preact, React, and Vue.

Team 1 Matches

The first-round matches for Team 1 are listed below:

  • Angular vs Elm
  • AngularJS vs Choo
  • Marionette vs Mithril
  • Aurelia vs Ember

Team 1, Round 1 Winners

  • Elm
  • Choo
  • Marionette
  • Aurelia

Team 1, Round 2 Winners

  • Elm
  • Marionette

Team 1, Final Round

Marionette wins against Elm.

Team 2 Matches

The first-round matches for Team 2 are listed below:

  • React vs Vue
  • Preact vs Inferno
  • Svelte vs Redom
  • Bobril vs Maquette

Team 2, Round 1 Winners

  • Vue
  • Inferno
  • Redom
  • Maquette

Team 2, Round 2 Winners

  • Inferno
  • Redom

Team 2, Final Round

Redom wins against Inferno.

Winners’ Battle, Redom vs Marionette

Overall, Redom wins over Marionette.

Conclusion

Performance benchmarking is a hot topic, and talking about it is challenging. In this post, however, I attempted to provide a starting point for those who are interested in the topic. As mentioned before, you should always run your own benchmarks, create prototypes, and evaluate options based on your project requirements.

Let me know what you think. Do you think these benchmark values are irrelevant in the real world, or do you think they provide some insight? I would love to hear your thoughts.

Read the full article, with all the result tables, on Medium.
