Node.js 0.11.15 and io.js 1.0.3 - Very much the same in performance

21-Jan-2015

This is going to be a dry and boring post. Seriously. If you don't want to waste your time, just go away. I mean it. Hmm... you're still here... well, don't say I didn't warn you... :)

In my previous post I compared the performance of Node.js 0.10.35 and io.js 1.0.2. Since both released new versions - Node.js 0.11.15 and io.js 1.0.3 - just yesterday, I thought I should retest. (Hmm... both released new versions on the day after my blog post... I'll just pretend I'm innocent... I probably am... Yes, I'm certain... And I'm gonna stick to that story, no matter how long "they" interrogate me...)

What I tested

As before, I tested using the Sieve of Eratosthenes, implemented with a regular array, a typed array or a buffer.
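
In case you haven't seen the previous post: the buffer variant looks roughly like this. This is just a simplified sketch of the idea, not the exact test code (the limit is a placeholder):

    // Simplified sketch of the sieve (buffer variant) - not the exact benchmark code.
    function sieveBuffer(limit) {
        var flags = new Buffer(limit + 1); // buffers are not zero-initialized...
        flags.fill(0);                     // ...so clear them: 0 = maybe prime, 1 = composite
        var count = 0;
        for (var i = 2; i <= limit; i++) {
            if (flags[i] === 0) {
                count++;                   // i is prime
                for (var j = i + i; j <= limit; j += i) {
                    flags[j] = 1;          // mark all multiples of i as composite
                }
            }
        }
        return count;
    }

    // The typed-array variant uses "new Uint8Array(limit + 1)" instead,
    // and the regular-array variant uses a plain Array filled with 0.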

There will be more numbers today. (You do remember that I told you this would be boring?) In the Hacker News comments on my article, there were some interesting tidbits analyzing why some of the performance data from my last test looked odd. I will be going into that further down in this post.

Just as I did before, I ran each test 7 times and then used the median time as my result. And again I ran the tests on an Intel i7-4771 3.5 GHz, running 64-bit openSUSE 13.2.
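
And the "median of 7" part is nothing fancy. Conceptually it is just this (a sketch; runOnce() is a hypothetical stand-in for one complete benchmark run):

    // Run the benchmark 7 times and report the median time in seconds.
    function medianOfSeven(runOnce) {
        var times = [];
        for (var n = 0; n < 7; n++) {
            var start = Date.now();
            runOnce();                               // one complete benchmark run
            times.push((Date.now() - start) / 1000); // elapsed time in seconds
        }
        times.sort(function (a, b) { return a - b; });
        return times[3];                             // the middle of 7 sorted values
    }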

Here are the results:

Using the original test with no changes and no command-line arguments to Node/io.js:

                 Node.js 0.11.15   io.js 1.0.3
Buffer                     4.991         5.009
Typed-Array               11.275        11.518
Regular Array              7.440         7.346
(Times are listed in seconds)

Boring, as I told you. The numbers are almost exactly the same. And almost exactly the same as those of io.js 1.0.2 in my previous test.

But there is a little bit more. I mentioned the comments on Hacker News. One commenter named "mraleph" provided some interesting insight into why the typed-array test in particular performed so badly on io.js.

Apparently, because I completely stopped using any Uint8Array after each of the 10,000 test runs, the Uint8Array class got unloaded and recreated each time. So I added some code to my test that creates a dummy Uint8Array before the tests start, and keeps it allocated throughout the whole test. I already mentioned on Hacker News that this solved the slowdown of the typed-array test in io.js 1.0.2.
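
In code the workaround is as trivial as it sounds. Roughly like this - a sketch of the idea, where sieveTypedArray() and the sieve limit are placeholders, not my literal test code:

    // Allocate one dummy Uint8Array up front and keep a reference to it for the
    // whole test, so the class never becomes garbage between the test runs.
    var keepAlive = new Uint8Array(1);

    for (var run = 0; run < 10000; run++) {
        sieveTypedArray(1000000); // each run allocates and drops its own Uint8Array
    }

    console.log('still alive:', keepAlive.length); // keepAlive was never collected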

There was another suggestion from mraleph: add the command-line argument --nouse-osr. OSR (on-stack replacement) is a technique used to optimize a function while it is running; you can read an explanation of OSR here. mraleph suggested disabling it, and it brought another performance gain.
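
The flag is simply passed on the command line, the same way for both runtimes (sieve.js standing in for whatever your test script is called):

    node --nouse-osr sieve.js
    iojs --nouse-osr sieve.js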

Using --nouse-osr had an impact not only on the typed-array test, but on the other two as well. So here comes a lot of data...

 

Keeping Uint8Array alive, but without adding --nouse-osr:

                 Node.js 0.11.15   io.js 1.0.3
Buffer                     4.995         5.024
Typed-Array                5.501         5.510
Regular Array              7.443         7.375
(Times are listed in seconds)

Apart from the typed-array times, which are roughly cut in half, there is no significant difference. Buffer and regular array performed as before.

 

Next test. No extra Uint8Array to keep the class alive, but with the --nouse-osr parameter instead:

                 Node.js 0.11.15   io.js 1.0.3
Buffer                     4.289         4.325
Typed-Array                4.997         5.052
Regular Array              6.683         6.731
(Times are listed in seconds)

As you can see, this not only fixes the performance problem with typed arrays, it also gives a slight additional performance increase for all three test cases. The increase is almost 15% for buffers and about 10% for typed arrays and regular arrays.

 

Ok. Now let's see how performance looks with both a Uint8Array kept allocated throughout the test and the --nouse-osr parameter:

                 Node.js 0.11.15   io.js 1.0.3
Buffer                     4.275         4.342
Typed-Array                4.985         5.042
Regular Array              6.702         6.770
(Times are listed in seconds)

Those times are basically identical to the ones before.

In conclusion, if you add the --nouse-osr parameter, then you don't need to worry about keeping the Uint8Array allocated.

Another major takeaway from this test is that Node.js 0.11.15 and io.js 1.0.3 perform almost the same.

I see you read to the end. Despite my warning that this would be dry and boring. Congratulations! Reading through all these numbers officially makes you a geek. :)

Comments:

Author: (Unknown)
2015-01-22 00:06 UTC
 

Well, this has been quite informative. I decided to keep Node.js at the stable version 0.10.35, and play with new things in io.js 1.0.3.

Author: (Unknown)
2015-01-22 09:56 UTC
 

FYI, the commenter named "mraleph" is Vyacheslav Yegorov, one of the brilliant minds behind Google's V8 JS engine that powers both Node.js and io.js. He knows literally everything about the inner workings of that beast, but what's even more impressive is that he finds time to respond to every question on the internet that has the words "V8" and "performance" in it.

Author: (Unknown)
2015-01-22 16:36 UTC
 

OSR is supposed to be an optimization that makes code go faster, no? Then why does disabling it speed things up?

Author: Michael Schöbel
2015-01-22 17:58 UTC
 

Good question. I'm *guessing* that in this specific case the time required to perform the optimization is longer than the execution-time it saves later on.

Author: (Unknown)
2015-01-23 19:42 UTC
 

I tried to explain on HN why not using OSR improves performance: OSR, as it is implemented now, impacts code quality depending on which loop OSR hits, which in turn depends on heuristics that V8 uses. These heuristics are slightly different in newer V8. As a result of these changes, V8 hits the *inner* loop instead of the *outer* loop. This leads to worse code.

Code that benefits from OSR is code that contains a loop which a) can be well optimized, b) runs long, and c) is run only a few times in total. The Sieve benchmark is the opposite of this, and as a result it doesn't benefit from OSR - you get a bigger penalty from producing worse code and no benefit from optimizing slightly earlier.

Not using OSR for the Sieve also hides the other issue, the mortality of the typed array's hidden classes. I say "hides", not "fixes", because one can easily construct a benchmark where the mortality would still be an observable performance issue even if the benchmark itself is run without OSR: https://gist.github.com/mraleph/2942a14ef2a480e2a7a9

-- @mraleph

Author: (Unknown)
2015-01-29 14:32 UTC
 

This test is very interesting. Basically the result is that io.js is not really faster than Node.js at the moment - at least for the benchmark you are presenting. What I would like to see is:

-> We know the speed; what about memory consumption?
-> This is an artificial benchmark. What about 'real' websites?

Especially the second question is interesting, as the advantage of JS is fast IO, not computation power. So running a benchmark with lots of IO (say, a simple webserver serving a million requests per minute while reading from different caches) would be really interesting. Maybe add graphs which display performance and memory per request count.
