The main purpose of this benchmarking study is to compare the performance of JXcore with Node.js. However, considering the recent popularity of Vert.x and some published benchmarking results indicating superior performance over Node.js, we decided to include it in our comparison.

We didn’t want the benchmarking application to be about serving a static file. Realistically, neither Node.js nor Vert.x would be the ideal platform for this; NGINX can handle it much better. Since both Node.js and Vert.x are considered application development environments or frameworks, we wanted to concentrate on a scenario with server-side operations instead of serving static files.

Our results showed that Vert.x is faster than Node.js, consistent with similar studies done by others (see http://www.cubrid.org/blog/dev-platform/inside-vertx-comparison-with-nodejs/). Results for clustered Node.js (CL) were comparable to Vert.x, as expected.

Comparing both platforms on equal grounds is not easy, since the number of available modules for Node.js is much larger than for Vert.x, and we didn’t want the effect of third-party code in the results of this benchmarking study. Hence we implemented the function below to consume roughly 0.12 milliseconds of CPU time per call (0.12 ms on Node.js, 0.14 ms on Vert.x). It looks like V8 is faster than Rhino here, but we take this into consideration when presenting the final results.

// Busy-loop to burn roughly 0.12 ms of CPU time per call
// (about 0.14 ms on Vert.x/Rhino), simulating server-side work.
var subm = function () {
    var a = 0;
    for (var i = 0; i < 100; i++) {
        for (var x = 0; x < 1000; x++) {
            a += x;
        }
    }
    return a;
};
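To sanity-check the quoted per-call cost, subm() can be timed with a small harness like the following (a rough sketch; the ~0.12 ms figure depends on hardware and the JS engine):

```javascript
// subm() from the benchmark: busy-loops to simulate server-side CPU work.
var subm = function () {
    var a = 0;
    for (var i = 0; i < 100; i++) {
        for (var x = 0; x < 1000; x++) {
            a += x;
        }
    }
    return a;
};

// Time a batch of calls and report the average cost per call in ms.
var runs = 10000;
var start = Date.now();
for (var n = 0; n < runs; n++) {
    subm();
}
var perCallMs = (Date.now() - start) / runs;
console.log('approx. ' + perCallMs.toFixed(4) + ' ms per call');
```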

JXcore / Node.js benchmark case:

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    // res.end() expects a string or Buffer, so stringify the numeric result.
    res.end(String(subm()));
}).listen(8080);


Vert.x benchmark case:

vertx.createHttpServer().requestHandler(function (req) {
    // Stringify the numeric result before writing the response.
    req.response.end(String(subm()));
}).listen(8080, '0.0.0.0');
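For reference, Vert.x 2 can run several instances of the same verticle inside one JVM via the -instances flag (a point raised in the comments below); the exact invocation here is an assumption:

```shell
# Run the JS verticle on multiple event loops inside one JVM
# (the instance count of 4 is an assumption; match it to the core count).
vertx run server.js -instances 4
```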

Number of concurrent users: 2,000 (ab, from three Ubuntu servers on the same network)
Number of requests: 500,000
Server: quad-core 2.2 GHz, 64-bit Ubuntu Server 12.04 LTS, 8 GB memory
Vert.x: v2.0.2
JXcore: 0.11.11-beta
Node.js: 0.11.11-pre
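The numbers above correspond to an ApacheBench invocation roughly like the following (the hostname is a placeholder, and how the load was split across the three client machines is not specified):

```shell
# 500,000 requests at a concurrency of 2,000.
ab -n 500000 -c 2000 http://benchserver:8080/
```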

Results – Time per Request
JXcore -MT : 0.308 ms (average of 10 runs) (MT: Multi-Threaded)
Vert.x : 1.274 ms (average of 10 runs)
JXcore -ST : 1.402 ms (average of 10 runs) (ST: Single-Threaded)
Node.js : 1.483 ms (average of 10 runs)
Node.js -CL : 0.519 ms (average of 10 runs) (CL: Cluster)
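The requests-per-second figures follow directly from these times, assuming the value reported is ab’s mean time per request across all concurrent requests:

```javascript
// Convert ab's mean time-per-request (ms, across all concurrent
// requests) into requests handled per second.
function throughput(timePerRequestMs) {
    return 1000 / timePerRequestMs;
}

console.log(Math.round(throughput(0.308))); // JXcore MT          -> 3247 req/s
console.log(Math.round(throughput(1.483))); // Node.js (single)   -> 674 req/s
```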

The chart below shows the number of requests handled by each platform per second.
[chart: requests per second]

Hello World!
Now for the benchmarks with a simple “Hello World” output instead (the server responds with the plain text ‘Hello World’).

Results – Time per Request
JXcore -MT : 0.201 ms (average of 10 runs) (MT: Multi-Threaded)
Vert.x : 0.360 ms (average of 10 runs)
JXcore -ST : 1.283 ms (average of 10 runs) (ST: Single-Threaded)
Node.js : 1.291 ms (average of 10 runs)
Node.js -CL : 0.394 ms (average of 10 runs) (CL: Cluster)

The chart below shows the number of requests handled by each platform per second.
[chart: requests per second]

Indeed, Vert.x responds much faster when there are no server-side operations keeping the server busy. The difference between JXcore MT and Node.js CL is comparable to the time spent in the custom calculation method. The JXcore ST and Node.js single-process results are very slow compared to Vert.x.

The performance gain behind JXcore MT comes from sharing the HTTP server load across separate threads and V8 instances within the same process. As a result, there is no latency from inter-process communication.
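For context, JXcore exposes this multithreading through its command-line switches; presumably the MT runs were started along these lines (the thread count is an assumption):

```shell
# Single-threaded run:
jx server.js

# Multi-threaded run: keep 4 threads alive, all sharing one listening socket.
jx mt-keep:4 server.js
```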

Overall, Vert.x is faster than Node.js, as suggested by others before. Results for Node.js CL were comparable to Vert.x. However, JXcore MT is clearly faster than both Vert.x and Node.js CL.

Note that you can run your Node.js 0.10+ projects with JXcore.

Comments:

  • gervaissc

    Is it possible to have both nodejx multithreading and cluster at the same time? Or have you tested it?

  • obastemur

    @gervaissc No, we didn’t, and I prefer not to say anything before testing that properly!

  • Tim Fox

    Also – 1) How many instances of Vert.x did you run (-instances flag)? 2) Vert.x 2.1M2 has significant performance improvements over 2.0.2. 3) I recommend the Nashorn JS module, not the default Rhino one, for the best performance.

  • pycior

    So this shows that Vert.X is a few times faster than nodeJX in serving static files and a little faster in doing computation stuff. You can’t really compare nodeJX MT with Vert.X without running it multi-threaded – the same goes for clustering – and yes, Vert.X can be clustered with multiple threads on each cluster.

    It’s also no secret that Rhino is much, much slower than V8 (10–20x in some cases) – but still Vert.X outperforms nodeJX – switching to Nashorn would widen the gap.

    Regarding the above, this statement: “However, nodeJX MT is clearly faster than both Vert.x and Node.js CL” is clearly a lie.

    P.S. I really like the idea behind nodeJX :)

    • obastemur

      Thanks. I couldn’t catch the part where Vert.x is faster?

      • pycior

        According to your test results – yes :)

        Flame aside: did you test Vert.x multi-threaded, or was a single thread used in the benchmark?

        • Tim Fox

          From his reply to one of my comments, it appears he only ran Vert.x with a single event loop, which means any comparison between multi-threaded NodeJX and Vert.x is apples-to-oranges. So, if we remove the NodeJX multi-threaded results, the only result we can take home is that Vert.x is faster (as you mentioned) :)

        • obastemur

          Both multi-instance (4) and single. The multi-instance run of the first scenario failed many times, hence I couldn’t go forward. For “Hello World” there was no difference whether the instances parameter was 4 or the default (approx. 10 to 20 ms difference on average; results are from 4 instances). We ran this benchmark to see how MT affects the responsiveness of the solution. Besides, for the first scenario the comparison may not be fair because of the V8 and Rhino difference, but there is not much to say for the second scenario.

  • Tim Fox

    If the tests have been run with just a single-threaded Vert.x against a multi-threaded NodeJX, it’s really an apples-to-oranges comparison. However, it’s not clear from this report how many instances were used, as the methodology of the benchmark hasn’t been explained in any detail.

    • obastemur

      Surely I didn’t mention the problems I had during the test, like the fact that none of the ST nodes or Vert.x were able to answer all the requests. Single or multi-instance Vert.x didn’t make any difference on the “Hello World” test, but for the first scenario I couldn’t finish the test with 4 Vert.x instances.

      • Tim Fox

        Sounds like a setup issue. If you publish your actual benchmark setup somewhere (with exact instructions for replicating it), then people could take a look, maybe see where the issues are, and re-run it. Most credible benchmarks make everything public so results can be reproduced. Also (footnote): ab sucks as a benchmarking tool; wrk (what TechEmpower uses) is much better :)

        • obastemur

          Actually, it’s all clear apart from the fact that I needed to make some tweaks on Ubuntu for better CCU handling.

  • Tim Fox

    With any benchmark I think it’s important to be clear and transparent about the setup and the methodology used – i.e. show the command lines used to start the servers, explain how long the tests were run for, how long the warmup period was (warmup is critical for any JVM server, as it can take some time before the JIT kicks in), etc.

    • obastemur

      You are right, the warmup period is very critical. Before every run, I restart the test app and check that the instance is working properly (accessed from the browser).

      • Tim Fox

        Accessing via a browser will not ensure the server is warmed up!
        Getting the JIT to kick in often requires several minutes of running at high load, as methods may need to execute tens of thousands of times before optimisations are performed. For that reason, most benchmarks run at full load for (say) a couple of minutes before taking any results. The initial results during the warmup should be thrown away.

        • gervaissc

          So, this wouldn’t be a real life scenario… ’10s of thousands of times!’ … ohh..

          • pycior

            Well, it would – for any production app. NodeJS will be quicker after start, but it will consume more memory over time and slow down.

          • obastemur

            It totally depends on the app.


  • Henrik Östman

    As with all benchmarks, you run some tests, people point out flaws, you re-run them, and with each iteration we get more satisfied and the results more accurate. :-)
    I agree with Tim that you really should run a warmup period before running the benchmark, and also specify the warmup period, JVM version, and command-line arguments for starting Node.js/jx and Vert.x in your blog post. I’m curious: why did you run the tests with the stable Vert.x 2.0.2 but with a beta-quality Node (0.11.11)? Run a benchmark with Vert.x 2.0.2 and Node 0.10.25 for those of us who want something stable and production-ready, and a benchmark with Vert.x 2.1M3 and Node 0.11.11 for those who are interested in what’s coming in the next versions.

    • obastemur

      Henrik, just because nodeJX is a 0.11.11 fork … Indeed, I’m quite open to getting help from any Vert.x expert to rerun the test for Vert.x one more time. But no matter what I did with the second case on the Vert.x side (different numbers of instances), the result was similar to the one shared here. Warming up a test subject with tens of thousands of requests wouldn’t be fair to the others for the above test scenarios! We will release nodeJX soon, and any interested party can benchmark it. I’m open to helping with the nodeJX benchmarking part.

  • guilleiguaran

    Can you try using Vert.x multi-threaded and using Nashorn instead of Rhino?

    • obastemur

      …. Vert.x was multi-instanced! I don’t know if there is multithreading? For the second one, Nashorn or Rhino doesn’t matter.