
Comment by avar

8 years ago

Best out of 5 times on my Debian testing laptop for a "hello world", in order of worst to best:

    ruby2.5:     83ms (-e 'puts "hi"')
    python3.6:   35ms (-c 'print("hi")')
    python2.7:   24ms (-c 'print("hi")')
    perl5.26.2:  8ms  (-e 'print "hi"')
    C (GCC 7.3): 2ms  (int main(void) { puts("hi"); })
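A best-of-5 measurement like the one above can be sketched with a small harness (a sketch, not the commenter's actual method; absolute numbers will vary with hardware and interpreter version):

```python
import subprocess
import sys
import time

def best_of_5(cmd):
    """Return the best wall-clock time in seconds out of 5 runs of cmd."""
    times = []
    for _ in range(5):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    # Measure bare startup of the running Python interpreter.
    ms = best_of_5([sys.executable, "-c", "pass"]) * 1000
    print(f"python startup: {ms:.1f} ms")
```

Taking the minimum rather than the mean filters out cache-cold runs and scheduler noise, which is why "best out of N" is the usual convention for startup benchmarks.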

35ms for Python is ok. What we see in reality is that the imports a real application uses add a whole lot more time.

For example, if you want a snappy command-line response from a Gtk-using Python program, you probably want to handle command-line arguments before even importing Gtk. Maybe the argument is --help, or one you pass on to an already-running instance, and you want the response to be absolutely snappy.
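That pattern can look like this minimal sketch (the Gtk 3 / PyGObject names are only illustrative, and the usage text is made up):

```python
import sys

def main():
    # Handle cheap, common flags before paying for the GUI import.
    if "--help" in sys.argv[1:] or "-h" in sys.argv[1:]:
        print("usage: myapp [--help] [FILE...]")  # hypothetical usage text
        return 0

    # Only now pull in the heavy GUI stack (assumes PyGObject is installed).
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    win = Gtk.Window(title="myapp")
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The `--help` path never touches `gi`, so it responds at bare-interpreter speed even though the full program needs Gtk.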

  • I have read that conditional imports are "un-pythonic", but I tend to do exactly that in order to keep resource usage lower.
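One hedged sketch of that style, deferring an optional dependency into the only function that needs it (PyYAML is just an illustrative example of a heavy import):

```python
import json  # cheap, needed on every run

def export_json(data):
    return json.dumps(data)

def export_yaml(data):
    # Deferred ("conditional") import: PyYAML is only loaded when this
    # code path actually runs, so users who never export YAML pay
    # neither the startup time nor the memory for it.
    import yaml
    return yaml.safe_dump(data)
```

The cost is that an import error in the optional dependency surfaces at call time rather than at startup, which is part of why some consider the style un-pythonic.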

  $ time ruby --disable-gems -e 'puts "hi"'
  hi

  real    0m0.009s
  user    0m0.008s
  sys     0m0.000s

  • Sure, two can play that game. Let's add `-S`, which disables the site module, to the Python invocations.

        perl ........... 0m0.012s
        siteless py27 .. 0m0.018s
        gemless ruby ... 0m0.021s
        siteless py36 .. 0m0.025s
        siteful py27 ... 0m0.034s
        siteful py36 ... 0m0.049s
        gemful ruby .... 0m0.089s

> C (GCC 7.3): 2ms (int main(void) { puts("hi"); })

Not really a fair comparison, given that the other three have to do all their parsing and compiling at run time. Unless those 2ms include compilation time, or you use tcc -run.

  • The user doesn't care; they just invoke "hg" or "git". The implementation language is always a choice, so the comparison is valid from that perspective.

    But the reason I included it is because it gives a baseline for the overhead of invoking any program, no matter how trivial.

    • I must say even 2 ms feels rather slow just to execute something hot in cache.