Comment by Quothling

1 day ago

Why? I've built some massive analytic data flows in Python with turbodbc + pandas which are basically C++ fast. It uses more memory, which supports your point, but on the flip side we're talking an extra $5-10 a year in cost. It could frankly be $20k a year and still be cheaper than staffing more people like me to maintain these things, rather than having a couple of us and then letting the BI people use the tools we provide for them. Similarly, when we do embedded work, MicroPython is just so much easier for our engineering staff to deal with.
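
For a concrete sense of what that kind of flow looks like, here is a minimal sketch of a turbodbc + pandas fetch (the DSN, table, and column names are made up for illustration, not the actual pipeline):

    # Sketch of a bulk fetch path: turbodbc pulls the result set into NumPy
    # arrays in its C++ layer, so no per-row Python loop is involved.
    import pandas as pd
    from turbodbc import connect

    connection = connect(dsn="analytics_dw")  # hypothetical ODBC data source
    cursor = connection.cursor()
    cursor.execute("SELECT customer_id, revenue FROM sales WHERE year = 2023")

    # fetchallnumpy() returns a dict of column name -> NumPy (masked) array,
    # which pandas wraps directly into a DataFrame.
    columns = cursor.fetchallnumpy()
    df = pd.DataFrame(columns)

    print(df.groupby("customer_id")["revenue"].sum().head())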

The interoperability between C and Python makes it great, and you need to know these numbers for Python to know when to actually build something in C. With Zig getting really great interoperability, things are looking better than ever.

Not that you're wrong as such. I wouldn't use Python to run an airplane, but I really don't see why you wouldn't care about the resources just because you're working with an interpreted or GC language.

> you need to know these numbers on Python to know when to actually build something in C

People usually approach this the other way: use something like pandas or numpy from the beginning if it solves your problem. Do not write matrix multiplications or joins in Python at all.
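
As a rough illustration of that point (the arrays and frames below are made up), the library calls keep the heavy lifting out of the interpreter:

    # Matrix multiply and join via numpy/pandas: the loops run in compiled code.
    import numpy as np
    import pandas as pd

    a = np.random.rand(500, 500)
    b = np.random.rand(500, 500)
    c = a @ b  # dispatched to optimized BLAS, not Python bytecode

    orders = pd.DataFrame({"user_id": [1, 2, 2], "amount": [10.0, 5.0, 7.5]})
    users = pd.DataFrame({"user_id": [1, 2], "name": ["ada", "bob"]})
    joined = orders.merge(users, on="user_id")  # join runs in pandas' compiled internals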

If there is no library that solves your problem, that's a strong indication that you should avoid Python, unless you are willing to spend five man-years writing a C or C++ library with good Python interop.

  • People generally aren’t rolling their own matmuls or joins or whatever in production code. There are tons of tools like Numba, JAX, Triton, etc. that you can use to write very fast code for new, novel, and unsolved problems (a Numba sketch follows this exchange). The idea that “if you need fast code, don’t write Python” has been totally obsolete for over a decade.

    • Yes, that's what I said.

      If you are writing performance-sensitive code that is not covered by a popular Python library, don't do it unless you are a megacorp that can dedicate a team to writing and maintaining a library.

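    As a rough sketch of the Numba point above (the kernel below is a made-up example, not anyone's production code), a plain Python loop decorated with @njit gets compiled to machine code:

        # A custom kernel with no ready-made library call: O(n^2) nearest-pair
        # distance. Numba's @njit compiles the loops, so they run at native speed.
        import numpy as np
        from numba import njit

        @njit
        def pairwise_min_dist(points):
            best = np.inf
            n = points.shape[0]
            for i in range(n):
                for j in range(i + 1, n):
                    d = 0.0
                    for k in range(points.shape[1]):
                        diff = points[i, k] - points[j, k]
                        d += diff * diff
                    if d < best:
                        best = d
            return np.sqrt(best)

        print(pairwise_min_dist(np.random.rand(1000, 3)))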

From the complete opposite side, I've built some tiny bits of near-irrelevant code where Python has been unacceptable, e.g. in shell startup or in bash's PROMPT_COMMAND. It ends up having a very painfully obvious startup time, even when the code is barely more than Hello World:

    time python -I -c 'print("Hello World")'
    real    0m0.014s
    time bash --noprofile -c 'echo "Hello World"'
    real    0m0.001s

  • Why exactly do you need 1ms instead of 14ms of startup time in a shell startup? The difference is barely perceptible.

    Most of that startup time is spent searching the filesystem for thousands of packages.

    • > What exactly do you need 1ms instead of 14ms startup time in a shell startup?

      I think it's as they said: when dynamically building a shell input prompt, it starts to become very noticeable if you have three or more of these and you use the terminal a lot.
