Comment by zahlman

11 hours ago

> the selling point with the most traction is that you don't already need a working python install to get UV. And once you have UV, you can just go!

I still genuinely do not understand why this is a serious selling point. Linux systems commonly already provide (and heavily depend upon) a Python distribution which is perfectly suitable for creating virtual environments, and Python on Windows is provided by a traditional installer following the usual idioms for Windows end users. (To install uv on Windows I would be expected to use the PowerShell equivalent of a curl | sh trick; many people trying to learn to use Python on Windows have to be taught what cmd.exe is, never mind PowerShell.) If anything, new Python-on-Windows users are getting tripped up by the moving target of attempts to make it even easier (in part because of things Microsoft messed up when trying to coordinate with the CPython team; see for example https://stackoverflow.com/questions/58754860/cmd-opens-windo... when it originally happened in Python 3.7).
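
Concretely, the installers in question are one-liners copied from uv's docs (exact flags may have drifted since):

    # Linux/macOS: the literal curl | sh trick
    curl -LsSf https://astral.sh/uv/install.sh | sh

    # Windows: the PowerShell equivalent, run from a shell many learners have never seen
    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"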

> If I had a dollar for every time I've helped somebody untangle the mess of python environment libraries created by an undocumented mix of python delivered through the distributions package management versus native pip versus manually installed...

Sure, but that has everything to do with not understanding (or caring about) virtual environments (which are fundamental, and used by uv under the hood because there is really no viable alternative), and nothing to do with getting Python in the first place. I also don't know what you mean about "native pip" here; it seems like you're conflating the Python installation process with the package installation process.
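
To make "fundamental" concrete: the stock tooling already covers this workflow with nothing but a Python install (a minimal sketch; the package name is just an example):

    # create, activate, and use a virtual environment with stock Python
    python3 -m venv .venv
    . .venv/bin/activate              # on Windows: .venv\Scripts\activate
    python -m pip install requests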

Linux systems commonly already provide an outdated system Python you don’t want to use, and it can’t be used to create a venv of the newer version you do want. A single Python version for the entire system fundamentally doesn’t work for many people, thanks to the shitty compat story across the vast ecosystem.

Even languages with a great compat story are moving to support multiple toolchains natively. For instance, Go 1.22 on Ubuntu 24.04 LTS is outdated, but it will automatically download the 1.25 toolchain when it sees `go 1.25.0` in go.mod.
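
A rough sketch of what that looks like (module path is made up; relies on the default GOTOOLCHAIN=auto behavior available since Go 1.21):

    $ cat go.mod
    module example.com/demo

    go 1.25.0
    $ go build    # Go 1.22 fetches and runs the go1.25.0 toolchain automatically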

  • > Linux systems commonly already provide an outdated system Python you don’t want to use

    They can be a bit long in the tooth, yes, but from past experience another Python version I don't want to use is anything ending in .0, so I can cope with them being a little older.

    That's in quite a bit of contrast to something like Go, where I will happily update on the day a new version comes out. Some care is still needed, since they allow security fixes in particular to be breaking, but at least those tend to be deliberate changes.

  • > Linux systems commonly already provide an outdated system Python you don’t want to use

    Even if LTS Ubuntu is updated only at its EOL, its bundled Python will not itself be EOL most of the time.

    > A single Python version for the entire system fundamentally doesn’t work for many people thanks to shitty compat story in the vast ecosystem.

    My experience has been radically different. Everyone is trying their hardest to provide wheels for a wide range of platforms, and all the most popular projects succeed. Try adding `--only-binary=:all:` to your pip invocations and let me know the next time that actually causes a failure.
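
    That is (package names are just examples):

        # refuse source builds entirely; error out instead of compiling an sdist
        pip install --only-binary=:all: numpy pandas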

    Besides which, I was very specifically talking about the user story for people who are just learning to program and will use Python for it. Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.
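
    For anyone who hasn't tried it, the dance really is just this (prefix is a matter of taste):

        # from an unpacked CPython source tarball
        ./configure --prefix="$HOME/.local"
        make -j"$(nproc)"
        make install    # or `make altinstall` to avoid shadowing the system python3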

    • For much of the ML/scientific ecosystem, you're lucky to get all your deps working with the latest minor version of Python six months to a year after its release. Random ML projects with hundreds to thousands of stars on GitHub may only work with a specific, rather ancient version of Python.

      > Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.

      I compiled the latest GCC many times with the standard configure / make / make install dance when I was just starting to learn the *nix command line. I even compiled gmp, mpfr, etc. many times. It Just Works. Do you compile your GCC every time before you compile your Python? Why not? It Just Works.

    • Sure. You do a source install every time you require a python version newer than the system python.

      I'll be using uv for that though, as I'll be using it for its superior package management anyway.
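
      That is, something like (version number just an example):

          uv python install 3.13    # fetch a standalone CPython build
          uv venv --python 3.13     # create a venv that uses it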

  • Why not just use a Python container rather than rely on having the latest binary installed on the system? Then venv inside the container. That would get you the “venv of a version” that you are referring to.
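
    Something along these lines, presumably (image tag is just an example):

        docker run --rm -it -v "$PWD":/app -w /app python:3.12-slim bash
        # then, inside the container:
        python -m venv .venv && . .venv/bin/activate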

    • Our firm uses python extensively, and a virtual environment for every script or service is ... difficult. We have dozens of python scripts running for team research and in production, from small maintenance tools to rather complex daemons. Add to that the hundreds of Jupyter notebooks used by various people. Some have a handful of dependencies, some dozens. While most of those scripts/notebooks are only used by a handful of people, many are used company-wide.

      Further, we have a rather largish set of internal libraries most of our python programs rely on. And some of those rely on external 3rd-party APIs (often REST). When we find a bug or something changes, more often than not we want to roll out the changed internal lib so that all programs that use it get the fix. Having to get everyone to rebuild and/or redeploy everything is a non-starter, as many of the people involved are not primarily software developers.

      We usually install into the system dirs and have a dependency problem maybe once a year. And it's usually trivially resolved (the biggest problem was with some google libs which had internally inconsistent dependencies at one point).

      I can understand encouraging the use of virtual environments, but this movement towards requiring them ignores what, I think, is a very common use case. In short, no one way is suitable for everyone.

    • It's more complex and heavier than using uv. I see docker/vm/vagrant/etc as something I reach for when the environment I want is too big, too fancy, or too nondeterministic to set up locally by hand; but the entire point is that "plain Python with some dependencies" really shouldn't qualify as any of these (just like the build environment for a random Rust library).

      Also, what do you do when you want to locally test your codebase across many Python versions? Do you keep track of several different containers? If you start writing some tool to wrap that, you're back at square one.
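
      For comparison, with uv that whole exercise is roughly a loop (assuming pytest as the test runner):

          for v in 3.10 3.11 3.12 3.13; do
              uv run --python "$v" pytest
          done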

    • 'we can't ship the Python version you want for your OS so we'll ship the whole OS' is a solution, but the 'we can't' part was already embarrassing in 2015.
