But I'm feeling the lock-in accumulate. Each project adds uv-specific configs, CI assumes uv behavior, team gets used to uv workflows. The GitHub Action is convenient, so we use it. The resolver is better, so we depend on it.
We've watched this movie before. Great developer tool becomes indispensable, then business realities hit. Even Google dropped "don't be evil." The enshittification pattern is well-documented: be good to users until they're locked in, then squeeze. Not saying Astral will - they seem genuinely focused on developer experience.
But that's what everyone says in the first few years.
What's your approach here? Are you building abstraction layers? Keeping alternative workflows tested? Just accepting that you'll deal with migration if/when needed?
I keep adopting uv because it's the right technical choice, but I'm uneasy about having no real fallback plan if things change direction in 2-3 years. The better the tool, the deeper the eventual lock-in.
A lot of people are calling for Python to just bless the tool officially, distribute it with Python, etc. (which is a little strange to me, given that uv is also promoted as a way to get Python!) — though the way people talk about uv makes it seem hard to get people to care even if that did happen.
Regardless, I feel strongly that everyone who cares about distributing their code and participating in the ecosystem should take the time to understand the underlying infrastructure of venvs, wheels/sdists, etc.
A big part of what uv has accomplished for Python is rapidly implementing new standards like PEP 723 (inline script metadata) and PEP 751 (lock files) — indeed, AFAICT they were well underway on the implementations while the standards were being finalized, and the Astral team have also been important figures in the discussions. Those standards will persist no matter what Astral decides to do as a company.
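For concreteness, PEP 723 lets a single-file script declare its own requirements in a specially formatted comment block, which a tool like `uv run script.py` reads before building a matching environment to execute the script in. A minimal sketch (the `rich` dependency is purely illustrative):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "rich",
# ]
# ///

# The block above is PEP 723 metadata: plain comments to the
# interpreter, but structured TOML to tools like `uv run`, which
# will resolve and install `rich` in a throwaway environment
# before executing the script.
MESSAGE = "hello from a PEP 723 script"
print(MESSAGE)
```

Because the metadata is only comments, the same file still runs under plain `python` whenever the declared dependencies happen to be available.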
And for what it's worth, pip actually is slowly, inexorably improving. It just isn't currently in a position to make a clean break from many of the things holding it back.
> What's your approach here? Are you building abstraction layers?
The opposite: I'm sticking with more focused tools (including the ones I'm making myself) and the UNIX philosophy.

PAPER is scoped to implement a lot of useful tools for managing packages and environments, but it's still low-level (and unlike Pip, I'm starting with an explicit API). It won't manage a project for you, won't install Python, will have no [tool] config in pyproject.toml... it's really intended as a user tool, with overlapping uses for developers.

On the other side, bbbb is meant to hook everything up so that you can run a build step (unlike Flit), choose which files are in the distribution, and have all the wheel book-keeping taken care of... but things like locating or invoking a compiler are out of scope.

A full dev toolchain would include both, plus some kind of Make-like system (I have a vague design for one), a build front-end (`build` works fine, except that it's hard-coded to use either pip or uv to make environments, so I might make a separate PAPER plugin...), an uploader (`twine` is fine, really!), and probably your own additional scripts according to personal preference.
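For what it's worth, the piece that makes these components swappable is the standard `[build-system]` table in pyproject.toml: any PEP 517 front-end (`build`, pip, uv) reads it the same way, so the backend can change without the rest of the toolchain caring. A sketch using flit_core as a stand-in backend:

```toml
[build-system]
# Any PEP 517 backend works here; front-ends like `build`, pip, and uv
# all consume this table identically, so swapping backends (flit_core,
# hatchling, setuptools, ...) doesn't disturb the rest of the pipeline.
requires = ["flit_core>=3.9"]
build-backend = "flit_core.buildapi"
```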
If you are playing around locally, and/or uploading to open-source hosts like GitHub or PyPI, then it's fine to use.
For production code that is critical, the entire "build" process needs to be pure local processing/copying - i.e. nothing downloaded from the internet, and anything that is compiled needs to be built with local tools that are already installed.
I.e. for Python project deployments, the right way of doing this with containers is to build a core image where you run all the install commands that download things from the internet, while the actual production build Dockerfile involves only copying files to the library locations, plus the final run command. You don't need to "build" packages with anything (unless, of course, your package contains C dependencies, which is a whole other can of worms).
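That split can be sketched as a multi-stage Dockerfile - image tags and paths here are illustrative, not prescriptive: the first stage is the only one allowed to touch the network, and the second stage is pure local copying.

```dockerfile
# Stage 1: the "core" image - the only stage that downloads anything.
FROM python:3.12-slim AS builder
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: the production image - only local copies and the run command.
FROM python:3.12-slim
COPY --from=builder /install /usr/local
COPY app/ /app/
CMD ["python", "/app/main.py"]
```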
I think the desire for something like uv comes from the fact that people just don't think about what is going on under the hood. It's somewhat understandable if you are a data scientist who just uses Python for math, but people in computer science/engineering really should be able to use Python as-is without any external tools.