
The "declaring script dependencies" thing is incredibly useful: https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...

  # /// script
  # dependencies = [
  #   "requests<3",
  #   "rich",
  # ]
  # ///
  import requests, rich
  # ... script goes here
Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies, magically installed into a temporary virtual environment without you having to think about them at all.

It's an implementation of Python PEP 723: https://peps.python.org/pep-0723/

Claude 4 actually knows about this trick, which means you can ask it to write you a Python script "with inline script dependencies" and it will do the right thing, e.g. https://claude.ai/share/1217b467-d273-40d0-9699-f6a38113f045 - the prompt there was:

  Write a Python script with inline script
  dependencies that uses httpx and click to
  download a large file and show a progress bar
Prior to Claude 4 I had a custom Claude project that included special instructions on how to do this, but that's not necessary any more: https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
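
For reference, this is roughly the kind of script that prompt produces (a hedged sketch, not Claude's exact output; the flag-free argument handling is illustrative):

  # /// script
  # dependencies = [
  #   "httpx",
  #   "click",
  # ]
  # ///
  import click
  import httpx

  @click.command()
  @click.argument("url")
  @click.argument("output", type=click.Path())
  def download(url, output):
      """Download URL to OUTPUT, showing a progress bar."""
      with httpx.stream("GET", url, follow_redirects=True) as response:
          response.raise_for_status()
          total = int(response.headers.get("Content-Length", 0))
          with open(output, "wb") as f, click.progressbar(length=total, label="Downloading") as bar:
              for chunk in response.iter_bytes():
                  f.write(chunk)
                  bar.update(len(chunk))

  if __name__ == "__main__":
      download()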


Shebang mode is also incredibly useful and allows direct execution like ./script.py

  #!/usr/bin/env -S uv run --script
  # /// script
  # dependencies = [
  #   "requests<3",
  #   "rich",
  # ]
  # ///
  import requests, rich
  # ... script goes here


I really love this and I've been using it a lot. The one thing I'm unsure about is the best way to get my LSP working with inline dependencies.

Usually, when I use uv along with a pyproject.toml, I'll activate the venv before starting neovim, and then my LSP (basedpyright) is aware of the dependencies and it all just works. But with inline dependencies, I'm not sure what the best way to do this is.

I usually end up just manually creating a venv with the dependencies so I can edit inside of it, but then I run the script using the shebang/inline dependencies when I'm done developing it.


Yeah, I don't think there is a neater way to do this right now. One thing that maybe saves some effort: "uv sync" will pay attention to the $VIRTUAL_ENV env var, but only if that var points to a venv that already exists. You can't point it at an empty dir and have it create the venv.

  # make a venv somehow, probably via the editor so it knows about the venv, saving a 3rd step to tell it which venv to use
  $ env VIRTUAL_ENV=.venv/ uv sync --script foo.py
But it's still janky; I'm not sure it saves much mental tax.


I'd encourage using --dry-run before running uv sync; by default it can be destructive.

  By default, an exact sync is performed: uv removes packages that are not declared as dependencies of the project. Use the `--inexact` flag to keep extraneous packages.


This was my contribution to their docs, happy to see people find it useful :)


`env -S` is not portable.


Should be portable as long as there are no quotes. It's basically a nop on macOS, since on macOS the shebang args are already split on spaces (even when quoted).

Edit: further reading <https://unix.stackexchange.com/a/605761/472781> and <https://unix.stackexchange.com/a/774145/472781> and also note that on older BSDs it also used to be like that


Not under OpenBSD.


Thanks for pointing it out.

Indeed, OpenBSD's and NetBSD's `env` do not support `-S`. DragonFly BSD's does (as expected).

Solaris, as pointed out by the first link, doesn't even support more than one argument in the shebang, so it's no surprise that its `env` doesn't support it either. Neither does illumos (though I'm not sure about its shebang handling).


It doesn't work with BusyBox's `env` either (e.g. on Alpine Linux).


That’s pretty wild. Thanks for sharing


PEP 723 is also supported by pipx and Hatch, even though uv is (deservedly) getting all the attention these days.

Others like pip-tools have support in the roadmap (https://github.com/jazzband/pip-tools/issues/2027)


I plan to support it in PAPER as well. A big part of the original rationale for PEP 723 (and 722) was to support people who want to distribute a single Python file to colleagues etc. without worrying about Python packaging (or making a "project"), and allow them to run it with the appropriate dependencies and their own local Python. So while I am trying to make a tool for users (that incidentally covers a portion of developer use cases) rather than developers, this definitely fits.


I thought that was a heart emoticon next to requests, for a second.


I mean, who doesn't love requests?


Well... maybe the people who have stopped working on it due to the mysterious disappearance of 30,000 fundraised dollars, selling paid support but "delegat[ing] the actual work to unpaid volunteers", and a pattern of other issues from other community members who have not spoken up about them.

https://vorpus.org/blog/why-im-not-collaborating-with-kennet...

https://news.ycombinator.com/item?id=19826680


> he said his original goal was just to raise $5k to buy a computer. Privately, I was skeptical that the $5k computer had anything to do with Requests. Requests is a small pure-Python library; if you want to work on it, then any cheap laptop is more than sufficient. $5k is the price of a beefy server or top-end gaming rig.

Kenneth Reitz has probably done more to enrich my life than most anyone else who builds things. I wouldn't begrudge him the idea of a nice workstation for his years of labour. Yeah, he's very imperfect, but the author has absolutely lost me


What has he built that you like using that much? Honest question, not being snarky.

I liked Requests way back when but prefer httpx or aiohttp. I liked pipenv for about a month when it first came out, but jumped ship pretty quickly. I'm not familiar with his other works.

I also wouldn't begrudge the guy a laptop, but I do get what the author was saying. His original fundraiser felt off, like, if you want a nice laptop, just say so, but don't create specious justifications for it.


> I liked Requests way back when but prefer httpx or aiohttp.

Those two tools are modeled after `requests`, so Reitz still has an influence in your life even if you don't use his implementation directly.


They’re pretty close, sure, but Requests itself basically models a web browser. The newer ones model newer browsers (eg with HTTP/2 and async).


Kenneth Reitz has been good and bad at times. And while things of his are genius level, he also has done asshole level things. But on the other hand, we get that with a lot of geniuses for some reason or another. Really smart people can be really dumb too.

It would be like saying, "Don't use Laplace transforms because he did some unsavory thing at some point in time."


Looking at this drama for a bit, I haven't seen anybody advocate for 'canceling' requests itself.

Maybe it's more like: Laplace created awesome things, but let's be fair and also put in his wikipedia page a bit about his political shenanigans.

A lot of so-called geniuses, especially the self-styled ones with some narcissistic traits, get away with being an asshole. Their admirers hold them to different norms than regular, boring people. I don't think that is fair or healthy for a community.


Back when the drama was fresh, there was a lot of talk about “canceling” Requests, but as often happens, everyone moved on to something else and it kinda got forgotten about with time.


It wasn't really requests that got cancelled, it was pipenv that got cancelled, about the time when Jacob Kaplan-Moss stated that poetry was good. And frankly it was Reitz's actions that directly caused pipenv to be cancelled.

I'm not defending his assholery here, but it's not uncommon in tech.

Take an asshole techie and notice they tend to have devoted fans. It's just possible that Kenneth Reitz didn't get his fan base up before he exposed his personality for who he truly was. Steve Jobs, Zuck, Bill Gates, Linus Torvalds, ... were all called assholes at some point or another. Geez and those people aren't even the worst these days.


It stopped accepting new features a decade ago, it doesn’t support HTTP/2, let alone HTTP/3, it doesn’t support async, and the maintainers ignored the latest security vulnerability for eight months.

It was good when it was new but it’s dangerously unmaintained today and nobody should be using it any more. Use niquests, httpx, or aiohttp. Niquests has a compatible API if you need a drop-in replacement.
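
For the drop-in case, the switch can be as small as an aliased import (a sketch; niquests advertises Requests API compatibility, but test your own code paths):

  # niquests mirrors the requests API, so an alias import
  # is often all the migration needed for simple scripts
  import niquests as requests

  r = requests.get("https://example.org")
  print(r.status_code)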


I've noticed that Claude Code prefers httpx because it's typed.


I really like httpx’s handling of clients/sessions; it’s super easy to throw one in a context manager and then get multiple hits to the same server all reusing one http2 connection.
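
Something like this (a minimal sketch; HTTP/2 needs the `httpx[http2]` extra, and the host/paths are placeholders):

  import httpx

  # One client = one connection pool: requests to the same host reuse
  # the underlying connection (HTTP/2 if the server negotiates it).
  with httpx.Client(http2=True, base_url="https://example.com") as client:
      for path in ["/a", "/b", "/c"]:
          r = client.get(path)
          print(path, r.status_code, r.http_version)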


Weird! I have the opposite: I want it to prefer httpx but it always gives me Requests when I forget to specify it.

LLMs are funny.


Me, because nearly every time I see it used, it’s for a trivial request or two that can easily be handled with stdlib.

If you need it, sure, it’s great, but let’s not encourage pulling in 3rd party libs unnecessarily.


Strong agree. Pip doesn't really need Requests' functionality as far as I can tell, but the vendored transitive dependencies represent a considerable fraction (something like a quarter IIRC) of Pip's bulk. And a big fraction of that bulk (and it's much the same for Rich) gets imported at startup, even if ultimately it turns out that no web requests need to be made. Which in turn is the main reason why Pip on my machine takes longer than the https://lawsofux.com/doherty-threshold/ to process `pip install` with no actual package specified. (A process that involves importing more than five hundred Python modules, of which almost a hundred are Requests and its dependencies.)

(Of the non-Requests imports, about two thirds of them occur before Pip even considers what's on the command line — which means that they will be repeated when you use the `--python` option. Of course, Requests isn't to blame for that, but it drives home the point about keeping dependencies under control.)


This is cool!

This gave me the questionable idea of doing the same sort of thing for Go: https://github.com/imjasonh/gos

(Not necessarily endorsing, I was just curious to see how it would go, and it worked out okay!)


That's fun. I gave this a try myself. I took a different approach to the solution and used a bash script.

https://gist.github.com/JetSetIlly/97846331a8666e950fc33af9c...


My question is: if you already have to put the "import" there, why not have a PEP to specify the version in that same import statement, and let `uv` scan that dependency? It's something I never understood.


This is addressed in detail in the relevant ecosystem-wide standard: https://peps.python.org/pep-0723/#why-not-infer-the-requirem...

My own take:

PEP 723 had a deliberate goal of not making extra work for core devs, and not requiring tools to parse Python code. Note that if you put inline comments on the import lines, that wouldn't affect core devs, but would still complicate matters for tools. Python's import statement is just another statement that occurs at runtime and can appear anywhere in the code.

Besides that, trying to associate version numbers with the imported module is a complete non-starter. Version numbers belong to the distributions[1], which may define zero or more top-level packages. The distribution name may be completely independent from the names of what is imported, and need not even be a valid identifier (there is a standard normalization process, but that still allows your distribution's name to start with a digit, for example).

[1]: You say "packages", but I'm trying to avoid the unfortunate overloading of the term. PyPA recommends (https://packaging.python.org/en/latest/discussions/distribut...) "distribution package" for what you download from PyPI, and "import package" for what you import in the code; I think this is a bit unwieldy.
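
A concrete example of that mismatch: Pillow is the distribution you declare, PIL is what you import:

  # /// script
  # dependencies = [
  #   "Pillow",           # name you install from PyPI
  # ]
  # ///
  from PIL import Image  # name you actually import

  print(Image.__name__)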


They don't want tools like uv to have to parse Python, which is complex and constantly changes.

See also https://peps.python.org/pep-0723/#why-not-infer-the-requirem...


Ahh thanks for that! Makes sense, since you could also have an import somewhere down the script (although that would be bad design).

Another point is the ambiguous naming when several packages do roughly the same thing... which is crucial here. Thank you! :)


> without you having to think about them at all.

Hahahahaha.

Oh. I'm rolling on the floor. Hahahahaha.

How do you never learn? No, honestly, how do you never learn this simple thing: it will break! I'll bet my pension that it will break, and perhaps not for you, but some hundreds of developers will have to debug it and figure out where the dependencies went and why they weren't installed correctly, or why something was missing, and so on.

There will never be a situation that you don't have to think about something as important as dependencies at all.


Have you used uv run?


Out of curiosity, yes, I did. But I don't have any use for this tool. I can do better without it. I use these tools because I'm often requested to help those who use them and in so doing make a mess out of their working environment to the point that they can no longer do any work. So, I want to be up to date on latest threats. But as for my personal work with Python, there's simply no use-case where I'd need a tool like uv. It serves no purpose.


A lot of people disagree with you there.

I've found it extremely useful because it makes it trivial for me to try out new dependency versions without thinking about which environment should install them in first.

When I'm teaching people Python the earliest sticking point is always "now activate your virtual environment" - being able to avoid that is huge.


> Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies,

Be aware that uv will create a full copy of that environment for each script by default. Depending on your number of scripts, this could become wasteful really fast. There is a flag "--link-mode symlink" which will link the dependencies from the cache. I'm not sure why this isn't the default, or what disadvantages it has, but so far it's working fine for me, and it has saved me several gigabytes of storage.


>There is a flag "--link-mode symlink" which will link the dependencies from the cache. I'm not sure why this isn't the default

The hard-link strategy still saves disk space — although not as much (see other replies and discussion).

Possible reasons not to symlink that I can think of:

* stack traces in uncaught exceptions might be wonky (I haven't tried it)

* it presumably adds a tiny hit to imports from Python, but I find it hard to imagine anyone caring

* with the hard-linking strategy, you can purge the cache (perhaps accidentally?) without affecting existing environments


By default it will create hard links for Python packages, so it won't consume any more disk space (besides the small overhead of the hard links).
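
One way to see this for yourself: hard-linked files report a link count greater than one (a quick sketch; assumes requests is installed in the environment the script runs in):

  import os
  import requests  # any package uv installed into this environment

  st = os.stat(requests.__file__)
  # With uv's default hard-link mode this is usually > 1, because the file
  # data is shared with uv's cache; a plain copy would report 1.
  print(requests.__file__, "links:", st.st_nlink)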


As far as I can tell, this only applies to the wheel contents, not to the .pyc bytecode cache created by the interpreter. If you use the defaults, Python will just create per-environment copies on demand; if you precompile with `--compile-bytecode`, uv will put the results directly in the installed copy rather than caching them and hard-linking from there.

I plan on offering this kind of cached precompiled bytecode in PAPER, but I know that compiled bytecode includes absolute paths (used for displaying stack traces) that will presumably then refer to the cached copies. I'll want to test as best I can that this doesn't break anything more subtle.


Unless it can't because you happen to have mounted your user cache directory from a different volume in an attempt to debloat your hourly backups.


In that case you use copy mode. Or, if you really care about disk usage, what you can also do is use symbolic links between the drives: have a .venv symlink on drive A (RAID 1) point to the uv cache dir's venv on drive B (RAID 0). I haven't tested what happens when you unmount and sync, though.


... your hourly backups aren't also using the same hard-linking strategy?


They’re ZFS snapshots, so no. That’s why I’m forced to keep my cache directory in a different dataset.

Ironically, if my backup scheme were using hard links, then I could simply exclude the cache directory from backup, so I’d have no reason to do that mountpoint spiel, and uv’s hard links would work normally.


... ZFS snapshots don't produce copies of the underlying file data, do they? (Yeah, I'm still a dinosaur on ext4...)


Nothing wrong with ext4! Having choices and preferences is a good thing.

You’re correct, ZFS snapshots don’t produce copies, at least not at the time they’re being created. They work a little like copy-on-write.


This is cool, but honestly I wish it were built-in language syntax, not a magic comment; magic comments are kind of ugly. Maybe some day…

(I realise there are some architectural issues with making it built-in syntax: magic comments are easier for external tools to parse, whereas the Python core has very limited knowledge of packaging and dependencies… still, one of these days…)


It IS built-in language syntax. It's defined in the PEP, that's built-in. It's syntax:

"Any Python script may have top-level comment blocks that MUST start with the line # /// TYPE where TYPE determines how to process the content. That is: a single #, followed by a single space, followed by three forward slashes, followed by a single space, followed by the type of metadata. Block MUST end with the line # ///. That is: a single #, followed by a single space, followed by three forward slashes. The TYPE MUST only consist of ASCII letters, numbers and hyphens."

That's the syntax.

Built-in language syntax.


Suppose I have such a begin block without the correct corresponding end block - will Python itself give me a syntax error, or will it just ignore it?

It might be “built-in syntax” from a specification viewpoint, but does CPython itself know anything about it? Does CPython’s parser reject scripts which violate this specification?

And even if CPython does know about it (or comes to do so in the future), the fact that it looks like a comment makes its status as “built-in syntax” non-obvious to the uninitiated.


Semantic whitespace was bad enough, now we have semantic comment blocks?

I'm mostly joking, but normally when people say language syntax they mean something outside a comment block.


Shebang lines are semantic comments too


Not necessarily: while many languages which accept shebangs use hash as a line-comment introducer, others don't, but will nonetheless accept a shebang at the start of a file only. E.g. some Lisp interpreters will ignore the first line of a source file if it starts with #!, but otherwise don't accept # as comment syntax.


I am - while very much valuing uv - also with the part of the audience wondering/wishing this capability were a language feature.


The PEP explains why it isn't part of the regular python syntax.

uv and other tools would be forced to implement a full Python parser, and they would need to update it whenever the language changes.

This approach doesn't have that problem.

Making it a "language feature" has no upside and lots of downside. As the PEP explains.


Beyond that, the needed name to download from PyPI doesn't necessarily have anything at all to do with the name used for an `import` statement. And a given PyPI download may satisfy multiple `import` statements. And it can even be possible to require a download from PyPI that doesn't install any importable code (it may legally be a meta-package that runs one-shot configuration code when "built from source").


> Beyond that, the needed name to download from PyPI doesn't necessarily have anything at all to do with the name used for an `import` statement. And a given PyPI download may satisfy multiple `import` statements.

I think this is a design issue with PyPI though. It really should have some kind of index which goes from module names to packages which provide that module. (Maybe it already does but I don't know about it?)

Of course, that doesn't help if multiple packages provide the same module; but then if there was a process to reserve a module name – either one which no package is currently using, or if it is currently used only by a single package, give the owner of that package ownership of the module – and then the module name owner can bless a single package as the primary package for that module name.

Once that were done, it would be possible to implement the feature where "import X", if X can't be found locally, finds the primary package on PyPI which provides module X, installs it into the current virtualenv, and then loads it.

Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it. And CPython might say the virtualenv configuration needs to nominate an external package installer (pip, pipx, poetry, uv, whatever) so CPython knows what to do in this case.

You could even build this feature initially as an extension installed from PyPI, and then later move it into the CPython core via a PEP. Just in the case of an extension, you couldn't use the "from __future__" syntax.

> it may legally be a meta-package that runs one-shot configuration code when "built from source"

True, but if Python were to provide this auto-install via "import X" feature, packages of that nature could be supported by including in them a dummy main module. All it would need would be an empty __init__.py. You could include some metadata in the __init__.py if you wished.

Once "import X" auto-install is supported, you could potentially extend the "import" syntax with metadata to specify you want to install a specific package (not the primary package for the module), and with specific versions. Maybe some syntax like:

    import foobarbaz ("foo-bar-baz>=3.0")
I doubt all this is going to happen any time soon, but maybe Python will eventually get there.
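
As a rough sketch of the "extension first" route: a last-resort meta path finder that tries to install a missing top-level module and retries the lookup. Everything here is hypothetical, and the naive "PyPI name == import name" assumption is exactly the gap discussed above:

  import importlib, importlib.abc, importlib.util
  import subprocess, sys

  class AutoInstallFinder(importlib.abc.MetaPathFinder):
      """Hypothetical sketch: if a top-level import fails, try pip-installing
      a distribution with the same name, then retry the lookup."""

      def __init__(self):
          self._in_progress = set()  # guard against re-entering ourselves

      def find_spec(self, name, path=None, target=None):
          if path is not None or name in self._in_progress:
              return None  # only handle top-level imports, once
          self._in_progress.add(name)
          try:
              # Naive assumption: the PyPI name matches the import name.
              subprocess.run([sys.executable, "-m", "pip", "install", name],
                             check=False)
              importlib.invalidate_caches()
              return importlib.util.find_spec(name)
          finally:
              self._in_progress.discard(name)

  # Appended last, so it only runs after the normal finders have failed.
  sys.meta_path.append(AutoInstallFinder())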


> I think this is a design issue with PyPI though. It really should have some kind of index which goes from module names to packages which provide that module. (Maybe it already does but I don't know about it?)

PyPI never really saw much "design" (although there is a GitHub project for the site: https://github.com/pypi/warehouse/ as well as for a mirror client: https://github.com/pypa/bandersnatch). But an established principle now is that anyone can upload a distribution with whatever name they want — first come, first serve by default. Further, nobody has to worry about what anyone else's existing software is in order to do this. Although there are restrictions to avoid typo-squatting or other social engineering attempts (and in the modern era, names of standard library modules are automatically blacklisted).

> Of course, that doesn't help if multiple packages provide the same module; but then if there was a process to reserve a module name – either one which no package is currently using, or if it is currently used only by a single package, give the owner of that package ownership of the module – and then the module name owner can bless a single package as the primary package for that module name.

These kinds of conflicts are actually by design. You're supposed to be able to have competing implementations of the same API.

> Obviously it shouldn't do this by default... maybe something like "from __future__ import auto_install" to enable it.

The language is not realistically going to change purely to support packaging. The time to propose this was in 2006. (Did you know pip was first released before Python 3.0?)

> but maybe Python will eventually get there.

That would require the relevant people to agree with heading in that direction. IMX, they have many reasons they don't want to.

Anyway, this isn't the place to pitch such ideas. It would be better to try the Ideas and/or Packaging forums on https://discuss.python.org — but be prepared for them to tell you the same things.


Perfect sense.


Funny how all this time, the only commonly-used import syntax like this was in HTML+JS with the <script> tag.


I'm not a Python dev, but had to write a script the other day and got all caught up in the virtual env stuff. Why can't `uv` just infer the dependencies from the `import ...` lines? Why declare the dependencies twice?


Python import names are not necessarily unique, nor necessarily the same as the package name on PyPI. Something like PyYAML is imported as yaml, but other packages could potentially supply a slightly different yaml to import.


I hope it's in the same format as requirements files, because then one could put a one-liner in a comment next to it for people who don't have uv.

Something like `pip install -r <(head myscript.py)`. (Not exactly, but you get the idea).
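
Something like that is possible today: the block is TOML inside comments, so a small helper can pull the dependency list out for plain pip (a sketch; needs Python 3.11+ for tomllib, and the regex is simplified compared to the one given in PEP 723):

  import re, sys, tomllib

  def script_dependencies(path):
      """Return the PEP 723 dependency list declared in a script."""
      text = open(path, encoding="utf-8").read()
      m = re.search(r"(?ms)^# /// script$(.*?)^# ///$", text)
      if not m:
          return []
      # Strip the leading "# " from each comment line, then parse as TOML.
      toml = "\n".join(line[2:] for line in m.group(1).strip().splitlines())
      return tomllib.loads(toml).get("dependencies", [])

  if __name__ == "__main__":
      print("\n".join(script_dependencies(sys.argv[1])))

With that in place of `head`, a bash process substitution like `pip install -r <(python script_deps.py myscript.py)` (hypothetical filename) does what the parent describes.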


Most _current_ LLMs know about PEP 723, but you have to say "PEP 723" for them to do it properly.


uv is so cool. haha. goodbye conda.


conda is a complete no-go for me. It's for data scientists, and only data scientists who don't need to care much about production.


Sadly this isn't always true: if you have non-pure-Python dependencies you might still need conda, since it packages database drivers or complex C/C++ dependencies, for example.


SciPy already provides pip-installable wheels for a wide variety of platforms: https://pypi.org/project/scipy/#files



