
Some helpful guidelines, but it's 2025 and people still use time.time and report no statistics with their benchmarks :(

In general I feel like these kinds of benchmarks might change with each Python version, so some caveats apply.


Perhaps you could suggest what should be used instead of time.time?


https://switowski.com/blog/how-to-benchmark-python-code/ has a decent overview of some benchmarking libraries.
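
For quick one-offs the standard library already gets you most of the way: timeit uses time.perf_counter (monotonic and high-resolution, unlike time.time) and disables GC while timing, and repeating the measurement gives you enough samples for basic statistics. A minimal sketch (the bench helper is just illustrative):

  import statistics
  import timeit

  def bench(fn, repeat=10, number=1000):
      # Each run times `number` calls; repeating gives a sample
      # of per-call timings to compute statistics over.
      runs = timeit.repeat(fn, repeat=repeat, number=number)
      per_call = [t / number for t in runs]
      return statistics.mean(per_call), statistics.stdev(per_call)

  mean, stdev = bench(lambda: sum(range(1000)))
  print(f"{mean * 1e6:.2f} µs ± {stdev * 1e6:.2f} µs per call")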


Yes, the Pocket Operator is a great gift for 8-9+ year olds if you're not as talented as OP :D



Not the OP, but does this actually package CUDA and the CUDA toolchain itself, or just the libraries around it? And does it work only with PyTorch, or with "any" other library?

The Conda packaging system and its registry are capable of understanding things like ABI and binary compatibility. Conda can resolve not only Python dependencies but binary dependencies too. Think dnf, yum, or apt, but OS-agnostic, including Windows.

As far as I know (apart from blindly bundling wheels), neither PyPI nor the Python packaging tools have any knowledge of ABIs or of pure C/C++/Rust binary dependencies.

You can even use Conda just to get OS-agnostic C compiler toolchains, with no Python or anything. I actually use Pixi for shipping an OS-agnostic libprotobuf for my Rust programs (sketch of the setup below). It is better than containers, since you can interact directly with the OS, such as the Windows GUI and device drivers, or Linux compositors. Conda binaries are native binaries.

Until PyPI and setuptools understand these binary intricacies, I don't think they will be able to fully replace Conda. That may mean they need an epoch and an API break in their packaging format and registry.

uv, Poetry, etc. can be very useful when the binary dependencies are shallow and don't integrate deeply, or when you're simply happy living behind the Linux kernel in a container and distro binaries fulfill your needs.

When you need complex hierarchies of package versions, where half of them are not compiled against your current base image and you need to bootstrap half a distro (on all OS kernels, too!), Conda is a lifesaver. There is nothing like it.
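
To make the Pixi example concrete, a minimal pixi.toml sketch (the project name and version pins here are made up; adjust for your build):

  [project]
  name = "my-rust-service"
  channels = ["conda-forge"]
  platforms = ["linux-64", "osx-arm64", "win-64"]

  [dependencies]
  # libprotobuf comes from conda-forge as a native binary on every platform.
  libprotobuf = "4.25.*"
  rust = ">=1.75"

  [tasks]
  build = "cargo build --release"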


No, it’s PyTorch built against a particular version of CUDA. You need to install that on your system first.
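
You can see what a given wheel targets from Python itself, e.g.:

  import torch

  print(torch.__version__)          # e.g. "2.3.1+cu121": built against CUDA 12.1
  print(torch.version.cuda)         # CUDA version the wheel was compiled for
  print(torch.cuda.is_available())  # False if a matching driver/runtime is missing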


If I find myself reaching a point where I would need to deal with ABIs and binary compatibility, I pretty much stop there and ask "is my workload so important that I need to recompile half the world to support it?", and the answer (for me) is always no.


Well, handling OS-dependent binary dependencies is still unsolved because of the intricate behavior of native libraries, and especially because of how tightly C and C++ compilers integrate with their base operating systems. vcpkg, Conan, containers, Yocto, and Nix all target a limited slice of it, so there is no fully satisfactory solution. Pixi comes very close, though.

The Conda ecosystem is forced to solve this problem to a point, since ML libraries and their binary backends are terrible at keeping their binaries ABI-stable. Moreover, different GPUs have different capabilities and support different versions of GPGPU execution engines like CUDA. There is no easy way out without solving dependency hell.
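
For example, a wheel can import fine and still lack kernels for your GPU; a quick sketch of checking what you actually have:

  import torch

  if torch.cuda.is_available():
      # Compute capability determines which compiled kernels this GPU
      # can run, e.g. (8, 6) for an RTX 3090.
      print(torch.cuda.get_device_capability(0))
      print(torch.version.cuda)  # CUDA version this build targets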


If you’re writing code for an accelerator, surely you care enough to make sure you can properly target it?


What about Nix?


Doesn't work on Windows.

It is also quite complex and demands a huge investment of time to understand its language, which isn't so nice to program in.

The number of cached builds for the various ABI and dependency combinations is small with Nix. This means you need to compile a considerable number of dependencies from source. Conda, by contrast, generally ships every library built against the last three minor releases of Python.


Always wondered if this could also be expressed "simply" with the Reynolds number, to determine how to keep your flow laminar... But then again, how does one map software capabilities to SI units :D


Isn't this essentially the idea behind agile? I'm not too deep into agile theory, but The Phoenix Project is always a very good read (albeit stressful if you work in software teams lol)


The rise of the prompstitudes. I had a colleague (supposedly they weren't at the junior level) send me a ChatGPT response "guessing what your coworker was saying". I can definitely relate to the article's point about feeling violated lol.

My take is that these people are not necessarily incompetent, but just so addicted to the LLMs that they turn into those sad seniors at the casino endlessly pulling the slot machines.


I believe creating "spicy" content without the person's consent and charging money for it is more the issue here.


Would be interesting to see how, e.g., sentence-transformer models compare to this. My takeaway with the OpenAI embedding models was that they were better suited for larger chunks of text, so getting a high similarity for "god" and "dog" might indicate that a model isn't well suited for such small texts?

  from sentence_transformers import SentenceTransformer
  from sklearn.metrics.pairwise import cosine_similarity

  emb = SentenceTransformer("all-MiniLM-L6-v2")
  embeddings = emb.encode(["dog", "god"])
  print(cosine_similarity(embeddings))
  # array([[1.        , 0.41313702],
  #        [0.41313702, 1.0000004 ]], dtype=float32)


Unless that has changed, Anthropic's (and Gemini's) caches are opt-in, if I recall correctly; OpenAI automatically caches for you.
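
e.g. with the Anthropic SDK you mark the prefix you want cached explicitly. A sketch (the model name and LONG_SYSTEM_PROMPT are placeholders):

  import anthropic

  client = anthropic.Anthropic()
  response = client.messages.create(
      model="claude-3-5-sonnet-20241022",  # placeholder model name
      max_tokens=1024,
      system=[{
          "type": "text",
          "text": LONG_SYSTEM_PROMPT,  # placeholder: a large, reusable prefix
          "cache_control": {"type": "ephemeral"},  # opt in to caching this block
      }],
      messages=[{"role": "user", "content": "Summarize the document."}],
  )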


They've been pretty great about pushing for open standards. In the last article, their argument for providing these tools for free was along the lines of "a rising tide lifts all boats".

