rwaksmunski's comments | Hacker News

jemalloc 5.2.1 vs mimalloc v3.2.8 in Rust software processing hundreds of terabytes. Could not measure a meaningful difference, but mimalloc would release freed memory to the OS a lot sooner and therefore look nicer in top. That said, an older mimalloc from the default Rust crate would cause memory corruption with large allocations (>2 GB) in about 5% of cases. Stuck with battle-hardened jemalloc for now.
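For context, swapping the global allocator in Rust is only a couple of lines with the respective crates. A minimal sketch, assuming a Cargo dependency on the `mimalloc` crate (the jemalloc version via `tikv-jemallocator` looks analogous):

```rust
// Route every heap allocation in the binary through mimalloc instead
// of the system allocator (requires the `mimalloc` crate in Cargo.toml).
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;
```

Because the allocator is a one-line swap, benchmarking jemalloc against mimalloc on the same workload, as the comment describes, is cheap to do.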

Mimalloc, my beloved. That jemalloc is this fiendishly complex allocator with a gazillion algorithms and approaches (and a huge binary), while mimalloc is a simple allocator with one bitmap-tracked pool per allocation size and one pool collection per thread, makes mimalloc one of the bigger wins in software simplicity in recent memory.

AI seems to work a lot better once you acquire some AI equity; you go from it not working at all to AI writing all the code. /s


Every Rust SIMD article should mention the .chunks_exact() auto vectorization trick by law.
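For reference, the trick looks roughly like this. A sketch, not from the comment: the chunk width of 8, the `f32` sum, and the function name are all illustrative choices.

```rust
// chunks_exact(8) hands the compiler fixed-size slices, so the inner
// loop needs no bounds checks and can be auto-vectorized. The
// remainder() slice covers the leftover elements (< 8).
fn simd_friendly_sum(data: &[f32]) -> f32 {
    let mut acc = [0.0f32; 8]; // 8 independent accumulator lanes
    let chunks = data.chunks_exact(8);
    let tail = chunks.remainder();
    for chunk in chunks {
        for i in 0..8 {
            acc[i] += chunk[i];
        }
    }
    acc.iter().sum::<f32>() + tail.iter().sum::<f32>()
}
```

The separate accumulator lanes also break the floating-point dependency chain, which is what lets the compiler keep the lanes in one SIMD register.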


Didn't know about this. Thanks!

Not related, but I often want to see the next or previous element when I'm iterating. When that happens, I always have to switch to an index-based loop. Is there a function that returns Iter<Item=(T, Option<T>)> where the second element is a lookahead?


You probably just want to use `.peekable()`: https://doc.rust-lang.org/stable/std/iter/trait.Iterator.htm...
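A minimal sketch of the lookahead pattern with `peekable` (the function name and `i32` element type are made up for illustration):

```rust
// Pair each element with an Option of its successor, using
// Peekable::peek() to look ahead without consuming the iterator.
fn pairs_with_lookahead(data: &[i32]) -> Vec<(i32, Option<i32>)> {
    let mut out = Vec::new();
    let mut it = data.iter().peekable();
    while let Some(&x) = it.next() {
        // peek() returns Option<&&i32>; copy it out for the tuple
        out.push((x, it.peek().map(|&&n| n)));
    }
    out
}
```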


Note that `peekable` will most likely break autovectorization.


I use the slice::windows for that.
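For example (a sketch; names are illustrative):

```rust
// windows(2) yields overlapping pairs, so each iteration sees the
// current element together with its successor. Note the last element
// only ever appears as a successor, never as the first of a pair.
fn successor_pairs(v: &[i32]) -> Vec<(i32, i32)> {
    v.windows(2).map(|w| (w[0], w[1])).collect()
}
```

Unlike `peekable`, this only works on slices, but it keeps the loop body branch-free.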


Early in 1999 my first build was a Celeron 300A on an Asus P2B-LS overclocked to 450 MHz. Later upgraded to 1.4 GHz and 512 MB of ECC RAM. Much later it ran FreeBSD as a home server, probably till 2015, when the power supply finally gave out; I wish I had kept it. Got me through the capacitor plague and was competitive with the Pentium 4 for a while. Absurdly stable and quite snappy with a 10k SCSI system drive. Would love to install Windows 98SE on it again and play some Unreal Tournament.


It's decent at explaining my code back to me, so I can make sure my intent is visible within code/comments/tracing messages. Not too bad at writing test cases either. I still write my code.


Are you saying that you literally write the features by yourself and that you only use LLMs to understand old code and write tests?

Or a more meta point that “LLMs are capable of a lot”?


I'm saying it's not good enough to write code yet, but good at explaining code. If it can't, that means I messed up and I need to make the code clearer. Once its explanation and my meaning line up, the code is good enough and I move on. The meta point is that people are using AI for the wrong thing. It's great at consuming, not creating.


Been doing Rust lambdas for 4 years now. Rust is absurdly fast, especially when compared to non-compiled languages. If anything, Rust is even faster in real-world workloads than in those benchmarks.


I got over 20 years of sane, reliable and consistent computing from the FreeBSD Project, thank you.


FreeBSD on bare metal hooked up to a nice network.


Tier 3 is the max official support level.


I prefer developing Rust on FreeBSD; the memory allocator, scheduler, network stack, kqueue, dtrace and other instrumentation are all superior to their Linux counterparts. What's missing for you?

