Hacker News | dalvrosa's comments

Fair, but agentic tooling can benefit quite a lot from this

Opencode, ClaudeCode, etc., feel slow. Whatever makes them faster is a win :)


The 2ms it takes to run jq versus the 0.2ms to run an alternative is not why your coding agent feels slow.

Still, jq is run a whole lot more than it used to be due to coding agents, so every bit helps.

The vast majority of Linux kernel performance improvement patches probably have way less of a real world impact than this.


> The vast majority of Linux kernel performance improvement patches probably have way less of a real world impact than this.

Unlikely, given that the multiplier on every kernel improvement is far higher than the number of times jq gets run in some pipeline. Even a 0.1% kernel improvement probably has far, far higher real-world impact than this.


Jq is run a ton by AIs, and that is only increasing.

I can't take seriously any talk about performance if the tools are going to shell out. It's just not a bottleneck.

It's not running jq locally that's causing that

Lol. Funny story :)

I'm not sure about the single-core scenario, but would love to learn if someone else wants to add something

In reality, multiple threads on a single core don't make much sense, right?


> In reality, multiple threads on a single core don't make much sense, right?

Not necessarily, I think -- depends what you're doing.


Codeberg vs selfhosted Gitlab. What do you think?

I think the question is rather gitlab.com vs. self-hosted GitLab and Codeberg vs. self-hosted Forgejo.

Fair

For what it's worth, it's pretty easy to maintain a low-traffic GitLab instance.

Agreed. For benchmarking I used this <https://github.com/david-alvarez-rosa/CppPlayground/blob/mai...>, which relies on GoogleBenchmark and pins producer/consumer threads to dedicated CPU cores.

What else could be improved? Would like to learn :)

Maybe using huge pages?


Kernel tick rate is a pretty big one; most people don't bother and just use what their OS ships with.

Disabling c-states, pinning network interfaces to dedicated cores (and isolating your application from those cores), and `SCHED_FIFO` (chrt -f 99 <prog>) help a lot.

Transparent hugepages can increase latency without you being aware of when it happens; I usually disable them.

Idk, there's a bunch, but they all depend on your use case. For example, I always disable hyperthreading because I care more about latency than processing power, and I don't want it randomly stealing cache from my workload.. but some people have more I/O-bound workloads, and hyperthreading is just a strict improvement in those situations.


Thanks. Do you happen to know why hyperthreading should be disabled?

In prod most trading companies do disable it; not sure about best practices for generic benchmarks.


There are some microarchitectural resources that are either statically divided between running threads, or "cooperatively" fought over, and if you don't need to hide cache miss latency, which is the only thing hyperthreading is really good at, you're probably better off disabling the supernumerary threads.

Thanks for the explanation :)

It eliminates cache contention between siblings, which leads to increased latency (randomly)

Thanks!

Exactly, that's right

Thanks! That's not ensured; the optimizations are only valid because of the constraints:

- One single producer thread

- One single consumer thread

- Fixed buffer capacity

So to answer

> Are they ensuring two threads can't push to the same slot nor pop the same value from the ring?

No need for this use case :)


Thanks for the feedback <3

100% agree +1

Glad that it helps :)
