
Yes, but it initially led to some incorrect conclusions on my end about certain things being “slow”.

For example, because I was trying to use fine-grained timers for everything async, I thought the JSON parsing library we were using was a bottleneck, because I saw numbers like 30ms to parse a simple payload. I wasn’t measuring total throughput, I was measuring individual items for parts of the flow, and I incorrectly assumed those numbers applied to everything.

You just have to be a bit more careful than I was with timers. Either make sure your timer isn’t spanning any kind of yield point, or only use timers in a more “macro” sense (e.g. measure total throughput). Otherwise you risk misleading numbers and bad conclusions.
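
To make that concrete, here's roughly the failure mode sketched in F# (fetchRawJson and parseJson are made-up stand-ins; the point is just where the stopwatch starts relative to the yield point):

    open System.Diagnostics

    // Misleading: the stopwatch spans a yield point, so the elapsed time
    // includes however long the work item sat waiting to be scheduled,
    // not just the parse itself.
    let timedParse (fetchRawJson: unit -> Async<string>) (parseJson: string -> 'T) =
        async {
            let sw = Stopwatch.StartNew()
            let! raw = fetchRawJson ()   // yield point: scheduler wait gets counted
            let parsed = parseJson raw
            sw.Stop()
            printfn "parse took %dms (misleading)" sw.ElapsedMilliseconds
            return parsed
        }

    // Better: only start the timer around the synchronous work.
    let timedParseFixed (fetchRawJson: unit -> Async<string>) (parseJson: string -> 'T) =
        async {
            let! raw = fetchRawJson ()
            let sw = Stopwatch.StartNew()
            let parsed = parseJson raw   // no yield point inside the timed span
            sw.Stop()
            printfn "parse took %dms" sw.ElapsedMilliseconds
            return parsed
        }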



I would highly recommend using a specialized library like BenchmarkDotNet. It's most relevant for microbenchmarks, but it can be used for less micro benchmarks as well.

It will do things like force you to build in Release mode to avoid debug overhead, run warmup cycles, and take other measures to avoid the various pitfalls related to how .NET works (JIT compilation, runtime optimization, and so on), and it will output nicely formatted statistics at the end. Rolling your own benchmarks with simple timers can be very unreliable for many reasons.
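
As a rough sketch of what that looks like (F# here, with System.Text.Json standing in for whatever library you're actually measuring; BenchmarkDotNet will complain if you benchmark a debug build, so run it with dotnet run -c Release):

    open BenchmarkDotNet.Attributes
    open BenchmarkDotNet.Running
    open System.Text.Json

    [<MemoryDiagnoser>]
    type JsonParseBench() =
        let payload = """{"id": 1, "name": "example", "tags": ["a", "b"]}"""

        // BenchmarkDotNet handles warmup, iteration counts, and stats for you.
        [<Benchmark>]
        member _.ParseSmallPayload() =
            use doc = JsonDocument.Parse(payload)
            doc.RootElement.GetProperty("id").GetInt32()

    [<EntryPoint>]
    let main _ =
        BenchmarkRunner.Run<JsonParseBench>() |> ignore
        0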


Oh, no argument on this at all, though I haven't touched .NET in several years, since I no longer have a job doing F# (if anyone here is hiring for it, please contact me!).

Even so, I don't know that a benchmarking tool would be helpful in this particular case, at least at a micro level; I think you'd mostly be benchmarking the scheduler rather than your actual code. At a more macro scale, however, like benchmarking the processing of 10,000 items, it would probably still be useful.
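
For the macro case, something as simple as one stopwatch around the whole batch gets you a throughput number that isn't distorted by individual scheduler waits (processItem is a made-up stand-in):

    open System.Diagnostics

    // One timer around the whole batch: per-item scheduler waits average
    // out, and the items-per-second figure reflects real throughput.
    let measureThroughput (processItem: 'T -> Async<unit>) (items: 'T list) =
        async {
            let sw = Stopwatch.StartNew()
            for item in items do
                do! processItem item
            sw.Stop()
            let itemsPerSec = float items.Length / sw.Elapsed.TotalSeconds
            printfn "%d items in %O (%.0f items/s)" items.Length sw.Elapsed itemsPerSec
        }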



