Sure, yeah, but it's not like using printf is a problem for error messages or logging. Using write() for that kind of thing is more hassle than it's worth.
I can't count the number of times I've gotten 20-50x speedups just by turning off logging. Assuming printf is free (or even cheap) is the sort of thing that'll get you into trouble. It might not always matter, but when it does (say, your application is parsing format strings at high volume at runtime), it's going to hurt!
I rarely log messages anyway, preferring to set state that I can then print out on command (e.g. SIGINFO)
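Roughly like this, as a minimal sketch: POSIX assumed, SIGUSR1 standing in for the BSD-only SIGINFO, and the counters invented for illustration. The hot path only bumps a counter; the handler dumps state via write(), since printf isn't async-signal-safe:

    #include <signal.h>
    #include <unistd.h>

    /* Hypothetical counters bumped on the hot path instead of logging each event. */
    static volatile sig_atomic_t requests;
    static volatile sig_atomic_t errors;

    /* Format a non-negative long without stdio: printf isn't async-signal-safe. */
    static size_t fmt_long(char *buf, long v)
    {
        char tmp[20];
        size_t n = 0, i;
        do { tmp[n++] = '0' + (char)(v % 10); v /= 10; } while (v);
        for (i = 0; i < n; i++)
            buf[i] = tmp[n - 1 - i];
        return n;
    }

    static void dump_state(int sig)
    {
        (void)sig;
        char buf[64];
        size_t n = 0;
        static const char k1[] = "requests=", k2[] = " errors=";
        for (const char *p = k1; *p; p++) buf[n++] = *p;
        n += fmt_long(buf + n, (long)requests);
        for (const char *p = k2; *p; p++) buf[n++] = *p;
        n += fmt_long(buf + n, (long)errors);
        buf[n++] = '\n';
        write(STDERR_FILENO, buf, n);  /* write() is async-signal-safe */
    }

    int main(void)
    {
        signal(SIGUSR1, dump_state);   /* SIGUSR1 stands in for SIGINFO here */
        for (;;) {
            requests++;                /* hot path: just bump a counter */
            usleep(1000);
        }
    }

Then kill -USR1 <pid> prints the current state without the process ever paying for per-event logging.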
So did you just turn logging off, or did you in fact write a better custom implementation? And how much faster did it go? Did you actually measure that? What kind of system was that on?
My first thought is that there was probably an fsync() or similar after each logged message. Also, printf() and friends have to lock the stream, so there might be contention somewhere if you're multi-threaded. That's not a problem you can solve without some custom routing of log messages.
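If it is lock contention, the cheapest mitigation is taking the stream lock once per batch instead of once per call. A rough sketch of what I mean, assuming POSIX (flockfile and putc_unlocked); log_batch is a made-up name:

    #include <stdio.h>

    /* Take the stdio stream lock once for a whole batch of lines,
       instead of once per printf() call. */
    static void log_batch(FILE *out, const char *const lines[], size_t n)
    {
        flockfile(out);                        /* one lock acquisition */
        for (size_t i = 0; i < n; i++)
            for (const char *p = lines[i]; *p; p++)
                putc_unlocked((unsigned char)*p, out);  /* skips per-call locking */
        funlockfile(out);
    }

    int main(void)
    {
        const char *const lines[] = { "one\n", "two\n", "three\n" };
        log_batch(stderr, lines, 3);
        return 0;
    }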
Using printf, you can easily write dozens of megabytes per second of output, even through the most complicated formatting code, on contemporary systems (I'm just pulling a number out of thin air here). That's more than any system should want to log.
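Rather than trust my thin-air number, you could check it with a crude micro-benchmark along these lines (the format string and iteration count are arbitrary, and writing to /dev/null takes the filesystem out of the picture):

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        FILE *out = fopen("/dev/null", "w");   /* exclude disk/terminal cost */
        if (!out) return 1;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long bytes = 0;
        for (int i = 0; i < 1000000; i++)      /* arbitrary: one million lines */
            bytes += fprintf(out, "iter=%d value=%.6f name=%s\n",
                             i, i * 0.001, "some-label");
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        fprintf(stderr, "%.1f MB in %.3f s (%.1f MB/s)\n",
                bytes / 1e6, s, bytes / 1e6 / s);
        fclose(out);
        return 0;
    }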
People who deal in megabytes have different problems than people who deal in petabytes.
Usually I just turn it off, at least at first. Note I'm talking about diagnostics here; if I need the log messages, I obviously have to write a custom streaming system even when the system printf is fast enough, because the API complicates recovery.
Printf can also vary in performance by 10 ms or more based on internal state. That's not good enough, since my entire application has a hard limit of around 30 ms. I can't afford even one printf, not even one every N messages (for any N), because I'll never catch up.
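For what it's worth, the usual shape of that custom streaming system is: the hot path copies a small fixed-size record into a ring buffer and never formats anything, and a consumer thread does the printf-style formatting later, where a 10 ms hiccup doesn't matter. A minimal single-producer/single-consumer sketch with C11 atomics; all names and sizes are invented:

    #include <stdatomic.h>
    #include <stdio.h>

    struct record { int event; long a, b; };

    #define RING_SLOTS 4096                    /* power of two */
    static struct record ring[RING_SLOTS];
    static atomic_uint head, tail;             /* one producer, one consumer */

    /* Hot path: bounded, lock-free, allocation-free, no formatting. */
    static int log_event(int event, long a, long b)
    {
        unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
        unsigned t = atomic_load_explicit(&tail, memory_order_acquire);
        if (h - t == RING_SLOTS)
            return -1;                         /* full: drop rather than block */
        ring[h & (RING_SLOTS - 1)] = (struct record){ event, a, b };
        atomic_store_explicit(&head, h + 1, memory_order_release);
        return 0;
    }

    /* Consumer side: pop one record if available; the caller formats it. */
    static int next_event(struct record *out)
    {
        unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
        if (t == atomic_load_explicit(&head, memory_order_acquire))
            return 0;                          /* empty */
        *out = ring[t & (RING_SLOTS - 1)];
        atomic_store_explicit(&tail, t + 1, memory_order_release);
        return 1;
    }

    int main(void)
    {
        log_event(1, 42, 7);
        log_event(2, 43, 8);
        struct record r;
        while (next_event(&r))                 /* formatting off the hot path */
            printf("event=%d a=%ld b=%ld\n", r.event, r.a, r.b);
        return 0;
    }

The key property is that log_event() has a fixed worst case, so it can't blow the 30 ms budget; dropping records on overflow is the price.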
> People who deal in megabytes have different problems than people who deal in petabytes.
I just hope that this is not meant as an ad hominem. At the very least it's a bad reply, unless you are dealing with petabytes per second. (Btw, one of my current projects is a text editor that can handle gigabytes in memory; local operations take 5-50 microseconds. So I have reason to think I'm not entirely clueless.)
> I rarely log messages anyway, preferring to set state that I can then printout on command (e.g. siginfo)
That sounds much more reasonable to me given the volumes that you cite, and why shouldn't we use printf() for that?
Why would printf "vary in performance by 10 ms" or more, in ways that other formatting code wouldn't? For how much data is that? How many printf() calls?
Anyway, I shouldn't have gotten into the weeds here. The blanket statement that printf() is slow, and that an obscure API like sbrk() is therefore a better choice, is nonsensical in a guide that seemingly gives general advice on memory allocation.