How efficient can cat(1) be? (ariadne.space)
74 points by benhoyt on July 18, 2022 | 49 comments


While it's certainly just example code, the initial sendfile version is badly buggy.

    /* Fall back to traditional copy if the spliced version fails. */
    if (!spliced_copy(srcfd))
        copy(srcfd);
The thing that they are trying to avoid is sendfile failing due to inability to mmap the fd. But they don't check for that (it would return EINVAL), and in fact, by converting the error code to boolean, destroy the ability to differentiate[1]. Instead, they check that sendfile failed for any reason, and then redo it with copy.

Which means sendfile could output half the data, fail for some reason, and depending on why it failed, the fallback read/write copy will do bad things: for example, output the same data again, or more likely, skip data. Since it is just reading from the fd as it exists after sendfile failed, it is most likely to skip data but pretend it completed successfully.

Normally, cat would just fail in that situation, as this one should: it should not retry the copy unless sendfile fails with EINVAL or ENOSYS.

This is what you get for transforming error codes into booleans :)

(I guess errno will still be set, and they could still check it here, but ugh)

[1] This is why the man page says: Applications may wish to fall back to read(2)/write(2) in the case where sendfile() fails with EINVAL or ENOSYS.
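The man page's advice can be sketched roughly like this. This is a hypothetical illustration, not the article's code: `copy()` (the plain read/write fallback) and `copy_with_sendfile()` are made-up names, and the key point is that the fallback only happens for EINVAL/ENOSYS *before* any bytes have been sent, so no data is duplicated or skipped.

```c
#include <sys/sendfile.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

/* Plain read/write fallback (hypothetical helper name). */
static int copy(int srcfd, int dstfd)
{
    char buf[65536];
    ssize_t n;
    while ((n = read(srcfd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = write(dstfd, buf + off, n - off);
            if (w < 0)
                return -1;
            off += w;
        }
    }
    return n < 0 ? -1 : 0;
}

/* Fall back only when sendfile() is unusable (EINVAL/ENOSYS)
 * and nothing has been written yet, per the man page's advice. */
static int copy_with_sendfile(int srcfd, int dstfd, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = sendfile(dstfd, srcfd, NULL, len - total);
        if (n > 0) { total += (size_t)n; continue; }
        if (n == 0)
            break;                            /* unexpected EOF */
        if (errno == EINTR)
            continue;                         /* interrupted: retry */
        if ((errno == EINVAL || errno == ENOSYS) && total == 0)
            return copy(srcfd, dstfd);        /* safe: nothing sent yet */
        return -1;                            /* partial failure: don't retry */
    }
    return 0;
}
```

Because the errno and the "how much was sent" state are both checked, the half-written-output problem described above can't happen: a mid-stream failure is reported as an error instead of silently restarting the copy.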


Should sendfile(2) not report error if it managed to send a non-zero amount of data? IIRC write(2) behaves like that.


I honestly don't remember all of the write semantics - I thought it only skips reporting an error if it gets interrupted by a signal handler after it has written something. In the case of a true error, I thought it always returns an error.

write does have a weird error-checking semantic - you can get it to check for a bunch of errors without writing data by using a count of 0.

At least as documented, sendfile does not have any of these semantics - it only returns number of bytes written if the transfer was successful. Otherwise, you can't tell how far it made it :)


As documented, sendfile (I haven't looked at whatever this "spliced_copy" is) does what Joker_vD said it should: if it does a partial write then it is considered "successful" and the caller is required to retry the call if less than the expected amount of data was sent.


Where is this documented? man sendfile on my Linux box doesn't even contain the word "partial", nor does it say what you said in some other way that I can see.


From my experience, it will never be so efficient that someone smarter than me doesn't publicly shame me for winning the "useless use of cat" award on a forum where I ask for help.

28 years later and I'm still sore I asked for help as a 15-year-old that one time. Very effective way to teach a new user.


Another reason I still use "cat" is that I don't want the remaining commands to modify the input file, and I don't want to spend the time inspecting the command line to make sure that is the case.

For example I'm pretty sure "grep" won't change the input file but "sed" may depending on the "-i" flag. Using "cat" conveniently bypasses all that thinking because the program only gets stdin from a pipe.


You can replace `cat file | cmd` with just `cmd <file` though. `cat` is still useful if you need to concatenate multiple files, but when using it just to feed stdin, actual stdin redirection is better.

Even when trying to say something like `echo $(cat foo)`, in Bash you can write `echo $(<foo)`. Though other shells may still require `cat` in this scenario.


Or <file cmd, to preserve the order.


That works in Bash, it doesn't work in all other shells though (it doesn't work in Fish at least, I haven't checked others).


I think fish is the outlier here, not the norm. It’s specified in POSIX and most non-POSIX shells like zsh and rc support it.


that’s a lesson that’s no longer applicable on modern machines, I think. It often makes sense to start a sequence of piped commands with a cat invocation. One reason is that then the order of sources and sinks in the command syntax matches reality.


I always pipe from cat, as a matter of habit.

Reason being: I've probably just catted the file at least once, and it's even likely I catted it right before piping it. So I can extend that command from the history, and if I keep piping (likely) I don't have the weird syntactic stutter at the beginning where `blah < file.txt | next` puts things out of order.

Also, you can't mistype `cat file.txt | blah` and overwrite file.txt accidentally with the output of blah. That's ergonomic.


‘<file.txt blah’ works just fine.


I often use cat because when constructing a pipeline I'll do multiple rounds of:

    head filename | ...
Once it's working I find it easy to just type C-aM-dcat (i.e. beginning-of-line kill-word cat).


Thanks for the reminder (sincerely), but to me this is worse. I would use a shell which doesn't offer this particular form of irregular grammar; I hate it.

100% aesthetic, I shouldn't be able to point a file at my prompt and have it end up streamed into the next word. Just awful, 0/10, do not want.


From another perspective it’s more regular that position doesn’t matter. (This behavior works in all POSIX shells, zsh, rc, etc. But not fish :p)


It's preference, like Python list comprehensions vs Ruby method chain.


A bit of a tangent: there are few instances of "do-while" that I've ever written and not removed soon after. In practice, I've found that the looping situations that don't easily match the "for (int i = 0; i < n; i++)" pattern are normally "random" enough that it's best to just write "for (;;)" and put explicit checks and breaks inside the body, wherever they naturally fit. Forcing "for (...)" or "while (...)" or "do-while (...)" syntactic constructs is likely to lead to an unnatural sequence of statements. Doing break anywhere is just fine!

    do
    {
        splice(from stdin...);

        if (A)
            handle_a();
            goto out;

        splice(to stdout...);

        if (B)
            handle_b();
            goto out;

    } while (nwritten > 0);

   // do we need some kind of handle_c()??

    out:
        ...
Why make a special case for the "nwritten > 0" condition here? And what's wrong with "break" vs "goto"?

    for (;;)
    {
        thing_a();

        if (A)
            handle_a();
            break;

        thing_b();

        if (B)
            handle_b();
            break;

        if (C)
            handle_c();
            break;
    }
is cleaner in my eyes.


First of all: Your code is missing braces around the `if` blocks - the `goto out` would be run unconditionally.

But anyway, the case for `goto` here is that it jumps immediately to the cleanup that needs to happen always.

If you put something between the loop and that, `break` would jump before that and also execute that.

Yes, this is a workaround for C's lack of automatic cleanup (RAII, garbage collection, python's `with` or whatever).


> Your code is missing braces around the `if` blocks

Sure. It's obviously a sketch.

> If you put something between the loop and that, `break` would jump before that and also execute that.

Sure. But I rarely can see a need to make it so complicated (not in this case anyway). If you need multiple distinct cleanup sections that can't be inlined, then label them all (or put them in separate procedures) and jump to them explicitly. In my example, there is no need for any labels at all.

> this is a workaround for C's lack of automatic cleanup (RAII, garbage collection, python's `with` or whatever).

No need for workarounds here.


"do ... while" maps neatly into Assembly for me. I think it is meant as a bridge between low level thinking and structured programming.


Tangent: It frustrates me that it's apparently impossible to implement cat(1) in a truly portable way.

The problem is supporting unbuffered I/O (`cat -u`). Standard C simply can't do it. setvbuf(3) allows you to change the buffering on an I/O stream, but then fread(3) only allows you to read exact-sized blocks of data. You can only get a short read on EOF or error. So there is no way to say "give me as much data as is available, up to X amount of bytes" and therefore no way to implement unbuffered cat(1) efficiently using only ISO C. You need POSIX for that.
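The POSIX version the comment alludes to is just a read/write loop; the key is that POSIX read() returns a short read as soon as *any* data is available, which fread() cannot express. A minimal sketch (hypothetical helper name `cat_fd`, assuming POSIX only):

```c
#include <unistd.h>
#include <errno.h>

/* Minimal POSIX sketch of `cat -u`: read() may return fewer bytes
 * than requested whenever some data is available, so each chunk can
 * be written out immediately without stdio buffering. */
static int cat_fd(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n == 0)
            return 0;                      /* EOF */
        if (n < 0) {
            if (errno == EINTR)
                continue;                  /* interrupted: retry */
            return -1;
        }
        for (ssize_t off = 0; off < n; ) { /* write may be partial too */
            ssize_t w = write(STDOUT_FILENO, buf + off, n - off);
            if (w < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            off += w;
        }
    }
}
```

With only ISO C, the equivalent fread(buf, 1, sizeof buf, stream) blocks until the full count or EOF, which is exactly the behavior `-u` is supposed to avoid.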


This was a good read! The missing hyperlinks:

- https://man7.org/linux/man-pages/man2/sendfile.2.html

- https://man7.org/linux/man-pages/man2/splice.2.html

(Funny how used I've gotten to "hypertext", I was quite irritated I couldn't click those function names.)


Pinfo makes regular man pages navigable.


I'm having a great Linux day! TIL about 'splice' and 'pinfo'. Thanks!


Go subscribe to https://lwn.net/ and have these moments every day!


That is a heck of a sales pitch, my friend.


And makes info pages pleasant to view.



I remember enjoying reading the simple Plan 9 version of cat(1): <http://9p.io/sources/plan9/sys/src/cmd/cat.c>


Slightly out of topic:

> There have been a few initiatives in recent years to implement a new userspace base system for Linux distributions as an alternative to the GNU coreutils and BusyBox.

Have there been? And why?


I heard only about this one:

https://github.com/uutils/coreutils ("Cross-platform Rust rewrite of the GNU coreutils")


> Have there been? And why?

A few reasons why people might want to do this:

- Optimizing for small approachable codebase instead of featurefulness or performance (sbase)[1]

- Dissatisfaction with GPL (toybox)[2]

- Desire to replace C (described as an "unsafe" language) with Rust or Go (examples exist but I don't know of specific ones)

[1]: https://core.suckless.org/sbase/

[2]: https://landley.net/toybox/


Why indeed! Recent versions of GNU cat (and other coreutils) include those optimizations.

https://git.savannah.gnu.org/cgit/coreutils.git/tree/src/cat...


To be honest, GNU coreutils is not a good alternative to GNU coreutils. :-P


It would be nice if user programs didn't have to jump through hoops like this. It would be ideal if the kernel made the naive implementation work efficiently.


How would you suggest the kernel would accomplish that feat? If the user calls read() on a file descriptor, what can the kernel do except... you know... actually read from it and copy the data to user space?


It _could_ look at the instructions following the read call and do a sort of high-level software macro-op fusion (https://en.wikichip.org/wiki/macro-operation_fusion)

That already would be going beyond the duty of a JITing virtual machine, so I don’t think you can expect it from a CPU. It also would likely make lots of programmers uneasy if their OS did that sort of thing.

For a ‘real’ CPU, I also fear handling all the edge cases would be horrific (the program may do a read-write pair of calls, but how do you ascertain it doesn’t read that data later? What if the read call tries to read into unwritable memory? What if the program is being traced, and read calls are being logged? Etc)


I know it wouldn't be easy. But the goal could be to let users write simple programs that say what they mean. Since many programs do: while(!eof) { read(); write(); } that could be optimized similar to the way a compiler optimizes known idioms. Perhaps it could notice a well know sequence of calls and avoid userspace/kernels space copies. But, yeah, very non-trivial.


copy_file_range is much preferable to any of these because filesystems can "hook into" it and just share the underlying data, not copying at all.
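The usage pattern is a simple loop, since copy_file_range() can return short counts. This is a hedged sketch (hypothetical wrapper name `copy_range_loop`, assuming glibc 2.27+/Linux 4.5+); as the replies note, it only applies when both fds are regular files:

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

/* copy_file_range() lets the filesystem share the underlying extents
 * (e.g. reflinks on btrfs/XFS) instead of moving bytes through
 * userspace. Both fds must refer to regular files. */
static int copy_range_loop(int srcfd, int dstfd, size_t len)
{
    while (len > 0) {
        ssize_t n = copy_file_range(srcfd, NULL, dstfd, NULL, len, 0);
        if (n == 0)
            break;                  /* EOF on the source */
        if (n < 0) {
            if (errno == EINTR)
                continue;           /* interrupted: retry */
            return -1;              /* caller may fall back to read/write */
        }
        len -= (size_t)n;
    }
    return 0;
}
```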


In general, yes, but as the article notes, it's not applicable to cat, as cat always outputs to stdout and not to a file.


You can handle the cat foo bar baz > file case this way though, because stdout is a file then.


that is why the article mentions it as an honorable mention :P


I couldn't find the bit where the original performance claim was refuted (or not). Was it one of the listed options?


The original performance claim was about https://vimuser.org/cat.c.txt.

Which just does read/write - so it's the same as the "cat-simple" example, which is the slowest listed.

GNU cat [0] does copy_file_range if it can and falls back to a read/write loop otherwise, so it's unlikely to be much slower (possibly some overhead from argument parsing, but that's just a constant).

So the performance claims are wrong.

[0]: https://git.savannah.gnu.org/cgit/coreutils.git/tree/src/cat...


I'm wondering if it would make sense to use io_uring for this kind of thing. If not, why not?


how about ptracing into your target process and dup2()'ing your FD across?

infinitely fast cat


    cat_spew()
Eww.



