
> This is on top of SSH already having throughput issues in the basic protocol over long fat networks.

Is that why I've sometimes observed slower-than-expected transfers when using rsync over ssh to do a mass migration of server data from one data center to another? Can you recommend an alternative (besides writing the data to external media and physically shipping it)?



You could try something like bbcp (https://www.slac.stanford.edu/~abh/bbcp/) if you're copying large amounts of data over relatively high-latency network connections.
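For instance (a rough sketch, not a tested invocation: the hostname and paths are placeholders, and the stream count and window size are values you'd tune for your particular path):

    # open 8 parallel TCP streams with an 8 MB window, printing progress every 2 s
    bbcp -P 2 -s 8 -w 8M /data/big.tar user@dest.example.com:/data/big.tar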


I had a fun time coercing the build system to build this on an ARM SBC the other month, but alas, it did not seem to speed things up substantially over an LTE modem. I probably need to play with it a bit more.


There are patches for OpenSSH for use over LFNs: https://psc.edu/research/networking/hpn-ssh/ (the project seems to be in a sad state currently). These patches might interact poorly with other implementations (e.g., they didn't interoperate with paramiko as of two years ago).

Also, make sure TCP window scaling is working. I was making transfers through an F5 BIG-IP that was running a profile that disabled it.
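On Linux you can check whether the end host has it enabled with sysctl (a middlebox can still strip the option in flight, so looking at the SYN in a packet capture is more definitive):

    # should print 1; window scaling is negotiated in the SYN, so both ends need it
    sysctl net.ipv4.tcp_window_scaling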


If SSH throughput itself is the problem, then, I don't know, encrypt a tar file and wget it?
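Something along these lines (a sketch only; the paths, hostname, and passphrase file are placeholders, and it assumes an HTTP server on the source host is already serving the directory):

    # source host: tar, encrypt with a shared passphrase, drop into the web root
    tar czf - /srv/data | openssl enc -aes-256-cbc -pbkdf2 -pass file:./secret > /var/www/html/data.tar.gz.enc

    # destination host: fetch, decrypt, unpack
    wget https://source.example.com/data.tar.gz.enc
    openssl enc -d -aes-256-cbc -pbkdf2 -pass file:./secret < data.tar.gz.enc | tar xzf -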

If SCP/SFTP is the problem due to small-ish files, use a tarpipe instead. Nothing beats tarpipes for small files.
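A minimal tarpipe looks like this (hostname and paths are placeholders); everything goes through a single stream in one ssh session, so there is no per-file round trip like with scp -r or sftp:

    # pack on the local side, unpack on the remote side
    tar cf - -C /src . | ssh user@remote.example.com 'tar xf - -C /dst'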



