Rclone is another great example of open source being better than a commercial product. I'd rather use it than gsutil or s3cmd: it's simple, the commands make sense, and it doesn't bring in a lot of other cruft.
If rclone’s author is around… you should update the page with an easy and VISIBLE link for donations. I could not find it while browsing (iOS), although it does appear somewhere in the FAQ.
So funny! I just spent a few hours last night setting up rclone for the first time on my MacBook!
I've got one of those legacy Google Drive unlimited storage plans, and for years I've been using it to store all my code, video projects I work on, etc.
The way I normally use it is inefficient: when importing videos for a project, I'd use the Google Drive web UI to upload them into Drive, where I'd later organize them, etc.
Over the years I tried a variety of Google Drive syncing tools, but all of them eventually stopped working, or were so unbearably slow that I couldn't justify them.
Last night I spent some time and set up rclone for Drive, along with a persistent mount so that Google Drive is always accessible via ~/Drive. So far it seems like everything is working well -- really amazing software! <3
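For anyone curious what that setup looks like, here is a rough sketch using rclone's documented config and mount commands. The remote name "gdrive" is my own choice, not anything canonical:

```shell
# Create a Google Drive remote (this walks you through OAuth in a browser).
rclone config create gdrive drive

# Mount it persistently at ~/Drive. --daemon backgrounds the mount;
# --vfs-cache-mode full caches file contents locally so apps that expect
# normal file semantics (seeking, partial writes) work correctly.
rclone mount gdrive: ~/Drive --daemon --vfs-cache-mode full
```

Note that on macOS, `rclone mount` needs macFUSE (or `rclone nfsmount` as an alternative) to be usable.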
I love rclone; I've been using it to reliably move large files between different cloud providers (thanks Nick!). GUI applications were constantly crashing, but rclone has always been reliable.
We use Rclone as a "disk in the cloud" solution for otherwise stateless Docker images. Works perfectly well. And the best thing is that it supports transparent end-to-end encryption.
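The transparent encryption comes from rclone's crypt backend, which wraps another remote. A minimal sketch, assuming an existing remote named "gdrive" and a placeholder password:

```shell
# Wrap gdrive:encrypted in a crypt remote called "secret".
# --obscure tells rclone to obscure the plaintext password
# before storing it in the config file.
rclone config create secret crypt \
    remote=gdrive:encrypted \
    password=correct-horse-battery --obscure

# Anything copied through "secret:" is encrypted client-side
# (file contents and, by default, file names) before upload.
rclone copy ./state secret:state
```

The provider only ever sees ciphertext, which is what makes it safe to treat a third-party cloud as a plain disk.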
Of all the cloud storage backends we've tried with Rclone, Azure Blob Storage has turned out to be the fastest so far.
The bitter part of the story is that I did not know Rclone existed until a year or two ago. I still feel ashamed about that.
Interesting, so why do that? Is it just to simplify your client code? Instead of using the S3 API, you just save files to a standard (virtual) file system? Any other benefits, or reliability/performance drawbacks?
I use it, for example, for Gitea storage. My host doesn't have a lot of storage, but I use rclone so that it all goes directly onto Google Drive (unlimited paid storage).
Gitea also has a private Docker container registry built in, which quickly grows to several (hundred) gigabytes. It all works perfectly well with rclone.
This makes my host stateless: just run the Gitea Docker image with Google-Drive-backed storage. It works great because both Git repositories and Docker images are backed by files that are essentially immutable.
An example that would not work well is a container whose storage is a SQLite file that updates often. Trying to sync that SQLite file to Google Drive with rclone would be a bad idea.
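The setup above could look roughly like this. The remote name, paths, and port are my assumptions, not the poster's actual configuration:

```shell
# Mount a Drive-backed directory for Gitea's data.
# --vfs-cache-mode writes buffers writes locally before uploading,
# which suits write-once files like Git pack files and image layers.
rclone mount gdrive:gitea /srv/gitea --daemon --vfs-cache-mode writes

# Run Gitea with its /data directory backed by the mount.
docker run -d --name gitea -p 3000:3000 \
    -v /srv/gitea:/data gitea/gitea:latest
```

The host itself can then be rebuilt from scratch at any time, since all state lives behind the rclone mount.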
The cloud drive approach considerably simplifies the code. You can use shell scripts, Unix tools, and third-party apps, and you can combine them. This approach gives a lot of freedom and power. It is just Unix, but with the cloud.
Such an approach also protects you from vendor lock-in: you can use whatever cloud storage you like today.
The performance drawbacks are evident: if a file is not in the local cache, it takes some time to get it there. But that does not really matter for most apps, because the initial lag is relatively short.
I'm a heavy user of rsync and sftp between personal devices, but have never used rclone. Is it only used with cloud storage from the big companies? I have never used any of those so far.
No. While it supports a number of big-name cloud providers' storage systems, it also covers smaller providers and plain protocols: HTTP, FTP, SFTP, SMB, the local filesystem, and anything "S3 compatible".
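So it works fine between personal devices too. A quick sketch of pointing rclone at an ordinary SSH box (the host and user are placeholders):

```shell
# Define an SFTP remote for a personal machine.
rclone config create mybox sftp host=example.com user=alice

# The usual rclone verbs then work against it:
rclone lsd mybox:                  # list top-level directories
rclone copy mybox:photos ./photos  # pull a directory down locally
rclone sync ./docs mybox:docs      # one-way sync, like rsync
```

In that sense rclone overlaps heavily with rsync/scp workflows, with the same commands carrying over unchanged if you later add a cloud remote.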