Hi, this is a complete rework, though the core idea remains the same. Results are now much better due to improved engineering, and we compare to recent SOTA methods up until 2025. Also we have some new experiments and worked a lot on figures and presentation :)
That's an interesting example! Perhaps too out-of-distribution, though. For fair comparison with other methods, we used the DIV2K training set in our paper, which only comprises 800 images. Would be cool to train a version on a much bigger set, potentially including images similar to what you tried :)
Hi, author here :) It shouldn’t be OOD, unless it's too noisy maybe? And what scaling factor did you use? Single image SR is a highly ill-posed problem, so at higher upscaling factors it just becomes really difficult…
Author here -- Generally in single-image super-resolution, we want to learn a prior over natural high-resolution images, and for that a large and diverse training set is beneficial. Your suggestion sounds interesting, though it's more reminiscent of multi-image super-resolution, where additional images contribute additional information that has to be registered appropriately.
That said, our approach is actually trained on a (by modern standards) rather small dataset, consisting only of 800 images. :)
It feels like it's multishot NL-means, then immediately those pre-trained "AI upscale" things like Topaz, with nothing in between. Like, if I have 500 shots from a single session and I would like to pile the data together to remove noise and increase detail, preferably starting from the raw data, then - nothing? The only people doing something like that are astrophotographers, but their tools are… specific.
But for "normal" photography, it is either pre-trained ML, pulling external data in, or something "dumb" like anisotropic blurring.
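The core of what the commenter describes (stacking many aligned captures so independent noise averages out, roughly by a factor of sqrt(N)) is simple to sketch. This is an illustration with synthetic data, not any particular tool's pipeline, and it assumes the frames are already registered:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// stack mean-averages N already-aligned frames pixel by pixel.
// Zero-mean independent noise shrinks by roughly sqrt(N).
func stack(frames [][]float64) []float64 {
	out := make([]float64, len(frames[0]))
	for _, f := range frames {
		for i, v := range f {
			out[i] += v
		}
	}
	for i := range out {
		out[i] /= float64(len(frames))
	}
	return out
}

func main() {
	rng := rand.New(rand.NewSource(1))
	const truth = 0.5 // constant "scene" for illustration
	const nFrames, nPix = 500, 1000

	// Simulate 500 noisy captures of the same scene (sigma = 0.1).
	frames := make([][]float64, nFrames)
	for i := range frames {
		frames[i] = make([]float64, nPix)
		for j := range frames[i] {
			frames[i][j] = truth + rng.NormFloat64()*0.1
		}
	}

	avg := stack(frames)
	var mse float64
	for _, v := range avg {
		mse += (v - truth) * (v - truth)
	}
	rms := math.Sqrt(mse / nPix)
	fmt.Printf("residual noise after stacking %d frames: %.4f (single frame: 0.1)\n", nFrames, rms)
	fmt.Println("reduced below 0.01:", rms < 0.01)
}
```

With 500 frames the residual comes out near 0.1/sqrt(500) ≈ 0.0045; the hard part in real photography is the registration step this sketch assumes away.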
I'm not a data scientist, but I assume that having more information about the subject would yield better results. In particular, upscaling faces doesn't produce convincing outcomes; the results tend to look eerie and uncanny.
IMO, it hits a nice sweet spot between performance and level of abstraction, especially w.r.t. concurrency and networking. Also I found that you get things done incredibly fast. I am mostly doing Python and some C, so Go feels like "somewhere in between".
I think there are a couple of packages out there for using WebSockets to proxy a TCP connection, and some of them support SOCKS. I think they all overload that Dial function as a generic way of opening connections.
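The "overload Dial" pattern mentioned above is just function composition over the standard `func(network, addr string) (net.Conn, error)` shape. A minimal sketch (the `DialFunc` and `withLogging` names are illustrative, not from any real package, though proxy libraries typically accept something with this signature):

```go
package main

import (
	"fmt"
	"net"
)

// DialFunc is the shape most proxy/tunnel packages accept:
// anything that can open a connection to an address.
type DialFunc func(network, addr string) (net.Conn, error)

// withLogging wraps a dialer. Layers like SOCKS, WebSocket
// tunneling, or TLS can be stacked the same way, each one
// returning a DialFunc that delegates to the next.
func withLogging(next DialFunc) DialFunc {
	return func(network, addr string) (net.Conn, error) {
		fmt.Printf("dialing %s %s\n", network, addr)
		return next(network, addr)
	}
}

func main() {
	// Base dialer: plain TCP via the standard library.
	var base DialFunc = net.Dial

	// Composed dialer: same signature, extra behavior on top.
	dial := withLogging(base)

	// Local listener so the example is self-contained.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		c, _ := ln.Accept()
		if c != nil {
			c.Close()
		}
	}()

	conn, err := dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	conn.Close()
	fmt.Println("ok")
}
```

Because everything returns a `net.Conn`, the rest of the program never needs to know whether the bytes travel over raw TCP, a SOCKS proxy, or a WebSocket.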
I agree! Honestly, Go made building this quite pleasant, as it has nice abstractions for networking and a great concurrency model. I'm planning to keep it minimal for now, but I would like to add Windows support, SSH multiplexing and maybe some form of throughput measurement. But I'm open to ideas :)