It's exponentially more expensive to train these at higher resolutions.


Not exponentially. More like linear in pixel count: to go up a resolution you just tack on another convolution block. (StyleGAN doesn't use self-attention, but even naive self-attention is only quadratic in pixel count.)
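
A back-of-the-envelope sketch in Python of that scaling (the layer widths c_in, c_out, d are made up; this illustrates the growth rates, not any real network's cost):

    # FLOPs for one k x k conv layer: linear in pixel count H*W.
    def conv_flops(h, w, c_in=64, c_out=64, k=3):
        return h * w * c_in * c_out * k * k

    # FLOPs for naive self-attention over all pixels: quadratic in H*W.
    def attention_flops(h, w, d=64):
        n = h * w
        return n * n * d

    for res in (256, 512, 1024):
        print(res, conv_flops(res, res), attention_flops(res, res))

Doubling the side length quadruples the pixel count, so conv cost grows 4x per doubling and naive attention 16x. Steep, but polynomial, not exponential.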

The problem is more that you hit diminishing returns fast from training at higher resolutions. There's not much difference between training at 1024px and training at 512px then applying an off-the-shelf superresolution NN upscaler, but the latter is roughly 4x faster, since 512px has a quarter of the pixels. So why bother? You don't always even have input data that is 1024px+ to begin with.
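
For concreteness, a minimal sketch of the second option, assuming PyTorch and using plain bicubic interpolation as a stand-in for a learned superresolution model (a real pipeline would plug in something like an ESRGAN-style network here):

    import torch
    import torch.nn.functional as F

    # 1024^2 is 4x the pixels of 512^2, hence the ~4x cost gap
    # under linear-in-pixel-count scaling.
    print((1024 * 1024) // (512 * 512))  # 4

    fake = torch.rand(1, 3, 512, 512)  # hypothetical 512px generator output
    upscaled = F.interpolate(fake, size=(1024, 1024),
                             mode="bicubic", align_corners=False)
    print(upscaled.shape)  # torch.Size([1, 3, 1024, 1024])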



