
On the standardization issue: the advantage of the method we presented is that, as long as a standard for model specification exists, every image can be encoded with an arbitrary computational graph that is linked from the container.

Imagine having domain-specific models: say, a high-accuracy/precision model for medical images (super-close to lossless), and another for low-bandwidth applications where detail generation is paramount. Also imagine a program written today (assuming the standard is out) being able to decode images created with a model invented 10 years from now, doing things that were not even thought possible when the program was originally written. This should be possible because new models are defined in terms of low-level building blocks (like convolution and other mathematical operations)!
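A rough sketch of the idea, with hypothetical names throughout: a container carries the latent data plus a graph spec built only from primitive ops, so a decoder written today can run model graphs defined later, as long as they stick to the same primitives.

```python
import numpy as np

# Hypothetical primitive-op table; real containers would include ops
# like convolution. Names ("relu", "upsample2x", ...) are illustrative.
PRIMITIVES = {
    "relu": lambda x: np.maximum(x, 0.0),
    "scale": lambda x, factor: x * factor,
    "upsample2x": lambda x: x.repeat(2, axis=0).repeat(2, axis=1),
}

def decode(container):
    """Run the container's graph spec on its latent tensor."""
    x = np.array(container["latent"], dtype=np.float64)
    for step in container["graph"]:  # ordered list of {"op", "args"} steps
        op, args = step["op"], step.get("args", {})
        x = PRIMITIVES[op](x, **args)
    return x

# A "model" defined purely as a graph over primitives:
container = {
    "latent": [[-1.0, 2.0], [3.0, -4.0]],
    "graph": [
        {"op": "relu"},
        {"op": "upsample2x"},
        {"op": "scale", "args": {"factor": 0.5}},
    ],
}

image = decode(container)
print(image.shape)  # (4, 4)
```

The key design point is that the decoder knows nothing about any particular model; it only interprets the graph, so a new model shipped ten years later still decodes with today's program.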

On noise: I'll let my coauthors find some links to noisy images to see what happens when you process those.



Absolutely! Being able to improve a decoder for an existing encoder (and vice versa) is a great advantage!



