Show HN: Deep learning tool that repairs damaged/faded photos (hotpot.ai)
83 points by panabee on Dec 17, 2020 | hide | past | favorite | 23 comments


I gave it a test:

original: https://imgur.com/gallery/9CAH8Sk

ai-modified: https://imgur.com/gallery/Ty6stir

First thing to note is the reduction in dimensions and that the output is formatted as .PNG.

Second is that the AI blew out the highlights, losing subtle detail, for example the fluting of the columns. This same effect seems to have flattened the facial contours, giving the woman's mouth a square, unflattering grimace.

It did a striking job of eliminating all "noise" in the body and roof of the auto, giving an initial impression of great crispness and sharpness, but at least some of that "noise" was film grain and/or actual dirt on the car. Ditto the texture of the fabric of her dress.


Thanks for sharing this result and the candid feedback.

We certainly have a long way to go before the model can approach human quality.

The fluting issue with the columns, for instance, presents a dilemma since the model is trained to treat these subtle textures as noise.

Besides improving the Microsoft model, we're also researching our own model that hopefully will yield better results.


Cool work! It would be even better if there were some more examples on the page though - the one visible looks very impressive, but it's hard to get an idea of general performance, and many viewers may not have old photos to hand or be willing to upload them to see it in action.


My thoughts too. I ran a few damaged photos through it to see how it behaved.[1]

[1] https://imgur.com/a/rW7Ac7k

----------

[Edit] It appears Imgur has decided against preserving image upload order and I can't change it now. The first image should be at the bottom. Sorry!


Thanks for this!

Do you mind sharing where you found this damaged photo? https://i.imgur.com/4xmreDb.jpeg

Clearly, the model struggled with this picture. We would like more photos like this to improve the model.


It was the 7th result of a Google search for "damaged photo".[1] That's where all of these images came from. The source article for that picture is incidentally a pretty neat Photoshop restoration example.[2]

[1] https://www.google.com/search?q=damaged+photo

[2] https://www.proglobalbusinesssolutions.com/photo-restoration...


This is a great idea. We have a gallery for another one of our ML services and should add one for this, too.

Thanks for the suggestion.


This is great fun. I have lost a couple hours poking at these tools. Fun stuff!

Big fan of this result with the Art Personalization tool.

https://i.imgur.com/JUiVZBB.png

Photo of chickens on my back porch, styled after The Persistence of Memory.


Thanks! Did you try this with the more artistic setting? (Disable "Resemble base.")

Also, could we share this in our gallery?


https://imgur.com/a/hkpkfVJ I think that link will work for you. It has the original as well as versions with Resemble base on and off. Surprised that the results are not more different, I think. :D I don't know why I expected to see a chicken melted over the railing.

You are welcome to share these photos on your gallery.


Thanks. The neural style transfer model is a little finicky. Sometimes it produces very artistic versions ... and sometimes it's less creative.

Example where the model took its creative pills that morning: https://www.reddit.com/r/deepdream/comments/jv3c7k

If you try again with other styles, hopefully you'll see more artistic results.
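For anyone curious what "finicky" means under the hood: classic Gatys-style transfer optimizes an image to match VGG content features of the photo and Gram-matrix style statistics of the artwork, and the content/style weighting largely decides how conservative or "creative" the output looks. A rough PyTorch sketch of that classic recipe (illustrative only, not our production model; the layer choices, weights, and file names are placeholders):

    # Classic Gatys-style transfer sketch (PyTorch) -- illustrative, not the production model.
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    vgg = models.vgg19(pretrained=True).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def load(path, size=512):
        tfm = transforms.Compose([transforms.Resize(size), transforms.ToTensor()])
        return tfm(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    def features(img, layers=(0, 5, 10, 19, 28)):   # conv1_1 ... conv5_1
        feats, out = [], img
        for i, layer in enumerate(vgg):
            out = layer(out)
            if i in layers:
                feats.append(out)
        return feats

    def gram(f):                                     # style statistics
        _, c, h, w = f.shape
        f = f.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    # Placeholder file names -- substitute your own photo and style image.
    content, style = load("back_porch_chickens.jpg"), load("persistence_of_memory.jpg")
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)

    content_feats = [f.detach() for f in features(content)]
    style_grams = [gram(f).detach() for f in features(style)]

    # Raising style_weight relative to content_weight pushes the result toward
    # "melted chicken" territory; lowering it keeps the photo largely intact.
    style_weight, content_weight = 1e6, 1.0
    for step in range(300):
        feats = features(target)
        c_loss = F.mse_loss(feats[2], content_feats[2])
        s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
        loss = content_weight * c_loss + style_weight * s_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

The "Resemble base" toggle is essentially a user-facing version of that tradeoff.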


This is incredible! Surprised this didn’t get more attention when posted


Thanks! Full credit goes to the Microsoft team for amazing research.

Perhaps I should have posted in the morning PST instead; it seems more people find the service helpful during these hours. :)


Just out of curiosity, how many resources (time, money, etc.) does it take to train a model like this?


Good question. It depends on your goals.

The vast majority of training for this model was done by Microsoft, so I unfortunately cannot shed much insight there.

From this base model, the amount of additional training and improvement depends on how much you want to optimize the original model. If the research model is sufficient, you may only need a few hours.

That said, we are researching our own model and can share the training time and resources required for this new version once we finish (and assuming the results are superior).


Thanks for the details. Yes, I would love to hear that. The biggest thing that keeps me from experimenting with such tech is the lack of resources.

I saw sentdex literally burning a $2k Nvidia GPU trying to train on some data (on YouTube), so I was just curious how people did it without burning a massive hole in their pockets.


They reference this paper: https://github.com/microsoft/Bringing-Old-Photos-Back-to-Lif...

The pretrained models for it are available online, so there's no need to train in this case.

If this were a new project and you were starting from scratch but already had a clean dataset, I'd estimate 0.5 to 3 days of work (excluding training time).

The model is a VAE, so I would estimate less than 2 days of training time on a modern GPU.
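To give a sense of scale: the core of a VAE is just an encoder that predicts a mean and log-variance, a reparameterized sample, a decoder, and a reconstruction-plus-KL loss, which is why I don't expect the training bill to be extreme. Toy PyTorch sketch (illustrative only; the paper's actual model is much larger and works on full-resolution photos):

    # Toy VAE sketch (PyTorch) -- illustrative only, not the Microsoft model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVAE(nn.Module):
        def __init__(self, latent_dim=64):
            super().__init__()
            # Encoder: 64x64 RGB crop -> latent mean / log-variance
            self.enc = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
            self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
            # Decoder: latent sample -> reconstructed crop
            self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 64x64
            )

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            recon = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
            return recon, mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # Reconstruction term + KL divergence against the standard normal prior
        rec = F.binary_cross_entropy(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

    # One optimization step on a dummy batch of 64x64 crops:
    model = TinyVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(8, 3, 64, 64)
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()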


For some pictures it doesn't appear to do anything. I wish it would report on what it did...


Sorry about this! Do you mind sharing photos so we can improve the model?


No worries - they were just clear pictures of mechanical assemblies (what I had lying around), so there was nothing for the API to do. But it took a while for me to figure out that the API indeed did nothing. I'm just looking for a little feedback from the API.


How much are you burning to keep your services running? For most of these, GPUs are required for inference, and I know GPU instances aren't cheap.


I uploaded this photo:

https://i.imgur.com/jhoPqhP.jpg

This is what your tool produced:

https://i.imgur.com/bGZSDwL.png

This is what Remini produced:

https://i.imgur.com/nrSDLM8.jpg


OK, the one reconstructed by Remini is outright scary - they're not even the same people anymore :O



