
What happens when you add random noise to the inputs to the neural net?


I like this idea. Maybe it ought to apply not just to the inputs of the ANN but to the entire network, the way dropout turns a fraction of the values to 0 some of the time. The network would have to work really hard to generalize in spite of all the noise. I'm sure tuning such a noise function would be critical, too.
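For what it's worth, this is easy to prototype. Here's a minimal PyTorch sketch of the idea, with the layer sizes, the choice of Gaussian noise, and the noise level `sigma` all being my own illustrative assumptions rather than anything from the paper:

    import torch
    import torch.nn as nn

    class NoisyMLP(nn.Module):
        """Small classifier that corrupts the input and every hidden
        activation with Gaussian noise during training, dropout-style."""
        def __init__(self, in_dim=784, hidden=256, n_classes=10, sigma=0.1):
            super().__init__()
            self.sigma = sigma
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, hidden)
            self.out = nn.Linear(hidden, n_classes)

        def _noise(self, x):
            # Only corrupt values while training, just like dropout.
            if self.training and self.sigma > 0:
                return x + self.sigma * torch.randn_like(x)
            return x

        def forward(self, x):
            x = self._noise(x)                         # noisy inputs
            x = self._noise(torch.relu(self.fc1(x)))   # noisy hidden layer 1
            x = self._noise(torch.relu(self.fc2(x)))   # noisy hidden layer 2
            return self.out(x)

    model = NoisyMLP()
    logits = model(torch.rand(32, 784))  # one noisy forward pass

Tuning `sigma` is exactly the knob the parent is worried about: too small and nothing changes, too large and the signal drowns.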


You would probably not be able to distinguish an adversarial image with added noise from a normal image with added noise.

While it is difficult to locate adversarial examples by random perturbation, they do not appear to be extremely specific either. The paper even suggests that they occupy whole regions of the input space. So depending on the size of such a region, adding noise may just land you on another adversarial image.

Regardless, they propose a better way of fixing the problem: modify the training algorithm to penalize networks whose structure allows this kind of error.

Neural networks can still perform arbitrary computations despite this result, so there is no reason to try to manually fix up bad inputs when you can train the network to do it.
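One way to read "penalize networks whose structure allows this kind of error" is as a sensitivity penalty on the input gradient. The sketch below is my own interpretation, not necessarily the paper's exact modification; the model, `lam`, and the squared-gradient penalty are assumptions for illustration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sensitivity_penalized_loss(model, x, y, lam=0.1):
        """Cross-entropy plus a penalty on how sharply the loss changes
        under tiny input perturbations (hypothetical regularizer)."""
        x = x.clone().requires_grad_(True)
        ce = F.cross_entropy(model(x), y)
        # Gradient of the loss w.r.t. the input, kept in the graph so the
        # penalty itself is differentiable w.r.t. the weights.
        (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
        penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
        return ce + lam * penalty

    # Usage with an assumed toy model and optimizer:
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
    loss = sensitivity_penalized_loss(model, x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

The point is that the fix lives in the training objective, not in any hand-written input cleanup.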


If you rephrase your question to "What happens when I add random noise to the inputs of a neural network and try to teach it to output a denoised version of the input?", you've just invented denoising autoencoders.

http://www.iro.umontreal.ca/~lisa/publications2/index.php/pu...
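For anyone unfamiliar, a denoising autoencoder just trains the network to reconstruct the clean input from a corrupted copy. A minimal sketch follows; the architecture, the 0.3 noise level, and the MSE loss are my own illustrative choices, not taken from the linked paper:

    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, in_dim=784, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(64, 784)                              # stand-in batch of images in [0, 1]
    noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)  # corrupt the input
    recon = model(noisy)                                 # reconstruct from the corrupted copy
    loss = loss_fn(recon, x)                             # target is the *clean* input
    opt.zero_grad()
    loss.backward()
    opt.step()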


Alright, that's neat. Not at all what I was suggesting, but neat nonetheless.



The perturbations in the study were not random; they had to be crafted.

To the GP: noise is often added to training datasets for exactly the reason you're suggesting. One of the novel things the cited paper discusses, however, is that even if you feed the adversarial perturbations back in as additional training data, there are yet new ways to subtly perturb the inputs and get incorrect results.

Misclassification is a pretty fundamental consequence of dimensionality reduction, of course, but the surprise is how close those misclassifications are in input space. This isn't mistaking a box for a square because it's viewed head-on; it's mistaking a bus for an ostrich because some of the pixels in the image changed to a slightly different shade of yellow.
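To make "close in input space" concrete, here is a sketch of crafting such a perturbation with the simpler gradient-sign trick rather than the paper's box-constrained optimization; the toy model, the random "image", and the per-pixel budget `eps` are assumptions for illustration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def craft_adversarial(model, x, y, eps=0.01):
        """Nudge every pixel by at most `eps` in the direction that
        increases the loss (gradient-sign sketch; the paper itself
        used a box-constrained optimizer instead)."""
        x_adv = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
        return x_adv.detach()

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    x, y = torch.rand(1, 784), torch.tensor([3])
    x_adv = craft_adversarial(model, x, y)

    # The perturbation stays tiny in input space even if the label flips.
    print("max per-pixel change:", (x_adv - x).abs().max().item())
    print("clean vs. perturbed prediction:",
          model(x).argmax(1).item(), model(x_adv).argmax(1).item())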



