
I did the same thing, but with gradient descent. You can create a soft version of the Game of Life that is differentiable. Here is my messy Colab notebook: [1]

[1] https://colab.research.google.com/drive/12CO3Y0JgCd3DVnQeNSB...

Edit: only 1 step though, not 4, as in the OP. I couldn't get my differentiable version to converge more than 1 step into the past.
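
For anyone curious what "soft" means here, below is a minimal sketch of a differentiable Life step in PyTorch. It is not the notebook's actual code; the bump function and its sharpness parameter are just illustrative choices.

    # Minimal sketch of a "soft", differentiable Game of Life step (illustrative,
    # not the notebook's code). Cell states live in [0, 1], neighbour counts come
    # from a fixed 3x3 convolution, and the hard birth/survival rule is replaced
    # by smooth bumps so gradients can flow through the step.
    import torch
    import torch.nn.functional as F

    # 3x3 kernel that sums the 8 neighbours (centre weight is 0).
    NEIGHBOUR_KERNEL = torch.tensor(
        [[1., 1., 1.],
         [1., 0., 1.],
         [1., 1., 1.]]
    ).view(1, 1, 3, 3)

    def soft_bump(x, centre, sharpness=10.0):
        # Smooth indicator that x is close to `centre` (~1 near it, ~0 away).
        return torch.exp(-sharpness * (x - centre) ** 2)

    def soft_life_step(board):
        # board: (N, 1, H, W) float tensor of values in [0, 1].
        neighbours = F.conv2d(board, NEIGHBOUR_KERNEL, padding=1)
        birth = soft_bump(neighbours, 3.0)                                 # dead cell, ~3 neighbours
        survive = soft_bump(neighbours, 2.0) + soft_bump(neighbours, 3.0)  # live cell, ~2-3 neighbours
        return (1.0 - board) * birth + board * torch.clamp(survive, 0.0, 1.0)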



See also a post from mid-2020 that does something similar with a "softened" Life: http://hardmath123.github.io/conways-gradient.html


That's a really nice write-up. It's insane how similar our approaches are.

Could it be a case of [1] (though not on a grand scale)? :P I can list my sources of inspiration: [2] [3] [4].

I also tried training convolutional networks, using the soft life set-up, but failed to get them to converge.

[1] https://en.wikipedia.org/wiki/Multiple_discovery

[2] https://kevingal.com/blog/mona-lisa-gol.html

[3] https://arxiv.org/abs/1910.00935

[4] https://nicholasrui.com/2017/12/18/convolutions-and-the-game...


> I also tried training convolutional networks, using the soft life set-up, but failed to get them to converge.

Do you have any idea why that might be? It seems like convolution would be a natural fit for this problem.


I didn't work on it long enough to be able to draw any conclusions, but I can speculate.

I had the gradients going through the soft Life approximation (i.e. it was part of the model), rather than simply training a normal CNN with Life boards as the inputs and outputs. But I think the approximation may not provide good enough gradient signals.
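
A rough sketch of that kind of set-up (assuming the soft_life_step from the earlier snippet; all names and hyperparameters are illustrative, not the actual training code): the CNN proposes a predecessor board, the soft Life step maps it forward, and the loss is taken against the observed board, so gradients have to pass back through the approximation.

    import torch
    import torch.nn as nn

    class ReverseNet(nn.Module):
        # Small CNN that guesses a predecessor board from the current one.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, current_board):
            return self.net(current_board)

    model = ReverseNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(current_board):
        # current_board: (N, 1, H, W) float boards observed one step in the future.
        predicted_past = model(current_board)           # soft guess at the predecessor
        reconstructed = soft_life_step(predicted_past)  # push the guess forward in time
        loss = torch.nn.functional.mse_loss(reconstructed, current_board)
        optimiser.zero_grad()
        loss.backward()    # gradients flow back through the soft approximation
        optimiser.step()
        return loss.item()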


Note: skip to the bottom to see the resulting plots.

Here, gradient descent is used to try to predict random Game of Life games [1].

[1] https://colab.research.google.com/drive/1NKWRarxM-ar18x1ON71...
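
A rough sketch of what gradient descent through a soft Life step can look like for this kind of problem (not the linked notebook's code): directly optimise logits for the unknown board, push them through a differentiable step like the one sketched earlier, and compare against the observed board. Names and hyperparameters here are illustrative.

    import torch

    def search_predecessor(target_board, steps=2000, lr=0.1):
        # target_board: (1, 1, H, W) float tensor of 0/1 values one step in the future.
        logits = torch.zeros_like(target_board, requires_grad=True)
        optimiser = torch.optim.Adam([logits], lr=lr)
        for _ in range(steps):
            candidate = torch.sigmoid(logits)   # keep the guess in [0, 1]
            loss = torch.nn.functional.mse_loss(soft_life_step(candidate), target_board)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
        # Threshold the soft guess back to a hard 0/1 board at the end.
        return (torch.sigmoid(logits) > 0.5).float()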



