Hacker News

I put symbolic reasoning in the spotlight because it is something that NNs are particularly bad at: discrete data, and measurements that are hard to design, often approximate, and non-differentiable.
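To make the non-differentiability point concrete, here is a minimal sketch (my own illustration, not from the original comment): an exact-match "loss" on a discrete prediction is a step function, so its gradient is zero almost everywhere and gradient descent gets no training signal.

```python
def exact_match_loss(w, target=3):
    # Discrete measurement: 0 if the rounded parameter hits the target, else 1.
    return 0.0 if round(w) == target else 1.0

def numeric_grad(f, w, eps=1e-6):
    # Central finite-difference estimate of df/dw.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 1.0
for _ in range(100):
    g = numeric_grad(exact_match_loss, w)
    w -= 0.1 * g  # gradient step

# The loss is flat almost everywhere, so g == 0.0 and w never moves:
print(w, exact_match_loss(w))  # still 1.0, still wrong
```

The parameter stays stuck at its starting point even though a better value exists one integer away, which is exactly the failure mode continuous losses are designed to avoid.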

The problem is so inherently hard that we are struggling even to come up with a meaningful task that would tell us how badly we are doing. That relates to your first point: I think finding the right loss function is a chicken-and-egg situation here. Once you have the loss function in hand, you already know what task and problem you are going to solve, and then it becomes easier. But that is apparently not our current situation.

That is why I think DeepMind has a good reason to go after reinforcement learning; after all, that is how we humans are trained, through exams and feedback.

As to your point about LSTMs, I am not eager to claim qualitatively whether they can or can't handle short- or long-term memory. That is apparently task-dependent, and all the concepts involved are ill-defined.


