If you think about the way humans learn, we get formal education that the computer doesn't, and other people correct our mistakes. This work didn't originally do that. The newer versions of it do get corrected by people, but that doesn't mean it couldn't instead be corrected by more instances of itself operating over different datasets via a consensus algorithm. The algorithm will also probably catch flak for not being unsupervised, when human learning is actually partially supervised too.
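As a rough illustration of what that consensus-correction idea could look like (everything here is my own invented sketch, not how any actual system works: the ToyModel class, the nudge step, and the thresholds are all stand-ins), several instances with different "training" vote on each sample, and dissenters get pulled toward a confident majority:

    from collections import Counter
    import random

    class ToyModel:
        """Stand-in for one model instance: a 1-D threshold classifier."""
        def __init__(self, threshold):
            self.threshold = threshold
        def predict(self, x):
            return 1 if x >= self.threshold else 0
        def nudge(self, x, label):
            # Crude "correction": shift the threshold toward agreeing
            # with the consensus label for this sample.
            if self.predict(x) != label:
                self.threshold += 0.1 if label == 0 else -0.1

    def consensus(models, x):
        """All instances vote; return the majority label and its share."""
        votes = Counter(m.predict(x) for m in models)
        label, count = votes.most_common(1)[0]
        return label, count / len(models)

    def correct_each_other(models, samples, agreement=0.6):
        """Where a confident majority exists, pull dissenters toward it."""
        for x in samples:
            label, share = consensus(models, x)
            if share >= agreement:
                for m in models:
                    m.nudge(x, label)

    # Instances "trained" on different datasets end up with different
    # thresholds; the outlier (1.4) gets pulled toward the consensus.
    models = [ToyModel(t) for t in (0.3, 0.5, 0.55, 0.6, 1.4)]
    correct_each_other(models, [random.uniform(0, 1) for _ in range(500)])
    print([round(m.threshold, 2) for m in models])

No human labels are involved; the only supervision signal is agreement among peers that disagree in their training histories.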
One could argue that humans have a similar problem of errors propagating. Areas that we feel strongly about can bias us against learning in fields such as religion, politics, and, of course, programming language design.
I think it's possible that humans would give similarly poor responses in areas we don't talk about often.
I agree absolutely. I believe future AIs will be educated in much the same way we educate children. Perhaps less education will be needed; possibly we could start them off at a higher age bracket, for example. But ultimately I'm certain the first "strong" AIs will be educated/supervised to some degree.
My point was more that perhaps we're a bit too focused on the wrong metric for success. The current criteria set is {true positives, true negatives, false positives, false negatives}, and we try to drive some of those rates up and others down in order to judge whether a particular approach is successful or not.
What then gets overlooked is that perhaps we don't need a near-perfect true positive rate, but rather an acceptable kind of wrongness: the answer may be wrong, but not too far wrong. Much like a human might pin a country like India in the wrong place on the map, but would never put it in the middle of the Indian Ocean.
In summation: perhaps the key for computers to appear intelligent is not to be perfectly correct, but to never be too disastrously incorrect.
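To make that concrete, here's a toy sketch (the coordinates, names, and guesses below are my own invented example, not from any benchmark): a binary right/wrong metric treats every miss equally, while a graded, distance-based error distinguishes a near-miss from a disaster.

    import math

    INDIA = (20.6, 78.9)  # rough lat/lon; my own toy ground truth

    def haversine_km(a, b):
        """Great-circle distance between two (lat, lon) points in km."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
               * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    guesses = {
        "slightly off": (22.0, 80.0),   # wrong, but plausibly human
        "in the ocean": (-5.0, 75.0),   # wrong in a way no human would be
    }

    for name, guess in guesses.items():
        exact = guess == INDIA            # binary metric: both just "wrong"
        err = haversine_km(INDIA, guess)  # graded metric: very different
        print(f"{name}: exact = {exact}, error = {err:.0f} km")

Under the binary metric both guesses score identically; under the graded one, the first is off by a couple hundred kilometres while the second is off by thousands, which is exactly the "disastrously incorrect" distinction.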
That reminds me that as a small kid there were certain spots in my environment beyond which, I was convinced, infinite wilderness began or the world simply ended (behind forests or hedges, for example). Growing older, I was often sobered to discover there was just bog-standard urban landscape.
Yet the machine seems to come from an entirely different direction. It has accumulated far more facts than a child and can articulate and process them with absolute precision.
I remember visiting my childhood elementary school and seeing that the "forest" behind the building was just a scraggly patch of trees with a chain-link fence on the other side. It seemed so much bigger back then...