It's fascinating, but suggesting that this is anything like what's been depicted in science fiction movies (yes, like The Matrix) is silly.
It's certainly a big leap beyond what's actually been demonstrated here. But it is a very interesting first step along that path, and if something like Matrix-style learning was ever going to be possible, I'd imagine this would be the most fruitful approach to follow.
Just to be clear: what's going on here is essentially that they've set up an apparatus that displays an error function measuring the difference between the target and actual brain states of a person, and asked the person to minimize that error. The idea is that by matching brain states you could shortcut the learning process that led to the target state and skip directly to the result. It's a pretty standard optimization problem, except that the optimizer is a brain.
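To make that concrete, here's a minimal sketch of the loop as an optimization problem. Everything in it is made up for illustration, none of it comes from the paper: the "brain state" is just a feature vector, the feedback display is a scalar distance, and the subject is modeled as a stochastic hill climber that keeps a random nudge only when the displayed error drops.

    import numpy as np

    # Toy model (nothing here comes from the paper): treat "brain state" as a
    # feature vector and the feedback display as a scalar error to drive down.
    rng = np.random.default_rng(0)
    DIM = 50                                   # dimensionality of the made-up state readout

    target = rng.normal(size=DIM)              # the target brain state
    state = rng.normal(size=DIM)               # the subject's starting state

    def error(s):
        # The displayed feedback: distance between current and target state.
        return np.linalg.norm(s - target)

    # The "subject" as a stochastic hill climber: try a small random nudge,
    # keep it only if the displayed error went down.
    for _ in range(5000):
        candidate = state + rng.normal(scale=0.05, size=DIM)
        if error(candidate) < error(state):
            state = candidate

    print(f"final error: {error(state):.4f}")

The open question is whether a person can deliberately do anything like that update step; the sketch just shows why the setup is, formally, garden-variety optimization.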
The difficulties come in two parts:
1) Is it possible to efficiently "navigate" your own brain state toward a target based on feedback from an error function, and do so well enough to jump out of crappy local minima (a toy illustration follows below)? More generally, is it possible to reliably reach a target brain state quickly enough that it beats actually going through the learning process that produced the target state in the first place?
2) Can we even measure "brain state" accurately enough so that successful completion of 1) would be useful at all?
1) is a biological question; 2) is a technical one. This particular research doesn't offer much on either front, sadly, because the task they picked was so simple that it would clearly have been better achieved by direct learning (they spent several days training people against the "error function", and I can't imagine it would have taken a training group that long to learn to recognize the orientation of an image...).
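On the local-minima half of 1), here's a toy picture of why feedback alone might not be enough (again, entirely made up, just to show the shape of the problem): a purely greedy searcher on a landscape with a deceptive local minimum gets stuck, while one that occasionally accepts uphill moves, simulated-annealing style, can escape.

    import numpy as np

    # A 1-D "error landscape" with a deceptive local minimum near x = -0.8
    # and the true target at x = 2. Purely illustrative.
    def err(x):
        return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 + 0.3 * (x - 2) ** 2

    rng = np.random.default_rng(1)

    def search(x, temperature=0.0, steps=20000):
        # Noisy local search; temperature > 0 sometimes accepts uphill moves.
        best = x
        for _ in range(steps):
            cand = x + rng.normal(scale=0.1)
            delta = err(cand) - err(x)
            if delta < 0 or (temperature > 0 and rng.random() < np.exp(-delta / temperature)):
                x = cand
                if err(x) < err(best):
                    best = x
        return best

    start = -1.5                                       # begin near the bad minimum
    print("greedy:  ", round(search(start), 2))        # typically stuck near -0.8
    print("annealed:", round(search(start, 0.5), 2))   # usually escapes toward 2

Whether a brain getting noisy fMRI feedback has anything like a "temperature" knob is exactly the biological unknown.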
I'd be very skeptical about how useful low-resolution views of brain activity will be for higher-level understanding, as opposed to simple visual recognition tasks. And I'd also be skeptical about whether we could still navigate our brains effectively through the fitness landscape if we ever did achieve resolution good enough to help with harder tasks.
But I don't think it's a wash. Even low-res views of brain activity are likely to help for certain tasks: if you could match your brain activity, even at a rough scale, to the way Richard Feynman's looked when he was thinking about physics, it would probably put you in a better frame of mind to do physics than you'd otherwise be in. That could still be useful, even if it couldn't directly transfer his knowledge of QED into your brain.
If we ever get to a point where scanners are much better, I imagine there will be quite the industry in picking out which low-res reductions of high-res brain activity are the best to use as training targets...