Hacker News
Three people play a Tetris-like game using a brain-to-brain interface (washington.edu)
159 points by lxm on July 20, 2019 | hide | past | favorite | 27 comments


https://arxiv.org/abs/1809.08632

If I understood the paper correctly, the results are totally unimpressive. The experimental setup seems to me to be intentionally convoluted to sound impressive, like it's just creative bullshitting.

This is how it works:

(Edited after scribu corrected me)

1. Senders make a decision by concentrating on flashing lights. An EEG cap captures the difference in spectral power between lights flashing at 17 Hz and 15 Hz, using Welch's method. The choice is the difference averaged over several epochs in a 10-second period (it must be a very slow-paced game). Lots of signal processing and averaging to get a yes/no answer between bright 17 Hz and 15 Hz visual signals from steady-state visually evoked potentials.

2. This one bit of information was then conveyed to the Receiver via TMS, using a signal where 10 consecutive pulses mean yes and the absence of pulses means no. Thresholds are well calibrated beforehand so that yes/no can be transmitted. The Receiver gets a TMS signal that is completely different from what the Senders did.
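For concreteness, the Sender-side decode in step 1 can be sketched roughly like this. This is a minimal illustration, not the authors' pipeline: the sampling rate, window length, and synthetic test epoch are all my assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250                    # assumed EEG sampling rate (Hz), not from the paper
F_YES, F_NO = 17.0, 15.0    # flicker frequencies from the paper

def decode_ssvep(epoch, fs=FS):
    """Classify one 10-second epoch as 'yes' (17 Hz) or 'no' (15 Hz)
    by comparing Welch spectral power at the two flicker frequencies."""
    freqs, pxx = welch(epoch, fs=fs, nperseg=2 * fs)  # 0.5 Hz resolution
    p_yes = pxx[np.argmin(np.abs(freqs - F_YES))]
    p_no = pxx[np.argmin(np.abs(freqs - F_NO))]
    return "yes" if p_yes > p_no else "no"

# Synthetic sanity check: a noisy 15 Hz oscillation should decode as "no"
t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * F_NO * t) + 0.5 * rng.standard_normal(t.size)
print(decode_ssvep(epoch))  # -> no
```

The real system averages this comparison over several epochs before committing to an answer, which is why a single decision takes on the order of 10 seconds.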


The senders do NOT control the cursor by hand. From the paper:

> The Senders convey their decisions of "rotate" or "do not rotate" by controlling a horizontally moving cursor (Figure 8) using steady-state visually-evoked potentials (SSVEPs).


It seems that you are correct. The cursor is moved by concentrating on the lights.


Reading these papers feels like watching someone high dive into a kiddie pool[0].

Incredible amounts of preparation and equipment are needed for a simple transfer of a body into a pool.

Though it’s thrilling and makes a cool headline, why not just give everyone a normal diving board and normal pool? That’s hella fun, too, but surprisingly overlooked in research (and honestly looked down upon).

[0] https://youtu.be/9y0ssJJRFS8


Scientific Reports, where this is published, is a so-called mega-journal. They accept practically everything that is technically sound, even if it's not scientifically important.

They do two basic and technically sound things in this paper.

1. Detect a yes/no answer from the EEG signal when the user is looking at a light that flickers at a different frequency depending on the answer.

2. The user is able to detect when transcranial magnetic stimulation is used to signal their brain.

It's not bleeding-edge science, but neither of these is a technically trivial experiment to set up from scratch. A surprising amount of work is spent just setting everything up, getting it right, and fixing all the errors. A technical report detailing what they did would probably be interesting, at least for some. It's unfortunate that they felt the need to tie it up into this brain-to-brain gimmick stuff.


> “We essentially ‘trick’ the neurons in the back of the brain to spread around the message that they have received signals from the eyes. Then participants have the sensation that bright arcs or objects suddenly appear in front of their eyes.”

Incredible. Also, the article mentions using a coil to stimulate the receiver's brain. Is this some form of transcranial magnetic stimulation?

By the way, in case anyone wants to listen to a decent critical-theory lecture on Neuralink-esque technologies, here's one by Slavoj Žižek: https://www.youtube.com/watch?v=38alQSKtVbA


Yeah from the image at the start of the article, it looks like they’re using TMS. You can see the coil sticking out to the right behind the man facing the camera. It’s attached to a blue arm.

Furthermore if you look to the right of the coil, there are 3 little silver balls. These are coated in an IR-reflective layer (silver color) that are used by a camera to track the position of the coil in 3D space. This is more than likely a Localite system if anyone wants to look it up.

My guess is that the people had MRI scans, which were used to find the right place to stimulate. The Localite system uses a pointer with another 3 balls that is pointed at key positions of the head and then slid around the surface of the head. The head also has its own 3-ball tracker. This allows the Localite system to get an accurate reading of head size and positioning in 3D space. This also allows the system to “place” the brain image from the MRI inside the head. Then, using a screen, you get an augmented reality view of the TMS coil and the brain relative to the head to get accurate positioning and angle to place the coil.

The 3-ball tracker for the head was probably removed for the image, but you can see it in the image further down in the article. You can also get a better view of the TMS coil. You can also see the Localite system in action on the monitors and the camera in the top right.

It’s really cool stuff.


Your description of the localization approach is accurate, but this system is almost certainly Brainsight, given the screen shown in the image at the bottom, the IR camera in that image, and the shape of the tracker arm.

The general approach is straightforward: MRIs have real-world coordinates. Anything on the head (TMS, EEG, a surgical instrument) also has real-world coordinates. To co-register the two, you need to associate 1) markers placed at MRI time with 2) markers placed at TMS time.

Once you have that correspondence, you can position any other objects relative to either one, like surgical instruments with reflective markers or TMS systems or whatever.
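That correspondence is essentially a rigid point-set alignment. A minimal numpy sketch using the Kabsch algorithm (the marker coordinates below are invented purely for illustration):

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: find rotation R and translation t such that
    dst ≈ R @ src + t, given corresponding (N, 3) marker coordinates
    in the two spaces (e.g. MRI space and tracker space)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(u @ vt))      # guard against an improper (reflected) fit
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# Toy check: recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
mri_markers = np.array([[0.0, 0.0, 0.0], [80.0, 0.0, 0.0],
                        [0.0, 60.0, 0.0], [40.0, 40.0, 90.0]])
tracker_markers = mri_markers @ r_true.T + np.array([10.0, -5.0, 2.0])
r_est, t_est = rigid_register(mri_markers, tracker_markers)
```

Once `r_est` and `t_est` are known, any point digitized in one space (a coil position, an electrode, a surgical instrument tip) can be mapped into the other.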


Looking at it again, you’re right. That isn’t Localite. I should’ve said localization system instead. I’m not familiar with Brainsight and I haven’t been involved with the research using Localite in a couple of years, so I missed that.


The Neuralink presentation a few days ago made the important point that the physics of neurons makes it impossible to read and/or write their state with any kind of accuracy without getting very, very close to them.

As such, these kinds of completely non-invasive methods of interfacing with the brain are a dead end. Barring breakthroughs in scanning technologies that completely upend basic laws of physics, you will always be limited to a low-bandwidth channel that only works by reading the crude, aggregate state of the whole brain or a large area of it, and requires extensive training for participants to learn how to send basic signals.

Since the limits here are hard physical ones, not ones that can be engineered around, there's no way for this to be gradually refined into a more useful system. It will always be a hack. If we're going to produce high-bandwidth brain-machine / brain-machine-brain interfaces that allow useful collaboration, it's going to require getting inside the skull and getting up close and personal with brain matter, whether we like it or not.


This is a little weird. Why would you use a Tetris-like game for this? Only one bit is sent.

It’s also quite hackish. To select yes or no, they have to look at some flashing lights the scientists know are going to generate different patterns in the brain. So it’s not like the guys are thinking “yes” and that’s being transmitted.

Similarly the receiver’s brain is being stimulated kind of randomly by electrical impulse.

It’s basically electrical communication, but where the transmitter and receiver are wired into unusual places in the brain...


Of course it’s hackish. The technology is still somewhat rudimentary. They’re stimulating the visual cortex and the only way to do that (or stimulate brain areas in general) in a non-invasive way is using TMS, which isn’t “pinpoint” accurate. It’s shooting a strong magnetic pulse to stimulate the surface of the brain through the skull. Since TMS doesn’t have deep penetration for the receiver and neither do the electrodes for the senders, they’re limited to surface areas of the brain.

The places are not unusual.

In terms of the senders, it would probably have been easier to simply use an eye-tracking camera to gauge which light they were looking at, but then it wouldn’t have been “brain-to-brain”. We’re still a LONG ways off from telepathic-like communication. We’re also a long ways from even picking up “yes” or “no” signals from people’s thoughts. It’s still an important achievement nonetheless to get such readings from the senders and to make such manipulations to the receiver’s conscious visual perception, even if it’s not measuring conscious thought or creating a specific image.


> We’re also a long ways from even picking up “yes” or “no” signals from people’s thoughts

Yes and no. No, because even though the tech exists, it's not exactly ready for the masses, doesn't work 100%, usually needs calibration, and isn't exactly comfortable to wear. Yes, because technology allowing people to form words on a computer by picking individual letters based on just EEG already exists; that's more than yes/no. Likewise, there are experiments where people move robot arms etc. (i.e. like Neuralink from Musk, which isn't really new). You can even already buy commercial games where you control a cursor on the screen based on just 2 electrodes.


My understanding is that what Musk is doing isn't new in the sense of novel, but new in the sense that he's pushed the refinement of the technology to the outer envelope.

Which is a pattern with him (and a good one): he takes existing tech and pushes it to the outer envelope. Batteries, motors, and rockets all existed long before Musk was born, but he's done valuable and interesting things with them.

Someone had to make the technology ready for market.


> new in the sense that he's pushed the refinement of the technology

That's what I understood as well. For example, there's a multitude of practical problems with current in-brain electrodes, ranging from limits on their number, their lifetime, the lifetime of connectors, surgeries being quite difficult, possible problems with infections, etc. None of them are extremely hard to solve, but it basically requires a ton of money being thrown at it, and that seems to be Musk's plan: getting something like 3,000 electrodes, with the preamp/digitizer stage, implanted subcutaneously.


I was referring to detecting and distinguishing signals when someone is simply thinking of the word yes or no, similar to what you would expect from telepathy. But like I said in my previous comment, it’s an important step nonetheless for many reasons.


Yeah, I think the decoder is not state of the art for non-invasive decoders.

The encoder may be? (Though I believe we can do a lot better with invasive encoders.)

So the novel thing is really combining a decoder and an encoder.


What you're describing is simply detecting attention, not conscious thought (e.g. inner voice), which is what I was referring to.


Feels like you're trying to downplay this, but that's not really correct, imo. OK, the individual parts of the experiment (stimulation, decoding EEG) aren't exactly new, but for the rest I think it's quite novel (I might be wrong though). Why Tetris? Why not? It's not because only one bit gets sent now that they don't want to go any further. Also, don't forget this maybe took years of preliminary research before they got something working; in those years they might have been using something else, then switched to Tetris for publicity. Wrt being hackish: another commenter addressed that already; if you think this is hackish, you'd be surprised what has been going on in neuroscience for the past decades.

> It’s basically electrical communication

Well, yes, that's the whole point.


Indeed, it's a nice setup but it's not what you expect from the title.

You cannot put caps on three people and then start playing a game of Tetris by thinking and imagining alone. The game is on a screen for everyone. The block must be rotated or not; that's it. To make and send a decision, participants must take a very conscious action: looking at a specific flashing light. To interpret the signals, the participant must be paying attention to the occurrence of strange flashes in his eyesight.

What also bothers me a little about the pumped-up title is that, as far as I know, there is very little novelty here. All the interfaces were already developed. So what do we gain from this experiment?


"This is the first demonstration of ... a person being able to both receive and send information to others using only their brain."

I'm completely blown away by the gravity of this achievement. This is a somewhat unassuming article for one study among many, yet here we have something that, in my opinion, is essentially the equivalent of the discovery of how to create fire, or the invention of iron. Of course it will likely take many more decades at least before we can scale this and use it in a more practical way. Maybe I'm being melodramatic, but I really feel like we have very clearly just entered a new age as a civilization and species.


Melodramatic... perhaps. I don't see it as brain-to-brain. Eyes are clearly required to stare at the yes or no pulses for the senders.

Seems like an extremely low bandwidth way to digitise 'thoughts'.

Also a bit skeptical about this single bit of information being received directly into the brain. Essentially, it's zapping part of the brain to cause a perceived flash. It's an incredible gap still toward encoding something as complex as a real thought.


I wouldn't want to be an early adopter of this type of technology. Even plain old VR is still so new that we don't know what effects prolonged use will have on physiology and cognition, and direct stimulation of the brain, a system whose precise functioning still seems so largely unknown to us, seems all the more dangerous.

I feel that some day we will be able to comprehensively decode neural communication but I would liken our current state to something like surgery in the Victorian era. I wonder if these student volunteers see the situation differently than I do, or if they are just that brave.


VR really isn't new. It's quite literally been around for a few decades, longer if you count devices like the Sensorama[1], but the Oculus-style VR headsets that we're familiar with now have been around since the 90s (albeit at a significantly lower polygon count).

Plus there has already been lots of research about extended periods of altered realities, from people living inside mock space capsules through to simple experiments with people wearing special glasses[2]. Granted, I don't know of any research regarding extended use of Oculus-style VR for weeks or months at a time, but there has been a lot of research on how such technology can alter our mental perceptions beyond what we visually see (e.g. I cannot find a reference for this one, but I did read research about people using avatars of the opposite sex and how quickly they came to register that as their pseudo-physical body).

As an aside, this research reminds me a bit of Sword Art Online[3]. A Japanese light novel (which has been ported to different formats from anime through to computer games) and which is about "full dive" VR headsets.

[1] https://en.wikipedia.org/wiki/Sensorama

[2] https://en.wikipedia.org/wiki/Upside_down_goggles

[3] https://en.wikipedia.org/wiki/Sword_Art_Online


Look at Neurable, they've been around for years: http://neurable.com/

Tech is pretty good, and they're working on a VR headset which works with games. If you talk to the founder, his story is really interesting. He has an uncle who was crippled in a trucking accident, and his end goal is to create a version of this which can help cripples regain mobility. He's using games as his beachhead market to get traction, proof, a chance to iterate, $$$, etc. Pretty inspiring stuff, and "social-conscience" investing done right, in my opinion.



This is fine but doesn't present any jumps in technology. It's still just EEG. The stimulation sounds pretty dangerous; since it's non-invasive, it must have a pretty large stimulation zone.



