Evidence that dendrites actively process information in the brain (kurzweilai.net)
88 points by atpaino on Oct 30, 2013 | 31 comments


This is the very sort of interaction that leads me to be very, very pessimistic about ever seeing Moore's Law style runaway advancement in biotechnology.

Biology, it seems, is deeply unabstractable. I.e., as one moves up the levels of organization, one rarely (never?) reaches a point where a higher level can be fully modeled without also fully modeling each of the lower levels.

This is in sharp contrast to computer engineering, where, for example, one can model a processor with all practical accuracy by treating the individual transistors as idealized boolean logic. (As we move towards smaller and smaller transistors, this abstraction is threatening to become "leaky", but it has held thus far throughout the Moore-ian advancement.)

I suspect that there may be a limit to the degree of complexity humans can "manage", and thus, without the benefit of effective abstraction, there is a limit on the degree of advancement we can achieve in bending biology to our will.

(An example that speaks to this, in my mind, is the fact that our attempts to chemically tweak our own biochemistry (viz. drugs) are hilariously crude compared to the regulation that the body carries out on its own: we flood the system with a handful of chemicals and hope that drives the system as a whole in the general direction we want.)


I wouldn't be quite so pessimistic. We can capture a large part of neuronal variability using Hodgkin-Huxley type models (like the one they use in the paper). Dendritic spikes have been hypothesized to be involved in computations for quite a while; we just haven't had in vivo evidence. My take-home from the paper is that the voltage-dependent active properties of the dendritic tree act as an amplifier for synaptic events.
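
For anyone curious what a Hodgkin-Huxley type model looks like in practice, here's a minimal single-compartment sketch in Python with the classic squid-axon parameters. It's the textbook illustration, not the multi-compartment model from the paper:

    # Minimal single-compartment Hodgkin-Huxley neuron (forward Euler).
    # Classic squid-axon parameters; illustrative, not the paper's model.
    import numpy as np

    C_m = 1.0                            # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # peak conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

    # Voltage-dependent gating rates (note the removable singularities at
    # V = -40 and V = -55; a careful implementation should handle them).
    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T = 0.01, 50.0                   # time step and duration, ms
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting state
    peak = V
    for i in range(int(T / dt)):
        I_ext = 10.0 if 5.0 <= i * dt <= 45.0 else 0.0  # step current, uA/cm^2
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        peak = max(peak, V)
    print(f"peak membrane potential: {peak:.1f} mV")  # spikes overshoot ~+40 mV

Dendritic models like the one in the paper string many such compartments together along the reconstructed tree.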

This doesn't fundamentally change how we think neurons work; it just fills in one of the major gaps in our understanding of how relatively few synaptic events could lead to a somatic action potential--something that is very hard to explain if dendrites only passively integrate incoming synaptic events. To give an example, there are connections in the brain between excitatory neurons and inhibitory neurons that are known to be basically 1:1 with virtually no failure rate: one spike in the excitatory neuron will evoke a spike in the inhibitory neuron pretty much every single time. Based on what we know about synaptic failure rates and the total number of synaptic events we think are required to generate an action potential, this phenomenon is difficult to explain. Active dendritic properties as described in the paper provide a possible mechanism.

edit: I should say that dendritic spikes probably act as a kind of conditional amplifier for synaptic events. The conditions that come to mind are spatial and temporal proximity. This does complicate the idea that relief of the NMDA Mg2+ block is used to detect coincidence of somatic action potentials with presynaptic glutamate release, suggesting that the Mg2+ block may also be used to detect coincidence of a single synaptic event with other nearby synaptic events.


Bioengineering and synthetic biology will never be like electrical engineering, but some of the differences can be exploited.

For one, you can use directed evolution to optimize a biological system without relying on rational design.

For another, biological development is massively flexible. Consider that when you evolve a longer arm, you don't need to mutate genes to ensure you have longer muscles, tendons, nerves, etc. In fact, you can grow an entirely new arm by just initiating a limb bud at the correct time in development. By contrast, in electrical engineering all design aspects are "rational" - when you change one part, you must change the other parts to compensate.

Modularity and reductionism are "problems" only in the sense that they reflect differences between our engineering strategy and the substrate we are trying to engineer. We must discover the engineering principles that match the substrate.


As you say, even futuristic bioengineering will never be like electrical engineering, but it does have parallels with software development.

It feels somewhat similar to declarative programming - our genes contain a large bunch of code that, in effect, says 'if you're seeing chemical X (which should mean that you're on the edge of a limb bud), then produce chemical Y/grow differently/become a skin cell'. And a bug in some other, far-away code can make an embryo grow, for example, a sixth toe, by invoking already existing code that will connect it to your foot and add toenails.

And we have some idea of how to work with such code - sure, it's far away from what we'd call well engineered or intelligently designed code; it's a big horrible pile of buggy spaghetti code that mostly works in most conditions, if we discount the large portion of cases where the egg doesn't even develop into a valid embryo. And there's 'bug parity', where fixing a single-item bug is likely to create another bug elsewhere because that part relied on the first part being always buggy. And, of course, it's undocumented obfuscated 'assembly code'. But the advantage is that it's only a single codebase (although even larger than healthcare.gov) with no 'completely new and different' releases coming, so all of us together have to learn it only once, and it is almost the same codebase that we'd also use to alter our corn, cows, flu and mosquitoes.


Biology may be very "unabstractable", but that doesn't mean we can't still learn enough about its high-level behavior to create practical systems in its image. For example, it is unlikely that whatever structure or property of the brain makes it intelligent exists only at the molecular level, so we likely won't need to simulate the brain at that low a level to create an intelligent machine.


I suppose the question is: are there appropriate abstractions which would bring conceptual simplicity to these systems? I find it hard to believe that there are not. A system without such a property is more difficult for natural selection to operate upon. The changes likely to happen to a lineage over the course of time have presumably evolved to move the lineage closer to a locally optimal phenotype. I do not think an incompressible system would display such dynamics; it would be chaotic, and small changes in the genotype would lead to divergent phenotypes with respect to the fitness landscape. Perhaps the appropriate abstractions are spread out both temporally and spatially.


The problem is that our knowledge is limited. Once we have a better understanding of certain biological processes, we may be able to find the right abstractions.


You might be missing the point. Biological systems are the result of natural selection, and the processes of natural selection are fundamentally irrational. Any biological system, including our own brain, is essentially the outcome of a random, irrational process. There may be cases where we can arrive at some abstractions, but such cases may be the exception, not the norm.


If "irrational" assumes sentience, OK. If the randomness means "we can't possibly predict all the elements that went into X happening", OK. But neither of those matters for making an abstraction: the biological components didn't evolve outside of chemistry and physics. We can argue about how much detail would be needed, but is there really anything we can't abstract?


If you are interested in making a very stretched analogy, demonstrating dendritic information processing is like realizing that a CPU's transistor is actually itself a little CPU, capable of quite sophisticated computation. In fact, most of a neuron's computation may be carried out by the dendrites. Don't get tied up in the over-simplified model of dendrite=antenna, soma=computer, axon=wires.

Active dendritic information processing has, for several decades, been theorized and modeled. The combination of two-photon microscopy and more "classical" electrophysiology techniques (like patch clamping used in this article) is finally opening the theories to experimentation.

[Not to be too critical, but this paper is far from the first to experimentally investigate dendritic information processing. I, personally, am glad some segment of HN is interested in neural computation.]


I like this connection between memristors and neurons: "From an information processing perspective, this tutorial shows that synapses are locally-passive memristors, and that neurons are made of locally-active memristors."[1]

1. http://iopscience.iop.org/0957-4484/24/38/383001


Amazing to have some evidence of the processing capabilities dendrites could possess. Though this only makes our understanding of the brain seem that much slimmer.

With billions of neurons and dendrites interacting all the time, if each is compartmentalized we're going to have a difficult time coming up with a model to replicate the effects. Which, as I understand it, is our goal in an effort to better understand how the brain works overall.

Still, with this insight it's clear we've got some immensely powerful hardware bouncing around between our ears. What a truly brilliant machine.


I wouldn't be discouraged - this is actually a kind of computation that we could "read" by looking at the brain.

The dendritic 'computations' would depend on the geometry of the dendrite and the location of synaptic connections; so the current projects that want to slice a brain into thin slices, scan them, and reconstruct the neurons would be able to build an exact map for that type of computation, simply by automatically converting each dendrite's connection geometry to a formula/model of that dendritic tree.
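
Something like this toy sketch, where the reconstructed geometry becomes a nested formula. It's purely passive (the attenuation numbers standing in for branch length/diameter are invented, and it ignores the active conductances the article is about), but it shows the shape of the conversion:

    # Toy reduction of a reconstructed dendritic tree to a nested formula.
    # Each branch attenuates the sum of its local synaptic input and its
    # children's outputs; all attenuation values here are made up.
    class Branch:
        def __init__(self, name, attenuation, children=()):
            self.name = name                # label from the reconstruction
            self.attenuation = attenuation  # 0..1, stand-in for length/diameter
            self.children = children

        def output(self, synaptic_input):
            local = synaptic_input.get(self.name, 0.0)
            from_children = sum(c.output(synaptic_input) for c in self.children)
            return self.attenuation * (local + from_children)

    # two distal branches converging on one proximal branch at the soma
    tree = Branch("proximal", 0.9, children=(
        Branch("distal_a", 0.8),
        Branch("distal_b", 0.8),
    ))
    print(tree.output({"distal_a": 1.0, "distal_b": 1.0}))  # 0.9 * (0.8 + 0.8) = 1.44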


Novice question:

A given dendrite has a voltage rise, presumably because of neurotransmitter from a neighboring neuron. That voltage increase will stay local unless it is adequate (as it spreads and dissipates on its way to the cell body) to trigger an action potential.

If they showed an action potential starting at the dendrite, then I would expect it to eventually move to the rest of the cell body and then I wouldn't expect the language about 'not seeing the rest of the cell light up'. So, how did they measure/show actual processing? I'm missing that part.


The voltage change (i.e., depolarization) is not strictly local.

In some cases, depending on the actual geometry of the dendrite and the particular complement of voltage-activated ion channels, the voltage change as a result of neurotransmitter release might lead to quite a distributed depolarization even without triggering a dendritic action potential.

Conversely, an action potential initiated in the dendrites doesn't necessarily faithfully propagate to the cell body (soma). This is also dependent on the local geometry and ion channel distribution. Dendritic action potentials are not all-or-nothing events like those of the axon.

To answer your question: Smith et al. did observe dendritic action potentials (spikes) by measuring a proxy: calcium influx indicated by a fluorescent dye that changes its fluorescence when bound to calcium. This calcium influx, and by extension the dendritic spike, is what was spatially restricted. The authors are extrapolating information processing from the spatially restricted dendritic spike.


Thanks for the answer.

So, just to close the loop and make sure I got it, a couple of follow-ups:

'processing' in this case would refer to integrating signals/voltages/neurotransmitters from more than one neighboring neuron?

How do they show that this was processing/integrating and not just particular sensitivity to one external stimulus?

For 'processing' to be meaningful, would it not have to share the result? In other words, propagate the action potential or release neurotransmitter?


I'm not a biologist and the parent poster seems to know this in far more detail, but from a bunch of neuroscience lectures on how dendritic spikes travel up to the soma, my takeaway (as a computer guy) was 'hmmm, it looks like a system implemented in FPGA layouts - the geometry features can work as logic gates or delays'; and 'hmmm, it looks like I could design a dendritic tree geometry for almost any boolean function of the inputs, so any computer-chip-like functionality could be built out of them'.

I mean, if I needed (A xor B) and (C or D), then my impression is that a single neuron with rather simple geometry and appropriate dendritic connections could calculate that, in the sense that this neuron would spike iff the A, B, C, D neurons spiked as required by that formula; but since neurons tend to have many, many more connections, each neuron is technically capable of much more complex calculations, even if many of them in the end do something like 'spike iff any 100+ of my 1000 inputs are spiking'.
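
A minimal sketch of that view, with threshold "subunits" standing in for dendritic branches (all weights and thresholds here are made up, and XOR needs two stages since it isn't linearly separable):

    # Toy "dendritic neuron": branch subunits as threshold gates, soma sums them.
    def step(x):  # fires iff the summed input reaches threshold
        return 1 if x >= 0 else 0

    def dendritic_neuron(a, b, c, d):
        s_or  = step(a + b - 1)  # left branch: A or B
        s_and = step(a + b - 2)  # same branch, higher threshold: A and B
        s_cd  = step(c + d - 1)  # right branch: C or D
        # soma: fire iff the left branch is active but not saturated (= xor)
        # and the right branch is active
        return step(s_or - 2 * s_and + s_cd - 2)

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                for d in (0, 1):
                    assert dendritic_neuron(a, b, c, d) == ((a ^ b) and (c | d))
    print("matches (A xor B) and (C or D) on all 16 input combinations")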

It's not as simple as that, because timing is also relevant; there were examples of known dendritic structures that do "processing" in the sense that a neuron spikes if it receives A slightly before B, but doesn't spike if it receives A slightly after B, so it can be used for detecting motion direction and such.
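
The delay-line version of that fits in a few lines: a built-in conduction delay on the A input means the soma sees the two signals coincide only when A leads B. The numbers here are invented:

    # Toy direction-selective unit: input A is delayed on its way to the
    # soma, so coincidence (and a spike) happens only when A leads B.
    def spikes(t_a, t_b, delay=5.0, window=1.0):
        return abs((t_a + delay) - t_b) <= window  # arrival times coincide?

    print(spikes(t_a=0.0, t_b=5.0))  # True: A led B by ~delay -> spike
    print(spikes(t_a=5.0, t_b=0.0))  # False: A lagged B -> no spike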


"[I]t looks I could design a dendritic tree geometry for almost any boolean function of the inputs".

That's my outlook on the structure-function link between dendritic morphology and dendritic information processing, with the modification that I'd not restrict it to boolean functions. There are very many more types of functions, linear and non-linear, that can conceivably be built out of neuronal dendrites.

And I like the nuance of your second paragraph. There are all sorts of wacky, complex calculations one can imagine being possible, but any one neuron may implement only a subset. Now, across the tens of billions of neurons in a mammalian nervous system...

You're spot on with regard to timing, too. All this "information processing" with branched dendrites + non-linear ion channels is greatly expanded with a timing component.


Well, AFAIK you don't need anything more than boolean functions, since if we're talking about single spikes (not spike frequency), then there either is or isn't a spike; you don't get some spikes larger than others.

The linear/non-digital functions IMHO seem to be used as implementation details - for example, a neuron that should 'fire iff 1+ VIP inputs fire or 3+ normal inputs fire' can be implemented in wetware by giving the VIP inputs a synaptic connection three times as strong, summing all input values in the dendrite, and adjusting the firing threshold appropriately (i.e. a linear function); but in silicon the same thing can (should?) be implemented as a boolean function / logic gates.
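
In code, that rule collapses to one weighted sum against a threshold (weights and threshold chosen to match the example):

    # "fire iff 1+ VIP inputs fire or 3+ normal inputs fire" as a single
    # linear threshold: VIP weight 3, normal weight 1, threshold 3.
    def soma_fires(vip_active, normal_active):
        return 3 * vip_active + 1 * normal_active >= 3

    print(soma_fires(vip_active=1, normal_active=0))  # True: one VIP suffices
    print(soma_fires(vip_active=0, normal_active=2))  # False: two normal don't
    print(soma_fires(vip_active=0, normal_active=3))  # True: three normal do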


I hope no one interpreted my statements to suggest that anything you said was wrong. Just trying to fill in details.

I merely want to avoid prematurely narrowing the range of functions that are possible. If we, for the moment, think of the computation of a single neuron as a neural network, then the spike/no-spike decision would be in the last layer, and a whole host of linear/non-linear (some not necessarily boolean) functions could be implemented by the dendrites. And we already know some single-neuron processing behaves in a non-boolean manner.
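
In code, that two-layer view might look like this minimal sketch. The sigmoid subunits and every weight are illustrative assumptions (in the spirit of published two-layer subunit models), not anything established in this thread:

    # A neuron as a two-layer network: each dendritic subunit applies its
    # own graded non-linearity; only the soma's decision is spike/no-spike.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, subunit_weights, soma_weights, threshold):
        # layer 1: graded (non-boolean) subunit responses
        subunits = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
                    for ws in subunit_weights]
        # layer 2: soma thresholds the weighted sum of subunit responses
        return sum(w * s for w, s in zip(soma_weights, subunits)) >= threshold

    print(neuron(inputs=[1, 0, 1, 1],
                 subunit_weights=[[2.0, 2.0, 0.0, 0.0],   # subunit on synapses 1-2
                                  [0.0, 0.0, 2.0, 2.0]],  # subunit on synapses 3-4
                 soma_weights=[1.0, 1.0],
                 threshold=1.5))  # True: both subunits respond strongly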

Be aware, just because arbitrarily powerful logic could be constructed solely out of boolean components (I don't even know if this is true. Isn't this kinda what is going on in an FPGA?) doesn't mean that neural hardware is purposed the same way. It may very well be analog, at least for some computations.

And to speak to your second paragraph, I should declare my personal biases. As a dendritic physiologist, I wasn't much interested in whole-cell firing characteristics, but in the dendrite's sub-threshold behavior.

How do a smattering of synaptic inputs, each with varying strengths, interact within the complex electrophysiological scaffolding provided by a branched dendrite layered with non-uniform, non-linear ion channel distributions?

So my perspective is somewhat inverted: To me, neuron firing is the implementation detail! <smilie face>


"Processing" doesn't have a consensus definition in the neuroscience community, but a neuroscientist could, with good justification, use that definition if they had a particular experimental scenario. In this article, Smith, et al., use a more narrow definition based on well-known response in which a neuron is selectively-sensitive to a bar of light at a particular angle. How a neuron becomes selectively-sensitive (i.e., how it fires to that angle and not to all the others) is the "processing".

It is overly simplified to say that for the processing to be useful it has to share the result. If a particular part of the dendritic sub-tree were stimulated enough, it could bring the neuron into an electrophysiological state in which succeeding synaptic input would cause a wholly different computation to occur. Thus, you can see the importance of timing discussed in the sister comment.


Reminds me of Roger Penrose's assertion in Shadows of the Mind that the microtubules within neurons might be doing the work, making each neuron into a metaphorical computer with millions of transistors. This is a different idea but the same conclusion: neurons aren't the lowest level of computational structure in the brain, which means we have been underestimating the complexity and power of the brain by many orders of magnitude.


We've always known that we need to model the 10^14 synapses (and their strengths) that connect neurons. The dendritic trees that connect these synapses to a soma have at most another 10^14 branching points, so modeling them all explicitly, in the worst case, only doubles the model size; but it might also give significant possibilities for optimization, if these 'dendritic' calculations can be modeled as a simple formula.


And this is, somewhat ironically, very bad news for Mr. Kurzweil.


Actually, this may be very good news for Kurzweil. In his last book "How to Create a Mind", he lays out a theory centered around a "pattern-recognizer" unit that is repeated throughout the columns and regions of the neocortex. In his book, he assumed it to be made up of several neurons wired in a specific manner, but if each neuron can do some hierarchical processing of its own then the pattern-recognizer might be reducible to a single neuron.


I think it's actually quite promising. We're good at cell biology but poor at systems biology. If we can reduce neuroscience to understanding what types of neurons exist and how they function, that is probably very tractable. In comparison, even measuring the connectome is an insane problem, and modeling billions of neurons to reveal brain function seems intractable.


It's very interesting research, but I have to say I'd have a hard time being clinically detached with regards to probing a live mouse and working with it, knowing I was going to kill it when my testing was done.


My cousin does biomedical research. She said it gets much, much easier.


In my experience/opinion, the worst part is when you try and try and get no good data at all from one.


I hope people in the connectome camp take this to heart. I strongly doubt that modeling the connections of neurons will reveal the way the brain works. The mouse and rat brains are very similar in connectivity, but the behavior of the mouse and rat are quite different. One explanation is that the individual neurons are actually processing information differently, and so differences arise out of neuron functionality rather than connectivity. This research bolsters the argument that meaningful information processing occurs within individual neurons, and even at the sub-cellular level.


If this is the right way, then it is exactly the connectome camp that will reveal the way the brain works: all their methodologies for extracting connectomes from brain samples also (by necessity) reconstruct the whole dendritic tree structure through which each synapse is linked, so all the parameters for these dendritic computations would be included in their data.



