> Sometimes, the major advances come when fresh ideas are infused from the outside.
I agree with this largely; over-specialization leads to myopia (often accompanied by emotional attachment to one's work).
> In Darwin's case it was his geological work that inspired his theory.
If you read On the Origin of Species, you'll see that Darwin started from very simple observations about cross-pollination leading to hybrid plant strains. He spent years studying various species of animals. In the book, he starts out very modestly, following step by step from his Christian foundations, without making any outrageous claims. The fossils he collected on his Beagle expedition sparked his interest in the field and served as good evidence for his theory.
> In concurrency maybe it will be ideas from neuroscience.
Unlikely, considering how little we know about the neocortex. The brain is not primarily a computation machine at all; it's a hierarchical memory system that makes mild extrapolations. There is some interest in applying what we know to computer science, but I've not seen anything concrete so far (read: code, not just abstract papers).
> We are only at the beginning of this paradigm shift to massively multi-core CPUs.
From the manufacturing point of view, it makes the most sense. It's probably too expensive to design and manufacture a single core in which all the transistors dance to a very high clock frequency, not to mention the power consumption, heat dissipation, and failures. With multiple cores, you have the flexibility to switch off a few cores to save power, run them at different clock speeds, and cope with failures. Even from the point of view of Linux, scheduling tons of tasks on one core can get very complicated.
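To make that flexibility concrete, here's a minimal sketch (my own illustration, not anything from the talk) of toggling a core and its clock policy through the Linux sysfs interface; it assumes a kernel with CPU hotplug and cpufreq enabled, root privileges, and cpu3/cpu0 are just example cores:

    // Rough sketch: take one core offline and drop another to a
    // power-saving governor via Linux sysfs. Assumes CPU hotplug and
    // cpufreq are enabled and we're running as root; cpu3 and cpu0
    // are arbitrary example cores.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Switch core 3 off to save power (write "1" to bring it back online).
        if err := os.WriteFile("/sys/devices/system/cpu/cpu3/online", []byte("0"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, "offline cpu3:", err)
        }

        // Run core 0 at a lower clock by switching its cpufreq governor.
        gov := "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
        if err := os.WriteFile(gov, []byte("powersave"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, "set governor:", err)
        }
    }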
> In HW there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, and projects like Parallella.
Of course, but I don't speculate much about the distant future. The fact of the matter is that silicon-based x86 CPUs will rule commodity hardware for the foreseeable future.
> [...]
All this speculation is fine. Nothing is going to happen overnight; in the best case, we'll see an announcement about a new concurrent language on HN tomorrow, which might turn into a real language with users after 10 years of work ;) I'll probably participate and write patches for it.
For the record, Go (which is considered "new") is over 5 years old now.
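For anyone who hasn't looked at it, this is roughly what Go's concurrency model boils down to: goroutines multiplexed onto OS threads, communicating over channels. The worker-pool shape below is just my own illustration:

    // A minimal, self-contained example of goroutines and channels:
    // one worker per core, squaring numbers as a stand-in for real work.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        var wg sync.WaitGroup

        // Spawn one worker goroutine per available core.
        for w := 0; w < runtime.NumCPU(); w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    results <- j * j
                }
            }()
        }

        // Feed the workers, then close the channel so they can exit.
        go func() {
            for i := 1; i <= 8; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        // Close results once every worker has finished.
        go func() {
            wg.Wait()
            close(results)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }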
I think you missed my point about Darwin. Darwin was inspired by the geological theory of gradualism, in which small changes add up over long time periods. It was this outside theory, applied to biology, that helped him shape his radical new theory.
Right now threads are the only game in town, and I think you're right: for existing hardware, there probably won't be any magic solution, at least not without some major tradeoff, like the performance hit you get with Erlang.
I was thinking about neuromorphic hardware when I mentioned neuroscience. From what I hear, the software side there is more analogous to an HDL.
Go is a great stopgap for existing thread-based HW. But if the goal is to achieve strong AI, we're going to need some outside inspiration, possibly from a hierarchical memory system, a massively parallel one.
I wish I could offer less speculation and more solid ideas. Hopefully someone here on HN will. I think that was the point of the video: to inspire.