The most interesting mistake with the singularity concept is the idea that you can get a linear return on intelligence. A computer with x processing power might be able to compute 10 moves deep in a game, and a system with 10x the power might get there in a tenth of the time, but it doesn't follow that the extra power buys you an 11th move.
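To make that concrete, here's a rough back-of-the-envelope sketch in Python (the branching factor is a made-up, roughly chess-like number, purely for illustration):

    import math

    # Why 10x compute doesn't buy 10x search depth: the game tree grows exponentially.
    branching_factor = 30      # hypothetical, roughly chess-like; purely illustrative
    baseline_depth = 10        # moves the slower machine can search

    baseline_nodes = branching_factor ** baseline_depth
    faster_nodes = 10 * baseline_nodes                  # 10x the processing power

    # Extra depth bought by 10x compute is only log_b(10).
    extra_depth = math.log(10, branching_factor)
    print("extra moves from 10x compute: %.2f" % extra_depth)   # ~0.68, less than one full move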
Another mistake is the assumption that the universe can support "unlimited" intelligence. There might be a lot of room at the bottom, but nothing says computing power can't hit a fundamental limit at some point in the not too distant future.
PS: When people assume exponential growth continues indefinitely, they tend to ignore the growth limiters that only kick in as you increase the scale.
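In case it helps, a toy comparison of unchecked exponential growth against growth that hits a limiter at scale (the growth rate and the carrying capacity are invented numbers, just to show the shape of the argument):

    # Exponential growth vs. growth that slows as it approaches a capacity limit.
    def exponential(x, r=0.5):
        return x * (1 + r)

    def limited(x, r=0.5, K=1000.0):
        # logistic-style growth: the limiter only matters once x gets near K
        return x + r * x * (1 - x / K)

    x_exp = x_lim = 1.0
    for _ in range(30):
        x_exp = exponential(x_exp)
        x_lim = limited(x_lim)

    print("unlimited after 30 steps: %.0f" % x_exp)   # ~191,751
    print("limited after 30 steps:   %.0f" % x_lim)   # levels off just under 1000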
"Another mistake is the assumption that the universe can support "unlimited" intelegence."
One thinker who seeks to address this point is Frank J. Tipler with his notion of the 'Omega Point' - a theoretical point after which the contraction of the universe means that the available computational power, while still finite at any instant, increases faster than the remaining time runs out.
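As a toy version of that claim (the specific form of the processing rate P below is my own stand-in, not Tipler's model): if the available rate grows like P(t) = c / (t_end - t)^2 as the final time t_end approaches, the total computation performed diverges even though the remaining time is finite:

    \[
    \int_{t_0}^{t_{\mathrm{end}}} P(t)\,dt
      = \int_{t_0}^{t_{\mathrm{end}}} \frac{c}{(t_{\mathrm{end}}-t)^{2}}\,dt
      = \left.\frac{c}{t_{\mathrm{end}}-t}\right|_{t_0}^{t_{\mathrm{end}}}
      = \infty
    \]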
I don't know how much stock I put in the concept, myself, but at least somebody's thinking about it!
Read up on it a little - the guy may very well be bonkers, but it's certainly worth a look if you're into mind-bending metaphysics:
The original hypothesis made specific predictions (regarding the Higgs, if I remember correctly) which have turned out not to be true. Tipler revised accordingly, rather than doing something new, which seems like a bad sign, scientific-method-wise. :)
Who are you quoting? I'm not exceedingly well-versed on the subject, but I've read Kurzweil's books and I don't recall either of those ideas being assumed.
There was a time when a person could have the entirety of human knowledge on their bookshelf, but then Gutenberg came along and screwed that all up. Since then, even the most ambitious person has had to settle for less than complete knowledge. We rely on experts in their respective fields to make their knowledge palatable and useful for everyone else.
Is the singularity the point where this ceases to be possible, or the point where we rely on the expertise of machines? The latter seems possible, at least, but I don't understand how the former is supposed to come about. People place a pretty high value on remaining naive, and there will probably always be a market for that.
Fair point. I was referring to the canonical works (i.e. everything by Plato, but not everything about Plato), and was thinking about the period just before Gutenberg. The available body of knowledge was pretty irregular before Gutenberg (the burning of the Great Library, etc.), but monotonically increased afterwards.
My question stands, though: haven't we already hit our individual saturation point for knowledge? What is the Singularity, if not that?
That seems like an inversion of how I've always seen it described. As far as I understand it, the Singularity is when the rate of advancement outstrips our ability to keep up. If we already can't keep up, but technology fixes that sometime in the future, I don't know if that's the same thing.
For decades now, the way to keep up has been to focus on an ever-narrowing field of knowledge. I don't think that's controversial. One description of the singularity is that it's when the maximum intelligence of an agent (human, computer, whatever) begins to significantly increase due to increasing computing power. If we're using that definition, then it fits with your "saturation point" description, since we'd be increasing the amount of information that an agent can handle. The "singularity" aspect is from the perspective of an unmodified human, not the upgraded agents themselves.
I find the ideas of the Singularity movement greatly lacking in imagination. I think when we discover the true nature of consciousness, life, and evolution (which seem to be related), we'll be in for quite a fundamental philosophical surprise.
The field of cognitive science (linguistics + philosophy + computer science + neuroscience + biology etc.) has actually discovered the underpinnings more or less. Consciousness is based on our bodily interaction with the world. Thought uses experiential metaphors from basic physical interactions with the world such as light/dark, up/down, etc. Read "Philosophy in the Flesh" by George Lakoff.
Thought uses experiential metaphors from basic physical interactions with the world such as light/dark, up/down, etc.
That certainly seems part of it. I've read Lakoff, and second your recommendation to anyone interested in this stuff. I'd also recommend Pinker's "Stuff of Thought" (http://www.amazon.com/dp/0143114247/) for this topic.
AI will never come from digital machines
If you consider AI as limited to human cognition, sure. I don't think it's anyone's goal to clone human cognition, however - humans are full of biases, survival instincts, tribal intuitions, etc.
Then I think we can agree that a subset of human cognition could also be very interesting, and I imagine the subset of cognition which can be "hoisted" without an embodied mind is quite significant. For example, I think we can achieve the level of cognition of congenital deafblindness (those born without vision and hearing), which is pretty extensive.
Yes but simulating the brain will never be the same as the actual "wetware" in operation. To my understanding, the simplistic reason that hard AI has failed and always will - while it uses digital computation as a platform - is that it's a HARDWARE deficiency. I'm generalizing but AI people all seem to be algorithm gurus who feel that the next algorithmic breakthrough will be "the one" to create an AI. I think what they miss is that the physical manifestation of brain and body are in fact required to be conscious. Disembodied consciousness cannot exist since conscious beings (animals) have complex biological systems known as bodies that enable what we view as intelligence.
Yes, computers are infinitely better at rule-based computation than animals. No, they cannot pass simple tests of consciousness that a mouse can, and there's really no prospect of it happening because a better algorithm is created; fact is, animals and even insects don't think based on algorithms. Algorithms are models of the real world, not the real thing.
Being conscious means using one's eyes/ears/mouth/nose/skin to feel, hear, smell, speak to interact with the world around you. It means having agency and not just checking a set of pre-written rules in order to decide what to do next. It's a biological system that enables that.
>Yes but simulating the brain will never be the same as the actual "wetware" in operation. To my understanding, the simplistic reason that hard AI has failed and always will - while it uses digital computation as a platform - is that it's a HARDWARE deficiency.
Are you saying that hardware deficiencies are never resolved over time?
>I'm generalizing but AI people all seem to be algorithm gurus who feel that the next algorithmic breakthrough will be "the one" to create an AI.
I study AI at Stanford and I've never met anyone who thinks that.
>I think what they miss is that the physical manifestation of brain and body are in fact required to be conscious. Disembodied consciousness cannot exist since conscious beings (animals) have complex biological systems known as bodies that enable what we view as intelligence.
You need to justify that. The reasonable argument closest to what you are saying is that sensory stimuli are required for intelligence. This isn't hard for a computer to simulate.
>Yes, computers are infinitely better at rule-based computation than animals. No, they cannot pass simple tests of consciousness that a mouse can,
Like what?
>and there's really no prospect of it happening because a better algorithm is created; fact is, animals and even insects don't think based on algorithms.
This needs justification. If they aren't computing, then what are they doing? You're looking down into the dualism abyss...
>Algorithms are models of the real world, not the real thing.
No, algorithms are neither. Algorithms are sequences (trees, if you prefer) of instructions.
>Being conscious means using one's eyes/ears/mouth/nose/skin to feel, hear, smell, speak to interact with the world around you.
So are blind people less conscious? What about those with no sense of smell?
>It means having agency and not just checking a set of pre-written rules in order to decide what to do next.
What makes those two things mutually exclusive? When I make a decision I consult my values and my goals and attempt to make the decision that gets me closest to my goals without compromising my values. Where do my values come from? Some of them are based on instinct (i.e. genes, i.e. hard coded) and some come from parents/society. My goals? I'd say most of them are just milestones on the path to other goals, and then that there are a few fundamental goals that come from biology (e.g. I fundamentally want to reproduce, this means that I need to attract a mate and earn enough to support a family, this is made much easier by a good education, etc.).
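As a toy illustration of what I mean (all the option names, values, and scores below are invented), "consulting values and goals" looks perfectly computable:

    # Toy model: pick the option with the most goal progress that doesn't
    # trample a strongly held value. Every name and number here is made up.
    values = {"honesty": 1.0, "family_time": 0.8}

    options = {
        "take_overseas_job": {"goal_progress": 0.9, "violates": ["family_time"]},
        "take_local_job":    {"goal_progress": 0.6, "violates": []},
        "stay_put":          {"goal_progress": 0.2, "violates": []},
    }

    def acceptable(option):
        # "without compromising my values": reject options that violate a value I weight highly
        return all(values.get(v, 0.0) < 0.7 for v in option["violates"])

    choice = max(
        (name for name, o in options.items() if acceptable(o)),
        key=lambda name: options[name]["goal_progress"],
    )
    print(choice)   # take_local_job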
>It's a biological system that enables that.
Why? And what makes you think we couldn't build a biological computer (other than by the traditional method ;-) )?
> Computers [...] cannot pass simple tests of consciousness that a mouse can
Like what?
Vision, for one. Also, running.
If they aren't computing, then what are they doing? You're looking down into the dualism abyss...
I don't think dualism is what he was thinking of, but rather Searle's Chinese Room argument. Coming from a CS-heavy Philosophy-light background, it took me a long time to wrap my mind around what Searle was saying there, but it's a pretty solid point nonetheless. The best way I've found to explain it in CS terms is that while theoretically, intelligence could be modeled using Turing machines, in practice, we will never have a machine fast enough to do so.
When I make a decision I consult my values and my goals and attempt to make the decision that gets me closest to my goals without compromising my values
We have no idea how you or I make decisions, and the notion that we can intuit how decisions are made is a huge road block in even beginning to think about decision-making. Lakoff, among others, does a great job explaining why "values", "goals", and "morals" are simply justifications we create after we've made a decision.
> And what makes you think we couldn't build a biological computer
We can, and we should - but let's agree that it won't be a Turing machine.
>>> Computers [...] cannot pass simple tests of consciousness that a mouse can
>>Like what?
>Vision, for one. Also, running.
Um, ever heard of a digital camera? Or if you mean interpreting images, guess what, they can do that too. Computer vision is a subfield of AI (arguably; I believe some say it's signal processing). Also, blind people are still conscious, so...
And running? Sorry, I hate to take this tone, but WTF are you talking about? Robots can run, and running has nothing to do with consciousness.
>>If they aren't computing, then what are they doing? You're looking down into the dualism abyss...
>I don't think dualism is what he was thinking of, but rather Searle's Chinese Room argument. Coming from a CS-heavy Philosophy-light background, it took me a long time to wrap my mind around what Searle was saying there, but it's a pretty solid point nonetheless. The best way I've found to explain it in CS terms is that while theoretically, intelligence could be modeled using Turing machines, in practice, we will never have a machine fast enough to do so.
You've totally missed the point of the Chinese Room. The point is that something can appear to understand without actually doing anything we'd call understanding. In other words, ~Ax(OutwardlyIntelligent(x) -> ActuallyIntelligent(x)). It's an argument against the sufficiency of the Turing Test, that's it.
And saying we'll never have a machine fast enough? The only limit on the speed of computation is the speed of light (and the physical size of the universe limits parallelization). But these values also bound our brains. In fact, iirc neurons are actually pretty slow relative to transistors.
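For a rough sense of scale (both figures below are order-of-magnitude only, not precise measurements):

    # Back-of-the-envelope speed comparison between neurons and transistors.
    neuron_max_firing_rate_hz = 1e2     # neurons fire at most on the order of hundreds of Hz
    transistor_switch_rate_hz = 1e9     # modern logic clocks in on the order of GHz

    ratio = transistor_switch_rate_hz / neuron_max_firing_rate_hz
    print("transistors are roughly %.0e times faster" % ratio)   # ~1e+07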
>>When I make a decision I consult my values and my goals and attempt to make the decision that gets me closest to my goals without compromising my values
>We have no idea how you or I make decisions, and the notion that we can intuit how decisions are made is a huge road block in even beginning to think about decision-making. Lakoff, among others, does a great job explaining why "values", "goals", and "morals" are simply justifications we create after we've made a decision.
Okay, maybe my deconstruction is correct and maybe it isn't. Whatever. Unless you can explain to me why no such deconstruction is possible, I have no reason to accept that deciding is not computation. In other words, if it's not computation, then what is it?
>> And what makes you think we couldn't build a biological computer
>We can, and we should - but let's agree that it won't be a Turing machine.
Of course it won't. It will be less powerful, because Turing machines have infinite memory and the universe is finite. It'll be a DFA, just like all current computers.
One that isn't close to being able to process images the way rats can.
> Robots can run, and running has nothing to do with consciousness.
An animal can learn a new gait if it loses a leg. As far as I'm aware, no robot can do that. I was giving examples of what mice can do, not of consciousness.
> You've totally missed the point of the Chinese Room [...]
Good point. Good thing I dropped the philosophy major :)
> The only limit on the speed of computation is the speed of light (and the physical size of the universe limits parallelization).
Also, our architecture. The point I've heard Searle argue (not the Chinese Room, I guess) is that the computers we build now have an architecture that is about as likely to simulate consciousness, given the constraints of the universe, as a convoluted system of telegraphs.
Unless you can explain to me why no such deconstruction is possible, I have no reason to accept that deciding is not computation. In other words, if it's not computation, then what is it?
Let me give my point another shot - deconstruction of consciousness is possible, but not in a way which can be simulated by our current machine architecture.
It'll be a DFA, just like all current computers.
Do you suppose a brain is a DFA? Which do you suppose has more states - all of the computers in the world, or a single mouse brain?
>One that isn't close to being able to process images the way rats can.
Can you justify that? CV systems can track motion, detect objects, do OCR, etc. I don't know much about rats, but I'd imagine that modern CV systems are more powerful than rats.
>An animal can learn a new gait if it loses a leg. As far as I'm aware, no robot can do that. I was giving examples of what mice can do, not of consciousness.
Uh, okay. If you want to see organic-looking gait, look up Big Dog. Here's a video, deep linked to a part where it stumbles on ice and regains its balance: http://www.youtube.com/watch?v=cHJJQ0zNNOM#t=1m25s. In general, neural nets often exhibit organic-seeming behavior. In any case, I don't think that mammalian joint structure (which, and I'm just guessing here, is probably what creates the kind of motion we think of as organic) has anything to do with consciousness.
>Also, our architecture. The point I've heard Searle argue (not the Chinese Room, I guess) is that the computers we build now have an architecture that is about as likely to simulate consciousness, given the constraints of the universe, as a convoluted system of telegraphs.
Well, we could build different hardware, but it seems much more interesting to simulate a mind in software, in which case architecture doesn't matter.
>Let me give my point another shot - deconstruction of consciousness is possible, but not in a way which can be simulated by our current machine architecture.
Why not? Sure, our processors are a bit slow, but that's not really an argument for fundamental impossibility.
>Do you suppose a brain is a DFA? Which do you suppose has more states - all of the computers in the world, or a single mouse brain?
Not that it's especially meaningful, but I'd imagine all the computers in the world do. To try and give you a better answer, I googled "number of neurons in a mouse brain" and the first result was "IBM's BlueGene L supercomputer simulates half a mouse brain" from 2007: http://www.engadget.com/2007/04/29/ibms-bluegene-l-supercomp...
Of course it won't. It will be less powerful, because Turing machines have infinite memory and the universe is finite.
Aside - another place we disagree - how do you define "less powerful"? If you mean to say that a Turing machine can simulate a brain, I am not necessarily convinced...
The things that DFAs can compute are a subset of the things that Turing machines can compute (and the inverse doesn't hold). So Turing machines are more powerful than DFAs.
A Turing machine can surely simulate a brain. Every computer that could ever possibly exist is a DFA (or possibly an NFA? I don't really know much quantum computing), including the brain. The brain is a DFA. So of course an artificial DFA can simulate it.
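To make the DFA-vs-Turing-machine gap concrete, here's a minimal sketch (state names and strings are mine): a DFA can recognize a regular language like "some a's followed by some b's", but no finite-state machine can recognize a^n b^n with the counts required to match, because that takes unbounded memory - which is exactly what the idealized Turing machine has and physical computers only approximate:

    # A minimal DFA for the regular language a*b* (any a's, then any b's).
    TRANSITIONS = {
        ("start", "a"): "start",
        ("start", "b"): "bs",
        ("bs", "b"): "bs",
    }
    ACCEPTING = {"start", "bs"}

    def dfa_accepts(s):
        state = "start"
        for ch in s:
            state = TRANSITIONS.get((state, ch))
            if state is None:          # no transition defined: reject
                return False
        return state in ACCEPTING

    # Matching the counts (a^n b^n) requires remembering n, which no fixed,
    # finite set of states can do for arbitrarily long inputs.
    def equal_counts(s):
        n = len(s) // 2
        return s == "a" * n + "b" * n

    print(dfa_accepts("aaabbb"), dfa_accepts("aabbb"))     # True True  (both are in a*b*)
    print(equal_counts("aaabbb"), equal_counts("aabbb"))   # True False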
If some sort of accident rendered you unable to use your eyes/ears/etc. for the sensory purposes you describe, would you then cease to be a human being? The whole body certainly plays a role in thinking, but the threshold between consciousness and its absence is defined by what goes on inside the brain.
"animals and even insects don't think based on algorithms" i think one could bring in some studies of insect behavior that suggest the opposite of this.
For those who like Kurzweil's ideas, might I recommend the books of author Greg Egan.
His books, in particular "Diaspora", tell the story of AI-humans who live to invent, create, and discover. Their journey takes them through 10^10 universes just to solve one problem: the destruction of the Earth.
Along the same lines, I'd recommend Accelerando by Charles Stross. (http://www.accelerando.org/) It's another well-written piece of post-singularity fiction that's fascinating in the ideas and implications that it presents.
Yes, this is the second topic today I've recommended it in. No, I am not a shill, it just came up twice.