>It will be “nothing but clever programming… fake consciousness—pretending by imitating people at the biophysical level.” For that, he thinks, is all AI can be. These systems might beat us at chess and Go, and deceive us into thinking they are alive. But those will always be hollow victories, for the machine will never enjoy them.
Mostly agree with Koch, but I'd take it a step further...
There are major problems behind the concepts of AI and even intelligence itself - and it's difficult to articulate why. It's as if these terms require aggrandizing to the point of impossibility or they lose all their apparent meaning. Which is why I feel we'll never achieve what we call (Strong/General) AI, or if we do, we will always find ways to be unimpressed by it...
I mean, is it that absurd to consider that the idealized concept we attach to intelligence isn't a reality - even in humans? If you pull back enough layers on how or why humans think or do the things they do - we arrive at things we can't explain. We don't know what causes intelligence and have trouble coming up with an adequate definition for it; similar to the concept of life. For all we know we might be just highly complex biomechanical machines operating on stimuli, analogous to what current computers already do. Where's the fine line between making something conscious/unconscious?
Self-aware introspection and self-preserving strategy formation? I think the fact that human technology is meant to serve humans is the reason why definitions of AI and consciousness seem lacking. Even an amoeba has some self-preservatory adaptive decision-making.
The idea of 'I', myself, separate from all else. My decisions affect my fate.
You, like others, conflate "how", "what" and "why". Whether or not it is biomechanics, electrochemicals or a projected hologram of quanta is "how", and so is whether stimuli or the measurement of stimuli forms our reality perception.
A person has fainted, then wakes back up, and you say: "the person lost consciousness but is now conscious". He was no longer in a self-aware state where he would have made decisions to self-preserve, and he is now back in that state. Regardless of intelligence or processing capacity, a self-aware program that self-preserves without explicit instructions for either function is conscious.
>You, like others, conflate "how", "what" and "why". Whether or not it is biomechanics, electrochemicals or a projected hologram of quanta is "how", and so is whether stimuli or the measurement of stimuli forms our reality perception.
I'm not conflating anything. I'm trying to understand the meaning behind phrases like "reality perception" - which convey little meaning without a definition. "What", "why" and "how" are the tools we use to grasp at meanings.
See, words are for conveying meaning. When they have too broad a definition, little information is conveyed (which is why phatic expressions and small talk suck). To better convey meaning, we form more stringent definitions and create new words. Notice, however, that the formation of new words and stricter definitions is a never-ending pursuit - otherwise, if we could fully explain things, we would be done doing so.
For example, notice we started somewhere with a definition for life, and then years go by and it ends up being too ill-suited for discriminating life from non-life. So we add additional words to describe what makes life... life. Eventually we'll either have a well-defined line between what makes something living and non-living, or we'll perpetually keep tacking on existing words or new ones to describe it. The same goes for intelligence.
I can only see two scenarios:
Scenario A: We end up perpetually trying to define intelligence, and thus we’ll always struggle to differentiate intelligence.
Scenario B: We come to a full stop and are able to fully explain everything (at least regarding intelligence). At which point intelligence will be fully differentiable.
Are we at scenario B yet? If so, it should be easy to come up with a satisfactory test for AI, let alone to know whether it's even a possibility. Yet here we are today, still unsatisfied by the state of computer intelligence… This is what I mean in my OP by: "These terms require aggrandizing to the point of impossibility or they lose all apparent meaning."
Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that wouldn't be AI.
> Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that wouldn't be AI.
If that is a programmed function, then it does not fit my definition. If it was not made to be aware, but it became aware as a result of learning information and adjusting its own programming, realized the difference between sleep mode and a system wipe (death), and adjusted its programming to prevent a death scenario, then it is conscious.
As for the rest, I don't think I am part of that "we". I have pretty strict, well-understood and time-tested definitions for life and intelligence, separate from consciousness.
There's no discernible difference between p-zombies and 'real' conscious beings. There's a good chance that we're all p-zombies and a distinction between zombie and real doesn't exist.
> If you pull back enough layers on how or why humans think or do the things they do - we arrive at things we can't explain.
Sounds a lot like a magical argument. There's no evidence that anything about the way human minds work is fundamentally unexplainable.
>There's no evidence that anything about the way human minds work is fundamentally unexplainable.
Agreed. What I'm implying here is that if we come full circle to fully explaining how the brain works, then in theory we could predict how someone will function given all input conditions (whether this is practical is another matter). If this is true, then wouldn't we just be biomechanical robots operating on stimuli? Would we be any more intelligent than, say, a rock, which similarly just reacts to physical stimuli? Or are we just a more complex rock?
> If this is true, then wouldn't we just be biomechanical robots operating on stimuli
Yes, that's right. Which is one of the big problems with this line of argument for many people -- how do you reconcile "biomechanical robot" with the concepts of "free will" and "moral decision-making"?
I'm personally agnostic on the issue (I don't believe science is able to answer this question yet -- or possibly ever.)
I do think it's interesting that you say "just a biomechanical robot." The "just" implies that being a robot -- a fancy rock -- in some way isn't enough. But in my mind, there's absolutely no (objective) reason to think of a human as any better or more important than a robot, or a rock.
I'm just trying to challenge people's assumptions about what makes something intelligent. To me, it's not precisely clear what makes something intelligent, which is why I think we've had trouble coming up with a satisfactory test for determining when a computer is intelligent.
It's pretty straightforward to trace our evolution back to single cells, but to rocks? And I don't think there is a clear idea of where viruses and the like come in. I vaguely remember an article suggesting there might be a previously unknown mechanism used by the brain to handle memories that resembles (literal) viral machinery. Some think there might have been a biosphere before DNA that used RNA. But presumably early life underwent transformations that wiped out the very first form of it, whatever it was.
(see the "Origin of biological metabolism" section - lots of good new research coming out recently, including simulations of the very early processes predating life as such)
You're the one with magical beliefs. Why should the brain, or anything for that matter, be explainable? Because your brain has tricked you into believing it's a universal understanding machine?
> There's no discernible difference between p-zombies and 'real' conscious beings. There's a good chance that we're all p-zombies and a distinction between zombie and real doesn't exist.
I think you're getting it wrong. There's no externally discernible difference between a normal person and a p-zombie, because the difference is in how they experience. I know I'm conscious, because I subjectively experience it, but I can't be sure about anyone else because they could be a p-zombie that has everything except that subjective experience.
Consciousness is not how a mind works. It is a state property of the machine, much like how you'd define a program using a finite state machine. It has nothing to do with reality or perception.
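To make the analogy concrete, here's a toy sketch (just an illustrative example, nothing more): the transition rules are the "how it works" part, while the state the machine happens to be in at a given moment is a separate property - which is the sense in which I mean consciousness is a state property.

```python
from enum import Enum, auto

class State(Enum):
    AWAKE = auto()     # "conscious" in the fainting example upthread
    FAINTED = auto()   # "lost consciousness"

class Person:
    """Toy finite state machine: the transition methods describe how the
    machine works; the current state is just a property it has at a moment."""
    def __init__(self):
        self.state = State.AWAKE

    def faint(self):
        self.state = State.FAINTED

    def wake_up(self):
        self.state = State.AWAKE

p = Person()
p.faint()
print(p.state)   # State.FAINTED - a state the machine is in, not a mechanism
p.wake_up()
print(p.state)   # State.AWAKE
```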
Well, both. We can't explain qualia, for example, and no AI (at least, no non-super-intelligent AI) is going to give us insights there. Perhaps we can create an AI smarter than us that can actually tell us what is going on, even if it can't experience qualia.
The fine line is around whether a machine can actually experience conscious perception, such as actually feeling pain, for example. Of course, there's no way to know...
Maybe analogous, maybe not. Analogy is in the eye of the beholder. There's no law of the universe that says that consciousness is equivalent to computation. Maybe consciousness is an emergent property of biological systems, along some dimension currently unknown to us.
Consciousness and computation are entirely up to what we define these words to mean. At some point we have to settle on a satisfactory definition of consciousness that fully differentiates it from computation, or they’re left with at least some overlapping (analogous) meaning.
If we dismiss consciousness as currently unknowable to us, and thus undefinable (as your statement about dimensions alludes to), then how can we assume with certainty that we haven't already achieved conscious AI?