As a linguist (well, person with a degree in linguistics), I think Searle makes a very strong point, and I don't think it's at all fair to call it rebutted or disproven.

The core part of the room is a book where the operator looks up symbols coming in and is able to mindlessly copy the answer to the output. Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist. Human language in use does not work like that. But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.



> allows a person to interact with someone in a language they don't speak

Here is the flaw in your reasoning. The human who is following the directions in the book certainly isn't the consciousness we need to be thinking about any more than the mitochondria in our brains are conscious. In Searle's analogy, the human in that room is simply the power source. The consciousness is in the state of the room.

Searle's analogy also seems to hide the importance of state. An agent who just looks up each incoming symbol in a book, where the page tells them what to do with the next symbol, is essentially a combinational circuit, a static function. If those are the rules, then I'd agree there is no consciousness there. But I'd also say it is impossible for a computer to translate C++ source code to machine language ... if it cannot retain state.

If Searle's room has enough rule books and means for storing enough state, then I'd say consciousness would be possible.
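
To make the combinational-vs-stateful distinction concrete, here is a toy Python sketch (my own illustration, not anything from Searle's paper): a pure lookup table can only ever map each input symbol to a fixed reply, while a room that retains state can let earlier inputs change how later ones are handled.

    # Toy illustration only: a made-up "rule book", not a real conversational system.
    RULE_BOOK = {"你好": "你好!", "再见": "再见!"}

    def stateless_room(symbol):
        # Combinational: the reply can never depend on anything said earlier.
        return RULE_BOOK.get(symbol, "请再说一遍")  # fixed fallback reply

    class StatefulRoom:
        # Same lookup, plus memory of the conversation so far.
        def __init__(self):
            self.history = []  # retained state between inputs

        def respond(self, symbol):
            if symbol == "我刚才说了什么?":  # "what did I just say?"
                reply = self.history[-1] if self.history else "你还没说过话"
            else:
                reply = RULE_BOOK.get(symbol, "请再说一遍")
            self.history.append(symbol)
            return reply

The stateful version can answer a question the stateless one structurally cannot, which is the same point as the C++-to-machine-code example: without retained state, whole classes of behaviour are simply out of reach.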

The best reverse argument is that if human consciousness is not the result of a naturalistic, physical process and requires some extra-physical process to make it all work, then at some point the laws of physics must be broken, causing the purely physical brain to do something other than what the laws of physics dictate it should do. Where are the violations of physics?

Penrose waves his hands and says quantum physics, but that is not an explanation, just an obscuring smoke screen. How does the addition of apparently random events make something more likely to be conscious? Are there sufficient degrees of freedom for this extra-physical conscious agent to manipulate quantum fluctuations to force the physical outcome it desires in a consistent manner? If it is theoretically possible, is it practically possible to compute which quantum butterflies should flap their wings?


The lasting value of the Chinese room thought experiment is that it makes the argument explicit. It helps to clarify the problem. Searle draws two conclusions: 1) computers have no understanding of meaning or semantics, and 2) human minds are not computational information processing systems.

Most modern criticism of these two conclusions points out that Searle's argument relies on strong intuitions about what understanding and meaning mean, and that he can't clarify them satisfactorily.

The Chinese room being conscious is counter-intuitive. Today most philosophers of consciousness are more aware of how easy it is to make slips in reasoning when dealing with the counter-intuitive nature of consciousness arguments.


Ah, well, yes, the C-word is a pre-scientific term and those are always hard to deal with. But I think "understanding natural language" is one of the easier ones, at least inasmuch as we can resort to referring to the intentions of the speaker, with understanding being the congruence between those intentions and the hearer's impression of them. And it just seems like a category error to say that any natural language processing system has anything like that, since the thing doing the understanding is so radically different from the one doing the speaking.


> any natural language processing system has anything like that

A Chinese room that can respond verbally can incorporate more than a natural language model. It can have spatiotemporal understanding as well, or any other model that can be represented on a Turing machine.


> Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist.

And that, right there, is a sufficient rebuttal to Searle's argument, as many critics have pointed out. The premise of Searle's argument is that such a setup could pass the Turing test, which is absurd. So his argument fails because it is based on an absurd premise.

> the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

That's a valid criticism of early AI and machine translation efforts, but Searle did not put forward his argument just as a criticism of those efforts. He put forward his argument as a claim that no efforts at AI that used digital computers, ever, could produce anything that was conscious the way humans are conscious. But, as above, his argument was based on an absurd premise, and, what's more, an attempt at building a conscious entity with digital computers does not have to satisfy that premise.


The room is not just an input-output system. It also has to have state.

Without state the whole argument falls apart.

> But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

As someone who studied Machine Learning 20 years ago, could you be a bit more respectful please?

First, pretty much nobody (working on serious ML) in that era thought they were going to build anything actually intelligent, conscious, or thinking.

Second, in this supposed era of boneheaded simplicity, a LOT of foundational stuff we use in today's AI was discovered and written.

I'm not sure how familiar you are with AI history, but to name an example: we were already working on neural nets and discovered error backpropagation in the 70s/80s. Then came a long period during which neural nets somehow didn't really perform very well, and ML was looking at completely different types of classification algorithms (what's happening with SVMs today?).

If we had stopped there, you might have lumped the early perceptron in with the "boneheaded simplicity".

But instead we developed batch normalisation and a bunch of other techniques, and now neural nets are state of the art.
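
For anyone who hasn't run into the term: batch normalisation just normalises each feature across a mini-batch and then rescales it with two learned parameters. A rough NumPy sketch of the forward pass (training mode only, simplified, my own illustration rather than any particular library's API):

    import numpy as np

    def batchnorm_forward(x, gamma, beta, eps=1e-5):
        # x: (batch_size, num_features) activations from the previous layer
        mean = x.mean(axis=0)                    # per-feature mean over the batch
        var = x.var(axis=0)                      # per-feature variance over the batch
        x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
        return gamma * x_hat + beta              # learned scale and shift

Keeping activations in a well-behaved range like this is one of the things that made much deeper nets practical to train.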


> As a linguist (well, person with a degree in linguistics), I think Searle makes a very strong point, and I don't think it's at all fair to call it rebutted or disproven.

I think you are about to disprove this point.

> The core part of the room is a book where the operator looks up symbols coming in and is able to mindlessly copy the answer to the output. Such a book (or whatever book-like device, i.e. a list of inputs with corresponding outputs) that allows a person to interact with someone in a language they don't speak is an absurdity, it cannot exist.

Yup, that's both true and a pretty convincing rebuttal of the Chinese Room.

> But the notion is not far from the bone-headed simplicity of early AI and machine translation efforts.

So what? No one, anywhere, claimed that those early efforts were self-aware or had understanding, nor are they what the Chinese Room is even notionally directed at: it was directed at the limit of possibility, not the then-current implementations.


I honestly don't understand how that's a rebuttal of the Chinese room. Obviously a person with such a fantastic device would not "understand Chinese" if they had it.


Consider a more realistic version of the Chinese room. Bob gets a slip of paper with written Chinese on it and slips it into a black-box "Chinese room", which produces another paper with a perfectly written Chinese answer. Bob hands it back - it is pretty obvious that Bob doesn't need to have an inkling of Chinese.

Except, in this version, there's an actual Chinese speaker, Xin, sitting inside the room. It's clear that it's Xin, not Bob, who understands Chinese.

Now let's move back to the original Chinese room argument. It's clear that it's the room that understands Chinese - or, rather, the whole system composed of the rules (the room) and its executor (the person), but not the executor by itself. It only seems absurd because our real-life experiences make us presume that, when there's a room and a person, the person must be the more sentient part. IMHO the whole argument is a philosophical sleight of hand.


This is the correct answer.

However, when confronted with this "the system does understand" argument, I believe that Searle and his defenders fall back on the lack-of-qualia position. That is, there is no entity that would experience anything, and that therefore the ability to demonstrate "understanding" as the room does isn't a sufficient (or perhaps even necessary) condition of consciousness.

This point is trickier to rebut, because I think it's fair to acknowledge that our ability to imagine where there would be qualia in the case of the Chinese room is more than a little limited. I think the only fair answers are either:

1. our imagination is too limited, but that doesn't allow us to conclude that there cannot be qualia

2. there are indeed no qualia associated with the Chinese Room, because consciousness is not required for full language processing.

(or both)


The problem with the lack-of-qualia argument is that qualia has no operationalizable, testable definition; it's something we have a fuzzy idea of based on individual experience and attribute to other entities based on similarity of behavior, but we can't really say with any authority that it does or does not exist in anything (other than that an entity that experiences qualia can attribute it to itself).


I mean, this is the limit of the current state of the art in AI, right?

We're seeing with things like GPT-3 that an actually powerful language model needs to have a load of real-world knowledge built in. But that knowledge is all based on the experiences of others. From the way it talks (and generates poetry) you can tell that it is unable to synthesize new experiences. It can't come up with a description (or metaphor) of human experience that it hasn't been taught.

To be a convincing AI, you don't just need to replicate descriptions of experience from memory, but actually have new experiences as well.

Meaning that the Chinese room, in order to be able to demonstrate "understanding", will also require other inputs than just text. Otherwise it can't really understand the world, because it only knows about the world from hearsay.


I’ve not heard this argument before. Isn’t saying “there is no entity that would experience anything” begging the question? If the system is capable of understanding, why wouldn’t it also be capable of experience? That is what we see in animals.


No, this is the very heart of Searle's thought experiment. His point was that you could imagine a rule-based system for language translation and conversation that clearly was not conscious. The argument wasn't about whether or not it was possible to build such a system, but about whether or not such a system must necessarily be conscious.

Searle's claim was that it was clear from his thought experiment that such a system could exist without any understanding, and definitely without consciousness. Others have disagreed with his reasoning and his conclusion.


I must have misunderstood your earlier comment then. I thought you were saying that Searle responded to ”the system does understand” argument by saying “OK, but it’s still not conscious.” Now I think you are saying his response was “no, it doesn’t understand, it merely passes the Turing test; it can’t understand because it’s clearly not conscious.” Which again, seems to beg the question.


No, Searle's main response to the system response was to just dismiss it out of hand (at least in the responses from him that I read). Dennett was probably (for me) the most articulate of the "system responders" and I think Searle considered his POV to be a joke, more or less.


The Chinese Room is an argument that a Turing-like test is inadequate to prove understanding of a language, because a combination of two things, neither of which has understanding, could conceivably pass it for every test scenario.

The fact that one of those two things is impossible (or, alternatively, is equivalent to a device encoding a full understanding of the language) negates the argument.


I see your point. It's been a very long time since I've read the original text, but I don't remember this being my impression of the argument being made, and that would indeed be a very simplistic one. But I might be mixing it up with some of the debates on semantics within linguistics, which is of course a slightly different topic.


But the room as a whole WOULD understand Chinese. The person in the story is the mouth, not the brain; the look-up book/device is the brain.


If such a device could exist, it would be the part which understood. If the questions were about facts and the person looked up the facts in a book, it wouldn't be the room as a whole that knew the facts. The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...


> The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...

Sure, but I think that's always the case. When you speak, your entire body (at least head/neck/chest, but also the cardiovascular system) is involved in "delivering" information, but your tongue (and for that matter most of the organs involved) doesn't "know" the information. We still say that "you" as a person know something.

When a computer displays the result of some (CPU) calculation on the monitor, the GPU producing electrical signals for the input of the monitor has no meaningful knowledge about the calculation, etc., but we still say "the computer" made the calculation.



