I honestly don't understand how that's a rebuttal of the Chinese room. Obviously a person would not "understand Chinese" merely by possessing such a fantastic device.
Consider a more realistic version of the Chinese room. Bob gets a slip of paper with written Chinese on it and slips it into a black-box "Chinese room", which produces another slip with a perfectly written Chinese answer. Bob hands it back - it is pretty obvious that Bob doesn't need to have an inkling of Chinese.
Except, in this version, there's an actual Chinese speaker, Xin, sitting inside the room. It's clear that it's Xin, not Bob, who understands Chinese.
Now let's move back to the original Chinese room argument. It's clear that it's the room that understands Chinese - or, rather, the whole system composed of the rules (the room) and its executor (the person), but not the executor by itself. It only seems absurd because our real-life experiences make us presume that, when there's a room and a person, the person must be the more sentient part. IMHO the whole argument is a philosophical sleight of hand.
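To make the "rules plus executor" framing concrete, here is a toy sketch (entirely hypothetical - a real rule book would have to be astronomically larger and more sophisticated than a lookup table): the rules are a plain symbol-to-symbol table, and the executor mechanically matches symbols without attaching any meaning to them.

```python
# A toy "Chinese room": the rule book is just a symbol-to-symbol table.
# The strings are opaque tokens to the executor; it never interprets them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def executor(slip: str) -> str:
    """Blindly match the incoming symbols against the rule book.

    The executor only compares character sequences; it has no notion of
    what any symbol means. Whatever "understanding" there is belongs to
    the rule book plus executor as a system, not to this function alone.
    """
    # Fallback reply: "Sorry, I don't understand."
    return RULE_BOOK.get(slip, "对不起，我不明白。")

print(executor("你好吗？"))  # prints the canned reply from the table
```

The point of the sketch is only that the executor's code is trivially "dumb" in isolation - exactly the intuition the Chinese room trades on - while any competence the whole thing displays is a property of the rule book and executor taken together.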
However, when confronted with this "the system does understand" argument, I believe Searle and his defenders fall back on a lack-of-qualia position: there is no entity that experiences anything, and therefore the ability to demonstrate "understanding" as the room does isn't a sufficient (or perhaps even necessary) condition for consciousness.
This point is trickier to rebut, because I think it's fair to acknowledge that our ability to imagine where the qualia would be in the case of the Chinese room is more than a little limited. I think the only fair answers are either:
1. our imagination is too limited, but that doesn't allow us to conclude that there cannot be qualia; or
2. there are indeed no qualia associated with the Chinese Room, because consciousness is not required for full language processing.
The problem with the lack-of-qualia argument is that qualia have no operationalizable, testable definition. They're something we have a fuzzy idea of from individual experience and attribute to other entities based on similarity of behavior, but we can't say with any authority that they do or do not exist in anything (other than that an entity which itself experiences qualia can attribute them to itself).
I mean, this is the limit of the current state of the art in AI, right?
We're seeing with things like GPT-3 that a genuinely powerful language model needs a load of real-world knowledge built in. But that knowledge is all based on experiences had by others. From the way it talks (and generates poetry) you can tell that it is unable to synthesize new experiences. It can't come up with a description (or metaphor) of human experience that it hasn't been taught.
To be a convincing AI, you don't just need to replicate descriptions of experience from memory; you need to actually have new experiences as well.
Meaning that the Chinese room, in order to demonstrate "understanding", will also require inputs other than just text. Otherwise it can't really understand the world, because it only knows about the world from hearsay.
I’ve not heard this argument before. Isn’t saying “there is no entity that would experience anything” begging the question? If the system is capable of understanding, why wouldn’t it also be capable of experience? That is what we see in animals.
No, this is the very heart of Searle's thought experiment. His point was that you could imagine a rule-based system for language translation and conversation that clearly was not conscious. The argument wasn't about whether or not it was possible to build such a system, but about whether or not such a system must necessarily be conscious.
Searle's claim was that it was clear from his thought experiment that such a system could exist without any understanding, and definitely without consciousness. Others have disagreed with his reasoning and his conclusion.
I must have misunderstood your earlier comment then. I thought you were saying that Searle responded to ”the system does understand” argument by saying “OK, but it’s still not conscious.” Now I think you are saying his response was “no, it doesn’t understand, it merely passes the Turing test; it can’t understand because it’s clearly not conscious.” Which again, seems to beg the question.
No, Searle's main response to the system response was to just dismiss it out of hand (at least in the responses from him that I read). Dennett was probably (for me) the most articulate of the "system responders" and I think Searle considered his POV to be a joke, more or less.
The Chinese Room is an argument that a Turing-like test is inadequate to prove understanding of a language, because a combination of two things, neither of which has understanding, could conceivably pass it in every test scenario.
The fact that one of those two things is impossible (or, alternatively, is equivalent to a device encoding a full understanding of the language) negates the argument.
I see your point. It's been a very long time since I've read the original text, but I don't remember this being my impression of the argument being made, and that would indeed be a very simplistic one. But I might be mixing it up with some of the debates on semantics within linguistics, which is of course a slightly different topic.
If such a device could exist, it would be the part which understood. If the questions were about facts and the person looked up the facts in a book, it wouldn't be the room as a whole that knew the facts. The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...
> The room as a whole delivers facts, but there's a fairly obvious separation of responsibilities in the implementation...
Sure, but I think that's always the case. When you speak, your entire body (at least head/neck/chest, but also the cardiovascular system) is involved in "delivering" information, but your tongue (and for that matter most of the organs involved) doesn't "know" the information. We still say that "you" as a person know something.
When a computer displays the result of some (CPU) calculation on the monitor, the GPU producing the electrical signals for the monitor's input has no meaningful knowledge of the calculation, etc., but we still say "the computer" made the calculation.