>It is disappointing to see you descending into something of a rant here.
I'm going to be frank with you. I'm not ranting, and uncharitable comments like this aren't appreciated. I'm going to respond to your reply later in another post, but if I see more of this I'll stop communicating with you. Please don't say things like that.
I could have, equally reasonably, made exactly the same response to your post. I will do my best to respond civilly (I admit that I have some failings in this regard), but I also suggest that whenever you feel the urge to capitalize the word "you", you give it a second thought.
Apologies; by YOU I mean YOU as a human, not YOU as an individual. We all generally feel that the quantitative tests aren't enough. The capitalization was for emphasis, to prompt you to look at yourself and recognize that you're human and likely feel the same thing. Most people would say that things like IQ tests aren't enough, and we can't pinpoint definitively why; as humans, WE (keyword change) just feel that way.
That feeling is what sets the bar. There's no rhyme or reason behind it. But humans are the ones who make the judgement call, so that's what it has to be.
For your test I don't see it offering anything new. I see it as the same as my test, just with extra complexities. From a statistical point of view I feel it will yield roughly the same results as my test, as long as the judge outputs a binary true or false on whether the entities are humans or AIs.
Yes, I did say we can't define understanding. But despite the fact that we can't define it, we still counterintuitively "know" when something has the capability of understanding. We say all humans have the capability of understanding.
This is the point. The word is undefined yet we can still apply the word and use the word and "know" whether something can understand things.
Thus we classify humans as capable of understanding things without any rhyme or reason. This is fine. But if you take this logic further, that means anything that is indistinguishable from a human must fit into this category.
That was my point. This is the logical limit of how far we can go with an undefined word. To be consistent in our application of the word "understanding", we must apply it to AI if AI is indistinguishable from humans. If we don't, our reasoning is inconsistent. All of this can be done without even having a definition of the word "understanding".
I think it may be helpful for me to say some more about how I came to my current positions.
Firstly, there have been a number of attempts to teach language to other animals, and also a persistent speculation that the complex vocalizations of bottlenose dolphins constitute a language. There is no consensus, however, on what to make of the results of these investigations, with different people offering widely disparate views as to the extent to which these animals have, or have acquired, language.
My take on these studies is that their language abilities are very limited at best, because they don't seem to grasp the power of language. They rarely initiate conversations, especially outside of a testing environment, and the conversations they do have are perfunctory. In the case of dolphins, if they had a well-developed language of their own, it seems unlikely that those being studied would fail to recognize that the humans they interact with have language themselves, and fail to cooperate with human attempts to establish communication, since cooperating would have considerable benefits, such as being able to negotiate with the humans who exercise considerable control over their lives.
From these considerations, it seems to me that unless and until we see animals initiating meaningful conversations, especially between themselves without human prompting, it is pretty clear that their language skills do not match those of adult humans. This is what led me to see the value of a form of Turing test in which the test subjects demonstrate that they can initiate and sustain conversations.
A second consideration is that while human brains and minds are largely black boxes, we know a great deal about LLMs: humans designed them, they work as designed, and while they are not entirely deterministic, their stochastic aspect does not make their operation puzzling. We also know what they gain from their training: it is statistical information about token combinations in human language as it is actually used in the wild. It is not obvious that, from this, any entity could deduce that these token sequences often represent an external world that operates according to causes which are independent of what is said about the situation. An LLM is like a brain in a vat which only receives information in the form of a string of abstract tokens, without anything else to correlate it with, and it is incapable of interacting with the world to see how it responds.
From these considerations, therefore, it seems possible that, if LLMs understand anything, it is at most the structure of language as it is spoken or written, without being aware of an external world. I can't prove that this is so, but for the purpose of the arguments in this thread, and specifically the one in the first post that you replied to, all I need is that it is not ruled out.
Turning now to your latest post:
> For your test I don't see it offering anything new.
It is far from obvious that it will necessarily produce the same results as your test, and you have presented no argument that it will. If we are in the situation where one of these tests can discriminate between the candidate AIs and humans, then the only rational conclusion is that these candidate AIs can be distinguished from humans, even if the other test fails to do so.
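To make "can discriminate" concrete, one way to operationalize it (a sketch of my own, not something either of us has specified) is an exact binomial test on the judges' binary verdicts: if judges labelling human/AI candidates do significantly better than coin-flipping, that test discriminates, whatever any other test says. The numbers below are purely illustrative:

```python
# Hypothetical sketch: does a Turing-style test discriminate? Frame the
# judges' binary verdicts as Bernoulli trials and ask how likely their
# success count would be if they were merely guessing (p = 0.5).
from math import comb

def binomial_p_value(correct: int, trials: int, p: float = 0.5) -> float:
    """One-sided exact binomial test: probability of at least `correct`
    successes in `trials` trials if each verdict were a fair coin flip."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Illustrative data: judges correctly labelled 42 of 50 candidates.
p = binomial_p_value(42, 50)
print(f"p = {p:.2e}")  # well below 0.05, so this test does discriminate
```

If one test yields a p-value like this while another hovers near 0.5, the two tests are not interchangeable, which is the point at issue.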
> From a statistical point of view I feel it will yield roughly the same results as my test.
Throughout these conversations with me and other people, you have insisted that only quantitative tests are rigorous enough, but now you are arguing from nothing more than your opinion as to what the outcome would be. An opinion about what the quantitative results might be is not itself a quantitative result, and while you might be comfortable with the inconsistency of your position here, you can't expect the rest of us to agree.
> But despite the fact that we can't define [understanding] we still counterintuitively "know" when something has the capability of understanding. We say all humans have the capability of understanding... the word is undefined yet we can still apply the word and use the word and "know" whether something can understand things.
Good! This is a complete reversal from when you were arguing that understanding was not a valid concern unless it were rigorously defined.
> Thus we classify humans as capable of understanding things without any rhyme or reason. [my emphasis.]
If it were truly without rhyme or reason, 'understanding' would be an incoherent concept - a misconception or illusion. Fortunately, there is a rigorous way of handling this sort of thing: run a series of Turing-like tests, or simply one-on-one conversations, but only with human subjects, with multiple interrogators examining the same set of people and judging the extent to which they understand various test concepts. The degree of correlation between the judges' verdicts will show how coherent the concept is, and the transcripts of the tests can then be examined to begin the iterative process of defining what it is about the candidates that allows the judges to produce correlated judgements.
Once we have that in place, we can start adding AIs to the mix, confident that they are being judged by the same criteria as humans.
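As a sketch of how that correlation step might be quantified (purely illustrative; I haven't committed to a particular statistic), Cohen's kappa measures how far two judges' binary "understands / doesn't understand" verdicts agree beyond what chance alone would produce:

```python
# Hypothetical sketch: inter-judge agreement via Cohen's kappa.
# 1 = "this subject understands the test concept", 0 = "doesn't".
# The verdict lists below are invented for illustration.

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Cohen's kappa for two judges giving binary verdicts on the
    same subjects: (observed - chance agreement) / (1 - chance)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - chance) / (1 - chance)

judge_1 = [1, 1, 0, 1, 0, 1, 1, 0]
judge_2 = [1, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(judge_1, judge_2), 3))  # → 0.467
```

A kappa near 1 across many judge pairs would suggest 'understanding' is a coherent concept being tracked consistently; a kappa near 0 would suggest the judges are responding to noise.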
> But if you take this logic further, that means anything that is indistinguishable from a human must fit into this category.
Certainly not if the test is incapable of finding the distinction. The process I outlined above would be able to make the distinction, unless 'understanding' is not a coherent concept (but we seem to agree that it probably is). Furthermore, as I pointed out above, one test capable of consistently making the distinction is all it takes.