People said the same thing about the horseless carriage in the early days of the automobile; they could cite evidence of the superior dependability of a horse and buggy. Things eventually changed. Let's see how things shake out from here.
It means you should be careful not to judge too quickly, because there are many examples in the past of people clinging to the status quo and refusing to believe that new technology could actually supersede human capabilities.
It's fair to judge their current abilities. Speculating about potential futures to excuse their current inadequacy doesn't make a lot of sense, imo.
Except we've already seen people do exactly that, and be wrong about the future over and over. I'll agree with you that it's fine (and helpful) to point out all the failings of current LLMs; the mistake is extrapolating that too far and making a prediction about the future. Granted, it's just as common a human mistake to predict the future too optimistically, by assuming there are no impediments to progress.
All I'm really arguing for is some humility. It's okay to say we don't know how it will go, or what capabilities will emerge. Personally, I'm well served by the current capabilities, and am able to work around their shortcomings. That leaves me optimistic about the future, and I just want to be a small counterbalance to all the people making overly confident predictions about the impossibility of future improvements.
There's no chance LLMs have a sufficient training set of effective therapist-patient interactions, because those are private. Ergo, there is no need to wait; it's DOA. Anything else is feeding into LLM hype. It's that simple.
Heh, it's that simple for someone who thinks the training regime and AI technology will not change further. The early horseless carriages had all kinds of stupid problems, and it would be very easy to pronounce them DOA. "Nobody is going to want to ride something so prone to breaking down", "a horse only needs food from the farm, not stuff drilled from the ground", etc. People don't have much imagination in such situations, especially when they feel emotionally (or existentially) attached to the status quo.
DOA doesn’t mean forever and always. But certain claims, like living beyond 200 or humans on Mars, can just be dismissed outright for several decades. What you’re talking about is unsupervised LLM therapy. Even when my dentist used AI to read my X-rays, he was overseeing everything. I’m fine sticking my neck out to say LLM therapy is DOA for the foreseeable future.
Your pronouncement goes against evidence cited in the article:
"people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma"
And that is about the present, not even what may come in the future. Not all therapy is life and death, and there are already signs that it's beneficial, at least in some limited domains.