The proliferation of inhuman impostors that emulate human communication can endanger relationships in human society.
If such an impostor has no consciousness or sentience, then it can be abused without concern. If people are dealing with such impostors more and more, they learn to be less thoughtful in outgoing communication.
Once this is normalized, what is there to distinguish fellow humans from chatbots? Who’s to say other humans are conscious? If consciousness doesn’t exist, why can’t other humans be abused? Why isn’t choosing to suffer over abusing a fellow human plain masochism? If your salary depends on monetizing your user base, why should you try to not do harm to them? How is death different from shutting down an LLM?
> if such an impostor has no consciousness or sentience, then it can be abused without concern
Consider that to enable such abuse, all it takes is to "other" another sentient person. This is not a new problem that AI introduces (imo); there's prior knowledge from how we treat animals, "lower class" (or whatever) humans, that sort of thing. Society hasn't collapsed, but it hasn't always been fun either.
Sure, some people (e.g., psychopaths) already choose to abuse things they see (or should see) as conscious, a.k.a. other humans. They are not a concern.
The concern is the people liable to transition to psychopathic behavior towards other humans if they start considering it OK to give something that talks like a human the treatment you'd give an appliance (which you can mindlessly hit out of frustration when it doesn't do what you want).
Which seems probable because A) the distinguishing quality of a chatbot is “talks like a human” and B) the biggest and only example of “talks like a human” so far has been other humans. We’re about to see what this means.
I'm pretty sure we will continue to develop a different voice for appliances, like we do when talking to kids or pets, only this time it will be overly articulated orders delivered in a raised voice. This will indeed bleed over into talking to subordinates.
I find speaking to an LLM as if it were a human to be far more effective than barking orders at it, although this is less true of some models than of others, depending on whether they've been fine-tuned to respond to that sort of thing. Even so, in general I don't find barking orders to be very effective.
Well that is already how the military operates. Barking orders.
But in real life we don't really have 'subordinates'. Even an exec has to be careful with how they treat their assistant, lest they make a complaint to HR for harassment. And rightly so; businesses are not the military. We don't have this unconditional yes-sir obedience. In fact I'm surprised the military still gets people willingly putting up with that.
I don't think this will carry over to human interaction in the real world. We don't talk to each other like we talk to Siri either.
> Once this is normalized, what is there to distinguish fellow humans from chatbots?
One problem is that chatbots may have more authority than humans.
Quite often, the companies have removed from humans any authority to deal with a problem that isn't actually on their flow charts. The customer support rep often has no more access to things than you do via a web browser (and might have less!).
Until you start fining companies for the human time they waste (generally the customer's, though reps' time should count too), nothing will change.
Perhaps we will learn how to be less sensitive from this. I believe that many of humanity's problems come from sensitivity rather than aggression. If we disengage from the conversation and view the interaction as a remote one, it is much easier to be dispassionate and calm. I guess it will make real in-person conversation much more fulfilling.
People are sensitive because we're people. It makes us human. Of course we're offended when that happens.
Chatbots don't even have intelligence, they're just calculating the probability of letters. And they're a lot less likely to bring you a coffee laced with laxatives the next time you're rude to them.
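For what it's worth, that "calculating the probability of letters" is, stripped down, something like the toy sketch below; real models work over tokens rather than letters, and the vocabulary and scores here are invented purely for illustration:

```python
import math
import random

# Toy illustration of next-token prediction: assign a score (logit) to each
# candidate token, turn the scores into probabilities with a softmax, then
# sample one token. The vocabulary and scores here are made up.
logits = {"please": 2.1, "now": 1.3, "immediately": 0.4, "thanks": -0.5}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

Whether repeating that loop at scale amounts to intelligence is, of course, exactly what's being argued upthread.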
Many of humanity's problems are solved because of sensitivity rather than aggression. Holding on to a little sensitivity about things in general goes a long way.
It's not entirely just about sensitivity or aggression, but also about when and where it's appropriate to manifest those behaviors. Too many folks these days are sensitive about things they maybe ought to just "let slide" (ignore / forget), overly aggressive when it's really not appropriate, and not nearly aggressive enough over things that really need more folks to stand up and say "No more!"
In my experience, this is mostly because those things we "should" be getting angry over are such massive problems that most people can't see how they might have an effect, so they focus on "smaller" problems that they might be able to actually solve.
Send a group of people off for an hour to explain their problem through busy algorithmic cabinets and you'll see it's not, for the most part, a remoteness problem.
Twitter is just a loudness threshold. Calm, reasonable tweets that aren't viral and/or don't attract toxicity don't surface as much, if they get posted at all.
I don't know. I think such a thing will make conversation more crude and transactional, and will result in an even greater detachment from each other than has already happened.
> If such an impostor has no consciousness or sentience, then it can be abused without concern. If people are dealing with such impostors more and more, they learn to be less thoughtful in outgoing communication.
Not so: as AI systems learn from human behaviour, being abusive towards them risks teaching them, through that interaction, to be abusive to humans.
This does not require the AI to have consciousness or sentience, regardless of which of the many, many available definitions of those words you prefer to use.
> as AI systems learn from human behaviour, being abusive towards them risks teaching them, through that interaction, to be abusive to humans.
You mean “LLM chatbots”. There are tons of “AI” (without hypespeak known as “ML”) systems in deployment already.
There will not be any commercial deployments of any chatbot that do not tune it to be polite to the max. It costs nothing and not doing it will lose customers, especially since there is no guarantee that it will be abusive to the same human who abused it.
> does not require the AI to have consciousness or sentience, regardless of which of the many, many available definitions of those words you prefer to use.
Actually, the definition does not matter. It’s an intuitive yes/no question. If people do not really think it is conscious, whatever that means to them, they will see no reason not to abuse it, the same way you might hit a malfunctioning appliance out of frustration. Like, we do not think that a cow or an octopus is conscious, so we might be OK with eating it.
Sure, some people (e.g., psychopaths) already choose to abuse things they see (or should see) as conscious, a.k.a. other humans. They are not a concern.
The concern is the remaining majority of people, liable to transition to psychopathic behavior towards other humans because they will learn that treating something that talks like a human as if it were an appliance is OK.
Consider that A) “talks like a human” is the distinguishing quality of an LLM chatbot, and B) the only example of “talks like a human” in human history so far has been exclusively other humans (or imaginary beings, notably gods).
> You mean “LLM chatbots”. There are tons of “AI” (without hypespeak known as “ML”) systems in deployment already.
A fair point, but as it happens, wrong.
I don't mean just LLMs or chatbots. I mean any AI system that learns from human behaviour.
This inherently excludes a handwriting recognition system trained only on handwriting, because that's not learning from human behaviour.
It does not inherently exclude a system which learned to play chess by watching YouTube videos, even when humans recognise them as absurdist comedy: https://www.youtube.com/watch?v=E2xNlzsnPCQ
For a while, whenever I saw Boston Dynamics release a video of them showing their robots were resilient to being shoved or hit with sticks, the top comments were variants of "Skynet gets started when the machines watch these videos", and I think that's at least plausible — monkey see monkey do, then monkey makes machines that do what they see.
> There will not be any commercial deployments of any chatbot that do not tune it to be polite to the max.
People will try to do that. It may or may not get there. As should be clear from how long it took to get LLMs to stop responding with their own prompts, and from the fact that we've still not fully solved the category of jailbreak of which "my grandmother the napalm manufacturer" was an example, we currently suck at some combination of (a) tuning them and (b) knowing all the ways we even need to tune them before things go wrong. (And (c) far too many people treat them as magic that doesn't need to be tuned.)
You may think this is a non-issue because these are deliberate attempts to break the system, but the Waluigi Effect (I hate this name) suggests that bad behaviour can very easily just pop up, seemingly at random.
> The concern is the remaining majority of people, liable to transition to psychopathic behavior towards other humans because they will learn that treating something that talks like a human as if it were an appliance is OK.
I think the problem there is the way we treat the in-group different from the out-group. We've been doing that for at least as long as we've had writing — "𐀞𐀞𐀫" is "barbarian" in Mycenaean Greek Linear B.
Which is not to disagree with you that it is a problem, I just disagree that it's a new one for the future.
Bots are cheap, so they don't care about wasting your time. Humans are expensive, if they think you are unlikely to buy anything they will stop talking to you very quickly and thus stop wasting your time.
Okay so there's a difference for people who actually take telemarketer calls seriously. Honestly, they deserve that punishment because they've encouraged telemarketers. Thank you, bots!
I mean, is this really all that different from a grunt worker at a huge corporation who's completely unable to do anything for you? How many of the low-tier employees of various companies I'm forced to deal with in one way or another are substantially different from an LLM? Obviously they are different in that they are human, but if they are:
* Actively prevented from having agency due to policies
* Prevented from resolving my problem due to low rank
* Cannot speak their minds because marketing controls communication
* Are unable to deviate from a predetermined scripted response set
Like... does their humanity even count anymore? It seems tons of large businesses these days employ call center employees whose sole job is to take abuse from angry customers, knowing full well they cannot offer any actual solutions, because the business already has their money and isn't interested in serving them further.
The same applies to many service workers, etc. Their responsibilities are restricted, so even in civilized places they are still abused. However, people might not do it in public, which at least shows it is considered unacceptable.
In the case of chatbots it’s another level, since a chatbot is more like an appliance than a human. Very few people would consider it unacceptable to hit an appliance out of frustration. Except now this appliance talks like a human, so otherwise normal, non-psycho people may be getting used to hitting (without any shame) something that talks like a human.
FTA: “Draft a 250-word paragraph in my typical writing style, detailing three examples to support the following point and cite your sources.” Not even the most detached corporate CEO would likely talk this way to their assistant, but it’s common with chatbots.
I don't believe this man's ever actually seen a CEO interact with an employee. Bosses talk like this to their subordinates all the time. Humans are plenty capable of being rude independently of learning it from talking to chatbots, especially where employment is concerned.
100%. I'll go on to suggest that we can do 2 or more things at once.
Learning to talk to computers the 'old fashioned' way improved my communication skills immensely. E.g., getting specific about what outcome or result I'm interested in achieving; inputs I believe or assume (incorrectly?) are required; necessary constraints on the solution, etc. But, I can articulate all of that in a way that is personable yet professional.
In the age of LLMs I think most of us will manage human-to-human interactions just fine.
It reminds me of an exam rubric. Where instructions are concerned, concise and clear is good. The real problem Schneier is raising is when the "instructions to subordinate" style starts leaking into in-person interactions that don't have that relationship embedded.
Face-to-face, I wouldn't expect that[0]. But in e-mail to direct subordinates, oh, they're so much worse.
The last decade of my Telecom days I reported variously to our department director, our department VP and ultimately our CIO, and I had no direct reports[1]. My favorite was when I reported to a particular director. She was famous for sending sentence fragments as requests; if you didn't look at the subject line, you'd picture a warden ordering prisoners around. It was most noticeable because she was such a pleasant, easygoing individual ... if you'd only ever met her through e-mail, you'd think she was angry all the time. Like, literally one-liners shot off from a Blackberry: "need patch reports pronto" (it was either all lower or ALL UPPER depending on what she was doing when she sent the message, which added a little extra flair sometimes).
I don't know that we talk to each other all that politely via e-mail and written correspondence, already, especially in the above context. Other people are just "some opinion I don't like on a page" or "someone is wrong on the internet!", why would we converse with a chatbot any differently? And I don't treat Alexa any differently, TBH. I yell at it all the time. I've gotten used to just yelling "Alexa! Stop!" instead of more complex commands because the pre-GPT engine is extremely limited and it in no way feels like a real thing, either. And as long as I know the thing on the other end is a computer, whether it speaks to me or types to me, I'm going to similarly not care for the feelings it (presently?) does not have. This, however, does not affect the way I interact with others[2].
[0] Although, most reasonable CEOs would be very careful wording e-mails to employees they don't regularly encounter (understanding the "holy crap the CEO e-mailed me" shock).
[1] I'm sure I explained this in a past comment, but it happened b/c all of the managers above me were gradually let go and I didn't have a natural fit.
Does the same work when you repeatedly provide proof it is wrong, it "apologises" and says it will amend its response, only to be wrong again, and again?
You need to find a way to help it arrive at a usable answer (if any).
It's a machine, or at least a physical entity; it won't magically find a path to the solution if it doesn't have one.
E.g., if you ask it to explain how to build, say, a time machine or a warp drive, it's definitely just going to keep apologizing.
No amount of yelling or abuse will fix this situation.
Sometimes, if you try to get it to solve a complex maths or software problem, you might be able to help by applying a few steps of some form of divide and conquer. You can either do that manually yourself, or suggest it try, e.g., a structured analysis itself. Sometimes it can handle the smaller subproblems OK (a rough sketch of the idea follows below).
Incidentally this also applies to humans! (Being nicer to humans improves their morale and problem solving ability. Sometimes permanently!)
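To make the LLM version concrete, here is a minimal sketch of what such a decomposition loop might look like; the `ask()` helper, the subproblem list, and the prompt wording are hypothetical stand-ins, not any particular chatbot's API:

```python
# Hypothetical divide-and-conquer loop: split a hard question into smaller
# subproblems, ask about each one separately, then ask for a synthesis.
# `ask()` is a placeholder for whatever chat interface you actually use.
def ask(prompt: str) -> str:
    raise NotImplementedError("plug in your chatbot client here")

def solve_by_decomposition(problem: str, subproblems: list[str]) -> str:
    partial_answers = []
    for sub in subproblems:
        # Smaller, self-contained questions are often easier for the model.
        partial_answers.append(ask(f"Solve only this step: {sub}"))
    combined = "\n".join(partial_answers)
    return ask(f"Using these partial results:\n{combined}\n\nNow answer: {problem}")
```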
Actually, it depends a bit. If you're feeling particularly frustrated you might be tempted to 'yell' at your chatbot, but that won't actually get you any results. So in the end you reset and try to solve your problem (relatively) politely.
And personally, someone who gives me clear and solid instructions beats someone who beats around the bush and then yells at me any day of the week.
So in the 21st century, people are often more patient with machines than with humans.
Hence, if everyone were to treat people with at least as much patience and kindness as they treat machines, I think it would be an improvement!
Please and thank you work much better with less effort. You can yell, humiliate or blackmail them, but that would make you the kind of person who goes to Walmart mainly to mistreat employees. Not a good habit for your mental health.
> Today’s chatbots perform best when instructed with a level of precision that would be appallingly rude in human conversation, stripped of any conversational pleasantries that the model could misinterpret: “Draft a 250-word paragraph in my typical writing style, detailing three examples to support the following point and cite your sources.” Not even the most detached corporate CEO would likely talk this way to their assistant, but it’s common with chatbots.
Actually, if working remotely, this is exactly the type of communication style that I want to see in my email or chat app. I want to know what it is you want from me, with minimal fluff.
Saying please to an RLHF'd LLM happens to get better results too, thanks to the training data. So it’s good to do even if you don’t fear them eventually turning Terminator ;)
> Reached for comment about this, a spokesperson for OpenAI pointed to a section of the privacy policy noting that the company does not currently sell or share personal information for “cross-contextual behavioral advertising,” and that the company does not “process sensitive Personal Information for the purposes of inferring characteristics about a consumer.” In an interview with Axios earlier today, OpenAI CEO Sam Altman said future generations of AI may involve “quite a lot of individual customization,” and “that’s going to make a lot of people uncomfortable.”
Yeah that's why I want my AI to run on my systems, not OpenAI's cloud.
I do find I act ruder when talking to a customer service agent after frustratingly trying to get through a dumb chat bot that keeps asking me question after question before I can finally talk to a human.
The most moral and also most advantageous way to deal with big-company customer service is to express that you are annoyed at the company/problem, that you understand it's not the CS rep's fault, and that you hope they can help.
I.e., get them on your team.
From there if you aren't getting resolution you can modulate the % annoyance vs % thankfulness and eventually demand escalation, which once you've been on the line long enough they are all fairly willing & obliged to do.
Once you get escalated the manager is generally ready to put the fire out and move on.
I dealt with something along these lines with a VZ fraud issue where someone ordered 5 iPhones & new phone lines in my name and then VZ refused to cancel the bill even after being given the police report number.
How do you respond to a customer service agent acting like a robot?
I had to call in on hold for two and a half hours last week to be told “We cannot provide estimates or averages for approval for your application.” When I asked if there was an upper limit on how much time I was told “We cannot provide estimates or averages for your application.” When I asked if, based on that fact, could this theoretically take forever or never be approved I was told “We cannot provide estimates or averages for your application.”
Never thought I’d say it but just give me a chat bot!
There are going to be things you might like to know, or do, or to have happen, that sometimes, due to corporate policy, or laws, or the nature of reality, are just not going to be allowed. You are going to have to be told ‘no’.
It is one of the cruellest parts of employing customer service agents that we ask these people to do the emotionally draining work of saying no to customers, repeatedly. Even if the customer begs. Even if the customer starts to cry.
So yes, absolutely: having chat bots do that job would be a mercy.
I understand. This regrettably happens 0.04% of the time. The average time to resolve it is 5 work days. In 22% of the resolved cases it took one day, 3% took a month, 1.5% is never resolved and the last time that happened was 16 months ago.
There are no humans anymore in customer service, judging by my last interactions with them.
Nowadays, if none of the choices offered by the chatbot fits your problem, or the solution given doesn't satisfy you, the bot just hangs up. Unless you are a public person who can shame them on social media, you are better off trying to find someone who works there on LinkedIn.
This is pretty clearly recency bias. I’m sorry that you had a bad experience when seeking customer support. But let’s be honest, this is deeply exaggerated and you have an axe to grind.
The customer merely thinks she is having a bad experience. The employee is the one having the bad experience. It's kinda fascinating how people pretend some help desk person hired 3 days ago created the company and should be held responsible for its wrongdoings.
Maybe we should mandate that the private numbers of the CEO or board members are always listed on the company site. Then people who are having problems could directly contact the people at fault, at any time.
Imagine how impressive that would be if it's a really large company. The CEO wouldn't have to be very bright; as long as he actually got to work on the issue, it would be fkn revolutionary.
Now imagine the LLM is the CEO.
Running the company might be harder than doing phone support. Then again, it might not be, if one could somehow be in 1000 places and have 1000 conversations simultaneously, 168 hours a week.
It wouldn't have to get back to you on that. It can explore hiring a company, acquiring and cloning it in real time and bring the figures into the conversation while simultaneously talking with their competitors.
With other LLM CEOs it can draw up a 1000-page contract with many angles covered and triggers for renegotiation.
Otoh, it's very convenient for a company if you stay humble before a real person and choose the okay-sad route. It can then escalate only the really bad situations and ignore those who stay calm all day. So as a clever lifehacker you know that you have to be the loudest to get the service.
I got through to a senior bank officer on email after normal website features failed me.
He replied by pasting a piece of text from their website that I had already seen in the first instance and had already explained didn't solve my issue.
It took a lot of effort on my part to stay polite in my reply, reiterate how that didn't help at all, and point out that he needed to actually read my original email.
From the bank officer's angle, the strategy they used (find some info on the site and send it to the customer) probably works well over half the time, because people just don't read.
That you did, and got rightly annoyed by the response, probably makes you an exception from the bank officer's escalation experience.
If I enter a bank or Council office I have to talk to them like they are chatbots; not doing so is more likely to get a negative emotional response. I need to have a positive result based on facts.
Comic Book Guy demonstrates exactly how to query Usenet forums such as "alt.nerd.obsessive" with the single line: "need [to] know star rm pic" which is answered within 7 minutes by a cadre of nerds, including Prince Rogers Nelson, and a guy hiding under a Hollywood conference table.
It's not so much how you talk to a chatbot as what kind of communication you're exposed to.
Anecdotal N=1 example: our 15-year-old, who grew up on YouTube and nowadays TikTok, speaks very loudly and emphatically, because the people he watches and listens to 24 hours a day (or would, if we let him) do that. Same with speech mannerisms and the like.
Which is nothing new of course, there were alarm bells that American kids were speaking in British accents because of watching so much Peppa Pig.
It would be interesting to see what would happen if ChatGPT required users to talk to it respectfully, or have it eventually refuse to fulfill the request, or even to fulfill future requests until you made amends. It doesn't seem like a big enough burden that people would just stop using the application altogether. And if language is shaped by the medium of communication, would people start writing differently outside of ChatGPT? Acting differently?
Now, that will never happen, but I'd like to run the experiment in a pocket universe if I could.
With an iPhone 15 Pro I set the new dedicated Action button to a Siri shortcut that launches another AI app: whether the "Voice in a Can" workaround for Alexa, or any other AI app, preferably one that auto-listens the way Voice in a Can does. Siri on the right button, the other assistant on the left button.
I use it to select songs when I'm driving and for that it's...mostly ok. I'd kill for some documentation that describes how to reliably escape band and song names so that they're interpreted as literals.
Being a "boomer" (over 25), I've noticed that people have become much more rude and entitled. You frequently see programming posts titled something like I NEED HELP WITH <xyz>, and then in the text something like "Someone help me figure out why <abc>", that's it.
The surprising thing to me is that people invariably bend over backwards to help this person who basically just barked an order...
The problem is likely education (theirs, not necessarily generally). They never learned how to ask a question or be curious. When they had a problem they just said, “I need help” and then someone (a parent or teacher maybe) did it for them.
The brighter ones will figure out you get way better help when you ask a good question.
> You frequently see programming posts titled something like I NEED HELP WITH <xyz>, and then in the text something like "Someone help me figure out why <abc>", that's it.
I saw posts like that back on Fidonet over 2400 baud modems. It's just an inevitable aspect of newbies finding a forum and not quite understanding that there are real people on the other end.
I'm a decade more "boomer" than that and I've literally seen that happen consistently since the 1990s so I think you're just becoming more aware of it.
"Don't ask to ask, just ask" and "explain what you're trying to accomplish not just what you did" were common instructions back when most programming advice was only available on IRC.
> “Many users will be primed to think of these AIs as friends, rather than the corporate-created systems that they are.”
That’s insane. I’m impervious to advertising and sympathizing with AI.
I’ll talk to myself. I’ll talk to my cat, but I refuse to talk to any digital phone system that tries to imitate human emotions at me.
I’d rather press “one for…” than listen to a bleating squawk box meant to sound like a hard-working, perky young woman: “One moment please. I’m still working on it. All done!” (Spectrum)