This is scary because there have already been AI engineers claiming LLMs are sentient, so the unreasonable belief could spread into a mass delusion, fueled by hype. And if you ask non-experts, they often think AI is vastly more capable than it really is, able to pull data out of thin air.
How is that scary, when we don’t have a good definition of sentience?
Do you think sentience is binary, or a spectrum? Is a gorilla more sentient than a dog? Are all humans sentient, or does it get fuzzy as cognitive capacity declines, all the way down to brain death?
Is a multimodal model, hooked up to a webcam and microphone and running in a continuous loop, more or less sentient than a gorilla?