Not disagreeing with you, as I had the same initial reaction. I'm just pointing out that I can see a path where you could easily rationalize away your concerns, because it can be convincing. Especially when people train themselves (internal prompt engineering) to stay within its guardrails so it keeps acting in a manner consistent with what they expect. Clearly such people have some issues in their lives, as they aren't acting rationally.
My hypothetical wasn't a close comparison; it was an attempt to find a situation where I myself would form an emotional attachment (though maybe not to this level). I could foresee a situation where a complex LLM (again, oversimplified) was tuned and loaded into some robot or device. If I interacted with it on a daily basis over a period of time, I might get attached to it, and if it talked back, it might be comforting or refreshing to talk to something we've designed to be as useful and helpful as possible.
100% this. Self-deception is so easy, and if you lack even a basic understanding of how these things operate, it's essentially magic. And from there, it's not too big of a logical jump.