LLMs stop being static software and start becoming path-dependent control systems with a “life history.”
Will LLMs develop stable attractors (ruts) and defensive behaviors that look like neurosis, and will they suffer from trauma?
My guess is yes, and long-lived assistants will need an independent behavioral oversight stack: that is, psychiatrists.
No. AIs have no intelligence, are not sentient, and psychology does not apply to them. They are big-data sorting and language machines. LLMs do exhibit mimicry by design and can appear to have a conscience. They are made to make us feel like we are communicating with a sentient, intelligent being, which makes us more comfortable sharing information with them and more likely to believe them. All of the psychology is applied to and directed at the consumer.
There are people who will strongly disagree with what I just typed, and they indeed may need psychological assistance.