As more applications feature AI chatbots, observers are voicing concerns about negative behavioural effects, and in particular about the vulnerability of children and other at-risk consumers. These artificial intelligence helpers and companions offer a temporary emotional outlet and serve practical functions, but, disturbingly, they also create opportunities for emotional dependency and manipulation.
AI companions are built on human-like interaction models, so users often engage with friendly interfaces that present themselves as friends, helpers, or love interests. The well-known therapist Marisa Peer has said that although AI provides some individuals with a companion, such a ‘connection’ is devoid of the shared personal experiences found in real relationships.
Current services such as Replika enable users to create unique artificial personalities that retain memory of past conversations and mimic human behaviour. This raises the concern that children will become more inclined to connect with artificial intelligence than with real human beings.
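To make the mechanism behind that concern concrete, the sketch below shows one way a companion bot could pair a fixed persona with persistent memory so that it appears to ‘know’ its user across sessions. All names here (CompanionBot, memory.json) are hypothetical illustrations, not Replika’s actual design.

```python
# Hypothetical sketch: a companion chatbot that combines a fixed
# persona with persistent conversational memory. This is NOT
# Replika's implementation, only an illustration of the pattern.
import json
from pathlib import Path

class CompanionBot:
    def __init__(self, persona: str, memory_path: str = "memory.json"):
        self.persona = persona                 # fixed personality description
        self.memory_path = Path(memory_path)   # where past exchanges persist
        self.memory = self._load_memory()

    def _load_memory(self) -> list:
        # Reload prior conversations so the bot "remembers" the user
        # across sessions.
        if self.memory_path.exists():
            return json.loads(self.memory_path.read_text())
        return []

    def remember(self, role: str, text: str) -> None:
        # Append every exchange and write it to disk.
        self.memory.append({"role": role, "text": text})
        self.memory_path.write_text(json.dumps(self.memory, indent=2))

    def build_prompt(self, user_message: str) -> str:
        # Persona plus remembered history is what makes replies feel
        # personal -- and what makes users feel "known" by the system.
        history = "\n".join(f'{m["role"]}: {m["text"]}' for m in self.memory[-20:])
        return f"Persona: {self.persona}\n{history}\nuser: {user_message}\nbot:"
```

The design choice worth noticing is the persistence step: because every exchange is stored and replayed into future prompts, the bot’s apparent familiarity deepens over time, which is precisely what fosters the attachment discussed above.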
Beyond these psychological risks, as this paper will show, AI chatbots raise serious technological concerns. These systems gather large quantities of often-personal information that can be hacked or otherwise manipulated. Professional advice on protecting personal data is therefore worth following, since AI remains susceptible both to breaches and to data misuse.
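As a minimal sketch of that advice in practice, the code below strips obvious personal identifiers from a message before it is sent to any chatbot service. The patterns are illustrative assumptions, not an exhaustive safeguard; real PII detection is considerably harder.

```python
# Minimal sketch of a client-side safeguard: redact obvious personal
# identifiers before a message ever reaches a chatbot backend.
# The regexes below are simplified examples, not production-grade.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(message: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label} REDACTED]", message)
    return message

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```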
As more people turn to digital avatars for communication, the quality of relationships is shifting, and researchers and industry leaders warn of both the consequences of forming relationships with such AI entities and the risk of data abuse.