Researchers at the University of Gothenburg have examined how advanced AI systems affect trust in interpersonal communication. Their study used scenarios in which individuals interacted with AI agents, revealing how trust can be compromised. In one scenario, a scammer unknowingly conversed with a computer system and spent considerable time attempting fraud without realizing the other party was an AI. The researchers found that, because such systems behave so realistically, people often take a long time to recognize they are interacting with one.
In their article “Suspicious Minds: The Problem of Trust and Conversational Agents,” the researchers, Oskar Lindwall and Jonas Ivarsson, explore the harm that suspicion does to relationships. They offer the example of a romantic relationship in which trust issues lead to heightened jealousy and a tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity can give rise to unwarranted suspicion.
The study also found that certain behaviors in human-to-human interactions were interpreted as signs that one of the parties was actually an AI. The researchers suggest that giving AI systems increasingly human-like features may create problems when it becomes unclear whom one is communicating with. Ivarsson questions whether AI needs human-like voices at all, since such voices create a false sense of intimacy and lead people to form impressions based on the voice alone.
Because the human voices used in AI systems are so believable, it is difficult to tell whether one is interacting with a computer. The researchers therefore propose AI systems with voices that work well but are still clearly synthetic, which would increase transparency and reduce the potential for deception. They emphasize that uncertainty about whether one is communicating with a human or a computer can disrupt relationship-building and the joint meaning-making that communication depends on.
In their analysis, Lindwall and Ivarsson studied publicly available YouTube data, focusing on three types of conversations along with the audience reactions and comments they drew: a robot calling a person to book a hair appointment, a person calling another person for the same purpose, and telemarketers being transferred to a computer system that plays pre-recorded speech. Examining these interactions gave the researchers insight into how trust and perceptions are shaped in different communication contexts.