Artificial intelligence tools like Apple’s Siri and Amazon’s Alexa don’t really try to change your mind, but maybe they should.
That is the idea behind new research being carried out by Dr. Samira Shaikh, an assistant professor in cognitive science at the University of North Carolina at Charlotte’s Department of Computer Science. She developed a smart chatbot whose goal is not just to carry out a conversation with users, but to engage them in arguments and counterarguments with the specific aim of changing a person’s mind.
“Alan Turing’s question: ‘Can machines think?’ was one of the fundamental questions that interested me as I began my research in AI and natural language processing,” Shaikh told Digital Trends. “My goal in this work was to see if we could inch a few steps closer to answering Turing’s original question by merging the insights from social psychology and cognitive science with AI. I wanted to see if certain elements of human communication can be recreated effectively by an algorithm — and I chose the human behavior of persuasion specifically since it is nearly ubiquitous in human communication. This is a problem that needs solving along the way to achieving true machine intelligence.”
As we noted, a chatbot that argues with you regardless of your query is not likely to find much favor with customers, but Shaikh says there are still times such a tool may be useful.
“Sure, there are use-cases where you would want the agent to explore only a certain subset of topics, but there will still be potential for persuasive behavior in these scenarios,” she continued. “The conversation with an intelligent agent could center on topics including politics and social issues, or what brand of paper towels to buy next.”
Looking further into the future, as chatbots take on new roles as carers and confidantes, an argumentative AI could be incredibly useful. For example, it may be able to help you prepare for a job interview, or help lawyers hone their arguments for a court case.
“This persuasive behavior is based on the theory of planned behavior, a well-established theory of social influence,” Shaikh said. “I adapted this theory to my work, where the communicator attempts to persuade the receiver by sending them tailored messages; certain behaviors are triggered in the agent depending on what has happened so far in the conversation. Humans can do this quite effectively, generally speaking. The only difference is that in my framework, the communicator is a computer agent.”
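To give a flavor of what "triggering behaviors based on what has happened so far" might look like, here is a minimal, hypothetical sketch of a rule-based persuasive agent in Python. The class name, keyword triggers, and strategy texts are all illustrative assumptions for this article, not Shaikh's actual implementation; the three strategy categories loosely mirror the factors the theory of planned behavior identifies (attitude, subjective norms, and perceived behavioral control).

```python
# Hypothetical sketch: a rule-based persuasive agent that picks a
# strategy based on the receiver's latest reply. Not Dr. Shaikh's
# actual system; names and rules are illustrative only.

# The theory of planned behavior models intention as shaped by three
# factors: attitude, subjective norms, and perceived behavioral control.
STRATEGIES = {
    "attitude": "Highlight benefits the receiver already values.",
    "norm": "Point to what peers or respected figures do.",
    "control": "Show the action is easy and within their ability.",
}

class PersuasiveAgent:
    def __init__(self):
        self.history = []  # (reply, strategy) pairs seen so far

    def pick_strategy(self, receiver_reply: str) -> str:
        """Trigger a behavior depending on what has happened so far."""
        reply = receiver_reply.lower()
        if "can't" in reply or "hard" in reply:
            key = "control"   # receiver doubts their own ability
        elif "nobody" in reply or "everyone" in reply:
            key = "norm"      # receiver appeals to what others do
        else:
            key = "attitude"  # default: target their evaluation
        self.history.append((receiver_reply, key))
        return STRATEGIES[key]

agent = PersuasiveAgent()
print(agent.pick_strategy("I can't see myself recycling every day."))
```

A real system would of course replace the keyword rules with natural language understanding and generation models, but the core loop is the same: classify the receiver's state, then select the persuasive message tailored to it.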
Shaikh said her argumentative chatbot is fully operational and is being tested in a variety of scenarios designed to see if it can change a person’s mind. A preliminary paper on the work is published in the journal AI Matters.