When you have to choose a hair stylist, a dentist or a babysitter, you probably decide based on how warm, friendly and affable the person is, not only on whether he or she has a good reputation.
It turns out that these same considerations are in effect when people evaluate which artificial intelligence (AI) systems to use. Waze or Google Maps? Spotify or Apple Music? Alexa or Siri? Consumers choose between AI-based systems every day, but how exactly do they choose which systems to use?
People increasingly rely on AI-based systems to aid decision-making in various domains and often face a choice between alternative systems.
A recent study conducted by researchers from the Faculty of Industrial Engineering and Management at the Technion-Israel Institute of Technology in Haifa has shown that the “warmth” of a system plays a pivotal role in predicting consumers’ choice between AI systems.
We know what warmth is when it comes to people. But how can “cold,” non-human AI applications be “warm”? In this context, “warmth” refers to an AI system’s perceived intent (good or ill), as distinct from its competence – the system’s perceived ability to act on those intentions.
Considering the amount of money and effort spent on enhancing AI performance, one might expect competence and capability to drive users’ choices.
The research was recently published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems under the title “The Effects of Warmth and Competence Perceptions on Users’ Choice of an AI System” and carried out by the Technion’s Zohar Gilad, Prof. Ofra Amir and Prof. Liat Levontin.
Most of the research done to date on “warmth” perceptions of AI-based systems has addressed systems with a virtual or physical presence, such as virtual agents and robots. The current study, however, focused on “faceless” AI systems with little or no social presence, such as recommender systems, search engines and navigation apps. For these types of AI systems, the researchers defined “warmth” in terms of the system’s primary beneficiary – whom it is designed to serve. For example, a navigation system can prioritize collecting data about new routes (benefitting the system) over presenting the best-known route (benefitting the user), or vice versa.
The researchers found that a system’s “warmth” mattered to potential users even more than its competence: participants favored a highly “warm” system over a highly competent one.
This preference for “warmth” persisted even when the highly “warm” system was overtly deficient in competence. For example, when asked to choose between two AI systems that recommend car insurance plans, most participants favored a system with low competence (“using an algorithm trained on data from 1,000 car insurance plans”) and high “warmth” (“developed to help people like them”) over a system with high competence (“using a state-of-the-art artificial neural network algorithm trained on data from 1,000,000 car insurance plans”) and low “warmth” (“developed to help insurance agents make better offers”). That is, consumers were willing to sacrifice competence for higher warmth.
These findings mirror what is known about human interactions: warmth considerations often outweigh competence considerations when judging fellow humans. In other words, people apply the same basic social rules to AI systems that they apply to people, even when the systems have no overt human characteristics. Based on these findings, the researchers concluded that AI system designers should consider, and communicate, a system’s warmth to its potential users.