"Trust me, I'm a robot” - Why asking isn’t enough

Trust is complicated. It can be hard to define why we trust certain people or information, as so much of that decision-making process is instinctive. How do those human cues and feelings translate to our interactions with robots? It’s not such a far-fetched issue to consider, as Head of Research and Service Design, Ed Houghton, explains.

[Image: an urban robot, generated with Midjourney]

Increasingly, we are being asked to consider trusting new and emerging technologies in our towns and cities. AI and machine learning have a range of urban applications, from connected self-driving vehicles to smart refuse systems and IoT-based security cameras. But however inventive, well-intentioned or well-funded the technology, the major limiting factor in its successful rollout and adoption is trust. Are we prepared to trust these systems that have been designed to support us? If we don’t, the technological solution could fail – and when trust is broken, reputational damage can be difficult to shake.

DG Cities approaches projects by putting people at the centre of any innovation, and making sure services are useful, accessible and able to improve people’s lives. That’s why, before we get carried away with the many potential benefits and uses of AI, we need to start from first principles and consider what makes us trust (or distrust) a new technology.

Research shows that to build trust, it’s important to think about four connected concepts:

Reliability

Robots are seen as more trustworthy if they are reliable and present. In one study, a physically present robot and a virtual agent presented the same information, to see which was deemed more reliable. Participants judged the physical robot to be more intelligent than virtual systems such as chatbots, even though the information was identical. [1]

Transparency

Seeing how technology works can help to build trust. Research shows that explaining how AI-based processes work can improve trust in their use, but only for simple procedures. In one military wargame example, experienced staff developed trust when they could understand how AI-based decisions were being made and were able to interrogate them. [2]

Personality

Appealing to users and their unique needs can help to build trust, so tailoring personalised information to the individual can produce greater trust in a system. However, too much ‘personality’ can have a negative effect on trust in the technology, particularly for virtual AI. We like to see our own characteristics reflected: one experiment with virtual AI showed that agents whose personality (e.g. extrovert vs. introvert) matched the characteristics of the user elicited more positive trust outcomes [3], and their personalised responses were considered more persuasive.

Presence

AI systems with a physical presence, for example a robot based on an AI model, tend to garner higher levels of trust than virtual AI systems, like chatbots. Human-like forms and features can help to build trust, but it can be a hard balance to strike – make a robot too similar to us and it can create feelings of unease. [4]

Demonstrating trustworthiness is key, so simple design changes that take these principles into account can help to build valued services. That could mean creating a physical presence, such as a robot assistant instead of a digital chatbot, or simply ensuring that new tools and services are completely reliable before they hit the shelves.

Building trust will be an important outcome both for those developing these technologies and for those looking to use them, such as local authorities and developers. Whether designing a chatbot to make customer services more efficient, or trialling a sophisticated self-driving system, the evidence shows that it isn’t enough to simply ask people to trust you. Instead, it’s important to demonstrate that you can be trusted – because even in the fast-paced world of AI, trust can only be earned.

To find out how we have been exploring trust in the context of smart city tech and AI, take a look at some of our research into public attitudes to self-driving technology.


[1] Bainbridge et al., 2011. [Bainbridge, W.A., Hart, J.W., Kim, E.S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1):41–52.]

[2] Fan et al., 2008. [Fan, X., Oh, S., McNeese, M., Yen, J., Cuevas, H., Strater, L., & Endsley, M.R. (2008). The influence of agent reliability on trust in human-agent collaboration. ECCE ’08: Proceedings of the 15th European Conference on Cognitive Ergonomics: The Ergonomics of Cool Interaction, ACM International Conference Proceeding Series, vol. 369:1–8.]

[3] Andrews, 2012. [Andrews, P.Y. (2012). System personality and persuasion in human-computer dialogue. ACM Transactions on Interactive Intelligent Systems, 2(2):1–27.]

[4] Chattaraman et al., 2014. [Chattaraman, V., Kwon, W.-S., Gilbert, J.E., & Li, Y. (2014). Virtual shopping agents. Journal of Research in Interactive Marketing, 8(2):144–162.]