Trust us – two little words that aren’t going to advance the self-driving industry

Today, our Director of Research and Insights, Ed Houghton, will be joining a panel at the CAM Innovators Day at the Institution of Engineering and Technology. He’ll be sharing insights from our recent work – talking about the need to demonstrate safety, the evidence from our trials and surveys, the importance of engaging vulnerable groups, and assurance.

Trust is central to relationships. Whether it’s with people, brands, services or technologies, trust radically shapes our behaviour and experiences. And with AI becoming more and more prevalent in our lives, trust takes on a whole new dimension of complexity – is it possible to trust technologies that are, on the surface, behaving like a human? What happens when trust is broken?

The APA Dictionary of Psychology defines trust as “the confidence that a person or group of people has in the reliability of another person or group… the degree to which each party feels they can depend on the other party to follow through on their commitments.” In the case of self-driving, then, trust isn’t only placed in the vehicle – it’s also placed in the service provider, the originator or owner of the technology. And when it comes to commitments, there are key outcomes that those using self-driving tech expect: as DfT research has shown, safety is paramount. By the same token, when we talk about trust in self-driving AI, we’re essentially also talking about perceptions of safety.

Trust issues are particular to different industries

This is different to how trust is understood in other AI use cases. In banking, trust in chatbots is tied to issues such as fraud; in HR, it is tied to bias and discrimination. Each focus of trust demands a different approach and strategy when engaging with customers, clients or users.

Across the board, however, a variety of factors influence public trust in AI: individual traits such as personality, past experiences, and anxiety or confidence with technology all shape public response. But so do the characteristics of the AI itself: reliability, anthropomorphism and performance, in particular, shape our views.

And it’s this last one – performance – that is key in the self-driving space. In the absence of visible self-driving technology on our roads beyond trials, it’s difficult for the public to judge whether the performance of a self-driving vehicle is up to scratch. There are few tangible examples out there to act as a baseline.

The context itself also plays a huge role. Driving, or being a road user in general, is a high-risk daily task that puts individuals at greater risk than many other day-to-day activities. AI in a driving context therefore operates in a higher-risk setting, and it is demonstrably difficult, at present, to develop AI that can deal with complex driving scenarios.

Demonstrating safety – in every situation

These complex scenarios present a massive challenge to industry – one we’re helping to understand more about. Complex ‘edge cases’ need better simulation so that AI can be taught how to deal with them. They also present huge risk: they are often visceral, emotive experiences that capture the nature of incidents on our roads. Using these examples as a platform to build trust is a challenge, and could break trust in the technology if handled badly – but if safety can be demonstrated, it is likely to support acceptance of AI as a transformative part of our future mobility system.

We’ve done many pieces of work over the years on public acceptance and trust, and we’re currently working on several projects on trust with a self-driving angle. DeepSafe, our work with Drisk.ai, Claytex, rfPRO and Imperial College, looks at trust in self-driving from the perspective of testing and demonstrating trustworthiness through the AI Driving Test. With our partners, we’re exploring whether it’s possible to use driving test simulations, built on the very latest simulation technology, to showcase how AI behaves around edge cases – and what impact this has on trust. We’re also exploring public attitudes and capturing people’s experiences of complex situations to help train the AI. This, we hope, will help us develop an understanding of how trust can be influenced by different types of safety-related information, and of the importance of demonstrating safe behaviours in building trust.

That’s why the self-driving industry, unlike banking or other sectors, cannot rely on asking to be trusted, or on saying it is trustworthy. Instead, the industry must demonstrate trustworthiness through safety – the safety of users, the safety of others on our streets and, in particular, the safety of vulnerable groups. Only then can industry expect to see mass adoption and acceptance of AI on our roads.

Interested to learn more? Get in touch, or read more about our work in the sector and our current project, DeepSafe.