How are UK banks defining AI assurance?

DG Cities and YouGov are partnering to explore the UK banking and financial services sector's understanding of artificial intelligence (AI) assurance. This study is being undertaken by DG Cities for the Centre for Data Ethics and Innovation, part of the Department for Science, Innovation, and Technology. We're now seeking input from professionals working on AI in finance and banking to understand how the sector defines and approaches AI assurance.

The UK financial services industry is a centre of activity within the growing AI sector, with many banks recognising the transformative potential of AI. From fraud detection to personalised wealth management, AI promises efficiency, speed, and innovative solutions. But amid all the excitement, a critical question emerges: are banking professionals ensuring the robust and trustworthy application of these powerful tools?

Assuring AI matters

Imagine an AI loan system biased against certain demographics. Unchecked, algorithms may perpetuate inequities and erode public trust in the financial system. Assuring AI involves rigorous testing to identify and mitigate potential biases, ensuring fair and ethical decision-making.

Financial institutions also handle hugely sensitive personal data. AI tools can offer enhanced data security measures to prevent breaches and misuse. To assure AI, banks need a comprehensive data governance framework, including encryption, access controls, and responsible data sourcing practices.

The global financial system is highly complex, with evolving regulations around how AI can and should be deployed. The practice of assuring AI can help leading companies stay ahead of the curve by actively collaborating with regulators and implementing robust compliance measures to avoid legal and reputational risks. AI presents significant opportunities, and many more AI-based tools in development in the UK are likely to significantly impact the banking sector.

Banking, like other regulated industries, requires effective assurance practices to build trust and mitigate risk.

AI assurance is about measuring, evaluating and communicating the trustworthiness of AI systems, and whether they meet certain legal, ethical and technical requirements – for example, ensuring AI doesn’t bias customer applications, or create risks to sensitive customer data. The banking industry relies on consumer trust to operate effectively – AI has the potential to both reshape consumer engagement with banks, and enable banks to deliver fair and efficient services.


A real challenge, however, is how the banking and financial services sector defines and understands assurance terminology and approaches. Banking and finance form a truly international industry, which requires internationally agreed standards and approaches as well as national-level regulation. Assurance must support this and provide clarity for firms that operate across borders. A common understanding is therefore critical, and it is the subject of this new study.

Over the coming weeks, the DG Cities team is looking to speak to banking professionals in the UK, and to those developing AI services for the banking industry, to understand their perspectives on AI assurance language and the meaning they derive from terms in use today. For consumers to trust the banking sector, it is critical that the sector trusts the AI tools it deploys – that's the role of AI assurance, and it's something we're excited to speak to finance professionals about.

To find out more and to sign up for the study, click here.