Assuring AI: UK Business Study


DG Cities and YouGov are conducting a study to explore UK businesses' understanding of AI assurance. The study is being undertaken by DG Cities for the Centre for Data Ethics and Innovation, part of the Department for Science, Innovation and Technology. We're now seeking input from business leaders to understand their perspectives.

The UK government recognises the importance of AI assurance in ensuring the responsible development and use of AI technologies. The AI Regulation White Paper set out a pro-innovation, proportionate, and adaptable approach to AI regulation, emphasising the need for trustworthy AI that is safe, reliable, fair, accountable, and explainable.

We are now conducting research to understand how UK businesses approach AI assurance.

About the study

The study examines how AI assurance terminology and techniques are being used by UK businesses. The findings will help develop a common language that supports businesses in adopting and using AI safely.

DG Cities is conducting semi-structured interviews with business leaders across key industries to gather their views on current and future AI assurance techniques and terminology. If your organisation is developing, using, or procuring AI tools and services, we'd like to speak with you.

To find out more and take part, please fill in the form. If you have any queries, contact Ed Houghton.

What is AI assurance?

To support the development of trustworthy AI, the UK government has published a Portfolio of AI Assurance Techniques, which outlines a range of tools and methods for assessing and improving the trustworthiness of AI systems. These techniques span data assurance, compliance audits, formal verification, performance testing, certification, risk assessment, impact assessment, and impact evaluation.