
Artificial Intelligence (AI) has unrivalled potential for positive economic and social change. For this to happen, both consumers and businesses need to trust AI enough to adopt the technology, which is why conferences like the IEC's AI with Trust, held on 16 and 17 May 2022, are essential.
The AI field is growing rapidly, and many organizations are working with AI for the first time or expanding their involvement. These organizations need insight into how to embed trust into AI.
The UK government published its National AI Strategy in September 2021. The Strategy notes that integrating standards into the government's model for AI governance and regulation is crucial for unlocking the benefits of AI, in particular referencing the importance of the forthcoming AI management system standard, ISO/IEC 42001.
The Strategy's key initiative on standardization is the announcement of a pilot AI Standards Hub, launching in 2022. Three partners are working with the government on this: The Alan Turing Institute (the UK's national institute for data science and AI), NPL (the UK's national metrology institute) and BSI (the UK's national standards body).
The Hub has four functions:
- Tracking AI standards
- Convening, connecting and community building
- Education, training and professional development
- Thought leadership and international engagement
The Hub will provide a platform to advance discussions about AI standardization, educate, enable active participation in AI-related standards development, and facilitate the use of relevant published standards.
The AI Standards Hub will cover the breadth of standards being developed in AI, including the work of ISO/IEC JTC 1/SC 42 (the committee developing ISO/IEC 42001) and other key SDOs (standards development organizations) active in AI. Although the majority of standards thus far have been cross-cutting, addressing topics such as IT governance or bias, there is also a need for vertical standards, as is apparent from the "AI with Trust" programme.
In stakeholder market research conducted to scope the Hub, both horizontal and vertical standards were viewed as valuable, with a mix of perspectives on which was most important. It is likely that certain pioneering AI areas, such as healthcare or self-driving cars, will lead on sector-specific standards to build trust for specific use cases.
Alongside good standards and regulation, there is also recognition of the need to utilise conformity assessment to build trust in AI, building on the established global infrastructure for testing, certification and accreditation. ISO/IEC 42001 is being written so that organizations can be certified against it, and more such standards should follow.
The draft EU AI Act, like the UK AI Strategy, points to such a conformity assessment approach to build trust in AI.
Ultimately, building trust in AI requires a discoverable and navigable framework of regulation, standards and conformity assessment that both cuts across sectors and addresses sector-specific challenges.