The UK Government’s Department for Science, Innovation and Technology (DSIT) has launched the Fairness Innovation Challenge to drive the development of new socio-technical solutions to address bias and discrimination in AI systems. 

This competition aims to encourage the development of socio-technical approaches to fairness, test how strategies to address bias and discrimination in AI systems can comply with relevant regulation, and provide greater clarity about how different assurance techniques can be applied in practice.

The challenge was formally launched on Monday, 16 October 2023, and asked applicants to submit solutions with a focus on real-world examples.

Winning proposals will receive grant funding to develop their solutions over a one-year period. We are also delighted to deliver this challenge in partnership with the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO), who will help guide winners through some of the legal and regulatory issues relating to the fairness implications of AI systems, and will use learnings from the challenge to shape their own broader regulatory guidance.

Outcomes and objectives

Ensuring AI systems are built and used fairly can be challenging but is hugely important if the potential benefits of AI are to be realised. 

Recognising this, the government’s white paper “A pro-innovation approach to AI regulation” proposes fairness as one of five cross-cutting principles for AI regulation. Fairness encompasses a wide range of issues, including avoiding bias that can lead to discrimination. 

This issue has been a core focus for the Centre for Data Ethics and Innovation (CDEI) since we were established in 2018. Our 2020 “Review into bias in algorithmic decision-making” provided recommendations for government, regulators, and industry to tackle the risks of algorithmic bias. In 2021, we also published the “Roadmap to an Effective AI Assurance Ecosystem”, which set out how assurance techniques such as bias audit can help to measure, evaluate and communicate the fairness of AI systems. Most recently, we published our report “Enabling responsible access to demographic data to make AI systems fairer”, which explored novel solutions to help organisations access the data they need to assess their AI systems for bias.

Over this period, bias and discrimination in AI systems have been a strong focus across industry and academia, with a significant number of academic papers and developer toolkits emerging. However, organisations seeking to address bias and discrimination in AI systems in practice continue to face a range of challenges, including:

  • Accessing the demographic data they need to identify and mitigate unfair bias and discrimination in their systems.
  • Determining what fair outcomes look like for any given AI system and how these can be achieved in practice through the selection and use of appropriate metrics, assurance tools and techniques, and socio-technical interventions (a brief illustration of one such metric follows this list).
  • Ensuring strategies to address bias and discrimination in AI systems comply with relevant regulatory frameworks, including equality and human rights law, data protection law, and sector-specific legislation.

The Challenge will seek to address these issues by supporting the development of novel solutions to address bias and discrimination in AI systems. Winning solutions will implement socio-technical approaches to fairness, provide greater clarity about how different assurance techniques can be applied in practice, and test how strategies to address bias and discrimination in AI systems interact with, and can comply with, relevant regulation.

Winning solutions

We are delighted to announce that DSIT will fund the development of four solutions to address bias and discrimination in higher education, healthcare, finance and recruitment. Winners will each receive up to £130,000 to develop their solutions.

The Open University will develop a solution to improve the fairness of AI systems in higher education. The project will create a Responsible AI framework to help universities in the UK and worldwide create similar systems in-house.

The Alan Turing Institute will create a fairness toolkit for SMEs and developers to self-assess and monitor fairness in Large Language Models (LLMs) used in the financial sector. 

King’s College London will design a solution to address bias and discrimination in healthcare. The project will mitigate bias in early warning systems used to predict cardiac arrest in hospital wards, based on the CogStack Foresight model. 

Coefficient Systems Ltd.’s solution will focus on reducing bias in automated CV screening algorithms that are often used in the recruitment sector but can have biased outcomes. 

Partners

The CDEI leads the Government’s work to enable trustworthy innovation using data and AI as part of the Department for Science, Innovation and Technology (DSIT).

CDEI will deliver the challenge with our delivery partner Innovate UK and in partnership with UK regulators, the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO).

Innovate UK is the UK’s national innovation agency. Innovate UK supports business-led innovation in all sectors, technologies and UK regions, helping businesses grow through the development and commercialisation of new products, processes, and services, supported by an outstanding innovation ecosystem that is agile, inclusive, and easy to navigate.

The Equality and Human Rights Commission is Great Britain’s national equality body and has been awarded an ‘A’ status as a National Human Rights Institution (NHRI) by the United Nations. The EHRC works to help make Britain fairer by safeguarding and enforcing the laws that protect people’s rights to fairness, dignity and respect.

The Information Commissioner’s Office (ICO) is the UK regulator for data protection and freedom of information, with key responsibilities under the UK General Data Protection Regulation (GDPR), the Data Protection Act 2018 (DPA) and the Freedom of Information Act 2000 (FOIA). The ICO’s role is to uphold information rights in the public interest. AI is a priority area for the ICO, which regularly publishes guidance to support responsible innovation whilst protecting individual rights and freedoms.