Overview
Once you have planned and prepared for your AI implementation, make sure you manage risk effectively and put clear governance in place.
Governance when running your AI project
Safety
Governance in safety makes sure the AI tool or service shows no signs of bias or discrimination. Consider if:
- the algorithm is performing in line with safety and ethical considerations
- the AI tool or service is explainable
- there is an agreed definition of fairness implemented in the AI tool or service (see the sketch after this list)
- the data use aligns with the Data Ethics Framework
- the algorithm’s use of data complies with privacy and data processing legislation
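One way to make an agreed definition of fairness concrete is to implement it as an automated check. The sketch below is a minimal, illustrative Python example of one candidate definition, the demographic parity difference between two groups; the group labels, data and 0.2 tolerance are assumptions for illustration, not prescribed values.

```python
# Minimal sketch of a fairness check: the absolute difference in
# positive-outcome rates between two groups (demographic parity).
# Group labels, data and the 0.2 tolerance are illustrative assumptions.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    return abs(positive_rate(group_a) - positive_rate(group_b))

# Flag the model for review if the gap exceeds the agreed tolerance.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
if demographic_parity_difference(preds, grps, "a", "b") > 0.2:
    print("Fairness tolerance exceeded - review before release")
```

Demographic parity is only one candidate definition; whichever definition your organisation agrees on, recording it as an executable check makes it easier to audit.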
Purpose
Governance in purpose makes sure the AI tool or service is achieving its purpose and business objectives. Consider:
- whether the AI tool or service solves the problem identified
- how and when you will evaluate the AI tool or service
- whether the user experience aligns with existing government guidance
Accountability
Governance in accountability provides a clear accountability framework for the AI tool or service. Consider:
- if there is a clear and accountable owner of the AI tool or service
- who will maintain the AI tool or service
- who has the ability to modify the code
Testing and monitoring
Governance in testing and monitoring makes sure a robust testing framework is in place. Consider:
- how you will monitor the AI tool or service’s performance (see the sketch after this list)
- who will monitor the AI tool or service’s performance
- how often you will assess the AI tool or service
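Parts of this monitoring can be automated. The sketch below is a minimal, illustrative Python example, assuming you can periodically compare live predictions against ground-truth labels; the accuracy metric, baseline and tolerance are assumptions, not prescribed values.

```python
# Minimal sketch of a scheduled performance check, assuming ground-truth
# labels arrive some time after predictions are made. The baseline and
# tolerance are illustrative values agreed at sign-off, not prescribed ones.
from datetime import date

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def assess_performance(predictions, labels, baseline=0.90, tolerance=0.05):
    """Alert if live accuracy drops below the agreed floor."""
    score = accuracy(predictions, labels)
    if score < baseline - tolerance:
        # In practice, alert the named owner of the AI tool or service.
        print(f"{date.today()}: accuracy {score:.2f} is below the agreed floor")
    return score

assess_performance([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])  # 0.60 triggers the alert
```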
Public narrative
Governance in public narrative protects against reputational risks arising from the application of the AI tool or service. Consider whether:
- the project fits with the organisation’s use of AI
- the AI tool or service fits with the organisation’s policy on data use
- the project fits with how citizens and users expect their data to be used
Quality assurance
Governance in quality assurance makes sure the code has been reviewed and validated. Consider whether:
- the team has validated the code
- the code is open source
Managing risk in your AI project
| Risk | How to mitigate |
| --- | --- |
| The project shows signs of bias or discrimination | Make sure your AI tool or service is fair and explainable, and that you have a process for monitoring unexpected or biased outputs |
| Data use does not comply with legislation, guidance or the government organisation’s public narrative | Consult guidance on preparing your data for AI |
| Security protocols are not in place to maintain confidentiality and uphold data integrity | Build a data catalogue that defines the security protocols each dataset requires |
| You cannot access the data, or it is of poor quality | Map the datasets you will use at an early stage, both within and outside your government organisation. Then assess each dataset against criteria for a combination of accuracy, completeness, uniqueness, relevancy, sufficiency, timeliness, representativeness, validity and consistency (see the sketch below this table) |
| You cannot integrate the AI tool or service | Involve engineers early in building the AI tool or service to make sure any code developed is production-ready |
| There is no accountability framework for the AI tool or service | Establish a clear responsibility record that defines who is accountable for each area of the AI tool or service |
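Some of the data quality criteria in the table above, such as completeness and uniqueness, lend themselves to simple automated checks. The sketch below is a minimal, illustrative Python example; the records and field names are hypothetical.

```python
# Minimal sketch of two data quality checks: completeness (no missing
# values) and uniqueness (no duplicate identifiers). The records and
# field names below are hypothetical.
records = [
    {"id": 1, "postcode": "SW1A 1AA"},
    {"id": 2, "postcode": None},        # incomplete record
    {"id": 1, "postcode": "SW1A 1AA"},  # duplicate id
]

# Completeness: share of records with no missing values.
complete = [r for r in records if all(v is not None for v in r.values())]
completeness = len(complete) / len(records)

# Uniqueness: share of records with a distinct identifier.
uniqueness = len({r["id"] for r in records}) / len(records)

print(f"completeness: {completeness:.0%}, uniqueness: {uniqueness:.0%}")
```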