Appetite
All respondents are actively exploring or experimenting with AI, reflecting a strong appetite for the technology. Many have AI-powered instances live now; these are mostly internal, but some are public facing.
All are excited by the possibilities, though for many that excitement is tempered by caution.
Organisations’ situations and stances on AI are varied.
- Some hint that the ‘hype’ might be premature and that the space is insufficiently mature. One mentioned the lack of commercially available products. Another was open-minded about using AI but was content to wait and see whether AI offers practical solutions for any problems identified during current service reviews.
- Some are mostly focused on establishing robust governance, policies, and guidance to manage the adoption of AI technologies and the disruption it may cause for their organisations. These have a lesser focus on potential use cases at this stage, although they are giving this some consideration.
- Some smaller organisations described a deliberately slow and steady strategy initially, with a view to speeding up later. These were more likely to be researching how other organisations in their sector are applying AI.
- Others have a strong focus on potential use cases. These are more likely to have live AI instances already or be running proofs-of-concept. Many of these organisations have quite concrete plans for applying AI across a range of use cases.
Use cases
We heard about a range of current applications for AI:
- Personal productivity: this is the use of publicly available tools like ChatGPT by individual members of staff, which appears to be fairly widespread. Staff use AI for tasks such as content generation, and this has led many organisations to put guidance in place for the use of these tools. Several also mentioned that their developers used these tools for coding support.
- Cyber security: several organisations mentioned the sheer volume of threats their networks experience. They described how AI-powered threat detection allows them to handle this in a way that would not be viable with a human-only team.
- Analysis of large qualitative data sets: some organisations are using AI to help them process large quantities of data, for example consultation responses. For example, AI is used for sentiment analysis, summarisation, or the extraction of common themes.
- Automatic Welsh translation: many are experimenting with this. Those who do also use human translators for final editing and quality assurance.
- Staff-facing chat bots to act on queries: at least one organisation uses a chat bot that understands natural language requests and takes action in response, although this is limited to a narrow domain.
- Public-facing chat bots: several organisations – local authorities and arm’s length bodies – are using public-facing chat bots. However, they were explicit about these being fairly basic, rule-based bots. The implication is that these aren’t considered to be ‘real’ AI. Some accept natural language input and attempt to extract meaning from it, but then return a pre-programmed response.
- Computer vision: 2 organisations mentioned the use of computer vision, in which the system can identify and understand objects in images or video feeds. One example was attached to CCTV and only reached proof-of-concept stage, while the other is in live use for remote sensing.
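The rule-based chat bot pattern described above can be illustrated with a short sketch: the bot accepts natural language input, attempts to extract an intent from it, and then returns a fixed, pre-programmed response rather than generating one. The intents, keywords, and responses below are hypothetical examples, not drawn from any respondent's system.

```python
# Hypothetical canned responses: one fixed reply per recognised intent.
CANNED_RESPONSES = {
    "bin_collection": "Bin collections run weekly. You can check your collection day online.",
    "council_tax": "You can pay council tax online, by phone, or by direct debit.",
}

# Keyword lists stand in for the 'meaning extraction' step. A real bot
# might match input more cleverly, but the reply is still hard-coded.
INTENT_KEYWORDS = {
    "bin_collection": ["bin", "rubbish", "recycling", "collection"],
    "council_tax": ["council tax", "tax bill", "pay my tax"],
}

def reply(message: str) -> str:
    """Return the canned response for the first matching intent."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return CANNED_RESPONSES[intent]
    # No intent matched: fall back to a fixed help message.
    return "Sorry, I didn't understand that. Try asking about bins or council tax."
```

This illustrates the distinction respondents drew: the natural language handling is on the input side only, so such bots are not generally considered 'real' AI.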
Some recurring themes emerged when exploring planned use cases:
- Using AI-powered chat bots to simplify services for users: several organisations described ambitions in this space, both internally for staff and externally for customers. These bots would take natural language input and then take action in response; for example, provide information, carry out simple transactions or guide the user through completing more complex transactions.
- Predictive analysis: several organisations described using AI analysis of a data stream to make predictions and support pre-emptive action. 2 organisations described explorations with academic institutions into applying this to internet-connected sensors in people’s homes.
- Centrally coordinated personal productivity: most are interested in introducing personal productivity tools in a more structured way across their organisations, through the licensing of specific software for staff.
- Managing public-facing email inboxes: one local authority with a high number of inboxes is considering using AI-powered RPA to read incoming emails, understand the gist of what the email is about, decide what the next stage should be and then move it to the correct back-office team to be acted on.
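The email-triage idea described in the last bullet can be sketched in outline: read an incoming email, classify what it is about, decide the next stage, and route it to the right back-office team. The categories, keywords, and team names below are hypothetical, and a real deployment would use a trained classifier with a human-review fallback rather than this simple keyword scoring.

```python
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    body: str

# Hypothetical routing table: category -> back-office team queue.
ROUTES = {
    "housing": "housing-team",
    "planning": "planning-team",
    "waste": "waste-services-team",
}

# Keyword scoring stands in for the AI 'understand the gist' step.
CATEGORY_KEYWORDS = {
    "housing": ["housing", "tenancy", "repair"],
    "planning": ["planning", "application", "permission"],
    "waste": ["bin", "recycling", "fly-tipping"],
}

def triage(email: Email) -> str:
    """Return the team queue, or 'manual-review' when nothing matches."""
    text = f"{email.subject} {email.body}".lower()
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "manual-review"  # low confidence: keep a human in the loop
    return ROUTES[best]
```

The explicit manual-review fallback reflects a design choice several respondents implied elsewhere: automated routing should fail safe to a person rather than guess.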
Benefits
Respondents did not directly articulate the benefits of adopting AI within their organisations as readily as they did with RPA. This reflects the less mature state of the technology. However, we can infer benefits from the use cases.
Broadly, organisations hope to realise the same benefits from AI that they are realising from automation. However, with AI, this is for tasks that are not just effort intensive but that also require analysis, judgement and decision making.
Accuracy or reduction in errors is one area where doubts remain about AI. However, in domains with sufficiently large and well-formed data sets to learn from, AI is outperforming humans in this respect.
For example, the Ibex Galen cancer diagnosis system – which prioritises the most urgent cases for review by clinicians to speed up diagnoses – saw a 13% increase in prostate cancer detection during Betsi Cadwaladr Health Board’s trials.
Barriers
- Many organisations spoke of a desire to use Microsoft Copilot – due to the widespread use of the Microsoft Technology Stack – but find the cost of the minimum licence requirement prohibitive.
- Some point out that, in a bilingual country, the lower quality of AI-generated Welsh compared with English may be a barrier to adopting public-facing generative AI, in which the AI directly produces outputs for public consumption (for example, in intelligent chat bots).
- Some acknowledge that AI instances will need data to train with and that the data quality in their organisations may not be good enough. This is due to differences in data standards across organisational boundaries.
- Some say their senior leaders are nervous about the risks of AI and that this may slow adoption.
- Some acknowledge that simply knowing where to start and what to apply AI to is an initial hurdle they need to overcome.
- Finally, the usual barriers of funding and the capacity to resource new work apply to AI as much as to any other technology.
Risks and challenges
- Most organisations are finding it a challenge to establish adequate governance, policies, and guidance.
- Many are concerned about the ethical risks, whether AI will exhibit bias, and how to be transparent with the public about an organisation’s use of AI. However, several don’t believe the ethical risk is any greater than it already is with more established technologies such as search engines.
- Many are concerned about the accuracy of any AI-generated output and foresee needing to test applications thoroughly to feel confident in what is passed back to the service user.
- Some are worried about protecting personal data when AI is used to manipulate or process it, and the risk of sensitive information being inadvertently passed to users. Some directly referenced information governance as a barrier.
- Some foresee challenges in preparing their organisations for both the level and pace of change that this transformational technology may bring. Others are concerned about the temptation to move too fast so as not to be ‘left behind’.
Approaches and resourcing
None of the organisations we spoke to described established, repeatable approaches to deploying AI-powered technologies. Again, this reflects the relative immaturity of this emerging technology.
Many are seeking support and guidance on tried and tested applications of AI that are relevant and valuable to their sector.
Some are making plans for resourcing AI and intelligent automation in house. Examples include:
- Appointing a single FTE within the organisation dedicated to AI.
- Creating broad, transformation-focused centres of excellence or innovation teams, focused on exploring and demonstrating potential applications.
- Upskilling in-house teams to work on new technologies.
Interestingly, no respondents mentioned working with external partners at this stage. This contrasts with RPA, where all respondents mentioned using third parties.
General findings
The research gathered some insights that apply to both automation and AI.
- Most organisations emphasised the importance of being user centred and using these technologies to improve the user experience for both staff and the public.
- Most also stressed that they wanted to avoid being technology-led, i.e. focusing too much on seeking out problems to apply RPA and AI to. Their preference is to be service- or problem-led, i.e. identifying issues within a service and then selecting the appropriate way to mitigate each issue based on the service context and user needs.
- Many reported the importance of gathering case studies and using them to tell the story to support the case for change within their organisation. We heard repeatedly about the power of case studies to demystify the technology, demonstrating both what’s possible and the measurable value that can be achieved.