There has been a deluge of articles around AI over the past few months, and like many other organisations we are both guilty of producing content on this theme and voracious in consuming it. From the Public Sector Team's perspective, now is an incredibly exciting time to be reading about Generative AI. With the International Summit on AI Safety taking place this week at Bletchley Park, it may well represent a golden opportunity for the UK to climb back up the UN's E-Government rankings.
Last week, the Guardian's political correspondent Kiran Stacey penned an insightful article about the use of AI across central government departments and the NHS. From e-gates and facial recognition to detecting fraud and sham marriages, the references given in the article are just a smattering of the wider adoption of AI across the public sector. Far from being a fluff piece about the benefits of automation, the article is a cautionary tale about the risks of bias and discrimination that are so often raised when discussing generative AI. In our own public discussion on Generative AI, held as part of the Leeds Digital Festival, ethics took centre stage. Panellists, including techUK's Katherine Holden and Leeds City Council's Richard Irvine, stressed the need for robust guardrails to curtail the risk of biasing public service outcomes. You can catch up with the full discussion here:
Another cautionary tale in the article was the case of the Dutch tax authorities, which employed AI to identify suspected childcare benefits fraud, ultimately leading to significant financial hardship for tens of thousands of families. Headlines about businesses and governments placing too much faith in the capabilities of technology are not uncommon. The most notable example is the Horizon scandal, in which Post Office sub-postmasters were wrongly accused of theft, fraud, and false accounting due to discrepancies in the Horizon accounting software provided by Fujitsu.
In an article penned last year by Cyber-Duck founder and CEO Danny Bluestone, one of the core reasons the Horizon scandal played out the way it did, according to Danny, was a culture of information asymmetry. A key takeaway from his article is that the introduction of complex technologies is only successful when the right mix of stakeholders is at the decision table. This requires both good availability of those essential DDaT professions and leadership that understands the urgency of having cross-functional teams involved and empowered during the decision-making process.
The Government is making big strides to overcome the risks of information asymmetry by having the right skill sets both on hand and at the decision-making table. Programmes to upskill leadership in digital have been in motion for some time, and where skills deficits are more acute, DDaT recruitment drives, changes to the DDaT pay framework, and secondment schemes are all seeking to plug the gap. The creation of knowledge hubs such as the MOD's Defence Artificial Intelligence Centre (DAIC) is a similarly useful mechanism, providing more surface area for knowledge flow with industry and academia.
This multi-pronged approach is helping secure digital know-how from both the perspective of the purchaser and the practitioner. Returning to our discussion last month on the subject of Generative AI, we learnt that Leeds City Council has a long tradition of putting its best foot forward when it comes to technology. The adoption of in-house ML and AI skills has helped Richard Irvine and his team not only start experimenting but also leverage the best of commercial developments in Generative AI. To fully capitalise on the benefits of technology, Richard and his team are also committed to actively communicating the risks and rewards of these technologies to non-technical staff.
It would be foolish to assume that every public sector organisation is able to embrace emerging technology in the same way, and this presents another risk: information asymmetry across the public sector at large. This not only disadvantages organisations less able to rapidly upskill but also results in front-runner institutions working in silos. Benefits which could be amplified risk falling short of their full potential.
It's critical that organisations fully engage with the information governance, security and ethics of Generative AI, and through organisations such as Socitm we are seeing an encouraging development of shared resources in this space. Of equal importance is the opportunity for leaders and technology practitioners to actively engage with and dissect the use cases, and to share findings from the respective sandboxes that are emerging across the public sector. Providing forums for this work to be shared, and for insights to be probed, is essential if we are to mitigate the risk of AI being misapplied.
So as we consume the flurry of papers, op-eds and events in the run-up to the international summit, our attention should be drawn not only to the theoretical discussions on international norms but also to the practical case studies and stories that illustrate the journey from zero to AI. I'll certainly be keeping a watchful eye on these narratives and celebrating the real-world successes as I discover them.