AI in government: Key takeaways from our lunch and learn event

What a phenomenal turnout at our AI in government lunch & learn event. It was inspiring to see such strong cross-government representation, with colleagues from central government departments and local authorities coming together.

What stood out most wasn’t just the level of interest in AI but the openness of the conversation. 

We had talks, networking over drinks and pizza, and some really cool workshops showing the potential of AI. 

Antonio Weiss and I kicked things off with short talks on the progress of AI in government.

I looked at the “reality gap” created by AI’s rapid evolution: in particular, how governance is struggling to keep pace with a technology that can facilitate deception at scale, and what we can best do to address this challenge.

Antonio highlighted the dangers of “magical thinking” when it comes to generative AI. He said its success depends on human decision-making, careful assessment, and sticking to the fundamentals: understanding the problem AI is solving, the expected return, and the benefit it delivers.

Register: If you’re working for a public sector organisation, join our community and attend the next AI lunch and learn session

AI in government workshops

In demos run by our Applied AI teams, attendees explored how to detect synthetic media, saw how static images can be manipulated, and learned how AI can help identify data privacy risks.

Image animation framework

An interactive demo of our image animation framework showed how bad actors can manipulate publicly available photos to impersonate someone — and the challenges of detecting such content. The session highlighted the importance of training detection models on relevant datasets to spot these fakes effectively.

This led to a discussion on trust, digital identities, and the need for security measures in an age where people can create realistic facial animations quickly and easily.

Detect synthetic media

We also presented an internal proof-of-concept that detects deepfakes and synthetic media.

The system combines Vision Language Models, dynamic detection agents, fact-checking, sentiment analysis, and human oversight to identify and explain potential fakes in detail.

The demonstration sparked discussion about confidence thresholds, modular system design, and the need to combine AI detection with human judgment to stay ahead of evolving threats.
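The combination of confidence thresholds, modular detectors, and human oversight discussed above can be sketched as a toy triage loop. Everything here is illustrative: the detector names, scores, and threshold values are invented for this post and are not the proof-of-concept itself.

```python
from dataclasses import dataclass


# Hypothetical detector output; in a real system each score would come from
# a separate module (e.g. a vision language model or a fact-checking agent).
@dataclass
class DetectorResult:
    name: str
    fake_score: float  # 0.0 = likely genuine, 1.0 = likely synthetic


def triage(results, auto_flag=0.9, auto_clear=0.2):
    """Combine detector scores and decide whether a human must review.

    Scores between the two thresholds are escalated rather than decided
    automatically, reflecting the principle that AI detection should be
    paired with human judgment.
    """
    avg = sum(r.fake_score for r in results) / len(results)
    if avg >= auto_flag:
        return "flag_as_synthetic", avg
    if avg <= auto_clear:
        return "clear", avg
    return "escalate_to_human", avg


results = [
    DetectorResult("vision_language_model", 0.7),
    DetectorResult("fact_checker", 0.4),
    DetectorResult("sentiment_analysis", 0.5),
]
decision, confidence = triage(results)
print(decision)  # escalate_to_human
```

The point of the middle band is that it makes the system's uncertainty explicit: tuning `auto_flag` and `auto_clear` trades reviewer workload against the risk of an automated wrong call.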

Data risk evaluation framework

Attendees also explored a proof-of-concept we’re currently developing with a government client, which uses agentic AI to spot risks not visible to human reviewers.

A document may appear low-risk on its own. But when cross-referenced with public data, private details like identities and locations can emerge. The system gives users a risk report and highlights where information could be at risk.
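The cross-referencing idea above can be sketched in a few lines: details that look harmless in isolation can narrow down to a specific person once joined against public data. This is a minimal illustration only; the attribute vocabulary, records, and risk bands are all invented, and the real system's extraction is far more sophisticated than keyword matching.

```python
def extract_attributes(document_text, vocabulary):
    """Naive attribute extraction: keep known attribute tokens found in the text."""
    words = {w.strip(".,").lower() for w in document_text.split()}
    return words & vocabulary


def risk_report(doc_attributes, public_records):
    """Count public records matching all of the document's attributes.

    Few matches means the combination of details identifies a small group
    of people -- a re-identification risk a human reviewer could miss.
    """
    matches = [r for r in public_records if doc_attributes <= r["attributes"]]
    risk = "high" if len(matches) <= 1 else "medium" if len(matches) <= 5 else "low"
    return {"matching_records": len(matches), "risk": risk}


# Invented public dataset and document for illustration.
records = [
    {"name": "record_1", "attributes": {"teacher", "bristol", "cyclist"}},
    {"name": "record_2", "attributes": {"teacher", "bristol"}},
]
attrs = extract_attributes("A teacher in Bristol, keen cyclist.", {"teacher", "bristol", "cyclist"})
print(risk_report(attrs, records))
```

Here each attribute alone matches many records, but the full combination matches only one, so the report flags it as high risk even though no single detail is sensitive.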

The attendees discussed how redaction alone might not be enough and why organisations need to proactively test their own materials before publication or sharing.

READ: An AI readiness roadmap for decision makers — our white paper provides practical insights on how to integrate AI quickly.

Key insights from discussions with attendees

My biggest takeaway from our discussions with civil servants at the event is that the conversation has fundamentally matured. 

We are no longer debating “whether” AI matters. The focus is squarely on “how” to operationalise it safely, fund it properly, and scale it responsibly across fragmented systems. The hype has given way to a sharp focus on implementation and architecture.

For me, three dominant themes clearly emerged:

1. Governance and data are the real blockers: 

The hesitation in government isn’t about AI’s technical capabilities but rather accountability and risk exposure. Attendees stressed that AI ambition is currently constrained by foundational data maturity, poor data quality, and highly siloed systems. 

2. A hunger for joined-up infrastructure:

There is a massive appetite for unified, cross-government AI infrastructure. Moving away from “app fatigue,” attendees proposed centralised ideas like a single national parking app and a visionary “GOV.UK Earth” concept to give UX teams a real-time view of citizen journeys.

3. AI as a service design engine:

We are shifting our view of AI from pure automation to a powerful tool for service design and accessibility. The consensus is that AI should be used to surface pain points, check content consistency, and truly understand how citizens interact with services.

It is clear that we must move from isolated “bolt-on” experiments to integrated, citizen-centric infrastructure. 

Thank you to everyone who joined us. We are more excited than ever to partner with you to adopt AI safely and effectively in the public sector. 

To attend our next AI lunch and learn session, join our community to find out more.

If you want to know more about the demos mentioned in this piece, please get in touch with me using the contact form below.
