 
PICTFOR roundtable: Is regulation a handbrake or a catalyst for public trust and innovation in AI?
How do we build smarter public services with AI and address the legitimate questions people have around fairness, transparency, and accountability?
This was a key thread at the recent Parliamentary Internet, Communications and Technology Forum (PICTFOR) roundtable on AI and Public Services at the House of Lords. More broadly, the roundtable explored how digital innovation could ease pressures on budgets and staff while making public services more responsive and accountable.
Chaired by Lord Clement-Jones, the conversation featured policymakers, academics, and industry leaders. It also explored how regulatory frameworks can both enable and constrain innovation, and how public trust shapes the adoption of AI-driven systems.
The regulation trap
A major talking point was the dual nature of regulation as an essential safeguard and a potential inhibitor of innovation.
Lord Willetts, Chair of the Regulatory Innovation Office, opened the discussion by emphasising that current regulatory approaches often impede technological adoption rather than facilitate it.
For example, classifying AI as a high-risk medical device can make certification extremely difficult, slowing or even blocking the adoption of potentially life-saving tools.
Instead of asking “How do we regulate AI?”, he suggested we ask, “How can AI help regulation?” He proposed using AI to analyse regulatory frameworks and guide start-ups, SMEs, and regulators through complex compliance landscapes.
Professor Neil Lawrence, Professor of Machine Learning at the University of Cambridge, dismissed fears of an “AGI takeover”. He argued that meaningful AI regulation should address real-world issues rather than speculative concerns, and should be shaped in consultation with problem owners rather than dictated by Whitehall or tech corporations. Public anxiety about AI, he said, reflects existing power imbalances between Big Tech, government, and citizens, not AI itself.
The consensus was for an intelligent balance between innovation and protection, with the ultimate goal of moving towards smarter, adaptive regulation.
Download white paper: Unlocking government efficiency: An AI readiness roadmap for decision makers
Trust — a systemic issue
Trust in technology and AI is simply a reflection of broader trust in institutions and people.
This trust is nuanced. Public trust is high for data sharing between trusted professionals (like medical clinicians), noted Rob Tabb, Programme Lead for Public Sector Innovation at the Liverpool City Region Combined Authority.
But trust plummets when data is shared with less transparent entities, such as private service providers or pharmaceutical companies, even when there is a perceived public benefit.
Effective governance of AI depends on trustworthy actors as much as it does on trustworthy algorithms.
As Professor Gina Neff, Executive Director of the Minderoo Centre for Technology & Democracy and Professor of Responsible AI at Queen Mary University of London, put it: “The question isn’t ‘do people trust AI?’ It’s ‘do people trust the people behind AI?’”
AI merely exposes existing fractures: a society-wide trust deficit can undermine even well-regulated systems.
Trust is continuously earned through engagement, inclusion, and visible ethical behaviour.
Focus on the problem
We all know the saying: when all you have is a hammer, everything looks like a nail. The giddiness surrounding AI has led many to fall into a classic trap: the solution drives the approach, not the problem. As Occam’s razor suggests, the simplest solution is often the best.
Professor Aled Owen from the University of Southampton challenged the idea that AI is automatically the best solution: “AI is seen as the solution to all problems — but what is the problem we’re actually solving?”
Setting generic AI objectives to drive adoption can make things worse: if success is measured by the number of AI technologies deployed, the focus shifts to the technology rather than the desired outcomes.
Overreliance on complex AI systems without a thorough understanding of the real problem can also deepen mistrust and disempower users. Paradoxically, overly complex AI systems can negate operational efficiency savings through increased system support overhead and heightened cyber security risk.
There was consensus that AI governance should prioritise public interest, with clear accountability for those developing and deploying systems.
Kickstart your AI project: Assess your organisation’s readiness, get your digital infrastructure in order and start integrating AI
The narrow window of opportunity
The window to shape AI’s impact responsibly is narrow. Baroness Elliott drew a parallel to social media: “We missed the opportunity to properly regulate social media — now the genie is out of the bottle. We must not repeat that mistake with AI.”
She warned that delayed regulation risks entrenching inequality, including the potential removal of entry-level jobs for young people through automation.
Meanwhile, Samantha Niblett MP, founder of Labour: Women in Tech, expressed concern that society’s capacity for critical thinking is diminishing.
As deepfakes and synthetic media become more convincing, the public’s ability to discern truth erodes. And that poses an existential threat to informed democratic decision-making and trust in evidence.
Roundtable reflections
Here are the key reflections I took away from the roundtable.
Adaptive regulation: Current AI regulation is defensive and risk-averse. Future frameworks must be adaptive, enabling innovation without sacrificing safety. Meanwhile, the window to shape AI’s impact is narrow; early, proportionate regulation that can be recalibrated as lessons emerge is essential.
Trust beyond tech: Effective governance of AI depends on trustworthy actors (institutions, developers, policymakers) as much as trustworthy algorithms. It’s also about clear boundaries, transparency, and perceived fairness. Trustworthy algorithms can be misused by untrustworthy actors and undermined by untrustworthy data.
Prioritise problem over solution: Get close to the people who have the problem. AI might not be the answer; sometimes the simplest solution, or an existing tool, is the most effective.
The information battleground: Regulation and education must evolve together; trust cannot survive in an information ecosystem polluted by misinformation.
The clear takeaway? AI will succeed in public services only where citizens trust the institutions and the people behind it.
The benefits are significant. But they’re conditional on regulation that sets an ethical framework, not one that acts as a constraint.
What next?
- Download the roadmap: Our new whitepaper, “Unlocking Government Efficiency: An AI Readiness Roadmap for Decision Makers,” offers practical guidance on integration, readiness assessment, and planning a way forward for AI in public services.
- Speak to me: If you have questions about the event or want to explore how our expertise can accelerate your AI deployment, feel free to reach out for a quick chat.
Related content
- AI-First vs AI-Assist: Which AI workflow works best to train public sector developers?
- Unlocking government efficiency: An AI readiness roadmap for decision makers
- Kickstart your government AI project
- How CyberFirst interns shaped Zaizi’s AI training programme
- AI-First vs AI-Assist: Early-career developer training in the public sector
- All green! Why passing the government Service Standard assessment matters more than ever
