
Which AI workflow works best to train public sector developers? Our CyberFirst interns find out
We welcomed two CyberFirst bursary students for a seven-week internship exploring how artificial intelligence can support the work of early-career developers.
The programme was part of our broader effort to answer a critical question: what role should AI play in training the next generation of developers, especially for public sector projects?
We set up an experiment to test two AI-enabled workflows:
- AI-First: developers create apps primarily through natural language prompts, curating and debugging outputs.
- AI-Assist: developers use AI tools embedded in the workflow (e.g. GitHub Copilot) to provide context-sensitive suggestions while retaining control.
Starting out: ethics and expectations
The internship began with a packed first week of workshops — from the ethics of using AI in code to Git basics, secure design, accessibility, and testing.
One lively debate set the tone: should we trust AI-generated code if we don’t fully understand it?
That question ran through the projects that followed. It shaped how the interns thought about responsibility and the role of a developer in an AI-enabled world.
Project 1: AI-First (Weeks 2–4)
The students used Firebase Studio, Google’s browser-based tool that lets you build apps quickly. Coupled with the Gemini AI assistant, it can generate much of an app’s code from natural language prompts alone.
They set out to build the ‘Kitchecker’ app, a dashboard to track and verify kit and equipment against a required checklist. The goal was to let AI scaffold an end-to-end solution while the interns curated, debugged, and refined the output.
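To give a flavour of what ‘verifying kit against a checklist’ means in code, here is a minimal TypeScript sketch of the underlying idea. Everything in it (the types, names, and data) is our own hypothetical illustration, not the code Gemini generated for the interns:

```typescript
// A hypothetical kit-verification model. All names (ChecklistItem, KitRecord,
// verifyKit) are illustrative only.

interface ChecklistItem {
  id: string;          // e.g. "helmet"
  label: string;       // name shown on the dashboard
  requiredQty: number; // quantity the checklist demands
}

interface KitRecord {
  itemId: string; // matches ChecklistItem.id
  qty: number;    // quantity actually held
}

// Compare the kit held against the required checklist and report any shortfalls.
function verifyKit(checklist: ChecklistItem[], kit: KitRecord[]): string[] {
  const held = new Map(kit.map((k): [string, number] => [k.itemId, k.qty]));
  return checklist
    .filter((item) => (held.get(item.id) ?? 0) < item.requiredQty)
    .map((item) => `${item.label}: need ${item.requiredQty}, have ${held.get(item.id) ?? 0}`);
}

// Example: a single shortfall is reported.
console.log(verifyKit(
  [{ id: "helmet", label: "Helmet", requiredQty: 2 }],
  [{ itemId: "helmet", qty: 1 }],
)); // ["Helmet: need 2, have 1"]
```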
They quickly saw the upside: rapid prototyping, automatic database schemas, and generated documentation. But they also ran into limits: Gemini errors, caching problems, modal collapse, and scaling issues. Debugging often took longer than generating the code in the first place.
As one intern put it: “Debugging, untangling, and understanding the AI’s code was the hardest part.”
By the midpoint demo, they had a working proof-of-concept but also a clear appreciation of the risks when AI takes the lead.
Project 2: AI-Assist (Weeks 5–7)
The second project flipped the model. Using GitHub Copilot (GPT-4.1), the interns digitised a government form following GDS guidelines, complete with question routing and validation.
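To make ‘question routing and validation’ concrete, here is a minimal sketch of the pattern in TypeScript: a one-question-per-page flow in the spirit of GDS forms. The questions, validation rules, and routing below are hypothetical examples of ours, not the interns’ actual form or their Copilot-assisted code:

```typescript
// Hypothetical one-question-per-page form with routing and validation.
interface Question {
  id: string;
  prompt: string;
  validate: (answer: string) => string | null; // error message, or null if valid
  next: (answer: string) => string | null;     // id of the next question, or null when the form ends
}

const questions: Record<string, Question> = {
  "has-licence": {
    id: "has-licence",
    prompt: "Do you hold a UK driving licence? (yes/no)",
    validate: (a) => (a === "yes" || a === "no" ? null : "Answer yes or no"),
    // Routing: only ask for a licence number if the answer is yes.
    next: (a) => (a === "yes" ? "licence-number" : null),
  },
  "licence-number": {
    id: "licence-number",
    prompt: "Enter your licence number",
    validate: (a) => (a.trim().length > 0 ? null : "Enter your licence number"),
    next: () => null,
  },
};

// Walk the form one question at a time: validate each answer, then follow
// the route the answer implies. Returns a list of validation errors.
function submit(answers: Record<string, string>): string[] {
  const errors: string[] = [];
  let current: string | null = "has-licence";
  while (current) {
    const q: Question = questions[current];
    const answer = answers[q.id] ?? "";
    const error = q.validate(answer);
    if (error) errors.push(`${q.id}: ${error}`);
    current = error ? null : q.next(answer); // stop on error, otherwise route onwards
  }
  return errors;
}

console.log(submit({ "has-licence": "yes", "licence-number": "AB12CDE" })); // []
console.log(submit({ "has-licence": "maybe" }));                            // ["has-licence: Answer yes or no"]
```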
This time, they were the primary authors and Copilot worked in the background. The difference was stark: fewer internal errors, smoother debugging, and more confidence in the final product.
One intern reflected: “AI-assist gave me a much better understanding of how the code worked. I felt like I was still driving, with AI there to back me up.”
At the final ‘show and tell’ at the end of week seven, they presented a functional proof of concept. More importantly, because they had retained control throughout, they could clearly explain the design decisions behind their work.
What the interns learned
Several key insights emerged across the two projects:
- AI-First was eye-opening but brittle: Great for prototyping, but it introduced complexity and technical debt that was hard to manage.
- AI-Assist built confidence: Both interns said it gave them greater long-term skills and a better grasp of logic and structure.
- Critical reflection became a strength: In their self-evaluation survey, both rated themselves 5/5 for spotting when AI was wrong or overcomplicating things. Engaging with AI allowed them to sharpen that aptitude.
- Debugging and setup were challenging: The students struggled to debug AI-First code and to set up databases.
- And the most valuable lesson? “How to use AI tools to their greatest effect — but still think for myself.”
Why we did this and what it means for public sector AI
This internship wasn’t just about giving two students project experience. It was designed as a structured experiment to compare the AI-First and AI-Assist development workflows, and to learn what each means for:
- code quality and maintainability in public sector applications.
- training pathways for early-career developers.
- mentorship models in an AI-enabled world.
- long-term skills retention when developers work alongside AI.
What we saw confirmed our instincts: AI-Assist is the stronger foundation for junior developer training. It helped the interns develop confidence and critical thinking while still reaping the productivity benefits of AI.
These insights inform our white paper on AI in junior developer training. It will shape how we design mentoring, onboarding, and safe AI usage practices for future cohorts.
As much as this was a learning journey for the interns, it was also a learning journey for us.
Their energy, questions, and reflections will help us — and the wider public sector community — build a training culture where AI is not a crutch but a catalyst for better, more thoughtful developers.
Read our white paper ‘AI-First vs AI-Assist: Early-career developer training in the public sector’ to find out more about our experiment.