
AI in action 1: Supporting service teams through the Service Standard

  • Writer: Matt Hobbs
  • Jul 3
  • 5 min read

As digital public services evolve, so must the tools we use to build them. This series explores how Artificial Intelligence (AI) can responsibly support UK government service teams in meeting the Government Digital Service (GDS) Service Standard. From user research to accessibility testing, performance monitoring to service assessments, we’ll examine where AI can complement human expertise, enhancing delivery without compromising trust, transparency, or inclusion.


Overview

  • What is the Service Standard?

  • What is the Service Manual?

  • What is a Service Assessment?

  • Wrapping up

  • About the author


Welcome to a series exploring how Artificial Intelligence (AI) can support UK government service teams in meeting the Government Digital Service (GDS) Service Standard. As digital public services continue to evolve, so too must the tools and methods used to build them. AI, when applied thoughtfully and responsibly, has the potential to enhance delivery, improve user outcomes, and support those working in government to focus on what matters most: meeting real user needs.


This series will explore how AI can play a role in supporting service teams at every stage of the service lifecycle, from discovery to live, and how it can complement the Service Manual’s practical guidance. Whether through natural language processing, data analysis, accessibility testing, or helping teams with performance monitoring, we’ll consider both current capabilities and future possibilities.
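
To give one concrete flavour of the accessibility testing angle before the series digs in: automated scanning is a mature building block that AI-assisted workflows can sit alongside. Below is a minimal sketch, assuming Playwright and the axe-core engine (@axe-core/playwright); the URL and script are illustrative, not a GDS or Solirius tool, and automated rules only catch a subset of accessibility issues, so they complement rather than replace testing with real users.

```typescript
// A minimal sketch of automated accessibility scanning, assuming Playwright
// and the axe-core engine (@axe-core/playwright). The URL is a placeholder.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

async function scan(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run axe-core against the rendered page, limited to WCAG 2.1 AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa'])
    .analyze();

  // Log each violation found by the automated rules.
  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact ?? 'unknown'}): ${violation.help}`);
  }

  await browser.close();
}

scan('https://www.example.gov.uk/').catch(console.error);
```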


This is not a call to automate everything, nor to substitute human judgement, but to embrace new tools in a way that strengthens delivery and accountability across government.


Before we continue, let me cover a couple of important points...


What is the Service Standard?

The UK Government Service Standard is a set of 14 points designed to help teams create and run effective, user-centred digital services. Maintained by the Government Digital Service (GDS), it ensures that public services are accessible, efficient, and meet user needs.


The standard promotes practices such as understanding users, using agile methodologies, testing services with real users, and making services secure and accessible. It's used throughout the development lifecycle to ensure quality and consistency across UK government digital services.


What is the Service Manual?

There may be a few readers who've never heard of the Service Manual. So, here's a brief history and overview. The UK Government Service Manual was introduced as part of the Government Digital Service (GDS) initiative, launched in 2011 to improve digital public services. Continuously updated, it reflects evolving best practices and legal requirements, ensuring government services remain effective and accessible for all users.


Together, the Service Standard and the Service Manual are the foundations for completing a Service Assessment.


What is a Service Assessment?

A UK government Service Assessment is a structured evaluation process designed to ensure that digital services meet government standards before they go live or progress through key stages of development.


Approval stages in a Service Assessment:

UK government services typically go through 3 key Service Standard assessments:


1. Alpha assessment

  • Conducted at the end of the Alpha phase

  • Focuses on whether the service team has researched user needs, developed and tested prototypes, and has a plan for the Beta phase

  • Core evaluation criteria: user research, design, technology choices, and feasibility


2. Beta assessment

  • Conducted at the end of the Beta phase

  • Evaluates whether the service has been tested with users, can handle expected demand, and meets accessibility and security standards

  • Some departments may also decide to run a private beta for certain services, testing them with a small group of invited users

  • In some cases, a service may remain in the Beta stage for an extended period

  • Core evaluation criteria: performance, scalability, accessibility, data security, and readiness for live deployment


3. Live assessment

  • Conducted before a service moves from Beta to Live (full public availability)

  • Ensures the service is sustainable, meeting user needs, and is continuously improved

  • Core evaluation criteria: performance monitoring, governance, data management, and ongoing user feedback integration
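
On the performance monitoring criterion above, a common building block is Real User Monitoring (RUM), which also comes up in the GOV.UK posts mentioned at the end of this article. Here is a minimal browser-side sketch, assuming the open-source web-vitals library; the /rum-metrics endpoint is a placeholder for whatever collector a team actually runs.

```typescript
// A minimal Real User Monitoring (RUM) sketch using the web-vitals library.
// The '/rum-metrics' endpoint is a placeholder for a team's own collector.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // e.g. 'LCP'
    value: metric.value, // milliseconds, or a unitless score for CLS
    id: metric.id,       // unique per page load
  });
  // sendBeacon survives page unloads; fall back to fetch if it is unavailable.
  if (!navigator.sendBeacon('/rum-metrics', body)) {
    fetch('/rum-metrics', { method: 'POST', body, keepalive: true });
  }
}

// Report each Core Web Vital as it becomes available.
onCLS(report);
onINP(report);
onLCP(report);
```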


Service Standard criteria

Each assessment evaluates the service against the 14 points of the Service Standard. Some of these include:

  • Understand users and their needs

  • Make sure everyone can use the service

  • Create a secure service which protects users’ privacy

  • Operate a reliable service

To progress to the next stage, service teams must pass these assessments. If unsuccessful, they are expected to resolve the issues highlighted and reapply for a future assessment.


Wrapping up

Some people might see using AI in the Service Standard and Service Assessment process as “cheating”: if AI does all the work, what’s left for the service team to do? In reality, AI is just a tool to help things run more efficiently and save the UK government time and money.


It’s not about replacing human expertise. It’s also important to remember that AI can sometimes get things wrong (a so-called “hallucination”), so it’s critical that teams sense-check what AI produces rather than accepting it at face value.
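
To illustrate the kind of lightweight sense-check I mean, here is a minimal sketch. Everything in it is illustrative rather than a real tool: it flags one common hallucination pattern (citing a Service Standard point that doesn’t exist) and assumes a human still reviews the output either way.

```typescript
// An illustrative sense-check gate for AI-generated assessment notes.
// This is a sketch, not a real GDS or Solirius tool: it flags references
// to Service Standard points outside the real 1-14 range (one common
// hallucination pattern) and leaves final sign-off to a human reviewer.

interface ReviewItem {
  text: string;
  warnings: string[];
}

function senseCheck(aiOutput: string): ReviewItem {
  const warnings: string[] = [];

  // The Service Standard has exactly 14 points; anything else is suspect.
  for (const match of aiOutput.matchAll(/point\s+(\d+)/gi)) {
    const n = Number(match[1]);
    if (n < 1 || n > 14) {
      warnings.push(`References non-existent Service Standard point ${n}`);
    }
  }

  return { text: aiOutput, warnings };
}

// Usage: nothing is published automatically; a person signs off either way.
const draft = senseCheck('The service fails point 17 on open standards.');
console.log(draft.warnings); // ["References non-existent Service Standard point 17"]
```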


Now that we’ve outlined the purpose and structure of the Service Standard and the role of service assessments, we’re ready to dive into the practical side: where and how AI can help.


In the next post, we’ll begin exploring each of the 14 Service Standard points in turn, starting with what is arguably the most critical: understanding users and their needs.


We’ll look at how AI can assist user researchers, support data analysis, and improve how teams gather insights, without losing the nuance or empathy that human researchers bring. So please stay tuned!


About the author 

My name is Matt Hobbs, Principal Engineer (Frontend) and Guild Lead at Solirius Consulting, currently embedded in HMCTS.


Before joining Solirius, I spent six years at GDS, leading on frontend development and shaping strategy across accessibility, performance, and digital best practice. 


I also wrote a series of blog posts documenting the performance improvements made to GOV.UK — covering everything from HTTP/2 and jQuery removal to Real User Monitoring. Well worth a read if you’re interested in practical, real-world frontend engineering in the public sector.



Contact information

If you have any questions about our AI initiatives, Software Engineering services, or want to find out more about other services we provide at Solirius, please get in touch.
