AI in action 3: Supporting Service Teams through the Service Standard - Strengthen Delivery

In this third post of the series, building on our introduction to AI in public service delivery and our exploration of how AI can directly support teams, we shift focus to delivery. We explore how AI can support service teams in structuring their work, iterating effectively, and safeguarding services.

The Service Standard points covered here (points 6 to 10) focus on what it takes to run a high-functioning digital service team: building multidisciplinary teams, adopting agile methods, improving frequently, ensuring security, and defining success. These are not abstract ideas; they are the operational backbone of trustworthy, responsive government services.

AI offers new possibilities across each of these areas. Whether it’s helping teams collaborate more effectively, assisting agile planning, surfacing insights from user feedback, or detecting security threats in real time, AI can be a critical partner in strengthening delivery. The opportunity is not to replace human expertise, but to reduce friction and free teams to focus on strategic, high-value work.

Let’s explore where current AI tooling is already adding value, and where future innovation might fundamentally reshape how government teams deliver services.

Point 6. Have a multidisciplinary team

Service Standard summary: The Service Standard says for point 6 that a multidisciplinary team is essential for creating and operating a sustainable service. Such a team should encompass a diverse mix of skills and expertise, including decision-makers who are integrated into the team to ensure accountability and swift responsiveness to user needs. The composition of the team should align with the current phase of service development and include members familiar with relevant offline channels and necessary back-end system integrations.
Additionally, the team should have access to specialist expertise, such as legal or industry-specific analysis, and ensure that any collaboration with contractors or external suppliers is sustainable.

Existing AI tooling:
- Automated Meeting Summaries & Action Items: Tools like Otter.ai or Microsoft Teams AI can transcribe meetings, highlight key decisions, and assign action items, helping teams stay aligned regardless of discipline.
- Role-Aware Knowledge Management: AI platforms like Notion AI or Confluence AI can organise knowledge tailored to different roles (e.g. developers, designers, policy experts), making information more accessible and contextual.
- Cross-Team Communication Aids: AI chatbots can bridge knowledge gaps by answering team questions on project-specific jargon, legal requirements, or tech architecture, which is helpful for non-specialists.
- Candidate Matching for Team Building: AI-driven HR tools (e.g. HireVue, Eightfold) can recommend candidates with diverse skill sets to fill gaps in multidisciplinary teams.
- Design & Prototyping Assistants: Tools like Figma AI and Uizard can help non-designers contribute to early-stage prototypes, encouraging more inclusive collaboration.
- Sentiment & Collaboration Monitoring: AI-powered analytics in tools like Slack or Microsoft Viva can flag potential communication breakdowns or burnout risk in teams.

Future AI innovations:
- Dynamic Team Composition Engines: AI could analyse project goals, current team skills, and workload to recommend optimal team structures in real time, like a "squad optimiser".
- Context-Aware AI Team Members: Intelligent assistants that understand team dynamics and contribute proactively across disciplines, e.g. prompting legal implications during a design discussion.
- Automatic Skill Gap Detection and Training: AI could assess ongoing work and suggest micro-learning tailored to individuals, helping multidisciplinary teams skill up fluidly.
- Cross-Discipline Language Translators: Real-time AI translators that convert technical, legal, or policy jargon into plain English (and vice versa) to improve shared understanding.
- Virtual Co-Pilot for Interdisciplinary Projects: A unified AI tool that supports collaboration across product, design, policy, and development, suggesting decisions, alerting the team to blockers, and keeping the team aligned with service standards.

Point 7. Use agile ways of working

Service Standard summary: Point 7 of the GOV.UK Service Standard, "Use agile ways of working", advocates for creating services through agile, iterative, user-centred methods. This approach emphasises early and frequent exposure to real users, allowing teams to observe usage patterns, gather data, and continuously adapt the service based on insights gained. By avoiding comprehensive upfront specifications, agile methods reduce the risk of misaligning with actual user needs. Service teams are encouraged to inspect, learn, and adapt throughout the development process, maintain governance structures aligned with agile principles to keep relevant stakeholders informed, and, when appropriate, test the service with senior stakeholders to ensure alignment with strategic objectives.

Existing AI tooling:
- Automated User Research Analysis: AI tools like Dovetail or Aurelius can rapidly analyse qualitative user research (e.g. interviews, surveys) to identify common themes and pain points.
- Smart Backlog Management: Tools like JIRA with AI-powered suggestions can help prioritise backlog items based on effort, value, and historical data.
- Natural Language Stand-up Summaries: AI assistants (e.g. Slack bots, Otter.ai) can summarise daily stand-ups, meetings, or sprint reviews into concise updates for teams and stakeholders.
- Test Automation with AI: AI-driven testing tools (e.g. Testim, Mabl) can create and maintain tests automatically as the UI evolves, helping teams iterate faster without breaking things.
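The value-against-effort scoring behind smart backlog management can be sketched in a few lines. This is a toy illustration, not how any real product works: the story names and scores are invented, and real tools would learn the weights from historical delivery data rather than use a fixed ratio.

```python
# Toy sketch of AI-assisted backlog prioritisation: rank stories by a
# simple value-per-effort score. Stories and numbers are illustrative only.

def prioritise(backlog):
    """Return backlog items sorted by value/effort, highest first."""
    return sorted(backlog, key=lambda item: item["value"] / item["effort"], reverse=True)

backlog = [
    {"story": "Improve error messages", "value": 8, "effort": 2},
    {"story": "Rebuild dashboard",      "value": 9, "effort": 9},
    {"story": "Add audit logging",      "value": 5, "effort": 5},
]

for item in prioritise(backlog):
    print(item["story"])
```

Even a crude ratio like this surfaces "quick wins" (high value, low effort) that can otherwise sit buried in a long backlog.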
- Code Review and Pair Programming Support: AI code assistants like GitHub Copilot can help developers write and review code more efficiently, speeding up delivery during sprints.
- Sentiment Analysis on User Feedback: Tools like MonkeyLearn can process large volumes of feedback to gauge user sentiment, helping prioritise improvements based on emotional impact.

Future AI innovations:
- Agile Sprint Advisor: A smart assistant that analyses team velocity, blockers, and mood to suggest sprint goals, story point estimates, and optimal team composition.
- Real-Time Adaptive Agile Frameworks: AI could dynamically tweak your agile methodology (e.g. Kanban vs. Scrum hybrids) based on real-time metrics, user behaviour, and team health.
- Predictive Stakeholder Alignment: AI systems might proactively detect potential stakeholder misalignments and suggest communication strategies or demos at optimal times.
- Automated Prototype Iteration: AI might soon be able to auto-generate and refine prototypes from user feedback and usage analytics without needing a full sprint cycle.
- Behavioural Coaching for Agile Teams: Future tools could offer personalised coaching to team members based on communication patterns, participation, and stress signals.
- Autonomous Discovery Research: Advanced AI could independently identify emerging user needs by scanning behaviour data, online forums, and support tickets, feeding insights directly into discovery backlogs.

Point 8. Iterate and improve frequently

Service Standard summary: Point 8 emphasises the necessity of continuously iterating and improving services to remain responsive to evolving user needs, technological advancements, and policy changes. It highlights that services are never truly 'finished' and that ongoing enhancements go beyond basic maintenance, addressing underlying issues rather than just symptoms. This approach ensures services stay relevant and effective throughout their lifecycle without requiring complete replacement.
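Iterating on real feedback starts with knowing what users complain about most. As a minimal illustration of that first step, here is a keyword-counting sketch; the themes, keywords, and feedback strings are all invented, and real feedback-analysis tools use trained models rather than hand-written keyword lists.

```python
# Toy sketch of feedback theme analysis: count how many comments mention
# each theme so each iteration targets the most common pain points.
from collections import Counter

THEMES = {
    "login": ["login", "sign in", "password"],
    "payments": ["payment", "card", "refund"],
    "navigation": ["find", "menu", "confusing"],
}

def theme_counts(feedback):
    """Count comments touching each theme (one hit per comment per theme)."""
    counts = Counter()
    for comment in feedback:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

feedback = [
    "I could not sign in after resetting my password",
    "The refund took weeks to arrive",
    "Login keeps failing on mobile",
]
print(theme_counts(feedback).most_common(1))  # "login" is the top theme here
```

The output feeds naturally into sprint planning: the most frequent theme becomes a candidate for the next iteration.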
Existing AI tooling:
- User Feedback Analysis Tools: Tools like MonkeyLearn or Thematic use AI to quickly analyse open-ended user feedback, surfacing common issues, trends, or sentiments.
- A/B Testing Automation: Platforms like VWO (Visual Website Optimizer) or Optimizely can use AI to run and evaluate A/B tests more efficiently, suggesting winning variants faster.
- Anomaly Detection: Services like Datadog, New Relic, or custom machine learning (ML) models can automatically detect abnormal patterns in usage or errors, signalling areas needing improvement.
- Chatbots & Virtual Assistants: AI-powered chatbots (like those from Intercom or Drift) collect valuable data on where users get stuck, revealing real-time insights to inform iterations.
- Predictive Analytics: Tools like Tableau with Einstein AI or Power BI with Azure ML can help forecast future issues or trends based on historical user behaviour.
- Automated Usability Testing: Platforms like PlaybookUX or Maze use AI to analyse tester behaviour, highlighting UX issues that might not be obvious in manual reviews.

Future AI innovations:
- Autonomous UX Optimisation: Future AI systems may automatically redesign or tweak interfaces in real time based on live user behaviour, reducing the need for manual iterations.
- AI Co-Pilots for Product Managers: Think of a GPT-style assistant that could read feedback, usage data, and roadmap priorities to suggest or even schedule iterations proactively.
- Generative UI/UX Design: Generative AI is likely to evolve to create user interface variations tailored to different user segments on the fly, reducing design iteration cycles.
- Proactive Problem Prediction: With advanced behaviour modelling, AI could predict where users are likely to face issues before they occur, allowing teams to fix them preemptively.
- Real-Time User Research Agents: AI personas simulating user behaviour at scale could become a core testing method, supplementing or even replacing traditional usability studies.
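The statistical core of the anomaly-detection tooling mentioned above can be sketched with a simple standard-deviation rule. This is a toy version with invented error counts; platforms like Datadog apply far more sophisticated models, seasonality handling, and streaming evaluation.

```python
# Toy sketch of anomaly detection: flag values that sit far from the mean
# of the series. Daily error counts below are invented for illustration.
from statistics import mean, stdev

def anomalies(series, threshold=2.0):
    """Return indices whose value is more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

daily_errors = [12, 9, 11, 10, 13, 11, 97, 12]
print(anomalies(daily_errors))  # flags index 6, the 97-error day
```

In practice the threshold and window size would be tuned against historical data, and a single spike would trigger investigation rather than automatic action.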
- Fully Autonomous Service Improvement Agents: Eventually, AI agents might manage continuous delivery pipelines, observe live service metrics, and autonomously deploy safe micro-improvements without human intervention.

Point 9. Create a secure service which protects users’ privacy

Service Standard summary: To create a secure, privacy-protecting service, point 9 of the GOV.UK Service Standard requires teams to identify and manage the security risks, threats, and legal responsibilities associated with government digital services. Teams must follow "Secure by Design" principles: get senior leader buy-in on risks, resource security for the full service lifecycle, vet third-party software, and research user-friendly security measures. They must also handle data securely, continuously assess risks, work with risk teams, manage vulnerabilities, and regularly test security controls.

Existing AI tooling:
- AI-Powered Threat Detection: Tools like Darktrace and Microsoft Defender for Endpoint use machine learning to detect unusual activity, helping identify security breaches in real time.
- Anomaly Detection in User Behaviour: Services like Splunk or Elastic Security use AI to flag suspicious access patterns, reducing insider threat or compromised credential risks.
- AI for Secure Code Review: Tools like GitHub Advanced Security or Snyk use AI to help identify insecure code, vulnerable dependencies, or bad practices in real time during development.
- Automated Data Classification & Masking: AI tools such as BigID, DataRobot, or AWS Macie can classify sensitive data (e.g. personally identifiable information) automatically and apply masking or redaction rules.
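The classification-and-masking step can be illustrated with a purely rule-based sketch. The patterns below (an email shape and an 11-digit UK-style phone number) are deliberately simplistic and the contact details are invented; tools like AWS Macie use ML classifiers rather than a pair of regular expressions.

```python
# Rule-based sketch of data classification and masking: find two common
# identifier shapes and redact them. Patterns and inputs are illustrative.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b0\d{10}\b"),  # crude UK-style 11-digit number
}

def mask(text):
    """Replace each matched identifier with a labelled redaction marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

print(mask("Contact jo@example.com or 01234567890 for details."))
```

A production system would also need to classify structured fields, handle edge cases (international formats, obfuscated addresses), and log what was redacted for audit purposes.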
- AI-Powered Identity and Access Management: Adaptive access systems such as Okta or Ping Identity use AI to determine access levels dynamically based on context (location, time, behaviour).
- Natural Language Processing (NLP) for Policy Compliance: Tools like OpenAI’s GPT models or regtech solutions can help audit privacy policies, terms of service, or user-facing content to ensure alignment with laws like the GDPR.

Future AI innovations:
- Self-Healing Infrastructure: AI-driven systems that could automatically detect and patch vulnerabilities without human intervention, reducing exposure time from days to minutes.
- Privacy-Preserving AI (e.g. federated learning with differential privacy): Models trained on user data across decentralised devices, without transmitting raw data, could enhance user privacy.
- Proactive Legal and Ethical Compliance Bots: AI agents capable of continuously scanning systems and processes for legal, ethical, and policy compliance could update teams on potential issues in near real time.
- AI-Assisted Threat Simulation: Intelligent adversarial testing (like an AI-powered "red team") that dynamically tries to break your service using the latest cyberattack techniques.
- AI-Guided Secure UX Design: AI that evaluates user flows and recommends privacy-enhancing alternatives, like less intrusive authentication methods or better consent models.
- Conversational Security Assistants: AI copilots for security teams that can answer complex security questions, simulate risks, and suggest best practices based on the service's architecture and data flows.

Point 10. Define what success looks like and publish performance data

Service Standard summary: Point 10 of the GOV.UK Service Standard emphasises the importance of defining clear success metrics for government services and publishing performance data. By identifying and tracking appropriate metrics, service teams can assess whether their services effectively address intended problems and identify areas for improvement.
Publishing this data promotes transparency, allowing the public to evaluate the success of services funded by public money and facilitating comparisons between different government services.

Existing AI tooling:
- Automated Data Dashboards: AI-powered platforms like Power BI with Copilot, Tableau with AI insights, or Google Looker Studio can automatically generate dashboards, detect anomalies, and offer natural language querying to help teams understand performance in real time.
- Natural Language Summarisation: AI assistants (like ChatGPT or Claude) can translate complex performance data into easy-to-understand reports or public summaries, helping teams publish accessible data.
- Predictive Analytics: Tools like Amazon Forecast, DataRobot, or Azure ML can forecast trends and help services set realistic success metrics based on historic performance and current patterns.
- User Feedback Analysis: NLP tools (like MonkeyLearn or ChatGPT-based classifiers) can scan user feedback (from surveys, social media, support tickets) to extract key themes or satisfaction metrics that feed into definitions of success.
- AI-Assisted Goal Tracking: Project management tools with AI (like Asana, ClickUp, or Notion AI) can help define, track, and surface progress toward performance goals using task data and milestones.

Future AI innovations:
- Real-Time Adaptive Performance Models: AI systems that could dynamically redefine success criteria based on changing user behaviour, policy changes, or emerging technologies, like self-adjusting KPIs that evolve with service use.
- AI Explainability Dashboards: Fully transparent dashboards that could not only present data but explain why a metric matters, how it's calculated, and its impact, customised per audience (public, team, leadership).
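Turning raw performance data into a plain-English summary, the job the natural-language summarisation tooling above does with an LLM, can be shown with a deterministic sketch. The metric names and figures here are invented for illustration.

```python
# Toy sketch of a performance summary: compute two KPIs from raw counts
# and express them as a sentence suitable for a published report.

def summarise(started, completed, satisfied):
    """Render completion and satisfaction rates as a plain-English line."""
    completion = completed / started * 100
    satisfaction = satisfied / completed * 100
    return (f"{completion:.0f}% of users who started the service completed it; "
            f"{satisfaction:.0f}% of completers reported being satisfied.")

print(summarise(started=2000, completed=1700, satisfied=1360))
```

An LLM-based version would add context and trend commentary, but the underlying arithmetic, and the need to verify it before publication, stays the same.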
- Conversational Public Portals: Public-facing AI bots that could let citizens ask questions like "How well is this service performing?" and receive personalised, up-to-date, natural language responses with supporting data.
- Autonomous Policy Feedback Loops: AI that links performance data with policy implications, automatically surfacing suggested reforms, service design tweaks, or investment areas based on effectiveness data.
- Cross-Service Benchmarking AI: A tool that could automatically compare services across departments or regions, highlighting strengths and weaknesses, and recommending tailored success metrics based on peer performance.

Having looked at how AI can strengthen delivery through smarter team dynamics, continuous iteration, and proactive security, we’re now ready to explore the technology foundations that support these services. In the next post, we’ll examine the final four Service Standard points: choosing the right tools and technology, making source code open, using open standards and shared components, and operating a reliable service. These points drive sustainability, interoperability, and resilience. We’ll assess how AI can help teams make better technology decisions, write and maintain open-source code, ensure compliance with standards, and build services that are robust and scalable.

If delivery is about the rhythm of a good team, these next points are the instruments they need to play in tune. Join me as we explore how AI can help choose, build, and run government technology more effectively.

Contact information

If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch.
AI in action 2: Supporting Service Teams through the Service Standard - Operational Foundations

by Matt Hobbs

In this second post in the series, we begin to explore how artificial intelligence can directly support teams in meeting the UK Government Service Standard. If you missed it, you can read the first article in the series. By aligning the capabilities of current AI tools, and those on the horizon, with the needs of service teams, we can start to see a clear path where AI acts as a multiplier for quality, consistency, and speed.

We’ll examine the first five points of the Service Standard, which focus on understanding users, solving whole problems, providing joined-up experiences, simplifying services, and ensuring accessibility for all. These points sit at the very heart of designing inclusive and effective public services.

While AI is not a silver bullet, its responsible and deliberate use can free up a team's precious time and resources to focus more deeply on strategy, empathy, and continuous improvement. AI can do this by taking on the "heavy lifting" of data analysis, pattern recognition, and user insight generation. Let’s look at where AI is already making an impact, and where future innovation could lead us next.

Where AI can help

Point 1. Understand users and their needs

Service Standard summary: This point of the GOV.UK Service Standard emphasises the importance of developing an in-depth understanding of users and the issues they face. By focusing on the user's context and the issues they are trying to solve, rather than preconceived solutions, service teams can effectively meet user needs in a simple and cost-effective manner. This approach involves conducting user research, creating quick prototypes to test hypotheses, and utilising data from various sources to gain comprehensive insights into user difficulties.
In the sections below, I outline solutions service teams can use now, along with future opportunities for AI to support them.

Existing AI tooling:
- AI-Powered User Research Analysis: AI-driven tools like Dovetail and Affectiva can analyse qualitative user research data (interviews, surveys, feedback, etc.) to identify patterns and trends.
- Chatbots and Conversational AI: Tools like ChatGPT, Intercom, or Drift can collect real-time user queries, providing insights into common pain points and unmet user needs.
- AI-Driven Sentiment Analysis: AI tools like Lexalytics or MonkeyLearn can analyse social media, feedback forms, or customer support interactions to detect emerging issues.
- Predictive Analytics for User Behaviour: Platforms like Google Analytics with AI insights or Amplitude use AI to predict user needs based on past behaviours.
- A/B Testing Optimisation: AI-powered A/B testing platforms like Optimizely can help refine service designs by automatically analysing user interactions and determining the best-performing options.

Future AI innovations:
- AI-Generated User Personas: Instead of manually creating personas, AI could dynamically generate data-driven personas based on real-time user interactions.
- Autonomous User Research Assistants: AI-driven digital assistants could conduct real-time user research, asking adaptive questions based on previous responses.
- AI-Powered Prototyping: Future AI tools could automatically generate prototypes based on user behaviour data, helping teams iterate on designs more quickly and efficiently.
- Emotion Recognition and Adaptive UX: While AI technologies like facial recognition or voice analysis could potentially detect user frustration or satisfaction and adapt the service experience, the privacy implications of such data collection must be rigorously evaluated before any implementation is considered.
- AI-Enhanced Accessibility Testing: AI could simulate how users with disabilities interact with a service, automatically reporting accessibility improvements back to the service team.

Point 2. Solve a whole problem for users

Service Standard summary: This section of the GOV.UK Service Standard focuses on designing services that address users' complete needs by collaborating across teams and organisations. This approach ensures services are intuitive and cohesive, minimising the complexity users face when interacting with multiple government services. Service teams are encouraged to understand constraints, scope services appropriately, and work openly to promote collaboration and reduce duplication. The goal is to create user journeys that make sense without requiring users to understand the internal structures of government.

Existing AI tooling:
- Conversational AI and Chatbots: AI-powered virtual assistants (e.g. GOV.UK chatbots, ChatGPT) can provide seamless, 24/7 support, guiding users through complex government processes and reducing the burden on call centres.
- Automated Case Management and Routing: AI can analyse user queries and automatically direct them to the correct department or resource, ensuring faster and more accurate service delivery.
- Predictive Analytics for Service Demand: AI models can predict spikes in service demand and help allocate resources efficiently, improving responsiveness and planning.
- Personalised Service Recommendations: AI can analyse user data to offer tailored services (e.g. suggesting benefits or permits based on life events like moving home or starting a business).
- Intelligent Document Processing: AI can extract and verify information from documents (e.g. passports, certificates) to accelerate application processes.

Future AI innovations:
- Cross-Agency AI Integration: AI could unify multiple government services into a single, user-friendly interface, so citizens don’t have to navigate separate systems.
- Voice-Enabled Government Services: AI-powered voice assistants could allow users to access services through speech, making services more accessible.
- AI for Policy and Decision Support: AI could analyse patterns in service usage and citizen feedback to recommend improvements or policy changes.
- Proactive Citizen Support: AI-driven services could anticipate user needs (e.g. reminding users about expiring licences or upcoming payments) and send timely notifications.
- Bias Detection and Ethical AI: AI systems could be developed to ensure fairness and reduce biases in government services, improving trust and inclusivity.

Point 3. Provide a joined-up experience across all channels

Service Standard summary: Point 3 of the Service Standard emphasises designing government services that integrate seamlessly across all channels (online, phone, paper, and face-to-face) to ensure accessibility and a consistent user experience. It highlights the importance of empowering service teams to address issues across any channel, involving frontline staff in user research, and utilising data from both online and offline interactions to drive continuous improvements. Additionally, it stresses that strategies to promote digital adoption should not hinder access to traditional channels.

Existing AI tooling:
- AI-Powered Chatbots and Virtual Assistants: Tools like IBM Watson Assistant, Google Dialogflow, and OpenAI's ChatGPT can provide consistent, automated support across websites, mobile apps, social media, and messaging platforms. They can also escalate issues to human agents when needed.
- Omnichannel Customer Experience Platforms: AI-driven platforms like Salesforce Service Cloud, Zendesk AI, and HubSpot AI unify interactions across email, chat, phone, and social media, ensuring users receive consistent responses across all channels.
- AI-Based Sentiment and Intent Analysis: Tools like Google Cloud Natural Language API and AWS Comprehend analyse customer feedback from various sources to identify pain points and improve service design.
- Automated Document and Form Processing: AI-based OCR tools (e.g. Adobe Sensei, ABBYY FlexiCapture) extract and process information from paper forms or scanned documents, allowing users to switch between offline and digital channels seamlessly.
- AI-Powered Call Centre Support: AI tools like Google Contact Center AI and Five9 Intelligent Cloud Contact Center transcribe, analyse, and route calls to the right agents while maintaining a record of previous interactions.

Future AI innovations:
- Context-Aware AI Agents: Future AI assistants could remember user interactions across channels (web, phone, in person) and pick up conversations where they left off, offering a truly seamless experience.
- AI-Powered Real-Time Translation and Accessibility: AI tools could automatically translate conversations across languages in real time (e.g. advanced Google Translate AI) and enhance accessibility by instantly transcribing voice conversations to text for deaf users.
- Personalised AI Service Recommendations: AI-driven recommendation engines could analyse a user's past interactions and predict their next needs, proactively suggesting the best service channels and steps to take.
- Unified AI-Powered Digital Identity Verification: Future AI systems could securely verify users across different platforms using biometric authentication, facial recognition, and behavioural analysis, allowing for a smooth transition between online and offline services.
- AI-Driven Predictive Support: AI could analyse historical data to predict when users might need assistance and proactively offer solutions before they even reach out for help.

Point 4. Make the service simple to use

Service Standard summary: 'Make the service simple to use' emphasises designing government services that are intuitive, accessible, and easy for users to navigate. It stresses the importance of understanding user needs, removing unnecessary complexity, and ensuring services work for everyone, including those with disabilities or low digital skills. Services should be tested with real users, provide clear guidance, and avoid technical jargon to create an intuitive experience.

Existing AI tooling:
- Intelligent Chatbots and Virtual Assistants: AI-powered chatbots provide 24/7 support across web, mobile, and voice channels.
- AI-Powered Search and Auto-Suggestions: AI enhances search by predicting user intent and dynamically suggesting relevant content.
- Automated Accessibility Enhancements: AI generates captions, text-to-speech, and real-time translations to improve accessibility.
- Smart Form-Filling and Data Auto-Completion: AI pre-fills forms and error-checks inputs to reduce mistakes.
- Personalised User Experiences: AI-driven content recommendations tailor service instructions based on user preferences.
- AI-Powered Process Automation and Self-Service: AI assists users in complex processes, reducing manual effort.
- Predictive User Support and Proactive Assistance: AI anticipates issues and provides relevant help before problems arise.
- Conversational Voice Interfaces and Multimodal Interactions: AI-powered voice assistants enable hands-free interaction with services.
- AI-Based Sentiment and Frustration Detection: AI analyses feedback and chat logs to identify user pain points.
- Fraud Detection and Security Simplification: AI-powered ID verification and fraud detection streamline authentication.

Future AI innovations:
- Emotionally Aware Chatbots: AI could detect frustration or tone and adjust responses accordingly.
- Context-Aware Search: AI could understand past interactions to auto-filter irrelevant results.
- Dynamic Accessibility Adjustments: AI-powered interfaces could adapt layout and readability based on cognitive load or disabilities.
- Predictive and Adaptive Forms: Forms could dynamically adjust based on user needs, for example by reducing unnecessary form fields on digital interfaces.
- Fully Adaptive Interfaces: AI could modify interface layouts, font sizes, and navigation based on user behaviour.
- AI-Driven Digital Assistants for Task Completion: AI could submit documents and complete applications on behalf of users.
- AI-Powered Nudges: AI could guide users to complete key tasks based on previous behaviour patterns.
- Multimodal AI Interactions: AI could seamlessly switch between voice, text, and gestures depending on user preference.
- Real-Time Emotion Detection for Support Teams: AI could alert teams when users are struggling, allowing instant intervention.
- Biometric AI for Seamless Security: AI could enable password-free authentication through facial or speech recognition.

Point 5. Make sure everyone can use the service

Service Standard summary: The GOV.UK Service Standard's fifth point, "Make sure everyone can use the service", emphasises designing services that are inclusive and accessible to all users, including those with disabilities, legally protected characteristics, limited internet access, or low digital skills. Service teams are advised to meet accessibility standards for both online and offline components, conduct user research with diverse participants, and provide appropriate support to ensure no user is excluded.

Existing AI tooling:
- Automated Accessibility Testing: Tools like axe, WAVE, and Google's Lighthouse, enhanced with AI, help detect accessibility issues in real time (e.g. missing alt text, poor contrast).
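One of the checks named above, missing alt text on images, is simple enough to sketch with the standard library alone. This is a single hand-rolled rule with an invented HTML snippet; tools like axe run hundreds of such rules against the rendered page rather than raw markup.

```python
# Minimal sketch of one automated accessibility check: report images
# that have no alt attribute. The HTML below is invented for illustration.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown>"))

page = '<img src="chart.png"><img src="logo.png" alt="Department logo">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # only chart.png lacks alt text
```

A real checker would also flag empty or unhelpful alt text, which needs human (or model) judgement rather than a presence test.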
- AI-Powered Transcription & Captions: Services like Google Speech-to-Text, Otter.ai, or Microsoft Azure can provide real-time subtitles and transcripts for audio and video content, improving accessibility for deaf or hard-of-hearing users.
- Language Translation & Simplification: AI tools like DeepL or Google Translate can translate content into multiple languages, while GPT-based tools can simplify complex text, making information more accessible to users with low literacy levels or cognitive impairments.
- Voice Assistants & Conversational Interfaces: AI-driven chatbots (e.g. on GOV.UK or NHS sites) can guide users through processes using plain language or voice interaction, helping those with visual or motor impairments.
- Personalisation Engines: AI can adapt interfaces to user preferences, like increasing font sizes, improving contrast, or offering keyboard-only navigation modes, based on learned behaviours.

Future AI innovations:
- Real-Time Inclusive Design Feedback: AI design assistants could offer proactive suggestions during development to flag accessibility concerns or recommend more inclusive design patterns.
- Emotion and Intent Detection: Advanced AI could detect user frustration or confusion through sentiment analysis (e.g. tone of voice, facial expressions) and offer adaptive support instantly.
- Dynamic UI Generation: AI could auto-generate personalised interfaces based on a user's device, environment, or abilities, creating a "design-for-one" approach at scale.
- Augmented Reality (AR) for Navigation: AI-enabled AR could help users with visual or cognitive impairments navigate complex public spaces or digital services using voice-guided overlays.
- Multimodal Accessibility Agents: Future AI assistants may seamlessly switch between text, voice, visual, and gesture inputs and outputs to match users' preferred interaction mode in real time.
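The simplification idea above is easier to evaluate if you can score readability before and after a rewrite. Here is a rough sketch using the Flesch reading-ease formula with a crude vowel-group syllable count; the sample sentences are invented, and the scores should be treated as indicative only.

```python
# Rough readability scoring: Flesch reading ease with a heuristic
# syllable counter, to compare plain wording against formal wording.
import re

def syllables(word):
    """Approximate syllables as runs of vowels (a crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch(text):
    """Flesch reading ease: higher scores mean easier reading."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

plain = "You can apply online. It takes ten minutes."
formal = ("Applications may be submitted electronically, a procedure "
          "anticipated to necessitate approximately ten minutes.")
print(flesch(plain) > flesch(formal))  # simpler wording scores higher
```

A GPT-based simplifier would do the rewriting; a score like this gives the team a quick, repeatable check that the rewrite actually reads more easily.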
As we’ve seen, AI is already playing a role in how service teams understand users, simplify experiences, and deliver inclusive services. Whether it’s enhancing user research, supporting accessibility, or helping create joined-up services, AI has clear potential to amplify the points behind good service design. In the next article, we’ll turn our attention to the next group of Service Standard points, those that deal with team structure, agile practices, iteration, and security. These are the operational foundations that support successful delivery, and we’ll explore how AI can support multidisciplinary collaboration, continuous improvement, and safe, secure digital services. Join me again in the next article as we continue to map the intersection between AI and service excellence, coming soon. Contact information If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab) .
- Lessons from the Cabinet Office GitHub Copilot Trial
Lessons from the Cabinet Office GitHub Copilot Trial by Cameron Browne Drawing on lessons from the recent Cabinet Office GitHub Copilot trial, Cameron shares practical advice on how to use AI assistants as a powerful tool for learning and delivery, while ensuring you remain the pilot. AI assistants like GitHub Copilot are changing the way we work. They can be powerful tools, but they also have their limitations. I recently participated in the Cabinet Office trial for GitHub Copilot. The trial was part of a wider government initiative to explore how AI code assistants can support digital delivery teams in organisations like the Ministry of Justice. It marked a shift towards encouraging and promoting responsible AI practices. As a QA Engineer currently working at His Majesty’s Courts and Tribunals Service (HMCTS), I used Copilot to help write and maintain automated test suites. I was provided with access to the GitHub Copilot AI code assistant in my development environment for 4 months, along with training in prompt engineering. Prior to this trial I had no experience with Copilot, Codex, or large language models (LLMs). My focus for this article will be providing practical advice for using AI code assistants. I believe this is useful for anyone who is currently trying to navigate the fast-paced world of constantly changing and improving AI assistants. These tips are not only applicable to AI code assistants, but also to any AI chatbot you may use, and I believe they will stay relevant as the AI landscape changes. Like any tool, there is a right way and a wrong way to use it. Welcome message for participants of the GitHub Copilot trial. Tip 1 - AI is a great teacher Use AI to onboard and learn faster AI assistants can act like personal tutors — and no question is too simple.
For example, you can ask: “Does this project have automated accessibility tests?” This has been really helpful for me as an early career QA engineer who has previously moved from a project with Java developers to one with Ruby and Python developers. It helps me get up to speed quickly and navigate the project, even if there are technologies I haven’t worked with before. To get better answers, set the scene. Give Copilot a role and explain your experience level. For example, “I am a QA with 1 year experience with test automation and 2 months experience in Cucumber, you are a senior dev, teach me how this test suite works”. This tailors the response to your experience level. Other ideas for tailoring your assistant: Ask it to be your paired programmer to help figure out a bug Ask it to be your assistant and write documentation for you Feed it documentation and ask questions about it Finally, in the Copilot Chat, you can go back and clarify points. “I understand this, but not this. Explain it to me more simply”. As a QA engineer, you are constantly exposed to new technologies, so it’s important to keep learning. AI assistants have the potential to accelerate our learning and help us stay up to date as the tech landscape evolves. An example of highlighting a section of code and prompting 'in-line' in the code editor. Tip 2 - Concise context = Quality responses Keep prompts focused and remove clutter Your context is everything you send in your AI request (what the AI sees). The more unnecessary information you send to the chatbot, the more tokens you will use, and the more confused the response is likely to be. It also takes longer to generate your response and it is worse for the environment*. *The use of large AI prompts can be bad for the environment because running AI models consumes significant energy, contributing to carbon emissions. Clear and concise prompts lead to better results. I’ve found 2 key ways to achieve this: 1.
Limit the unnecessary information you send with your request. When you prompt AI, you want to indicate relevant code: Open only relevant code files, close irrelevant ones. Copilot’s autocomplete will use your open files to understand the context of your work and offer suggestions. Choose the right prompt method for the task. Consider whether you should highlight a section of code and prompt ‘in-line’ when you want a focused response based on a specific section of code or a single file. Or use Copilot chat when your question requires a broader context across multiple files. Picking the right method helps control token usage and ensures more accurate results. You can use the @project tag in Copilot chat (see image) - this will send your entire project with the request, but it’s worth noting that this is more context than you will likely need. 2. Keep the Copilot Chat history relevant: Copilot uses your whole chat thread as context — keep it clean and focused. Start a new conversation for new tasks to refresh your context window. Delete irrelevant responses within your current chat history (the bin icon, which is also demonstrated in the image). In short, manage your context well and the quality of responses generated will be better. An example of a previous prompt in GitHub Copilot Chat. The bin icon is circled to show how to delete an irrelevant prompt and response from your chat history. Tip 3 – AI can't read your mind… yet Don’t expect AI to guess — show, iterate, and refine While AI assistants are incredibly powerful, they're not mind-readers. Problems tend to arise when you expect AI to just know what you require and let it make assumptions. To get the best out of your AI assistant, you need to be crystal clear about your requirements; here’s how: Examples are your best friend Want your AI to write code that matches your team's preferences for readability and maintainability? Show, don’t tell.
Whether it's the specific formatting of your tests or the naming conventions for different scenarios, providing examples is a huge time-saver. You can indicate a file with an example, or even paste some example code directly into your prompt. It's much quicker than typing out all your requirements. For instance, instead of a lengthy explanation, you can simply say: "…look at the end_to_end.feature file for examples of the naming conventions to use for different test scenarios". Open a dialogue and iterate Think of your interaction with AI as a conversation. Don't just accept the initial response you get. If something isn't quite right, ask Copilot why it made certain choices. If you have a different preference, don't be afraid to prompt further. A prompt like, "I don’t like this, can you structure it this way instead to make it a bit more readable and consistent with the other tests… " can work wonders. Iterating with Copilot Chat is a much quicker way to refine your output. Start with a general request, and then get more specific to improve the results. I find that refining the response using a chain of prompts is a much more productive way to work, rather than trying to strike gold with your first prompt. Often, it's the first AI response that helps you remember things you forgot to include in your first prompt. Maybe one day Copilot will be able to just read our thoughts, but for now, mastering clear communication, using plenty of examples, and embracing iteration are key to unlocking its full potential. Tip 4 - You’re the pilot Stay in control of your code This tip is perhaps the most simple but most important, and the one that really stuck with me. Remember, the tool is called ‘Copilot’ for a reason; you should be in control. AI assistants in all their forms are great for offering suggestions, but they shouldn’t be making your decisions. Copilot should never be handed a big, complex task for you to then just copy in the finished code. 
You can use Copilot to do complex things, but break complex tasks into steps; that way you can keep track of each step being taken and each decision made. You should fully understand everything you copy from AI because you’re the one responsible for the changes you make. While it can be tempting to copy and paste from Copilot without analysing every line of code, ‘vibe coding’ can only get you so far if you don’t understand the changes you’re making. If your AI tool is taken away, you should still be able to do your work. Use Copilot as a tool, not a crutch. Pro tips for staying in the driver’s seat Let Copilot help you break down complicated work into smaller steps E.g. - “I need to increase the coverage of my e2e tests to include a new user journey - break down this task into smaller steps based on my current e2e test coverage.” Have Copilot explain its work and help you understand it so you stay in control The image humorously suggests that while AI can quickly generate code, it may lead to even more time spent debugging. It’s easy to lose track of ownership when Copilot is doing the typing, but the decisions still need to be yours. You should be involved in each step and understand the changes you make. Otherwise you’ll spend more time debugging AI code than you would’ve spent doing the task yourself.
Wrapping up GitHub Copilot can… Teach Understand your level of experience Follow clear instructions Brainstorm ideas Debug error messages and find the root of problems Iterate on responses Speed up your work GitHub Copilot cannot… Keep your context relevant Read your mind to know what you want Replace you as the pilot Take responsibility for its work Useful resources: GitHub Docs Prompt Engineering: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat GitHub Copilot Cheat Sheet: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/github-copilot-chat-cheat-sheet Contact information If you have any questions about our AI initiatives, Quality Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).
- Unlocking the web: start your journey into digital accessibility
A look at how we can follow inclusive practices to ensure equal access to digital services for everyone. Guided by standards such as the Web Content Accessibility Guidelines (WCAG) and legislation, organisations should prioritise accessibility from the outset. Through rigorous testing, user feedback loops, and continuous improvement we can drive progress in accessibility. Overview What is digital accessibility Who benefits from digital accessibility? Legal standards and guidelines Shift Left accessibility Testing, auditing and user feedback Progress over perfection Contact information What is digital accessibility? Digital accessibility ensures there are no barriers for individuals when using digital services. This makes accessibility a functionality issue. Simply put, if the service is not accessible it is not functional. Although there are legal requirements to highlight the importance of accessibility, it goes beyond legal compliance checklists and is centred on creating inclusive digital spaces that everyone can use. Who benefits from digital accessibility? Web accessibility benefits everyone. When digital spaces are built with accessibility in mind the result is faster, easier and more usable services. Importantly, this makes the service accessible for people with permanent, temporary and situational disabilities. People may have accessibility needs across the following areas: Cognitive Visual Auditory Motor Speech Visual representation of disability types such as cognitive, visual, auditory, motor, and speech. Source: https://www.esri.com/arcgis-blog/products/arcgis-storymaps/constituent-engagement/building-an-accessible-product-our-journey-so-far/ Take time to understand your users and understand their experiences on your services. Not every user will have the same needs, and some users' requirements may conflict with others. Providing options and alternatives will allow you to create more inclusive digital spaces with reduced barriers for your users. 
Legal standards and guidelines Equality Act 2010 As far as legal requirements go, the Equality Act 2010 states that there is a ‘duty to make reasonable adjustments’ for those who qualify as ‘disabled persons’. Government requirements Under the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018 all public services have further defined accessibility requirements which are to: meet level AA of the Web Content Accessibility Guidelines (WCAG 2.2) as a minimum work on the most commonly used assistive technologies - including screen magnifiers, screen readers and speech recognition tools include disabled people in user research have an accessibility statement that explains how accessible the service is - you need to publish this when the service moves into public beta As a minimum, it is required that public services meet these requirements, but even for non-public services it is good practice to follow these guidelines. In doing so, you begin to make your digital service an accessible space for all. WCAG The Web Content Accessibility Guidelines (WCAG) serve as the internationally recognised standards for web accessibility. WCAG provides guidelines organised into four principles: Perceivable, Operable, Understandable, and Robust (POUR). Following these guidelines enhances the overall accessibility of your web content. Perceivable: Provide alternatives for non-text content, captions, and sufficient colour contrast for text. Operable: Ensure keyboard accessibility, sufficient reading time, and avoid content causing discomfort. Understandable: Use clear language, consistent navigation, and offer input assistance. Robust: Employ valid code, adhere to web standards, and avoid browser-specific features. Currently, web content should adhere to the WCAG 2.2 (2023) standards. This version introduces 9 new success criteria (6 at levels A and AA) and removes one (4.1.1 Parsing).
Meeting the WCAG 2.2 guidelines means you will also meet the previous versions of the guidelines. Shift Left accessibility Visual representation of shift left activities that involve security, testing and operations processes earlier on in the dev cycle including throughout plan, code, build, test, release, deploy, operate and monitor phases. Source: https://blogs.vmware.com/cloud/2021/05/11/shift-left-platform-teams/ Accessibility should not be the responsibility of a single person or role but of the whole team. This involves baking accessibility in from the start, from the initial idea through to sign-off. This is a ‘Shift Left’ approach, which encourages earlier accessibility reviews and involves everyone on the team, from product owners through to release. A shift left approach embeds accessibility into the process so that it is not just an afterthought or a bottleneck to releases. It also prevents an excess of accessibility tech debt items that tend to remain at the bottom of the backlog. Testing, auditing and user feedback A large part of creating accessible services is to regularly test the service using automated testing tools and manual assessments (including testing with assistive technology). At Solirius we have several accessibility specialists who are continuously working to implement, build and maintain accessible and inclusive services. Testing needs to be carried out in parallel with regular user testing to ensure you better understand the real experiences of users and are not just building services to meet compliance. Progress over perfection Accessibility is a vast area with many specialisms, and can initially feel overwhelming. But it’s important to remember that even small accessibility considerations are a start and can go a long way for users. Don’t let the pressure of perfection stop you from getting involved and learning about accessibility.
Lean on your peers and figure out how you can tackle challenges together; it is a learning curve for many, but we all start somewhere. Summary Prioritising web accessibility ensures that your services are inclusive and usable for all users. By implementing a shift left approach, utilising the Web Content Accessibility Guidelines (WCAG) and involving users with a variety of needs, you can create a more inclusive digital landscape. Remember, accessibility is an ongoing journey involving everyone, and continual efforts to improve will help create digital services that benefit all. Contact information If you have any questions about accessibility or you want to find out more about what services we provide at Solirius please get in touch.
- Meet the Team: Ayesha Saeed
Ayesha shares her journey to becoming an Accessibility Lead at Solirius as well as insight into her top tips and interests. Meet Ayesha Saeed, a Senior Accessibility Specialist with over 5 years of experience working in accessibility on a range of products in both public and private sectors. She has a wide variety of experience including conducting audits, delivering training, and building implementation plans with teams, through to app accessibility and consulting. How did you get involved in accessibility? I have a QA background, so I started my accessibility journey by conducting accessibility audits, which prompted me to begin learning about accessibility principles and user-focused design. I really enjoyed learning about accessibility and all the different specialisms within it. I studied Social Anthropology at university, so I enjoyed learning about people and understanding the numerous ways people interact with technology. I went on to work on a government project where I learnt lots about the laws surrounding digital accessibility, GDS standards and WCAG compliance. I expanded my experience to mobile apps, gaining invaluable insights into the nuances of mobile accessibility and learning more about guidelines for iOS and Android platforms. I began to cultivate a culture of accessibility on the projects I worked on, educating my team and working to ensure that accessibility considerations were no longer an afterthought. Currently, I am an Accessibility Lead at Solirius working on another government project, managing several services and ensuring they have the necessary guidance to deliver accessible services. I advise on testing practices, write Accessibility Statements and work with teams to build roadmaps to make their services accessible. I also deliver training sessions to empower services to integrate accessibility principles in the early stages of development and help to motivate them to sustain their efforts throughout the process.
What are your interests? I like to cook a lot and enjoy taking my mum’s classics and turning them into veggie friendly versions using my homemade seitan. I also like to keep active by swimming regularly and (occasionally) attempting yoga. I’ve also gotten into crocheting recently and enjoy seeing what I can make. Top accessibility tip? Don’t feel like you need to know it all! Digital accessibility is such a rich subject and can be difficult to grasp when you are new to it. Just remember to be patient with your learnings, reach out to peers, read about accessibility and try to get involved with the accessibility communities for support. Your small changes can have a huge impact! Top accessibility resource? The A11y Slack - It’s a great community of accessibility specialists and advocates who are friendly and open to help. It is free and open to all, and you can join at web-a11y.slack.com . Contact information If you have any questions about accessibility or you want to find out more about what services we provide at Solirius please get in touch .
- Breaking barriers: digital inclusion in government services
Breaking barriers: digital inclusion in government services In this article, Piya discusses the importance of creating government services that are accessible to everyone. Government accessibility standards exist to ensure that a wide range of people can use government services on both web and mobile applications. Importantly, accessibility is a shared responsibility, and Piya lists resources that offer guidance on integrating accessibility into the development of services. Overview: GOV.UK requirements Meeting WCAG 2.2 Testing with assistive technology User research with disabled people Accessibility statements GOV.UK design system DWP resource GOV.UK requirements The government accessibility requirements state that all services must meet the following criteria to ensure that all legal requirements regarding public sector websites and mobile applications are met: Meet level AA of the WCAG 2.2 (Web Content Accessibility Guidelines) at a minimum Work on the most commonly used assistive technologies - including screen magnifiers, screen readers and speech recognition tools Include disabled people in user research (including cognitive, motor, situational, visual and auditory impairments) Have an accessibility statement that explains how accessible the service is (published when the service moves to public beta) Meeting these criteria ensures that services satisfy the legal requirements set out by the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018. In addition, we can ensure that we are creating more inclusive digital services for users with diverse needs. Meeting WCAG 2.2 WCAG 2.2 is based on 4 principles that emphasise the need to think about the different ways that people interact with digital content: perceivable: recognising and using the service with senses that are available to the user. operable: finding and using content, regardless of how a user chooses to access it.
understandable: understanding content and how the service works. robust: content that can be interpreted reliably by a wide variety of user agents. For example, users might use a keyboard instead of a mouse or rely on a screen reader to have content spoken aloud. The WCAG 2.2 principles apply to all aspects of your service (including code, content and interactions), which means all members of your team need to understand and consider them. It is important to conduct regular accessibility testing using a range of automated and manual tools as early as possible to ensure your design, code, and content meet WCAG 2.2 AA requirements (all A and AA criteria). Testing with assistive technology To meet the government service standard, testing should be done across the following assistive technologies and browsers throughout development, ensuring that the most commonly used assistive technologies are tested and work on the service before moving to public beta: JAWS (screen reader) on Chrome or Edge NVDA (screen reader) on Chrome, Firefox or Edge VoiceOver (screen reader) on Safari TalkBack (mobile screen reader) on Chrome Windows Magnifier or Apple Zoom (screen magnifiers) Dragon (speech recognition tool) on Chrome Low vision user using a screen magnification tool to increase the text size on a webpage to allow them to see the content clearly. Source: Digital Accessibility Centre (DAC) https://digitalaccessibilitycentre.org/usertesting.html Making services compatible with commonly used assistive technologies is a shared responsibility, and testing across these combinations should happen throughout all stages of development: when planning new features, when designing and building them, and when testing. For more information on how to test with assistive technology, see testing with assistive technologies.
User research with disabled people Inclusive user research is essential for creating user-centred services that meet the needs of all users, including those with disabilities and diverse backgrounds. By involving a varied group of participants early on, teams can identify and address usability and accessibility barriers, enhancing the design, functionality, and content to benefit everyone. This approach encourages continuous improvement, ensuring government services evolve with users' needs. Ultimately, inclusive user research builds trust by showing a commitment to accessibility, making services more usable and welcoming for a broader audience. Accessibility statements Accessibility statements are required to communicate how accessible a service is. This includes stating the WCAG compliance level, explaining where the service has failed to meet guidelines (and a roadmap of when this will be fixed), contact information and how to report accessibility issues. Government services should follow a standard accessibility statement format to maintain consistency. GOV.UK Design System (GDS) The GOV.UK design system (GDS) has many reusable components that are utilised across government services. Each component shows an example, an option to view the details on how to implement the component, as well as research regarding the component's usability and what kind of issues users have faced. Any known accessibility issues are also highlighted and based on this research, some components are labelled ‘experimental’ as some users may still experience issues navigating them. Services must proceed with caution when adopting these components, and carry out rigorous manual, assistive technology and user testing to ensure that the implementation is accessible and WCAG guidelines are met. Example of where to find accessibility research on the GDS details component, under heading ‘Research on this component’. 
Source: Government Design System (GDS) details component - https://design-system.service.gov.uk/components/details/ Summary Overall, government services must ensure they are creating services that are regularly tested and work for users who have a range of access needs or assistive technology requirements, including: Reviewing, understanding, and meeting GOV.UK and WCAG 2.2 standards Implementing accessible components that can be accessed by assistive technology Ensuring accessibility is the whole team’s responsibility when developing a service Regularly testing with users with disabilities Providing an accessibility statement to inform users where the service does and does not meet accessibility guidelines Accessibility should be considered from the start: retrofitting costs more time and resources, and in the meantime your users may be unable to use your service. DWP resource: The Department for Work and Pensions (DWP) accessibility manual is a great resource for guidance on testing, accessibility best practices throughout service development and details on how each member of the team can integrate accessibility. DWP Accessibility Manual home page Source: GOV.UK - Accessibility in Government - https://accessibility.blog.gov.uk/2021/05/27/why-weve-created-an-accessibility-manual-and-how-you-can-help-shape-it/ Contact information If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch.
- 6 common accessibility mistakes in design—and how to fix them
6 common accessibility mistakes in design—and how to fix them by Philena Bremner In this article, Philena discusses the importance of designing accessible experiences that cater to a diverse range of users, including those facing temporary or situational challenges. She touches on why accessibility is not just a technical requirement but a design principle that benefits everyone. Philena highlights six common design mistakes that hinder accessibility and provides practical solutions to create more inclusive, user-friendly designs. Why accessibility in design matters Design isn’t just about making things look good—it’s about making sure everyone can use your product or service. Think about it: you’ve probably struggled with low contrast on your phone in bright sunlight or found it hard to navigate a cluttered website when you’re in a rush. Accessible design makes things easier for everyone. But accessibility isn’t just about following guidelines - it’s also about understanding real user needs. That’s why user research and feedback on design decisions are essential to ensure designs truly meet the needs of diverse users. By listening to feedback and testing with people who have a range of abilities and experiences, designers can identify barriers and create solutions that work for everyone. So, let’s look at some common design mistakes and how you can avoid them to create a better experience for all users. Mistake 1: Low contrast text Let’s start with one of the most obvious issues - low contrast. Sure, it might look stylish to have light grey text on a white background, but can anyone actually read it? Now, imagine someone with a visual impairment trying to make sense of that. But here’s the thing: low contrast isn’t just an issue for those with impaired vision. Think of someone trying to read on their phone outside in the sun, with the screen reflecting glare—contrast matters in that scenario too.
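Contrast is one of the few accessibility properties you can measure exactly: WCAG defines a contrast ratio by converting each colour to a relative luminance and dividing the lighter by the darker (each offset by 0.05), giving a value from 1:1 (identical colours) up to 21:1 (black on white). The sketch below implements that published formula in Python, using the sRGB linearisation constants from the WCAG 2.x definition.

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an 8-bit sRGB colour, e.g. (255, 255, 255)."""
    def linearise(c):
        c /= 255
        # Piecewise sRGB-to-linear conversion from the WCAG 2.x definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """Contrast ratio between two colours: 1.0 (identical) to 21.0 (black/white)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


# Black on white is the maximum possible contrast:
print(contrast_ratio((0, 0, 0), (255, 255, 255)))  # → 21.0
```

For instance, light grey text (170, 170, 170) on a white background scores well under the 4.5:1 threshold for normal text, which is exactly the failure described above; tools like the WebAIM Color Contrast Checker compute this same ratio for you.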
Don’t example of low contrast text with light grey text on a light grey background, making it hard to read. Do example of high contrast text with dark grey text on a lighter grey background, making it clear and easy to read. How to get it right: Aim for a contrast ratio of at least 4.5:1 for normal text. Use tools like the WebAIM Color Contrast Checker to test your designs. Think of contrast as a universal design principle—if it’s easier for someone with a visual impairment, it’s easier for everyone. Mistake 2: Relying only on colour to convey information Think about a form where the only indication of an error is a red outline. For someone who’s colourblind, that red outline might not even register. The same problem happens when colour alone is used to convey important information, like in charts or buttons. Accessibility isn’t just about catering to specific disabilities, it’s also about ensuring clarity for everyone. Whether it’s a person with colour blindness or someone trying to interact with your design in less than ideal lighting, relying solely on colour can be a problem. Don’t example of two forms side by side showing an error relying solely on colour to convey information. On the left, the perspective of a user without colour blindness shows a red border around the email field to indicate an error. On the right, the perspective of a user with colour blindness (Deuteranopia) shows the same form where the red border is not distinguishable, making the error unclear. Do example of two forms side by side showing an improved design where errors are supplemented with icons and text. On the left, the perspective of a user without colour blindness shows an email field with a red border, an error icon, and the text 'Enter your email.' On the right, the perspective of a user with colour blindness (Deuteranopia) shows the same form where the error icon and text are clearly visible, ensuring the error is understandable without relying on colour alone. 
How to get it right: Always supplement colour with icons, text, or patterns. For example, instead of just using a red outline for errors, add a symbol and text that clearly explains the issue and how to fix it. Use a colour-blindness simulator during the design process to ensure your work is still clear without colour. Be aware that blindness simulators will never replace real user feedback. Ensure you test your designs with diverse users. Mistake 3: Complex layouts that confuse users We’ve all been there—landing on a website that’s so cluttered and chaotic that we have no idea where to look. For someone with cognitive disabilities or attention issues, this kind of layout can make navigation nearly impossible. But even without a disability, a complex layout can be frustrating. Picture yourself trying to book a flight on a crowded train, with limited time and attention—simplicity and clarity become lifesavers. Don’t example of three pages showing a complex and inconsistent layout. The panels have inconsistent button placements, varied spacing, and misaligned elements, making navigation and readability difficult. Do example of three panels showing a simple and consistent layout. The panels have aligned elements, consistent button placements labelled 'Continue', and uniform spacing, making navigation clear and easy to follow. How to get it right: Use a clear visual hierarchy with headings and subheadings that guide users. Make important information easy to find with a clean layout, such as grouping related elements together to create an intuitive flow. Use consistent spacing, fonts, and alignment to reduce cognitive load. Keep consistency across pages, so users don’t have to relearn how to navigate every time. For example, place the primary action button, like "Continue" or "Submit", in the same location across all pages and use consistent labelling to avoid confusion. Mistake 4: Text that’s too small or difficult to read Tiny text is a big problem.
Whether someone has low vision or is trying to read on a small screen in a bumpy car ride, small, illegible text makes for a frustrating experience. Readable text benefits everyone. Imagine you’re trying to skim an article on your phone during your commute—clear, bold text that’s easy to read helps you grasp the key points.

Don’t example showing text that is tiny and hard to read, with a decorative font that reduces readability.

Do example showing text with a larger font size and a clear, easy-to-read typeface for better accessibility.

How to get it right:
Use a minimum font size of 16px for body text. Keep line length between 45 and 75 characters for better readability. Choose fonts that are easy to read, with good spacing between letters and lines. Some fonts that are considered accessible include: Arial, Calibri, Century Gothic, Helvetica, Tahoma, Verdana, Tiresias, and OpenDyslexic. Again, it is important to get real user feedback to see what works for your users.

Mistake 5: Missing image descriptions

For someone using a screen reader, images without descriptions are a black hole of information. They can’t see what the image is trying to convey, so they miss out on key content. Alternative text, or alt text, can provide that context by describing images for users who can’t see them. But alt text isn’t just for screen reader users. What about someone with a slow internet connection? While they’re waiting for the images to load, they can still understand what’s there if you’ve provided alt text.

Don’t example showing an unclear alt text description for an image with a purpose. The image of mountains and a sun is labelled with the file name '12344545767.jpg', which does not provide meaningful context.

Do example showing a clear alt text description for an image with a purpose. The image of mountains and a sun is described as 'Simple illustration of mountains and the sun', providing meaningful context.
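Checks like this can also be automated in a build step. Below is a minimal sketch using Python’s standard-library HTML parser to flag images with no alt attribute, or with alt text that looks like a filename (as in the Don’t example above). The heuristics and sample markup are illustrative, not exhaustive:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Flag <img> tags with missing or filename-like alt text."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt, src = attrs.get("alt"), attrs.get("src", "?")
        if alt is None:
            self.problems.append(f"{src}: missing alt attribute")
        elif alt == "":
            pass  # empty alt marks a decorative image, which is fine
        elif alt.lower().endswith((".jpg", ".jpeg", ".png", ".gif", ".svg")):
            self.problems.append(f"{src}: alt text looks like a filename: {alt!r}")

audit = AltTextAudit()
audit.feed("""
<img src="hero.png" alt="Simple illustration of mountains and the sun">
<img src="logo.png">
<img src="mountains.jpg" alt="12344545767.jpg">
<img src="divider.png" alt="">
""")
print(audit.problems)  # flags logo.png (no alt) and mountains.jpg (filename alt)
```

A script like this will not judge whether alt text is genuinely meaningful, so it complements, rather than replaces, a human review.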
How to get it right:
Always include meaningful alt text for images that convey information. Avoid purely decorative images where you can; if you do use them, mark them as such with empty alt text (alt="") so screen readers skip them. Alt text should reflect the image’s purpose and context in relation to the surrounding content. For example, the same ‘simple illustration of mountains and a sun’ could be described differently depending on the page:
On a page about travel destinations it could be: “Illustration of a mountain range at sunrise, representing a peaceful travel location.”
On a page about design inspiration it could be: “Minimalist mountain and sun illustration showcasing simple design concepts.”
Think of alt text as part of the story you’re telling—don’t leave users in the dark. Read more: How to write good alt text for screen readers.

Mistake 6: Incomprehensible data graphs

Complex data visualisations can be a headache for users, especially those with assistive technology or those who are colourblind. Labels that are too small or graphs that rely solely on colour can make it difficult to understand what’s being presented. But this isn’t just a challenge for users with disabilities. Anyone trying to read a graph on a small screen or in a distracting environment will appreciate clear, easy to understand visuals. One simple way to make graphs more accessible is to incorporate patterns or textures in addition to colour. For example, instead of only using red and green in a pie chart, you can add stripes or dots to differentiate between sections for users who struggle with colour perception.

Don’t example of two pie charts relying solely on colour to convey information. On the left, the perspective of a user without colour blindness shows sections in orange, purple, and pink labelled 'Pass', 'Fail', and 'Not applicable'. On the right, the perspective of a user with colour blindness (Achromatopsia) shows the same chart in grayscale, making it impossible to distinguish between sections.
Do example of two pie charts with additional patterns and labels to supplement colour. On the left, the perspective of a user without colour blindness shows the chart with colours, patterns, and text labels indicating '24% not applicable,' '45% pass,' and '31% fail.' On the right, the perspective of a user with colour blindness (Achromatopsia) shows the same chart with patterns and text labels, ensuring the data is still understandable without relying on colour.

How to get it right:
Provide clear, concise summaries of data trends. Label graphs and charts clearly, with text and visual cues like patterns. Use high contrast colours and provide alternative formats, like tables, for users who prefer text-based information. For image-based graphs, provide clear alt text or captions that describe the data and key insights, ensuring the information is accessible to screen reader users.

Designing for everyone

At the end of the day, accessibility is about making sure everyone has equal access to services and products. By avoiding these common design mistakes, you’re not just helping people with disabilities—you’re creating a better experience for anyone who might be in a permanent, temporary or environmental situation where good design means accessible design.

Take action

When designing services or products, ask yourself: is this accessible for everyone? Start making these changes today, and be sure to conduct user accessibility testing along the way - you may be surprised by small changes that improve the overall user experience for everyone.
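One practical step you can take right away: the table alternative suggested for Mistake 6 can be generated from the same data that feeds a chart, so screen reader users get the numbers directly. A minimal Python sketch (the values mirror the pie chart example above; the layout is illustrative):

```python
def describe_chart(title: str, data: dict) -> str:
    """Render chart data as an accessible plain-text summary table."""
    total = sum(data.values())
    lines = [f"{title} (total: {total})"]
    # Lead with the largest segment so the key insight comes first.
    for label, value in sorted(data.items(), key=lambda kv: -kv[1]):
        share = 100 * value / total
        lines.append(f"  {label:<15} {value:>4}  ({share:.0f}%)")
    return "\n".join(lines)

results = {"Pass": 45, "Fail": 31, "Not applicable": 24}
print(describe_chart("Accessibility check results", results))
```

Publishing this text alongside the chart (or as a proper HTML table) gives users a choice of format, which is exactly what the guidance above recommends.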
Additional resources

To further enhance your accessibility design skills, explore these valuable resources:
Accessibility - Material Design
WebAIM: Web Accessibility for Designers
Stark - Contrast & Accessibility Checker | Figma
Accessible fonts and readability: the basics
How to write good alt text for screen readers

Contact information

If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch.
- Let’s talk accessibility: why we need proxy users
Have you ever been in a situation where you’re keen to test the accessibility of a service, but your target users haven’t communicated any accessibility needs? Sree (Sreemoyee), our Principal User Researcher, discusses how you can advocate for diverse user needs and ensure inclusive design on your projects.

In a recent project, our data-fluent user group did not declare any accessibility needs, which led our team to consider skipping accessibility tests. Recognising the importance of catering to future users with accessibility needs and staying ahead of evolving user requirements, I turned to an ‘Accessibility Lab’, a database of proxy users with accessibility needs curated by our client’s User Centered Design (UCD) team.

Who are proxy users in the context of accessibility testing?

Proxy users, though not part of the primary user group, share comparable digital skills and accessibility needs that make them useful contributors to inclusive design. For my education-centric project, the Department for Education (DfE) Accessibility Lab was the ideal resource, featuring primarily teachers as proxy users who had signed up to be contacted for accessibility testing. Importantly, these teachers were not users of the service we were testing, ensuring unbiased perspectives without preconceptions.

Venn diagram illustrating the intersection of Target users and Proxy users, highlighting shared traits in the overlapping area: comparable digital skills and accessibility needs.

How I prepared for accessibility testing with proxy users: hot tips

We opted for remote testing to accommodate the preference and availability of the proxy users. This decision necessitated adjustments to ensure effective testing.

Clearly communicating the necessary information
I communicated with the participants through emails and video calls, reassuring them that no prior knowledge of the service was necessary.
Before the remote testing sessions, I provided them with the project background, outlining the goal of evaluating service accessibility. Throughout, I encouraged open communication, emphasising to participants that we were testing the service, not them, which encouraged candid and honest feedback.

Tailoring the usability tests
It was important to familiarise myself with the specific accessibility needs of the proxy users to understand each person’s unique requirements. When testing with a participant with dyslexia who reported finding traditional text-heavy interfaces challenging, I asked them to describe their current environment and any assistive technologies they might use for dyslexia. During the test, I focussed on their interaction with fonts, line spacing, and visual cues to assess their content comprehension.

Crafting guided interactions
In remote sessions, I asked participants to use their main device and specified which browsers to use. Recognising potential challenges faced by proxy users who are unfamiliar with the service, I provided extra guidance and prompts to enhance clarity in task understanding. For example:
Original prompt: “Start the data submission journey and go through it as you normally would.”
Guided prompt: “Start the data submission journey by selecting option x on the homepage, and if you encounter any difficulties, feel free to ask for guidance.”

Observing and enquiring
As the remote setting made it more difficult to pick up on non-verbal cues, I used screen-sharing tools to observe participants’ facial expressions and gestures as they navigated through the webpages. I encouraged them to think out loud and share their preferences and dislikes. With their consent, I recorded the sessions for later review.
I observed closely for signs of difficulty and asked open-ended questions, such as:
“How did you feel navigating through that section?”
“How would you describe your experience using this feature?”

Engaging with empathy
Mindful of potential challenges faced by users with cognitive impairments, I approached remote testing with patience and empathy. I gave extra time for understanding, adjusted the testing environment based on their real-time feedback, and strategically built in breaks and buffers within the testing schedule. One participant made what was my favourite request: “Mind if I take a break to cuddle my cat?”

Using relevant tools and technologies
I facilitated the use of tools and assistive technologies as per user need to make the testing process smoother and more accurate. During a session, noting the need for screen magnification, I provided proxy users with the option to adjust the interface’s font size and contrast settings.

Would I recommend accessibility testing with proxy users?

Absolutely. The Project Leads observed these research sessions firsthand and described them as “eye-opening” and “fascinating”. But why?

The pros of accessibility testing with proxy users

The benefits of conducting accessibility testing with proxy users are nuanced and varied:

Tech-debt mitigation
In the absence of actual users with declared accessibility needs, accessibility testing with proxy users encourages the adoption of inclusive design and development practices from the outset - the foundation that a truly user-centered service is built upon. In testing, visually impaired users highlighted issues with cluttered screens and excessive scrolling. Their feedback revealed that cramming information into a small screen made it hard for users with visual challenges to understand the content.

Frustrated user staring at a laptop, stating: ‘A busy screen is hell.'
The insight from users with accessibility needs, together with feedback from our target users, prompted us to simplify the homepage, making it cleaner and more straightforward and reducing cognitive load. We validated these changes through further testing to ensure enhanced usability. Proxy users, with their unique needs, enable us to spot and fix accessibility issues early, helping avoid the accumulation of technical debt and costly retrofits later in a service’s development journey.

Ethical inclusivity
Engaging with diverse users is vital for inclusivity. When real users don’t declare accessibility needs, proxy users guide us in understanding diverse experiences. It’s not a checkbox exercise; it’s our ethical duty to ensure digital services are equitable for everyone. During testing, one proxy user emphasised the importance of truly grasping diverse user needs, stating: “I want options, not assumptions… It’s awfully good of you and your team to reach out to understand my experiences.”

A proxy user stating “I want options, not assumptions.”

Enhancing user experience through unbiased perspectives
Proxy users, especially those unrelated to the service or product being tested, bring a fresh perspective to the table. They offer insights without the bias of prior knowledge or experience, helping us see our product objectively. Their feedback acts as a powerful tool to uncover potential blind spots and create a more user-friendly experience.

Compliance with accessibility standards
Conducting accessibility testing, alongside accessibility audits, helps us meet the Web Content Accessibility Guidelines (WCAG) 2.2, which are based on 4 design principles: perceivable, operable, understandable, and robust.

A four-piece jigsaw puzzle representing the four design principles: perceivable, operable, understandable, robust.
By structuring the guidelines as principles rather than around specific technologies, WCAG emphasises the need to understand how people interact with digital content. This helps teams ensure that the service is accessible, identify areas for improvement, and reduce legal risks, while promoting ethical design and development practices.

Specific educational insights
In the instance of our education-focussed project, testing with the proxy users who were primarily teachers gave us valuable insights into the unique accessibility needs of education providers. Their feedback enabled us to develop and refine our service to align with the real needs of those in the sector.

The cons of accessibility testing with proxy users

While the benefits of involving proxy users are significant, it’s essential to acknowledge potential risks:

Representation gap
Proxy users, while sharing comparable accessibility needs, may not fully represent the experiences of the target user group. To address this, it’s essential to complement proxy user insights with targeted feedback from users with disabilities to bridge the representation gap.

Availability
Finding suitable proxy users for recruitment can be a challenge, potentially causing testing delays. In my project, this risk was mitigated by leveraging the client’s Accessibility Lab, a database of proxy users which was readily available, preventing potential recruitment challenges and minimising testing delays.

Intermediary role
Proxy users, as intermediaries, may unintentionally filter or misunderstand information because they might not fully grasp the nuances of the target user group’s experiences. To counter this, I structured testing sessions with extra guidance and prompts to minimise the risk of misinterpretation.

In conclusion

Effective leveraging of proxy users in accessibility testing requires a balanced approach.
While their insights are invaluable for inclusive design and early issue detection, it’s important to supplement their feedback with testing from actual users with disabilities whenever possible. Combining both approaches ensures a thorough evaluation of accessibility and usability. See you folks on the inclusive side!

Key takeaways

Inclusive design: Proxy users can play a crucial role in ensuring inclusive design for diverse user groups, especially when there are no declared users with accessibility needs in the user research pool.
Strategic decision-making: Gaining insights into the accessibility needs of a diverse audience can enable data-driven, informed choices.
Communication is key: Clear communication before and during testing sessions, and encouraging open feedback, creates a conducive testing environment.
Tailoring testing sessions: Adapting usability tests to address specific accessibility challenges enables a focused assessment of user interactions with the service.
Testing with empathy and flexibility: Prioritising users’ needs and conducting tests with patience and empathy are crucial.
Maintaining a balanced approach: While proxy user insights are invaluable, supplementing feedback with testing from actual users with disabilities ensures a comprehensive evaluation of accessibility and usability.

Useful resources

Understanding WCAG 2.2
WCAG 2.2 Map
Testing for accessibility

Contact information

If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch.

This article was originally posted by Sree on medium.com.
- WCAG 2.2 one year on: Impact on government services
WCAG 2.2 one year on: Impact on government services

by Ayesha Saeed

More than a year after the release of WCAG 2.2, what should you be doing as a government service? Ayesha, one of our Accessibility Leads, answers some key questions you may have about how to implement WCAG 2.2 if you haven't already started.

Overview:
What is WCAG?
Overview of the changes
What are the new guidelines?
Key questions on WCAG 2.2
Looking forward
Useful resources

What is WCAG?

The WCAG (Web Content Accessibility Guidelines) (opens in a new tab) are universal guidelines that are used by public bodies to ensure accessibility is built into digital services. The WCAG guidelines are broken down by levels:
Level A: Must do, basic requirements (legally required for public sector).
Level AA: Must do, removes further significant barriers (legally required for public sector).
Level AAA: Specialised support, most comprehensive.

Meeting the WCAG guidelines is one part of meeting legal accessibility requirements as a government service (for both public and internal users). Check out Piya’s article on government requirements (opens in new tab) from earlier in our accessibility series for details. You can also see understanding accessibility requirements for public sector bodies (opens in new tab) for a comprehensive breakdown.

Overview of the changes

The latest official version of WCAG 2.2 was published on 5th October 2023. This replaces the previous version, 2.1, which was published in 2018. WCAG 2.2 builds on and is compatible with WCAG 2.1, with added requirements. One success criterion, 4.1.1 Parsing, was removed in WCAG 2.2 as it was deemed redundant. WCAG 2.2 also addresses aspects related to privacy and security in web content. There are 9 further A, AA and AAA guidelines to be aware of, including: focus management, dragging movements, target size, consistent help, redundant entry, and accessible authentication.
6 of the new criteria are A and AA level, which are what government services are legally required to meet for WCAG 2.2, bringing the total of A and AA guidelines to 55. You can see the full details of the changes in the WCAG 2.2 introduction on the W3C website (opens in new tab).

What are the new guidelines?

Level A and AA:

2.4.11 Focus Not Obscured (Minimum) (AA): focus states must not be entirely hidden.
A graphic of a good example of two popup bubbles overlapping. You can partially see the focus on the popup behind.

2.5.7 Dragging Movements (AA): functionality must not rely on dragging. Alternatives such as buttons for left and right should be provided.
A graphic of a good example of a dragging function, with left and right arrows on either side. A hovering mouse shows how you can use the buttons and the dragging feature.

2.5.8 Target Size (Minimum) (AA): there can only be one interactive target in a 24px by 24px area.
A graphic of a good example of icons where there is only one interactive element in a 24px by 24px area.

3.2.6 Consistent Help (A): help mechanisms must appear in the same place on each page.
A graphic of a good example of two screens next to each other, with the help function located in the same top right-hand corner on both.

3.3.7 Redundant Entry (A): users must not be required to re-enter the same information, unless essential such as for security purposes. If the same information is needed twice, provide an option to automate the second entry.
A graphic of a good example of the option to use the same details for an address so a user does not have to enter the same information twice. In this example there is a checkbox to say the billing address being input is the same as your address input.

3.3.8 Accessible Authentication (Minimum) (AA): authentication must not require a cognitive test (exceptions for object recognition or personal content).
For example, provide compatibility with a password manager so a user doesn't have to input or transfer information for authentication.
A graphic of a good example of giving users several options for authentication, e.g. through the use of a password manager.

Level AAA:

2.4.12 Focus Not Obscured (Enhanced): focus states must not be hidden at all.
A graphic of a good example of two popup bubbles. You can fully see the focus on the popups and they do not overlap.

2.4.13 Focus Appearance: the focus indicator must have a contrast ratio of at least 3:1 and be at least 2px thick around the item.
A graphic of a good example of a clear focus around a button, with contrast of a minimum of 3:1 and 2px thickness. In this example a black outline is used on a light grey background.

3.3.9 Accessible Authentication (Enhanced): authentication must not require a cognitive test, with no exceptions. For example, provide compatibility with a password manager so a user doesn't have to input or transfer information for authentication.
A graphic of a good example of an authentication form with no cognitive test or CAPTCHAs to log in.

Key questions on WCAG 2.2

Q1: Does meeting WCAG 2.2 ‘break’ my accessibility progress?
No: a site that meets WCAG 2.2 will also meet 2.1 and 2.0, so earlier progress is preserved.

Q2: When do I start building and testing for WCAG 2.2?
Testing your service against WCAG 2.2 should be incorporated as soon as possible if you haven't already started. You should aim to conduct regular accessibility testing (manual, automated and against assistive technologies), so you can maintain an accurate understanding of how compliant your service is and prevent any surprises when it comes to a yearly audit. Do not rely solely on an annual audit to accessibility test your service, as this is only a snapshot in time and does not reflect ongoing maintenance of accessibility.
If it has been at least a year since your service was last audited, or it was audited against WCAG 2.1, you will need to conduct an audit again. You should also continuously conduct usability testing to ensure your service is meeting the needs of real users, and not just WCAG.

Q3: Do I need to update my Accessibility Statement?
You should reassess your service for WCAG and other legislation compliance every year, and update your accessibility statement to reflect this. As it is over a year since WCAG 2.2 was released, all services should now be testing against the WCAG 2.2 guidelines and updating their accessibility statement accordingly.

Q4: When will GDS start monitoring?
The GDS Monitoring Team started testing sites against the new WCAG 2.2 success criteria from 5th October 2024. Find out more information at changes to the public sector digital accessibility regulations (opens in new tab).

Q5: When will the GOV.UK Design System be updated?
The GOV.UK Design System Team have reviewed WCAG 2.2 (opens in new tab) and updated the design system, and included these changes in the latest GOV.UK Frontend v5.0.0 (opens in new tab). They have also provided guidance on how to meet WCAG 2.2, and which components, pages and patterns will be affected.

Q6: How is my accessibility automated testing impacted?
You should continue to use automated tools such as pa11y and axe-core to support testing in build pipelines. For axe-core, you can tag which level you want your tests to run against, so make sure you add the ‘wcag22aa’ tag to cover the new guidelines. Find out more at Axe-core 4.5: First WCAG 2.2 Support and More (opens in new tab). Semi-automated tools such as WAVE and Axe can still be used to pick up some accessibility issues. Automated and semi-automated tools do not cover all WCAG 2.2 guidelines, so it is important to continue to test manually, with assistive technology and with real users.
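Bespoke scripted checks can also cover some of that gap. For example, the new 2.5.8 Target Size (Minimum) criterion can be approximated by checking interactive elements' bounding boxes. A simplified Python sketch follows; it assumes you can export each target as x, y, width, height in CSS pixels, and it ignores the criterion's inline and "essential" exceptions, so treat it as a triage aid rather than a compliance verdict:

```python
import math

TARGET_MIN = 24  # CSS pixels, per WCAG 2.2 SC 2.5.8 (Target Size, Minimum)

def centre(rect):
    """Centre point of a rect given as (x, y, width, height)."""
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def meets_target_size(rect):
    """True if the target itself is at least 24x24 CSS pixels."""
    _, _, w, h = rect
    return w >= TARGET_MIN and h >= TARGET_MIN

def sufficiently_spaced(rect_a, rect_b):
    """Simplified spacing check for undersized targets: 24px circles
    centred on each target must not overlap each other."""
    return math.dist(centre(rect_a), centre(rect_b)) >= TARGET_MIN

icons = [(0, 0, 20, 20), (22, 0, 20, 20)]  # two 20px icons, 2px apart
print(meets_target_size(icons[0]))         # False: smaller than 24x24
print(sufficiently_spaced(*icons))         # False: centres only 22px apart
```

Feeding this the geometry from a browser automation run (for instance, element bounding boxes from your existing test suite) would flag clusters of small touch targets before an audit does.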
Looking forward

WCAG 3.0 (opens in new tab) is currently a Working Draft and aims to provide guidance for building for users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. WCAG 3.0 also aims to support a wider range of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. Content that conforms to WCAG 2.2 A and AA is expected to meet most of the minimum conformance level of the new standard but, since WCAG 3 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance.

Ensuring you factor in regular maintenance is paramount to keeping accessibility up to date. And remember, WCAG does not cover every scenario. Test with your users and conduct regular user research.

Useful resources

WCAG 2.2 and what it means for you (Craig Abbott) (opens in new tab)
Obligatory WCAG 2.2 Launch Post (Adrian Roselli) (opens in new tab)
What WCAG 2.2 means for UK public sector websites and apps (GDS - YouTube) (opens in new tab)
Testing for WCAG 2.2 (Intopia - YouTube) (opens in a new tab)
WCAG 2.2 Explained: Everything You Need to Know about the Web Content Accessibility Guidelines 2.2 (opens in a new tab)

Contact information

If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch.
- AI in action 1: Supporting service teams through the Service Standard
AI in action 1: Supporting service teams through the Service Standard

by Matt Hobbs

As digital public services evolve, so must the tools we use to build them. This series explores how Artificial Intelligence (AI) can responsibly support UK government service teams in meeting the Government Digital Service (GDS) Service Standard. From user research to accessibility testing, performance monitoring to service assessments, we’ll examine where AI can complement human expertise, enhancing delivery without compromising trust, transparency, or inclusion.

Overview
What is the Service Standard?
What is the Service Manual?
What is a Service Assessment?
Wrapping up
About the author

Welcome to a series exploring how Artificial Intelligence (AI) can support UK government service teams in meeting the Government Digital Service (GDS) Service Standard. As digital public services continue to evolve, so too must the tools and methods used to build them. AI, when applied thoughtfully and responsibly, has the potential to enhance delivery, improve user outcomes, and support those working in government to focus on what matters most: meeting real user needs. This series will explore how AI can play a role in supporting service teams at every stage of the service lifecycle, from discovery to live, and how it can complement the Service Manual’s practical guidance. Whether through natural language processing, data analysis, accessibility testing, or helping teams with performance monitoring, we’ll consider both current capabilities and future possibilities. This is not a call to automate everything, nor to substitute human judgement, but to embrace new tools in a way that strengthens delivery and accountability across government. Before we continue, let me cover a couple of important points...

What is the Service Standard?

The UK Government Service Standard is a set of points designed to help teams create and run effective, user-centred digital services.
Maintained by the Government Digital Service (GDS), it ensures that public services are accessible, efficient, and meet user needs. The standard promotes practices such as understanding users, using agile methodologies, testing services with real users, and making services secure and accessible. It's used throughout the development lifecycle to ensure quality and consistency across UK government digital services.

What is the Service Manual?

There may be a few readers who've never heard of the Service Manual. So, here's a brief history and overview. The UK Government Service Manual was introduced as part of the Government Digital Service (GDS) initiative, launched in 2011 to improve digital public services. Continuously updated, it reflects evolving best practices and legal requirements, ensuring government services remain effective and accessible for all users. The Digital Service Standard and the Service Manual are the foundations for what you need to complete a Service Assessment.

What is a Service Assessment?

A UK government Service Assessment is a structured evaluation process designed to ensure that digital services meet government standards before they go live or progress through key stages of development.

Approval stages in a Service Assessment:

UK government services typically go through 3 key Service Standard assessments:

1. Alpha assessment
Conducted at the end of the Alpha phase
Focuses on whether the service team has researched user needs, developed and tested prototypes, and has a plan for the Beta phase
Core evaluation criteria: user research, design, technology choices, and feasibility

2.
Beta assessment
Conducted at the end of the Beta phase
Evaluates whether the service has been tested with users, can handle expected demand, and meets accessibility and security standards
Some departments may also decide to run a private beta for certain services, testing them with a small group of invited users
In some cases, a service may remain in the Beta stage for an extended period
Core evaluation criteria: performance, scalability, accessibility, data security, and readiness for live deployment

3. Live assessment
Conducted before a service moves from Beta to Live (full public availability)
Ensures the service is sustainable, meeting user needs, and is continuously improved
Core evaluation criteria: performance monitoring, governance, data management, and ongoing user feedback integration

Service Standard criteria

Each assessment evaluates against 14 Service Standard points, some of which include:
Understanding user needs
Designing for everyone (inclusivity)
Making the service simple and accessible
Using open standards and scalable technology
Ensuring security and privacy

To progress to the next stage, service teams must pass these assessments. If unsuccessful, they are expected to resolve the issues highlighted and reapply for a future assessment.

Wrapping up

Some people might see using AI in the Service Standard and Service Assessment process as “cheating”: if AI does all the work, what’s left for the service team to do? But really, AI is just a tool to help things run more efficiently and save the UK government time and money. It’s not about replacing human expertise. It’s also important to remember that AI can sometimes get things wrong (what’s called a “hallucination”), so it’s critically important that teams sense-check what AI produces instead of just accepting it at face value.
Now that we’ve outlined the purpose and structure of the Service Standard and the role of service assessments, we’re ready to dive into the practical side: where and how AI can help. In the next post, we’ll begin exploring each of the 14 Service Standard points in turn, starting with what is arguably the most critical: understanding users and their needs. We’ll look at how AI can assist user researchers, support data analysis, and improve how teams gather insights, without losing the nuance or empathy that human researchers bring. So please stay tuned!

About the author

My name is Matt Hobbs — Principal Engineer (Frontend) and Guild Lead at Solirius Consulting, currently embedded in HMCTS. Before joining Solirius, I spent six years at GDS, leading on frontend development and shaping strategy across accessibility, performance, and digital best practice. I also wrote a series of blog posts documenting the performance improvements made to GOV.UK — covering everything from HTTP/2 and jQuery removal to Real User Monitoring. Well worth a read if you’re interested in practical, real-world frontend engineering in the public sector.

Why we focus on frontend performance
Speeding up GOV.UK with HTTP/2
How GDS improved GOV.UK’s frontend performance with HTTP/2 (Case Study)
Making GOV.UK pages load faster and use less data
How Real User Monitoring will improve GOV.UK for everyone
What we’ve learned from one year of Real User Monitoring data on GOV.UK
The impact of removing jQuery on our web performance
A Request For Comments (RFC) for enabling HTTP/3 on GOV.UK

Contact information

If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).
- Solirius Reply attends the Reply Xchange
Ayan Kar and Hamid Ali-Khan presenting at the Reply Xchange
Earlier this month, Solirius attended our very first Reply Xchange, a high‑energy event designed to explore the latest in technology, innovation, and digital experience. Hosted by Reply, the day brought together clients, partners, and teams from across the network for a packed programme of expert talks, interactive demos, and collaborative discussion. The goal: to connect people and ideas, share what’s working, and inspire bold thinking for the future.
Solirius was proud to contribute by presenting on the role of AI in delivering complex data migrations, a critical enabler for transformation programmes across government. We showcased how AI can enhance the accuracy, speed, and scale of migrations, reduce manual effort, and improve long‑term data quality and governance.
Our talk, delivered by Ayan Kar (Data Engineering Lead) and Hamid Ali-Khan (Head of Engineering), focused on some key AI themes:
Why modernising applications is critical
How AI can significantly enhance data migration accuracy and efficiency
How AI tools can improve development productivity
Why decommissioning legacy systems matters for achieving the best outcomes in data migrations
The Xchange left a strong impression on the Solirius team in attendance, leaving them feeling not only energised by what’s possible, but also more connected to the broader Reply community. It’s clear there’s real momentum, and we’re motivated to accelerate our AI capabilities, deepen collaboration across the network, and adapt innovative Reply solutions to better serve the needs of our public sector clients. From intelligent data services to AI-assisted delivery and decision support tools, we see a huge opportunity to unlock value and deliver lasting impact through thoughtful, human-centred innovation.
Members of the Solirius Reply team at the Reply Xchange

Contact information
If you would like to see the full presentation or speak with our Engineering or AI practitioners about how we can support your transformation efforts, please reach out to Ayan Kar or Hamid Ali-Khan via our contact form here (opens in a new tab).
- Lessons in accessibility: A day at the DfE Accessibility Lab and conversations with the experts
At the DfE Accessibility Lab, our colleagues Sree (User Researcher) and Claire (UX Designer) explored how assistive technologies are used, and where they can fall short when services aren’t designed with everyone in mind.
One crisp spring morning, as the sun finally pushed through the grey weight of winter, a user researcher, Sree, travelled from Newcastle and an interaction designer, Claire, journeyed from London, converging in Sheffield. Their destination: the Department for Education’s (DfE) Accessibility Lab. Their goal: to understand how digital services function for those who navigate the world differently.

Inside the Accessibility Lab: Where digital barriers become visible
From left to right: Claire, Sree and Jane at DfE’s Accessibility Lab, Sheffield
We expected a technical demonstration: a run-through of tools and accessibility best practices. What we got was something much more human: a window into the lived experience of those who rely on assistive technologies daily. Guided by Jane Dickinson, an accessibility specialist at DfE, we explored tools like Dragon, JAWS, ZoomText, and Fusion. Jane not only explained how they work but showed us how easily they can fail when services aren't built with accessibility in mind.

Insights from testing with assistive tools

Dragon: Voice recognition for hands-free navigation
Dragon voice control lets users navigate computers hands-free. But if clickable elements aren’t properly coded as buttons, Dragon can’t find them. Jane demonstrated how Dragon struggled with buttons on a DfE service and the BBC homepage because they weren’t coded as buttons. Dragon couldn’t recognise the “click button” command because the button was invisible to the tool, highlighting a major gap between design and code.

JAWS: Screen reader for non-visual navigation
JAWS relies on well-structured content: heading levels, labelled buttons, and descriptive links.
Jane showed how generic links like “Read more” or “Download” confuse JAWS users because they lack individual distinction or ARIA labels, making browsing chaotic and frustrating. As Jane put it: “If a page isn’t structured properly, it’s a nightmare to navigate.”

ZoomText: For low vision users
ZoomText is a magnification tool that helps users navigate visually. However, it requires users to hover over or click on links to have them read aloud, unlike JAWS, which reads automatically. At higher magnification, text can become distorted where the page has not been coded to handle zoom, affecting readability.

Fusion: Combining JAWS and ZoomText
Fusion offers magnification of up to 20x with auditory feedback, supporting individuals with partial vision loss. But Jane showed us that even a 3x zoom can cause layout issues, like pixelation and clipped content, especially when sites don’t reflow content properly.

Keyboard-only navigation
Keyboard navigation is essential for users who can’t use a mouse, relying on shortcuts like the Alt key. But inconsistent implementation makes things harder. Jane pointed out unmarked buttons on the BBC homepage that would leave keyboard users guessing: “If something isn’t labelled properly, it just gets skipped over.”

Captions for hearing impairments
Captions aren’t just for deaf users; they help everyone. But live captions often lag, making comprehension harder. Testing BBC video content, we saw captions fall out of sync with speech, making it difficult for a user to keep track.

Experiencing the world through the eyes of others
Sree and Claire testing visual simulation glasses
As part of our lab experience, we tested simulation glasses that alter vision, giving a general insight into conditions like:
Cataracts: everything looks blurred.
Tunnel vision: loss of peripheral vision, reducing situational awareness.
Left-sided hemianopia: half the visual field disappears, common after strokes or brain injuries.
It was a powerful reminder of how much of the digital world becomes difficult to use under these conditions, and of how inclusive, thoughtful design can prevent the digital barriers that some users face.
N.B. While simulation glasses offer a glimpse, they can’t replicate the full experience of visual impairment. They’re a starting point for empathy, not a substitute for speaking with and learning from real users who experience visual impairments.
The Visual Impairment North-East (Vine) Simulation Package

In conversation with accessibility experts
To deepen our understanding of accessibility, we interviewed Jane Dickinson and Jake Lloyd, two key accessibility specialists at DfE, to hear their insights.
Jane’s biggest frustration? Accessibility being bolted on at the end. “It’s not enough to test for accessibility. Real users need to shape the design from the beginning.” She also highlighted how many users hesitate to disclose their accessibility needs for fear of being seen as difficult. Even when reports are written to improve accessibility, they often go ignored. “I can spend a whole day writing a report, and sometimes nothing changes.”
Despite these challenges, Jane celebrated the wins, like a blind user who was able to access their payslip independently for the first time: “One of our blind users told me, ‘For the first time, I didn’t have to ask someone to read my payslip. I could do it myself.’ That made all the work worth it.” Even small changes, like properly marking up PDFs or labelling buttons, have a huge impact and can make a service more accessible.
Jake emphasised the importance of building for keyboard navigation and screen readers from the very start.
“There are so many accessibility issues that come from not thinking about keyboard accessibility… It affects focus, visibility, and how well voice and assistive tech tools work.”
He highlighted issues like repetitive, unclear links in patterns such as “Check your answers”: “Something like the ‘Check your answers’ pattern has links that just say ‘Change’… If you're just using a screen reader and you're navigating through a bunch of links… you're only going to hear ‘change’. So providing some hidden screen reader text, giving more context to that link, can be really helpful.”
This was another thoughtful reminder that different users read pages differently, and not everyone can see the visual context around written content.

A holistic approach to accessibility
The accessibility specialists broke down their layered approach to testing the accessibility of services:
Automated testing to catch common issues early.
Manual testing using only a keyboard or different zoom levels.
Assistive tech checks like screen readers and voice controls.
Code reviews to ensure correct HTML and component use.
As Jake put it, accessibility goes beyond the Web Content Accessibility Guidelines (WCAG) standards: “I’ll also record issues that don’t fail WCAG but still create barriers—like having to tab 30 times to reach an ‘apply filter’ button.”
Jake warned against treating accessibility as an afterthought: “Where teams haven't thought about accessibility and inclusive design up front and early on, complex issues tend to come out of that.”

Not boring. Not optional.
A myth Jake wants to debunk is that accessible design equals boring design. “You can still be innovative. Your website can look good and be accessible if you plan it that way from the start,” he said. “Unfortunately, some organisations continue to treat accessibility as an afterthought, which remains a cultural issue.”
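The fixes Jane and Jake describe are often tiny in code terms. As an illustrative sketch (the `govuk-visually-hidden` utility class is the GOV.UK Frontend convention; any equivalent visually-hidden CSS class works the same way), a clickable element coded as a native button becomes visible to Dragon and keyboard users, and hidden screen reader text gives a “Change” link the context Jake asked for:

```html
<!-- A real <button>, not a styled <div>: Dragon’s “click button”
     command, keyboard focus, and screen reader announcements all
     work because the native semantics are present -->
<button type="submit">Apply filter</button>

<!-- “Check your answers”: the visible text stays short, but a
     screen reader announces “Change name” rather than just “Change” -->
<a href="/change-name">
  Change<span class="govuk-visually-hidden"> name</span>
</a>
```

Unlike `display: none`, a visually-hidden class keeps the text in the accessibility tree while removing it from the visual layout, which is why screen readers still announce it.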
Our specialists pointed out that advocacy and awareness are key to changing this mindset: “Having people with actual lived experience that can demonstrate the way that they interact with digital content can be really powerful… Here's someone who is blind. They use a screen reader to navigate your service, and they can't do it.”
They stressed that one in four people have a disability. Can you afford to turn them away with inaccessible services?

Why accessibility matters for everyone
Jane and Jake made it clear: accessibility isn’t just for disabled users. It benefits all of us. Captions help on a noisy train. Good contrast helps in bright light. And if zooming to 400% breaks your layout, it’s not just low vision users who suffer. “If it’s not thought about up front, then it affects a lot of people.”

Accessibility isn’t a task: it’s a mindset
As user researchers and designers, we focus on how people interact with digital services. But in Sheffield, we were not the experts; we were the students. This wasn’t about checking off accessibility guidelines. It was about understanding what happens when those guidelines aren’t met. A missing label, a broken heading structure, or an unlabelled button: these aren’t small issues. Each one determines who gets to participate and who doesn’t. Accessibility is also never ‘done’; it is an ongoing activity that requires the whole team's input to maintain.
As we left Sheffield, catching our trains to opposite ends of the country, we carried more than just knowledge. We carried a quiet but certain resolve to champion accessibility. The best accessibility work doesn’t “help” people. It supports their independence and ensures they don’t have to ask for help in the first place.
Useful resources
Department for Education accessibility and inclusive design training
Making your service accessible: an introduction
Department for Education accessibility and inclusive design manual
W3C: Making the Web Accessible
W3Cx: Introduction to Web Accessibility
Sara Soueidan: The Practical Accessibility Course

About the authors
Sree is a Lead User Researcher specialising in uncovering user needs and delivering data-driven insights. A CDDO-DfE trained Service Assessor, she champions user-centricity and accessibility in government services. When she’s not diving into research, Sree can be found roaming the countryside with her husky, cooking up a storm, or curling up with a good book.
Claire is a Senior User Experience Designer, specialising in interaction design. She advocates for accessibility and strives to bridge the gap between usability and inclusion. Outside of work, Claire enjoys exploring new places and experimenting with new recipes.

Contact information
If you have any questions about our research and design services or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).