
  • How we used T-shirt sizing to scope a complex multi-service Justice platform

How we used T-shirt sizing to scope a complex multi-service Justice platform by Caity Kelly

How do you scope and estimate 170+ features across seven work packages, involving multiple delivery teams, disciplines, and stakeholder groups? In our work with HMCTS on the Housing Disputes programme (more commonly known as Renters Reform), we ran an extensive T-shirt sizing exercise that became a shared planning language across service, product, architecture, design, and technical delivery. Here's how we did it, and what we learned.

What is the 'Renters Reform' Programme?

His Majesty's Courts and Tribunals Service (HMCTS) is responsible for administering the criminal, civil and family courts and tribunals in England and Wales. It plays a central role in delivering justice in a fair, efficient, and modern way.

One of HMCTS's flagship programmes is the Housing Disputes Policy Infrastructure Project (HDPIP) - more commonly known as Renters Reform. This major policy and digital transformation initiative aims to streamline how real property disputes (such as those between landlords and tenants) are resolved in England and Wales, and to align the process with upcoming legislation currently progressing through Parliament.

The service will support users through a fully digital process (replacing one that is currently largely paper based), from initial case creation and submission through to tribunal decision and enforcement, with an emphasis on fairness, transparency, and accessibility for all users - citizens and court staff alike. The aim is to launch the service within 18 months of Royal Assent (the point at which the Bill passed by Parliament becomes law), aligning with the new legal framework for property disputes once enacted.

Background

Our team at Solirius Reply, alongside a blended team of other suppliers, is helping HMCTS deliver three digital interfaces which will exist as one joined-up service on GOV.UK:

• Citizen Interface (C-UI): For renters, landlords, managing agents, and other citizens to initiate or respond to disputes.
• Expert Interface (Ex-UI): For court staff, tribunal clerks, and judges to manage and resolve cases.
• Enforcement Interface: For bailiffs and enforcement agents to view and act on court-ordered outcomes.

Each interface supports different roles, access levels, workflows, and data integrations. Together, they form a single digital ecosystem designed to support our service vision of a fairer, faster, and more accessible housing disputes resolution system.

Given the cross-cutting nature of many features (e.g. impacting more than one interface), as well as the involvement of multiple delivery partners (technical, design, architecture, and policy), we needed a collaborative and time-efficient way to estimate effort across the board. With such breadth and complexity, it became clear that scoping the delivery effort called for a commonly used Agile estimation technique - T-shirt sizing - both to plan effectively and to set delivery expectations.

What we did

As the service, product, architecture, and design teams moved through an analysis of the as-is system, they identified seven major work packages (WPs) comprising over 170 features that would form the foundation of an MVP release and beyond. These included everything from case intake, document submission, and identity verification to evidence review, tribunal hearing preparation, enforcement workflows and much more. Each work package varied in complexity, technical dependencies, and level of design maturity.
With multiple suppliers and HMCTS teams involved (technical, UCD, architecture, policy, service, product), we needed a shared and efficient way to:

• Understand the relative complexity of each feature
• Identify dependencies
• Support delivery planning and MVP scoping

The T-shirt sizing exercise proved to be an excellent tool for developing a shared language and understanding of the scope of work across teams.

Tools and methods

Why T-shirt sizing?

T-shirt sizing is an Agile estimation method that uses intuitive size categories (e.g. Small, Medium, Large) instead of fixed hours or story points. We used the following standard:

Size | Meaning | Sprint equivalence
S (Small) | Simple, straightforward feature | Half a sprint (up to 5 days)
M (Medium) | Self-contained but non-trivial | 1 sprint (10 days)
L (Large) | Moderately complex, possibly cross-functional | 1–2 sprints
XL (Extra Large) | High complexity or unknowns | 2–3 sprints
XXL (Extra Extra Large) | Too big; must be broken down | 3+ sprints

By standardising these definitions across teams, we could have more productive conversations - even when working remotely or across disciplines.

We also chose T-shirt sizing because it helps avoid the law of diminishing returns. While it might seem logical that the more time we spend estimating, the more accurate we'll be, this isn't the case. In practice, spending too long on estimation often yields only marginal gains in accuracy, and can become a wasteful exercise. By keeping things light and collaborative, we were able to agree quickly on relative effort, without overthinking it. The goal wasn't to get the "perfect" number, but to reach a shared, good-enough understanding that the team could act on. Estimating as a group also helped reduce the illusion of certainty and reminded us that estimates are just estimates and are likely to change when more information becomes available.

[Figure: The law of diminishing returns - accuracy versus effort, showing a curved line peaking at medium effort.]

Running the workshops

Due to the scale of the service, we ran a series of online workshops via Microsoft Teams, each focused on a single work package. Our approach:

• Timeboxing - each workshop was time-boxed to 45 minutes
• Discussion time - each feature discussion was capped at 10 minutes, to keep us moving
• Solirius Delivery Manager facilitation - gathering input from product and service teams, architecture, user centred design, HMCTS subject matter experts (SMEs), and our technical leads
• Use of Figma - used to gain a shared understanding of the features, reviewing draft service and screen designs live on the calls
• Use of Excel on SharePoint - used to log the estimated T-shirt sizes and record blockers and dependencies (that spreadsheet was later transformed into our draft delivery Gantt chart)
• Use of Jira - used to connect estimates to our epics in the technical delivery backlog

Wherever conversations ran long or diverged, we captured a note on the T-shirt sizing spreadsheet and moved on - enabling velocity without sacrificing key stakeholder input.
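For anyone who wants to turn these sizes into numbers, here is a minimal sketch, in Python, of how the sprint-equivalence column above can feed the kind of capacity modelling described later under "Our results" (what two pods versus four pods could deliver). The size-to-sprint mapping takes the midpoint of each range in the table; the feature counts and pod numbers are illustrative placeholders, not real programme data.

```python
# Illustrative T-shirt size to sprint-equivalence mapping (midpoints of the
# ranges in the table above). XXL items should be broken down before planning.
SPRINTS_PER_SIZE = {"S": 0.5, "M": 1.0, "L": 1.5, "XL": 2.5}

def total_sprints(sized_features: dict[str, int]) -> float:
    """Sum the sprint equivalents for a set of sized features."""
    return sum(SPRINTS_PER_SIZE[size] * count for size, count in sized_features.items())

def sprints_needed(sized_features: dict[str, int], pods: int) -> float:
    """Rough elapsed sprints if the work is spread evenly across delivery pods."""
    return total_sprints(sized_features) / pods

# Hypothetical work package: 10 Small, 8 Medium, 5 Large, 2 Extra Large features.
work_package = {"S": 10, "M": 8, "L": 5, "XL": 2}

for pods in (2, 4):
    print(f"{pods} pods -> ~{sprints_needed(work_package, pods):.1f} sprints")
```

Even a crude model like this is enough to anchor a forecasting conversation; in practice the sizes were logged in the Excel spreadsheet and fed into the draft Gantt chart described above.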
What we learnt

Cross-discipline collaboration

One of the most valuable aspects of this process was the diversity of perspectives in the room and the cross-discipline dialogue it enabled. Attendees included:

• HMCTS product owners clarifying feature intent and priority
• Technical architects identifying systems integration and security considerations
• Service and content designers flagging potential usability or accessibility implications
• Delivery leads from multiple partner teams managing dependencies and timelines
• HMCTS business analysts, subject matter experts and policy advisors
• Solirius developers, testers and architects

This cross-functional collaboration helped surface hidden complexity. For example:

• Technical features that seemed simple from the outside because they could reuse existing code from other GOV.UK services (e.g. email notifications) ended up being sized as "large" once we accounted for the significant new content, design, and accessibility work, and for multiple delivery channels (SMS, email, in-app), templates, and translation needs.
• Integration with legacy systems or GOV.UK services added layers of complexity.
• Features affecting two or more interfaces required discussion around ownership and sequencing.
• Some "XXL" features highlighted areas where requirements were still too vague - prompting further discovery before delivery could be planned.

Our results

This exercise delivered far more than just a set of estimates. It helped anchor planning, align stakeholders, and build a shared understanding of complexity across the Renters Reform programme.

✅ Shared understanding: Everyone left the workshops with a clearer view of what needed to be built, why, and how much effort it might take. This helped reduce assumptions and build a shared understanding across disciplines.

✅ Planning confidence: We could group features by estimated effort, identify critical paths, and flag areas requiring further discovery. This gave us greater certainty around sequencing of work. This resulted in:

• 173 features sized across 7 work packages
• Early identification of blockers, dependencies, and discovery gaps
• A shared delivery language adopted across technical, UCD, and policy teams

✅ Clearer MVP definition: "XL" and "XXL" features often sparked discussion about scope - helping us prioritise must-haves vs. nice-to-haves, and making it easier to define a realistic MVP. This built alignment across teams on MVP scope and supported realistic roadmap conversations with programme leadership.

✅ Delivery velocity baseline: By estimating in sprint equivalents, we could model different delivery scenarios, e.g. what two technical pods could deliver in six sprints versus what four could deliver, which enabled more informed forecasting. This let delivery teams map estimates against real capacity and helped shape sequencing for the Citizen, Expert, and Enforcement interfaces.

✅ Improved stakeholder engagement: The structured, time-boxed format made it easier for stakeholders to participate meaningfully without being overwhelmed. It also ensured we covered all 173 features efficiently.
• We captured decisions and rationale, alongside red flags raised by stakeholders, for transparency
• The estimation model is now being reused across the programme as a best-practice approach
• Outputs now serve as a baseline for backlog refinement and prioritisation

✅ Shared ownership of estimates: One of the most valuable outcomes was the sense of shared ownership that emerged through the process. Rather than handing down estimates from a single team or role, we sized features together as a cross-functional group. Everyone's perspective from delivery, design, architecture, and product was heard and considered. This helped build trust and alignment, and ensured that the estimates reflected the collective understanding of the team. Because the group had worked through the complexity together, the final estimates weren't just accepted, they were owned by the entire team. That ownership has helped sustain momentum and buy-in during ongoing planning and prioritisation.

Navigating team challenges

Of course, there were challenges:

• Time constraints meant we couldn't deep-dive into every feature. We mitigated this by capturing flags in the sizing document for follow-up.
• Feature overlap across interfaces occasionally led to confusion; we tackled this by defining which team "owned" the feature for estimation purposes.
• Remote working can limit engagement, but the use of collaboration tools, strong facilitation, and clear pre-reads helped keep everyone aligned and involved.

Outcomes and next steps

As a result of the exercise, we now have:

• A fully sized feature set across all seven work packages
• Clearer MVP priorities and sequencing
• Early identification of technical risks and design gaps
• A foundation for delivery planning across interfaces and suppliers

This work has already influenced sprint planning, architecture decisions, and roadmap alignment.

Reflections

In multi-interface services, early alignment of scope and expectations is crucial. Our experience showed that T-shirt sizing, a lightweight yet effective estimation method, can achieve this without heavy documentation or lengthy planning. This approach helped us understand scope, uncover hidden complexities, and plan collaboratively, avoiding over-analysis and the law of diminishing returns by spending just enough effort to reach a shared, useful estimate. Ultimately, it fostered trust, clarity, and shared ownership across teams, which are vital for successful delivery.

Our key factors for success included:

• A clear process
• Shared definitions
• Strong facilitation
• Shared ownership
• Crucially, collaboration across roles

While seemingly simple, T-shirt sizing, when executed correctly, cultivates a deeper shared understanding - an essential element for programmes of this scale.

About the author

Caity Kelly is a Senior Delivery Consultant at Solirius Reply, currently supporting HMCTS on the Renters Reform programme. She works at the intersection of agile delivery, digital transformation, and service design in Government. If you have any questions about our Delivery services or you want to find out more about other services we provide at Solirius Reply, please get in touch (opens in a new tab).

  • The lost art of data engineering 1: a data engineer's chronicles

The lost art of data engineering 1: a data engineer's chronicles by Ara Islam

This series aims to help you take a step back and ground yourself in the core principles that every data team must understand. We will focus on the data engineering process, which is the art of preparing the most valuable ingredient your company has at its disposal: your data.

In today's fast-paced world of data, companies are racing to leverage the latest trends and innovations to gain a competitive edge, with AI most recently taking centre stage as the key buzzword and topic of every C-suite board meeting. If you are new to the data space, it's no wonder you might struggle to see past the spider webs of fancy terms and exotic ideas that seem to appear almost daily. Yet for any company to use the latest and greatest instruments of competitive edge, like AI chatbots or Business Intelligence (BI) reports, it needs a sophisticated backend structure: one that allows your data to bend to your needs while staying organised, accurate and governed.

Before we can get into some of the topics of data engineering, it's important to first define what a data engineer is. A Data Engineer is a person who designs, builds, and maintains data systems. This definition can overlap with the responsibilities of other roles within a data team, such as Data Architects, who also work on part of the design. A Data Architect may join a project before a Data Engineer and take the lead on the overall process and infrastructure. However, a Data Engineer's input is essential to designing how data transforms and moves across systems.

A Business Analyst may be asked to build a report to aid commercial decision making. But as the demand for reports increases and they become more critical, an analyst alone isn't enough: they would have to sacrifice either accuracy or time to deliver, both of which can affect a business's trust in data and the effectiveness of its decisions. That is why a Data Engineer is essential in data-driven transformations. They enable Data Analysts, Data Scientists and ML Engineers to focus on building reports and models quickly and with great certainty.

Throughout this series, we will touch on the core principles of Data Engineering. Whether you are a Data Engineer yourself or a member of a data team, the aim is to gain a deeper appreciation for the craft, and to understand why it's so important.

Contact information

If you have any questions about our Data Engineering services, or you want to find out more about other services we provide at Solirius Reply, please get in touch (opens in a new tab).

  • AI in action 4: Supporting service teams through the Service Standard technology decisions

AI in action 4: Supporting service teams through the Service Standard technology decisions by Matt Hobbs

In this final article of our AI in action series, we turn our attention to the technological foundations that underpin modern government services and consider how these foundations must evolve to meet emerging opportunities and challenges. Throughout this series, we have explored each of the 14 points of the government Service Standard to examine how artificial intelligence (AI) can support service teams and shape the future of public services.

If you're joining partway through, you may want to read the introduction, which outlines how AI can support government service delivery and sets the context for this discussion. We then explored Service Standard points 1 to 5, focusing on user needs, accessibility, and joined-up experiences, followed by points 6 to 10, which examined how AI can support multidisciplinary teams, agile working, continuous improvement, and secure delivery.

Now, we'll explore points 11 to 14 of the Service Standard - areas that are less about interfaces and workflows, and more about the underlying systems, infrastructure, and culture that enable services to be sustainable, open, and resilient. Points 11 to 14 cover choosing the right tools and technology, making new source code open, using open standards, and operating a reliable service. This work may not always be visible to the public, but these principles are what ensure that digital services are trustworthy, cost-effective, and fit for the long term.

AI can now assist with everything from technology selection and licensing, to automated testing and deployment, to maintaining live services in real time. And as AI tooling becomes more sophisticated, it's increasingly capable of helping teams uphold these standards not just at launch, but throughout the service lifecycle. Let's explore how AI is supporting these back-end foundations, and where future innovation might unlock even more potential for collaboration, openness, and reliability across government.

Point 11. Choose the right tools and technology

Service Standard summary: Point 11 of the GOV.UK Service Standard advises teams to choose tools and technologies that support building high-quality, cost-effective services while staying adaptable for the future. It emphasises using automation, making smart build-or-buy decisions, leveraging common platforms, avoiding vendor lock-in through open standards, and managing legacy systems effectively.

Existing AI tooling:

• AI-Powered Code Analysis Tools: Tools like GitHub Copilot, SonarQube, or DeepCode assist in reviewing code quality, suggesting improvements, and flagging technical debt early, helping teams choose better implementation paths.
• Automated Cloud Cost Optimisers: Services like AWS Cost Explorer with AI-powered recommendations help optimise infrastructure choices, right-size services, and avoid over-provisioning (a simple right-sizing heuristic is sketched after this list).
• Chatbots for Vendor Research: AI chat interfaces help teams quickly compare tools, read documentation, and analyse trade-offs between technologies or vendors.
• AI-Driven Testing and Monitoring: Tools like Testim or Mabl automate and enhance test coverage using AI, which helps ensure that the chosen tech stack is reliable and scalable.
• Natural Language to Query/Data Tools: Tools like OpenAI's Codex or Microsoft Copilot allow non-technical users to interrogate systems or suggest tools via plain language, democratising decision-making on tools.
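To make the right-sizing idea above a little more concrete, here is a minimal sketch of the kind of rule a cost-optimisation tool might apply, written in plain Python. The instance names, utilisation figures and thresholds are invented for illustration; real services such as AWS Cost Explorer apply far richer models than this.

```python
# Toy right-sizing heuristic: flag instances whose average CPU utilisation
# suggests they are over-provisioned. All data below is illustrative.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    vcpus: int
    avg_cpu_percent: float  # average utilisation over the review window

def right_size_recommendation(inst: Instance, low_threshold: float = 20.0) -> str:
    """Recommend a smaller size when sustained utilisation is low."""
    if inst.avg_cpu_percent < low_threshold and inst.vcpus > 1:
        return f"{inst.name}: consider halving to {inst.vcpus // 2} vCPUs"
    return f"{inst.name}: leave as-is"

fleet = [
    Instance("citizen-ui-web", vcpus=8, avg_cpu_percent=12.5),
    Instance("case-api", vcpus=4, avg_cpu_percent=63.0),
]

for inst in fleet:
    print(right_size_recommendation(inst))
```

In practice a team would feed a check like this from real monitoring data and review the recommendations before acting on them.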
Future AI innovations:

• AI Architects / Decision Advisors: Smart assistants that could suggest entire architecture stacks based on your business context, team skill level, and legacy dependencies.
• Predictive Tech Debt Modelling: Tools that could forecast the long-term implications (cost, maintainability) of tech choices using historical data and project-specific inputs.
• Autonomous Procurement Bots: AI systems that could handle early-stage vendor outreach, price negotiation, and integration feasibility analysis to streamline procurement.
• Context-Aware Build-vs-Buy Recommenders: AI that could analyse organisational data, time constraints, and cost to recommend the best mix of bespoke development vs. off-the-shelf tools.
• Self-Adaptive Infrastructure Planning: AI systems that could not only recommend, but automatically adapt and refactor infrastructure as service needs evolve or usage spikes.

Point 12. Make new source code open

Service Standard summary: "Make new source code open" advises that all new government service source code should be openly accessible and reusable under appropriate licences to promote transparency, reduce duplication, and lower costs. Teams are encouraged to develop code publicly from the outset, ensuring sensitive information is excluded, and retain intellectual property rights to facilitate reuse. Exceptions will apply for code tied to unannounced sensitive policies.

Existing AI tooling:

• Automated Code Review for Sensitive Data: Tools like GitHub Copilot and DeepCode can flag hardcoded secrets, credentials, or personal data before code goes public (a minimal example of such a scan appears at the end of this point).
• AI-Powered Documentation Generation: Tools like Mintlify, Tabnine, or even ChatGPT can generate clear, developer-friendly documentation to make open-source code easier to understand and reuse.
• Open-Source Licence Selection Support: AI chatbots and tools can guide developers in choosing appropriate open-source licences (e.g. MIT vs GPL), making compliance simpler.
• Code Quality and Security Scanning: AI-enhanced tools like Snyk or SonarQube help ensure open code is clean, consistent, and secure before being published.
• Automated Issue Triage: NLP models can help maintainers tag and sort GitHub issues or pull requests, speeding up community collaboration.

Future AI innovations:

• Autonomous Redaction Bots: AI agents could scan and redact sensitive data, environment variables, or internal logic automatically before code is pushed to a public repo.
• Intelligent Open-Source Readiness Advisors: AI tools could assess a private codebase's readiness for open sourcing, providing a checklist or roadmap for teams to completely open-source their service code.
• Adaptive Licensing Engines: AI could analyse dependencies and business goals to suggest or automatically apply the most appropriate open-source licence dynamically.
• Multi-language Documentation Bots: Future AI could generate documentation in multiple languages to expand accessibility and global reuse of government code.
• AI Legal Assistants: AI tools could review the legal implications of publishing specific code, highlighting potential compliance or intellectual property issues.
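Before moving on to point 13, here is a minimal sketch of the kind of check that sits behind the secret-flagging and redaction ideas above: a simple pattern scan over source files before they are pushed to a public repository. The patterns and the "src" directory are illustrative only; production scanners combine entropy checks, provider-specific formats and context-aware models rather than a handful of regular expressions.

```python
# Toy pre-publication scan for hardcoded secrets. Patterns are illustrative;
# real scanners are far more thorough.
import re
from pathlib import Path

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "Password assignment": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one source file."""
    findings = []
    text = path.read_text(errors="ignore")
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            findings.append(f"{path}:{line_no}: possible {label}")
    return findings

if __name__ == "__main__":
    # Hypothetical repository layout; point this at the code you plan to open up.
    for source in Path("src").rglob("*.py"):
        for finding in scan_file(source):
            print(finding)
```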
Point 13. Use and contribute to open standards, common components and patterns

Service Standard summary: Point 13 emphasises the importance for service teams to utilise and contribute to open standards, common components, and patterns. This approach allows teams to leverage existing solutions, enhancing user experience and cost efficiency. Teams are encouraged to use standard government technology components, maximise technological flexibility through APIs and authoritative data sources, and share any new components or patterns they develop, such as by contributing to the GOV.UK Design System. Additionally, when creating potentially useful data, services should publish it in an open, machine-readable format under an Open Government Licence, ensuring sensitive or personal information is appropriately protected.

Existing AI tooling:

• AI Code Review and Refactoring Tools: Tools like GitHub Copilot or Amazon CodeWhisperer can help teams identify non-standard code and refactor it to align with open standards or existing component libraries.
• Automated Documentation Generation: AI tools like GitHub Copilot, Swimm, Mintlify, Documatic, and Codex can help government teams generate and maintain clear, up-to-date documentation for APIs, services, and components, improving transparency, reuse, and onboarding across departments.
• Design Pattern Recognition: AI can scan repositories and identify reusable UI or service patterns across services, helping teams understand where standard components are being used (or could be).
• Component Matching Tools: AI can suggest existing GOV.UK Design System components when a developer starts building something similar, reducing duplication and encouraging reuse.
• Open Data Quality Checks: AI can validate open data for formatting issues, accessibility, or privacy risks, ensuring it's published in compliant and useful formats.

Future AI innovations:

• AI-Driven Pattern Libraries: Imagine a tool that could automatically create and evolve a design/component library by learning from thousands of public services and interfaces across government.
• Conversational Component Builders: Voice/chat interfaces that could let developers or designers describe a need in plain English, and the AI returns a standard component or generates one, ready for review and inclusion.
• Predictive Contribution Suggestions: AI tools that could analyse what a team is working on and proactively suggest which components, standards, or patterns they could contribute back to central libraries, with boilerplate documentation and tests.
• AI Code Compliance Enforcement: Advanced AI linters that could not only point out code issues but teach the team how to align their work with government standards interactively, like a smart mentor.
• Semantic Data Publishing Assistants: Tools that help teams model, tag, and publish new datasets in machine-readable, standard-compliant formats, using semantic web technologies and natural language interfaces.

Point 14. Operate a reliable service

Service Standard summary: The GOV.UK Service Manual advises that online government services must be reliable, available, and responsive at all times. This involves maximising uptime, enabling frequent deployments without disruption, regularly testing in live-like environments, implementing robust monitoring and response plans, and addressing any organisational or contractual barriers to service reliability.

Existing AI tooling:

• Anomaly Detection and Alerting: AI models can monitor system metrics and logs in real time to detect unusual patterns like latency spikes or error rates. Tools like Datadog Watchdog use machine learning to surface these anomalies automatically, helping teams act before users are impacted (a bare-bones version of this idea is sketched after this list).
• Predictive Maintenance: By analysing historical performance data, AI can predict potential failures in infrastructure or applications. Platforms such as Amazon DevOps Guru and Azure Monitor leverage machine learning to forecast issues and recommend proactive fixes, reducing unplanned downtime.
• Automated Incident Triage: AI can automatically categorise and prioritise incidents, and even route them to the appropriate teams. PagerDuty's Intelligent Triage uses machine learning to consolidate related alerts and assess severity, enabling faster, more accurate responses.
• Load Forecasting: Machine learning can predict traffic patterns based on usage history, helping systems scale resources dynamically. Google Cloud's AI Forecasting tools support infrastructure teams in anticipating demand and adjusting capacity before bottlenecks occur.
• Intelligent Log Analysis: AI-powered tools can scan and summarise vast amounts of log data to highlight root causes and potential solutions. Platforms like Logz.io and Elastic's machine learning features apply anomaly detection and natural language processing to make logs more actionable.
• Test Automation with AI: AI can improve software quality by generating and prioritising test cases based on real user behaviour. Tools like Testim and Mabl use machine learning to create adaptive, resilient automated tests that evolve alongside the application.
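To ground the anomaly-detection idea referenced in the list above, here is a minimal sketch of one of the simplest underlying techniques: flagging latency samples that sit several standard deviations away from the recent mean. The numbers are invented, and commercial tools such as Datadog Watchdog layer far more sophisticated models (seasonality, correlated signals, noise suppression) on top of this basic idea.

```python
# Minimal anomaly detection over a latency series using a rolling z-score.
# Sample data is invented; real monitoring tools use much richer models.
from statistics import mean, stdev

def flag_anomalies(samples: list[float], window: int = 20, threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates from the trailing window by more than threshold sigma."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Invented response-time series (ms) with a spike near the end.
latencies = [120 + (i % 7) for i in range(60)] + [480, 510, 125, 122]
for idx in flag_anomalies(latencies):
    print(f"sample {idx}: {latencies[idx]} ms looks anomalous")
```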
Future AI innovations:

• Self-Healing Systems: Services that could automatically detect, diagnose, and correct issues in real time with minimal human intervention, like restarting components or rolling back code.
• Autonomous Release Pipelines: AI systems that could decide the safest deployment windows and run dynamic risk assessments, pausing or altering deployments if anomalies are predicted.
• AI-Driven UX Monitoring: Tools that could interpret user sentiment or behavioural cues to detect subtle experience degradation before technical metrics reflect an issue.
• Cognitive Load Prediction for Engineers: Future AI might help balance incident response load across teams, considering stress, alert fatigue, or previous workloads.
• Cross-Service Correlation Engines: AI could automatically correlate incidents across microservices or departments to pinpoint systemic failures more accurately and quickly.
• Proactive Compliance Monitoring: Smart systems could monitor changes in regulations and scan services to detect potential compliance issues before they impact service reliability.

Conclusion

Throughout this series, we've looked at how AI can support each of the 14 points in the UK Government Service Standard. From improving user research and simplifying journeys, to enhancing security and maintaining reliable infrastructure, AI is already beginning to transform how service teams operate. We've explored how current tools, across disciplines, can reduce repetitive work, improve decision-making, and help teams focus on what matters most: delivering accessible, effective, and inclusive public services. We've also considered what might be possible in the near future, where AI acts as a co-pilot across design, delivery, operations, and beyond.

Crucially, we've acknowledged that AI is not a silver bullet. It must be applied thoughtfully, safely, and ethically. The UK Government's AI Playbook provides a clear foundation for doing just that, giving teams the frameworks, training, and principles needed to explore AI without compromising public trust or accountability.
To further support implementation, the government has introduced a series of AI training courses through Civil Service Learning and Government Campus, developed in partnership with leading technology providers and the Government Skills unit. These learning resources are designed to equip civil servants with the confidence and expertise to apply AI effectively in their roles.

As AI capabilities mature, so too must our approach to delivery. The opportunity is not just to use AI for efficiency, but to use it as a force multiplier for a better, more human-centred government. If you work on digital services in the public sector, now is the time to start evaluating where AI might make your work more focused, inclusive, or sustainable. Not by replacing expertise, but by extending it.

Thank you for following along with this series. I hope it has sparked ideas, opened questions, and helped you see AI as a practical and responsible enabler of better public service delivery. As always, I'd welcome your feedback and perspectives, especially as this space continues to evolve. Open collaboration and ongoing dialogue will be essential as we navigate this emerging, AI-enhanced landscape together.

Contact information

If you have any questions about our AI initiatives, Software Engineering Service, or you want to find out more about other services we provide at Solirius Reply, please get in touch (opens in a new tab).

  • AI in action 3: Supporting Service Teams through the Service Standard Strengthen Delivery

AI in action 3: Supporting Service Teams Through the Service Standard Strengthen Delivery

In this third post of the series, building on our introduction to AI in public service delivery and our exploration of how AI can directly support teams, we now shift focus to delivery. We explore how AI can support service teams in structuring their work, iterating effectively, and safeguarding services.

The Service Standard points covered here (points 6 to 10) focus on what it takes to run a high-functioning digital service team: building multidisciplinary teams, adopting agile methods, improving frequently, ensuring security, and defining success. These are not abstract ideas, they are the operational backbone of trustworthy, responsive government services.

AI offers new possibilities across each of these areas. Whether it's helping teams collaborate more effectively, assisting agile planning, surfacing insights from user feedback, or detecting security threats in real time, AI can be a critical partner in strengthening delivery. The opportunity here is not to replace human expertise, but to reduce friction and empower teams to focus on strategic, high-value work. Let's explore where current AI tooling is already adding value, and where future innovation might fundamentally reshape how government teams deliver services.

Point 6. Have a multidisciplinary team

Service Standard summary: Point 6 of the Service Standard says that a multidisciplinary team is essential for creating and operating a sustainable service. Such a team should encompass a diverse mix of skills and expertise, including decision-makers who are integrated into the team to ensure accountability and swift responsiveness to user needs. The composition of the team should align with the current phase of service development and include members familiar with relevant offline channels and necessary back-end system integrations. Additionally, the team should have access to specialist expertise, such as legal or industry-specific analysis, and ensure that any collaboration with contractors or external suppliers is sustainable.

Existing AI tooling:

• Automated Meeting Summaries & Action Items: Tools like Otter.ai or Microsoft Teams AI can transcribe meetings, highlight key decisions, and assign action items, helping teams stay aligned regardless of discipline.
• Role-Aware Knowledge Management: AI platforms like Notion AI or Confluence AI can organise knowledge tailored to different roles (e.g. developers, designers, policy experts), making information more accessible and contextual.
• Cross-Team Communication Aids: AI chatbots can bridge knowledge gaps by answering team questions on project-specific jargon, legal requirements, or tech architecture, which is helpful for non-specialists.
• Candidate Matching for Team Building: AI-driven HR tools (e.g. HireVue, Eightfold) can recommend candidates with diverse skill sets to fill gaps in multidisciplinary teams.
• Design & Prototyping Assistants: Tools like Figma AI and Uizard can help non-designers contribute to early-stage prototypes, encouraging more inclusive collaboration.
• Sentiment & Collaboration Monitoring: AI-powered analytics in tools like Slack or Microsoft Viva can flag potential communication breakdowns or burnout risk in teams.

Future AI innovations:

• Dynamic Team Composition Engines: AI could analyse project goals, current team skills, and workload to recommend optimal team structures in real time, like a "squad optimiser".
• Context-Aware AI Team Members: Intelligent assistants that understand team dynamics and contribute proactively across disciplines, e.g. prompting legal implications during a design discussion.
• Automatic Skill Gap Detection and Training: AI could assess ongoing work and suggest micro-learning tailored to individuals, helping multidisciplinary teams skill up fluidly.
• Cross-Discipline Language Translators: Real-time AI translators that convert technical, legal, or policy jargon into plain English (and vice versa) to improve shared understanding.
• Virtual Co-Pilot for Interdisciplinary Projects: A unified AI tool that supports collaboration across product, design, policy, and development, suggesting decisions, alerting the team to blockers, and keeping the team aligned with service standards.

Point 7. Use agile ways of working

Service Standard summary: GOV.UK Service Standard's point 7, "Use agile ways of working", advocates for creating services through agile, iterative, user-centred methods. This approach emphasises early and frequent exposure to real users, allowing teams to observe usage patterns, gather data, and continuously adapt the service based on insights gained. By avoiding comprehensive upfront specifications, agile methods reduce the risk of misaligning with actual user needs. Service teams are encouraged to inspect, learn, and adapt throughout the development process, maintain governance structures aligned with agile principles to keep relevant stakeholders informed, and, when appropriate, test the service with senior stakeholders to ensure alignment with strategic objectives.

Existing AI tooling:

• Automated User Research Analysis: AI tools like Dovetail or Aurelius can rapidly analyse qualitative user research (e.g. interviews, surveys) to identify common themes and pain points.
• Smart Backlog Management: Tools like JIRA with AI-powered suggestions can help prioritise backlog items based on effort, value, and historical data.
• Natural Language Stand-up Summaries: AI assistants (e.g. Slack bots, Otter.ai) can summarise daily stand-ups, meetings, or sprint reviews into concise updates for teams and stakeholders.
• Test Automation with AI: AI-driven testing tools (e.g. Testim, Mabl) can create and maintain tests automatically as the UI evolves, helping teams iterate faster without breaking things.
• Code Review and Pair Programming Support: AI code assistants like GitHub Copilot can help developers write and review code more efficiently, speeding up delivery during sprints.
• Sentiment Analysis on User Feedback: Tools like MonkeyLearn can process large volumes of feedback to gauge user sentiment, helping prioritise improvements based on emotional impact.

Future AI innovations:

• Agile Sprint Advisor: A smart assistant that analyses team velocity, blockers, and mood to suggest sprint goals, story point estimates, and optimal team composition.
• Real-Time Adaptive Agile Frameworks: AI could dynamically tweak your agile methodology (e.g. Kanban vs. Scrum hybrids) based on real-time metrics, user behaviour, and team health.
• Predictive Stakeholder Alignment: AI systems might proactively detect potential stakeholder misalignments and suggest communication strategies or demos at optimal times.
• Automated Prototype Iteration: AI might soon be able to auto-generate and refine prototypes from user feedback and usage analytics without needing a full sprint cycle.
• Behavioural Coaching for Agile Teams: Future tools could offer personalised coaching to team members based on communication patterns, participation, and stress signals.
• Autonomous Discovery Research: Advanced AI could independently identify emerging user needs by scanning behaviour data, online forums, and support tickets, feeding insights directly into discovery backlogs.

Point 8. Iterate and improve frequently

Service Standard summary: Point 8 emphasises the necessity of continuously iterating and improving services to remain responsive to evolving user needs, technological advancements, and policy changes. It highlights that services are never truly 'finished' and that ongoing enhancements go beyond basic maintenance, addressing underlying issues rather than just symptoms. This approach ensures services stay relevant and effective throughout their lifecycle without requiring complete replacement.

Existing AI tooling:

• User Feedback Analysis Tools: Tools like MonkeyLearn or Thematic use AI to quickly analyse open-ended user feedback, surfacing common issues, trends, or sentiments.
• A/B Testing Automation: Platforms like VWO (Visual Website Optimizer) or Optimizely can use AI to run and evaluate A/B tests more efficiently, suggesting winning variants faster.
• Anomaly Detection: Services like Datadog, New Relic, or custom Machine Learning (ML) models can automatically detect abnormal patterns in usage or errors, signalling areas needing improvement.
• Chatbots & Virtual Assistants: AI-powered chatbots (like those from Intercom or Drift) collect valuable data on where users get stuck, revealing real-time insights to inform iterations.
• Predictive Analytics: Tools like Tableau with Einstein AI or Power BI with Azure ML can help forecast future issues or trends based on historical user behaviour.
• Automated Usability Testing: Platforms like PlaybookUX or Maze use AI to analyse tester behaviour, highlighting UX issues that might not be obvious in manual reviews.

Future AI innovations:

• Autonomous UX Optimisation: Future AI systems may automatically redesign or tweak interfaces in real time based on live user behaviour, eliminating the need for manual iterations.
• AI Co-Pilots for Product Managers: Think of a GPT-style assistant that could read feedback, usage data, and roadmap priorities to suggest or even schedule iterations proactively.
• Generative UI/UX Design: Generative AI could evolve to create user interface variations tailored to different user segments on the fly, reducing design iteration cycles.
• Proactive Problem Prediction: With advanced behaviour modelling, AI could predict where users are likely to face issues before they even occur, allowing teams to preemptively fix them.
• Real-Time User Research Agents: AI personas simulating user behaviour at scale could become a core testing method, replacing or supplementing traditional usability studies.
• Fully Autonomous Service Improvement Agents: Eventually, AI agents might manage continuous delivery pipelines, observe live service metrics, and autonomously deploy safe micro-improvements without any human intervention.

Point 9. Create a secure service which protects users' privacy

Service Standard summary: The 'Create a secure service which protects users' privacy' section of the GOV.UK Service Standard emphasises the importance of identifying security risks, threats, and legal responsibilities associated with government digital services.
To create a secure, privacy-protecting service, GOV.UK Service Standard point 9 requires teams to identify and manage security risks and legal duties. Teams must follow "Secure by Design" principles: get senior leader buy-in on risks, resource security for the full service lifecycle, vet third-party software, and research user-friendly security measures. They must also handle data securely, continuously assess risks, work with risk teams, manage vulnerabilities, and regularly test security controls.

Existing AI tooling:

• AI-Powered Threat Detection: Tools like Darktrace and Microsoft Defender for Endpoint use machine learning to detect unusual activity, helping identify security breaches in real time.
• Anomaly Detection in User Behaviour: Services like Splunk or Elastic Security use AI to flag suspicious access patterns, reducing insider threat or compromised credential risks.
• AI for Secure Code Review: Tools like GitHub Copilot Security or Snyk use AI to help identify insecure code, vulnerable dependencies, or bad practices in real time during development.
• Automated Data Classification & Masking: AI tools can classify sensitive data (e.g. Personally Identifiable Information) automatically and apply masking or redaction rules, e.g. BigID, DataRobot, or AWS Macie (a bare-bones masking example is sketched at the end of this point).
• AI-Powered Identity and Access Management: Adaptive access systems use AI to determine access levels dynamically based on context (location, time, behaviour), such as Okta or Ping Identity.
• Natural Language Processing (NLP) for Policy Compliance: Tools like OpenAI's GPT or Regtech solutions can help audit privacy policies, terms of service, or user-facing content to ensure alignment with laws like GDPR.

Future AI innovations:

• Self-Healing Infrastructure: AI-driven systems that could automatically detect and patch vulnerabilities without human intervention, reducing exposure time from days to minutes.
• Privacy-Preserving AI (e.g. federated learning plus differential privacy): Models trained on user data across decentralised devices, without transmitting raw data, could enhance user privacy.
• Proactive Legal and Ethical Compliance Bots: AI agents capable of continuously scanning systems and processes for legal, ethical, and policy compliance could update teams on potential issues in near real time.
• AI-Assisted Threat Simulation: Intelligent adversarial testing (like an AI-powered "red team") that dynamically tries to break your service using the latest cyberattack techniques.
• AI-Guided Secure UX Design: AI that evaluates user flows and recommends privacy-enhancing alternatives, like less intrusive authentication methods or better consent models.
• Conversational Security Assistants: AI copilots for security teams that can answer complex security questions, simulate risks, and suggest best practices based on the service's architecture and data flows.
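As a concrete illustration of the data classification and masking idea listed under existing tooling, here is a small sketch that redacts obvious personal identifiers from free text before it is stored or shared. The patterns and sample text are deliberately simple and invented; dedicated tools such as AWS Macie or BigID combine machine learning with far broader rule sets.

```python
# Toy PII masking: replace obvious identifiers in free text with placeholders.
# Patterns are illustrative; production classifiers are far more thorough.
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"), "[PHONE]"),     # rough UK mobile format
    (re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"), "[NI NUMBER]"),  # rough NI number format
]

def mask_pii(text: str) -> str:
    """Return the text with matched identifiers replaced by placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Contact Jo on 07700 900123 or jo.bloggs@example.com, NI number QQ 12 34 56 C."
print(mask_pii(sample))
```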
Point 10. Define what success looks like and publish performance data

Service Standard summary: GOV.UK Service Standard's point 10 emphasises the importance of defining clear success metrics for government services and publishing performance data. By identifying and tracking appropriate metrics, service teams can assess whether their services effectively address intended problems and identify areas for improvement. Publishing this data promotes transparency, allowing the public to evaluate the success of services funded by public money and facilitating comparisons between different government services.

Existing AI tooling:

• Automated Data Dashboards: AI-powered platforms like Power BI with Copilot, Tableau with AI insights, or Google Looker Studio can automatically generate dashboards, detect anomalies, and offer natural language querying to help teams understand performance in real time.
• Natural Language Summarisation: Use AI (like ChatGPT or Claude) to translate complex performance data into easy-to-understand reports or public summaries, helping teams publish accessible data to the public.
• Predictive Analytics: Tools like Amazon Forecast, DataRobot, or Azure ML can forecast trends and help services set realistic success metrics based on historic performance and current patterns.
• User Feedback Analysis: NLP tools (like MonkeyLearn or ChatGPT-based classifiers) can scan user feedback (from surveys, social media, support tickets) to extract key themes or satisfaction metrics that feed into definitions of success.
• AI-assisted Goal Tracking: Project management tools with AI (like Asana, ClickUp, or Notion AI) can help define, track, and surface progress toward performance goals using task data and milestones.

Future AI innovations:

• Real-time Adaptive Performance Models: AI systems that could dynamically redefine success criteria based on changing user behaviour, policy changes, or emerging technologies, like self-adjusting KPIs that evolve with service use.
• AI Explainability Dashboards: Fully transparent AI dashboards that could not only present data but explain why a metric matters, how it's calculated, and its impact, customised per audience (public, team, leadership).
• Conversational Public Portals: Public-facing AI bots that could let citizens ask questions like "How well is this service performing?" and receive personalised, up-to-date, natural language responses with supporting data.
• Autonomous Policy Feedback Loops: AI that links performance data with policy implications, automatically surfacing suggested reforms, service design tweaks, or investment areas based on effectiveness data.
• Cross-Service Benchmarking AI: A tool that could automatically compare services across departments or regions, highlighting strengths and weaknesses, and recommending tailored success metrics based on peer performance.

Having looked at how AI can strengthen delivery through smarter team dynamics, continuous iteration, and proactive security, we're now ready to explore the technology foundations that support these services. In the next post, we'll examine the final four Service Standard points: choosing the right tools and technology, making source code open, using open standards and shared components, and operating a reliable service. These points drive sustainability, interoperability, and resilience. We'll assess how AI can help teams make better technology decisions, write and maintain open-source code, ensure compliance with standards, and build services that are robust and scalable.

If delivery is about the rhythm of a good team, these next points are the instruments they need to play in tune. Join me as we explore how AI can help choose, build, and run government technology more effectively.

Contact information

If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).

  • AI in action 2: Supporting Service teams through the Service Standard Operational foundations

AI in action 2: Supporting Service teams through the Service Standard Operational foundations by Matt Hobbs

In this second post in the series, we begin to explore how artificial intelligence can directly support teams in meeting the UK Government Service Standard. If you missed it, you can read the first article here (opens in a new tab). By aligning the capabilities of current AI tools, and those on the horizon, with the needs of service teams, we can start to see a clear path where AI acts as a multiplier for quality, consistency, and speed.

We'll examine the first five points of the Service Standard, which focus on understanding users, solving whole problems, providing joined-up experiences, simplifying services, and ensuring accessibility for all. These points sit at the very heart of designing inclusive and effective public services.

While AI is not a silver bullet, its responsible and deliberate use can free up a team's precious time and resources to focus more deeply on strategy, empathy, and continuous improvement. AI can do this by taking on the "heavy lifting" of data analysis, pattern recognition, and user insight generation. Let's look at where AI is already making an impact, and where future innovation could lead us next.

Where AI can help

Point 1. Understand users and their needs

Service Standard summary: This point of the GOV.UK Service Standard emphasises the importance of developing an in-depth understanding of users and the issues they face. By focusing on the user's context and the issues they are trying to solve, rather than preconceived solutions, service teams can effectively meet user needs in a simple and cost-effective manner. This approach involves conducting user research, creating quick prototypes to test hypotheses, and utilising data from various sources to gain comprehensive insights into user difficulties. In the sections below, I outline solutions service teams can use now, along with future opportunities for AI to support them.

Existing AI tooling:

• AI-Powered User Research Analysis: AI-driven tools like Dovetail and Affectiva can analyse qualitative user research data (interviews, surveys, feedback etc.) to identify patterns and trends.
• Chatbots and Conversational AI: Tools like ChatGPT, Intercom, or Drift can collect real-time user queries, providing insights into common pain points and unmet user needs.
• AI-Driven Sentiment Analysis: AI tools like Lexalytics or MonkeyLearn can analyse social media, feedback forms, or customer support interactions to detect emerging issues (a bare-bones example follows this list).
• Predictive Analytics for User Behaviour: Platforms like Google Analytics with AI insights or Amplitude use AI to predict user needs based on past behaviours.
• A/B Testing Optimisation: AI-powered A/B testing platforms like Optimizely can help refine service designs by automatically analysing user interactions and determining the best-performing options.
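As a small illustration of what sits underneath the sentiment analysis tools mentioned above, the sketch below scores feedback comments with a simple keyword approach. The word lists and comments are invented; services like Lexalytics or MonkeyLearn use trained language models rather than keyword counts, but the goal of surfacing emerging pain points is the same.

```python
# Toy sentiment scoring for user feedback; word lists and comments are invented.
NEGATIVE = {"confusing", "broken", "slow", "stuck", "error"}
POSITIVE = {"easy", "clear", "quick", "helpful", "simple"}

def score(comment: str) -> int:
    """Positive words add one, negative words subtract one."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

feedback = [
    "The form was easy and quick to complete",
    "I got stuck on page 3 and the error message was confusing",
]

for comment in feedback:
    label = "positive" if score(comment) > 0 else "negative" if score(comment) < 0 else "neutral"
    print(f"{label}: {comment}")
```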
Future AI innovations:

• AI-Generated User Personas: Instead of manually creating personas, AI could dynamically generate data-driven personas based on real-time user interactions.
• Autonomous User Research Assistants: AI-driven digital assistants could conduct real-time user research, asking adaptive questions based on previous responses.
• AI-Powered Prototyping: Future AI tools could automatically generate prototypes based on user behaviour data, helping teams iterate on designs more quickly and efficiently.
• Emotion Recognition and Adaptive UX: While AI technologies like facial recognition or voice analysis could potentially detect user frustration or satisfaction and adapt the service experience, it is essential that the privacy implications of such data collection are rigorously evaluated before any implementation is considered.
• AI-Enhanced Accessibility Testing: AI could simulate how users with disabilities interact with a service, automatically reporting accessibility improvements back to the service team.

Point 2. Solve a whole problem for users

Service Standard summary: This section of the GOV.UK Service Standard focuses on designing services that address users' complete needs by collaborating across teams and organisations. This approach ensures services are intuitive and cohesive, minimising the complexity users face when interacting with multiple government services. Service teams are encouraged to understand constraints, scope services appropriately, and work openly to promote collaboration and reduce duplication. The goal is to create user journeys that make sense without requiring users to understand the internal structures of government.

Existing AI tooling:

• Conversational AI and Chatbots: AI-powered virtual assistants (e.g. GOV.UK chatbots, ChatGPT) can provide seamless, 24/7 support, guiding users through complex government processes and reducing the burden on call centres.
• Automated Case Management and Routing: AI can analyse user queries and automatically direct them to the correct department or resource, ensuring faster and more accurate service delivery.
• Predictive Analytics for Service Demand: AI models can predict spikes in service demand and help allocate resources efficiently, improving responsiveness and planning.
• Personalised Service Recommendations: AI can analyse user data to offer tailored services (e.g. suggesting benefits or permits based on life events like moving home or starting a business).
• Intelligent Document Processing: AI can extract and verify information from documents (e.g. passports, certificates) to accelerate application processes.

Future AI innovations:

• Cross-Agency AI Integration: AI could unify multiple government services into a single, user-friendly interface, so citizens don't have to navigate separate systems.
• Voice-Enabled Government Services: AI-powered voice assistants could allow users to access services through speech, making services more accessible.
• AI for Policy and Decision Support: AI could analyse patterns in service usage and citizen feedback to recommend improvements or policy changes.
• Proactive Citizen Support: AI-driven services could anticipate user needs (e.g. reminding users about expiring licences or upcoming payments) and send timely notifications.
• Bias Detection and Ethical AI: AI systems could be developed to ensure fairness and reduce biases in government services, improving trust and inclusivity.

Point 3. Provide a joined up experience across all channels

Service Standard summary: Point 3 of the Service Standard emphasises designing government services that seamlessly integrate across all channels - online, phone, paper, and face-to-face - to ensure accessibility and a consistent user experience. It highlights the importance of empowering service teams to address issues across any channel, involving frontline staff in user research, and utilising data from both online and offline interactions to drive continuous improvements.
Additionally, it stresses that strategies to promote digital adoption should not hinder access to traditional channels.

Existing AI tooling:

• AI-Powered Chatbots and Virtual Assistants: Tools like IBM Watson Assistant, Google Dialogflow, and OpenAI's ChatGPT can provide consistent, automated support across websites, mobile apps, social media, and messaging platforms. They can also escalate issues to human agents when needed.
• Omnichannel Customer Experience Platforms: AI-driven platforms like Salesforce Service Cloud, Zendesk AI, and HubSpot AI unify interactions across email, chat, phone, and social media, ensuring users receive consistent responses across all channels.
• AI-Based Sentiment and Intent Analysis: Tools like Google Cloud Natural Language API and AWS Comprehend analyse customer feedback from various sources to identify pain points and improve service design.
• Automated Document and Form Processing: AI-based OCR tools (e.g. Adobe Sensei, ABBYY FlexiCapture) extract and process information from paper forms or scanned documents, allowing users to switch between offline and digital channels seamlessly.
• AI-Powered Call Centre Support: AI tools like Google Contact Center AI and Five9 Intelligent Cloud Contact Center transcribe, analyse, and route calls to the right agents while maintaining a record of previous interactions.

Future AI innovations:

• Context-Aware AI Agents: Future AI assistants could remember user interactions across channels (web, phone, in-person) and pick up conversations where they left off, offering a truly seamless experience.
• AI-Powered Real-Time Translation and Accessibility: AI tools could automatically translate conversations across languages in real time (e.g. advanced Google Translate AI) and enhance accessibility by transcribing voice conversations to text instantly for deaf users.
• Personalised AI Service Recommendations: AI-driven recommendation engines could analyse a user's past interactions and predict their next needs, proactively suggesting the best service channels and steps to take.
• Unified AI-Powered Digital Identity Verification: Future AI systems could securely verify users across different platforms using biometric authentication, facial recognition, and behavioural analysis, allowing for a smooth transition between online and offline services.
• AI-Driven Predictive Support: AI could analyse historical data to predict when users might need assistance and proactively offer solutions before they even reach out for help.

Point 4. Make the service simple to use

Service Standard summary: 'Make the service simple to use' emphasises designing government services that are intuitive, accessible, and easy for users to navigate. It stresses the importance of understanding user needs, removing unnecessary complexity, and ensuring services work for everyone, including those with disabilities or low digital skills. Services should be tested with real users, provide clear guidance, and avoid technical jargon to create an intuitive experience.

Existing AI tooling:

• Intelligent Chatbots and Virtual Assistants: AI-powered chatbots provide 24/7 support across web, mobile, and voice channels.
• AI-Powered Search and Auto-Suggestions: AI enhances search by predicting user intent and dynamically suggesting relevant content.
• Automated Accessibility Enhancements: AI generates captions, text-to-speech, and real-time translations to improve accessibility.
Smart Form-Filling and Data Auto-Completion : AI pre-fills forms and error-checks inputs to reduce mistakes. Personalised User Experiences : AI-driven content recommendations tailor service instructions based on user preferences. AI-Powered Process Automation and Self-Service : AI assists users in complex processes, reducing manual effort. Predictive User Support and Proactive Assistance : AI anticipates issues and provides relevant help before problems arise. Conversational Voice Interfaces and Multimodal Interactions : AI-powered voice assistants enable hands-free interaction with services. AI-Based Sentiment and Frustration Detection : AI analyses feedback and chat logs to identify user pain points. Fraud Detection and Security Simplification : AI-powered ID verification and fraud detection streamline authentication. Future AI innovations : Emotionally Aware Chatbots: AI could detect frustration or tone and adjust responses accordingly. Context-Aware Search: AI could understand past interactions to auto-filter irrelevant results. Dynamic Accessibility Adjustments: AI-powered interfaces could adapt layout and readability based on cognitive load  or disabilities . Predictive and Adaptive Forms: Forms  could dynamically adjust based on user needs, for example: reducing unnecessary form fields on digital interfaces. Fully Adaptive Interfaces: AI could modify  interface layouts , font sizes, and navigation based on user behaviour. AI-Driven Digital Assistants for Task Completion: AI could submit documents and complete applications on behalf of users. AI-Powered Nudges: AI could guide users to complete key tasks based on previous behaviour patterns . Multimodal  AI Interactions: AI could seamlessly switch between voice, text, and gestures depending on user preference . Real-Time Emotion Detection  for Support Teams: AI could alert teams when users are struggling, allowing instant intervention. Biometric AI for Seamless Security: AI could enable password-free authentication through facial recognition  or speech recognition . Point 5. Make sure everyone can use the service Service Standard summary : The GOV.UK Service Standard's fifth point, "Make sure everyone can use the service," emphasises designing services that are inclusive and accessible to all users , including those with disabilities, legally protected  characteristics, limited internet access, or low digital skills . Service teams are advised to meet accessibility standards  for both online and offline components, conduct user research with diverse participants, and provide appropriate support to ensure no user is excluded. Existing AI tooling : Automated Accessibility Testing: Tools like axe , WAVE , and Google's Lighthouse , enhanced with AI, help detect accessibility issues in real time (e.g., missing alt text, poor contrast). AI-Powered Transcription & Captions: Services like Google Speech-to-Text , Otter.ai , or Microsoft Azure can provide real-time subtitles  and transcripts for audio/video content, improving accessibility for deaf or hard-of-hearing users. Language Translation & Simplification: AI tools like DeepL  or Google Translate  assist by translating content into multiple languages, while GPT-based tools  can simplify complex text, making information more accessible to users with low literacy levels or cognitive impairments. Voice Assistants & Conversational Interfaces: AI-driven chatbots  (e.g. 
on GOV.UK or NHS sites) can guide users through processes using plain language or voice interaction, helping those with visual or motor impairments. Personalisation Engines : AI can adapt interfaces to user preferences, like increasing font sizes, contrast, or offering keyboard-only navigation modes, based on learned behaviours. Future AI innovations : Real-Time Inclusive Design Feedback: AI design assistants could offer proactive suggestions during development to flag accessibility concerns or recommend more inclusive design patterns. Emotion and Intent Detection: Advanced AI could detect user frustration or confusion through sentiment analysis (e.g. tone of voice, facial expressions) and offer adaptive support instantly. Dynamic UI Generation: AI could auto-generate personalised interfaces based on a user’s device, environment, or abilities, creating a “design-for-one” approach at scale. Augmented Reality (AR) for Navigation: AI-enabled AR could help users with visual or cognitive impairments navigate complex public spaces or digital services using voice-guided overlays. Multimodal  Accessibility Agents: Future AI assistants may seamlessly switch between text, voice, visual, and gesture inputs/outputs to match users' preferred interaction mode in real time. As we’ve seen, AI is already playing a role in how service teams understand users, simplify experiences, and deliver inclusive services. Whether it’s enhancing user research, supporting accessibility, or helping create joined-up services, AI has clear potential to amplify the points behind good service design. In the next article, we’ll turn our attention to the next group of Service Standard points, those that deal with team structure, agile practices, iteration, and security. These are the operational foundations that support successful delivery, and we’ll explore how AI can support multidisciplinary collaboration, continuous improvement, and safe, secure digital services. Join me again in the next article as we continue to map the intersection between AI and service excellence, coming soon. Contact information If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab) .

  • Lessons from the Cabinet Office GitHub Copilot Trial

    Lessons from the Cabinet Office GitHub Copilot Trial by Cameron Browne Drawing on lessons from the recent Cabinet Office Github Copilot trial, Cameron shares practical advice on how to use AI assistants as a powerful tool for learning and delivery, while ensuring you remain the pilot. AI assistants like GitHub Copilot are changing the way we work. They can be powerful tools, but they also have their limitations. I recently participated in the Cabinet Office Trial for GitHub Copilot. The trial was part of a wider government initiative to explore how AI code assistants can support digital delivery teams in organisations like the Ministry of Justice. It marked a shift towards encouraging and promoting responsible AI practices. As a QA Engineer currently working at His Majesty’s Courts and Tribunals Service (HMCTS), I used Copilot to help write and maintain automated test suites. I was provided with access to the GitHub Copilot AI code assistant in my development environment for 4 months, along with training in prompt engineering. Prior to this trial I had no experience with Copilot , Codex , or Large language models (LLMs).  My focus for this article will be providing practical advice for using AI code assistants. I believe this is useful for anyone who is currently trying to navigate the fast-paced world of constantly changing and improving AI assistants. These tips are not only applicable to AI code assistants, but also any AI chatbot you may use, and I believe they will stay relevant as the AI landscape changes. Like any tool, there is a right way and a wrong way to use it. Welcome message for participants of the GitHub Copilot trial. Tip 1 - AI is a great teacher  Use AI to onboard and learn faster AI assistants can act like personal tutors — and no question is too simple. For example, you can ask: “Does this project have automated accessibility tests?” This has been really helpful for me as an early career QA engineer who has previously moved from a project with Java developers to one with Ruby and Python developers. It helps me get up to speed quickly and navigate the project, even if there are technologies I haven’t worked with before. To get better answers, set the scene. Give Copilot a role and explain your experience level. For example, “I am a QA with 1 year experience with test automation and 2 months experience in Cucumber, you are a senior dev, teach me how this test suite works”. This tailors the response to your experience level. Other ideas for tailoring your assistant: Ask it to be your paired programmer to help figure out a bug Ask it to be your assistant and write documentation for you Feed it documentation and ask questions about it Finally, in the Copilot Chat, you can go back and clarify points. “I understand this , but not this. Explain it to me more simply”. As a QA engineer, you are constantly exposed to new technologies, so it’s important to keep learning. AI assistants have the potential to accelerate our learning and help us stay up to date as the tech landscape evolves. An example of highlighting a section of code and prompting 'in-line' in the code editor. Tip 2 - Concise context = Quality responses Keep prompts focused and remove clutter Your context is everything you send in your AI request (what the AI sees). The more unnecessary information you send to the chatbot, the more tokens you will use, and the more confused the response is likely to be. It also takes longer to generate your response and it is worse for the environment*. 
*The use of large AI prompts can be bad for the environment because running AI models consumes significant energy, contributing to carbon emissions. Clear and concise prompts lead to better results. I’ve found 2 key ways to achieve this: 1. Limit the unnecessary information you send with your request. When you prompt AI, you want to indicate relevant code: Open only relevant code files, close irrelevant ones. Autocomplete Copilot will use your open files to understand the context of your work and offer suggestions. Choose the right prompt method for the task. Consider if you should highlight a section of code and prompt ‘in-line’ when you want a focused response based on a specific section of code or a single file. Or use Copilot chat when your question requires a broader context across multiple files. Picking the right method helps control token usage and ensures more accurate results. You can use the @project tag in Copilot chat (see image) - this will send your entire project in the request, but it’s worth noting that this is more context than you will likely need.  2. Keep the Copilot Chat history relevant: Copilot uses your whole chat thread as context — keep it clean and focused.  Start a new conversation for new tasks to refresh your context window. Delete irrelevant responses within your current chat history (the bin icon, which is also demonstrated in the image). In short, manage your context well and the quality of responses generated will be better. An example of a previous prompt in GitHub Copilot Chat. The bin icon is circled to show how to delete an irrelevant prompt and response from your chat history. Tip 3 – AI can't read your mind… yet Don’t expect AI to guess — show, iterate, and refine While AI assistants are incredibly powerful, they're not mind-readers. Problems tend to arise when you expect AI to just know what you require and let it make assumptions. To get the best out of your AI assistant, you need to be crystal clear about your requirements; here’s how: Examples are your best friend Want your AI to write code that matches your team's preferences for readability and maintainability? Show, don’t tell. Whether it's the specific formatting of your tests or the naming conventions for different scenarios, providing examples is a huge time-saver.  You can indicate a file with an example, or even paste some example code directly into your prompt. It's much quicker than typing out all your requirements. For instance, instead of a lengthy explanation, you can simply say: "…look at the end_to_end.feature file for examples of the naming conventions to use for different test scenarios". Open a dialogue and iterate Think of your interaction with AI as a conversation. Don't just accept the initial response you get. If something isn't quite right, ask Copilot why it made certain choices. If you have a different preference, don't be afraid to prompt further. A prompt like, "I don’t like this, can you structure it this way instead to make it a bit more readable and consistent with the other tests… " can work wonders. Iterating with Copilot Chat is a much quicker way to refine your output.  Start with a general request, and then get more specific to improve the results. I find that refining the response using a chain of prompts is a much more productive way to work, rather than trying to strike gold with your first prompt. Often, it's the first AI response that helps you remember things you forgot to include in your first prompt. 
Maybe one day Copilot will be able to just read our thoughts, but for now, mastering clear communication, using plenty of examples, and embracing iteration are key to unlocking its full potential. Tip 4 - You’re the pilot Stay in control of your code This tip is perhaps the simplest but most important, and the one that really stuck with me. Remember, the tool is called ‘Copilot’ for a reason; you should be in control. AI assistants in all their forms are great for offering suggestions, but they shouldn’t be making your decisions. Copilot should never be handed a big, complex task for you to then just copy in the finished code. You can use Copilot to do complex things, but break complex tasks into steps; that way you can keep track of each step being taken and each decision made. You should fully understand everything you copy from AI because you’re the one responsible for the changes you make. While it can be tempting to copy and paste from Copilot without analysing every line of code, ‘vibe coding’ can only get you so far if you don’t understand the changes you’re making. Use Copilot as a tool, not a crutch: if your AI tool is taken away, you should still be able to do your work. Pro tips for staying in the driver’s seat Let Copilot help you break down complicated work into smaller steps E.g. - “I need to increase the coverage of my e2e tests to include a new user journey - break down this task into smaller steps based on my current e2e test coverage.” Have Copilot explain its work and help you understand it so you stay in control The image humorously suggests that while AI can quickly generate code, it may lead to even more time spent debugging. It’s easy to lose track of ownership when Copilot is doing the typing, but the decisions still need to be yours. You should be involved in each step and understand the changes you make. Otherwise, you’ll spend more time debugging AI code than you would’ve spent doing the task yourself. Wrapping up GitHub Copilot can… Teach Understand your level of experience Follow clear instructions Brainstorm ideas Debug error messages and find the root of problems Iterate on responses Speed up your work GitHub Copilot cannot… Keep your context relevant Read your mind to know what you want Replace you as the pilot Take responsibility for its work Useful resources: GitHub Docs Prompt Engineering: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat GitHub Copilot Cheat Sheet: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/github-copilot-chat-cheat-sheet Contact information If you have any questions about our AI initiatives, Quality Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab) .

  • Unlocking the web: start your journey into digital accessibility

A look at how we can follow inclusive practices to ensure equal access to digital services for everyone. Guided by standards such as the Web Content Accessibility Guidelines (WCAG) and legislation, organisations should prioritise accessibility from the outset. Through rigorous testing, user feedback loops, and continuous improvement we can drive progress in accessibility. Overview What is digital accessibility? Who benefits from digital accessibility? Legal standards and guidelines Shift Left accessibility Testing, auditing and user feedback Progress over perfection Contact information What is digital accessibility? Digital accessibility ensures there are no barriers for individuals when using digital services. This makes accessibility a functionality issue. Simply put, if the service is not accessible, it is not functional. Although there are legal requirements to highlight the importance of accessibility, it goes beyond legal compliance checklists and is centred on creating inclusive digital spaces that everyone can use. Who benefits from digital accessibility? Web accessibility benefits everyone. When digital spaces are built with accessibility in mind, the result is faster, easier and more usable services. Importantly, this makes the service accessible for people with permanent, temporary and situational disabilities. People may have accessibility needs across the following areas: Cognitive Visual Auditory Motor Speech Visual representation of disability types such as cognitive, visual, auditory, motor, and speech. Source: https://www.esri.com/arcgis-blog/products/arcgis-storymaps/constituent-engagement/building-an-accessible-product-our-journey-so-far/ Take time to understand your users and understand their experiences on your services. Not every user will have the same needs, and some users' requirements may conflict with others. Providing options and alternatives will allow you to create more inclusive digital spaces with reduced barriers for your users. Legal standards and guidelines Equality Act 2010 As far as legal requirements go, the Equality Act 2010 states that there is ‘a duty to make reasonable adjustments’ for those who are classed as ‘disabled persons’. Government requirements Under the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, all public services have further defined accessibility requirements, which are to: meet level AA of the Web Content Accessibility Guidelines (WCAG 2.2) as a minimum work on the most commonly used assistive technologies - including screen magnifiers, screen readers and speech recognition tools include disabled people in user research have an accessibility statement that explains how accessible the service is - you need to publish this when the service moves into public beta As a minimum, public services are required to meet these requirements, but even for non-public services it is good practice to follow these guidelines. In doing so, you begin to make your digital service an accessible space for all. WCAG The Web Content Accessibility Guidelines (WCAG) serve as the internationally recognised standards for web accessibility. WCAG provides guidelines organised into four principles: Perceivable, Operable, Understandable, and Robust (POUR). Following these guidelines enhances the overall accessibility of your web content. Perceivable: Provide alternatives for non-text content, captions, and sufficient colour contrast for text.
Operable: Ensure keyboard accessibility, sufficient reading time, and avoid content causing discomfort. Understandable: Use clear language, consistent navigation, and offer input assistance. Robust: Employ valid code, adhere to web standards, and avoid browser-specific features. Currently, web content should adhere to the WCAG 2.2 (2023) standards . The recent version introduces 9 new guidelines (6 A & AA) and removes one (4.1.1 Parsing) . Meeting the WCAG 2.2 guidelines will mean you will also meet the previous versions of the guidelines.  Shift Left accessibility Visual representation of shift left activities that involve security, testing and operations processes earlier on in the dev cycle including throughout plan, code, build, test, release, deploy, operate and monitor phases. Source: https://blogs.vmware.com/cloud/2021/05/11/shift-left-platform-teams/   Accessibility should not be the responsibility of a single person/role but of the whole team. This involves baking accessibility in from the start, from the initial idea through to sign off. This implements a ‘Shift Left’ approach  which encourages earlier accessibility reviews, involving all on the team from product owners through to release.  A shift left approach embeds accessibility into the process so that it is not just an afterthought or a bottleneck to releases. It also prevents an excess of accessibility tech debt items that tend to remain at the bottom of the backlog. Testing, auditing and user feedback A large part of creating accessible services is to regularly test the service using automated testing tools and manual assessments (including testing with assistive technology). At Solirius we have several Accessibility specialists who are continuously working to implement, build and maintain accessible and inclusive services.  Testing needs to be carried out in parallel to regular user testing to ensure you better understand real experiences for users and are not just building services to meet compliance. Progress over perfection Accessibility is a vast area with many specialisms, and can initially feel overwhelming. But it’s important to remember that even small accessibility considerations are a start and can go a long way for users. Don’t let the pressure of perfection stop you from getting involved and learning about accessibility. Lean on your peers and figure out how you can tackle challenges together, it is a learning curve for many but we all start somewhere. Summary Prioritising web accessibility ensures that your services are inclusive and usable for all users. By implementing a shift left approach, utilising the Web Content Accessibility Guidelines (WCAG) and involving users with a variety of needs, you can create a more inclusive digital landscape. Remember, accessibility is an ongoing journey involving everyone, and continual efforts to improve will help create digital services that benefit all. Contact information If you have any questions about accessibility or you want to find out more about what services we provide at Solirius please get in touch .

  • Meet the Team: Ayesha Saeed

    Ayesha shares her journey to becoming an Accessibility Lead at Solirius as well as insight into her top tips and interests. Meet Ayesha Saeed, a Senior Accessibility Specialist with over 5 years of experience working in accessibility on a range of products in both public and private sectors. She has a wide variety of experience including conducting audits, delivering training, and building implementation plans with teams, through to app accessibility and consulting.  How did you get involved in accessibility? I have a QA background and so I started my accessibility journey by conducting accessibility audits, which prompted me to begin learning about accessibility principles and user-focused design. I really enjoyed learning about accessibility and all the different specialisms within it. I studied Social Anthropology at university so I enjoyed learning about people and understanding the numerous ways people interact with technology.  I went on to work on a government project where I learnt lots about the laws surrounding digital accessibility; GDS standards and WCAG compliance. I expanded my experience to mobile apps, gaining invaluable insights into the nuances of mobile accessibility and learning more about guidelines for iOS and Android platforms. I began to cultivate a culture of accessibility on the projects I worked on, educating my team, working to ensure that accessibility considerations were no longer an afterthought. Currently, I am an Accessibility Lead at Solirius working on another government project, managing several services and ensuring they have the necessary guidance to deliver accessible services. I support on testing practices, writing Accessibility Statements and working with teams to build roadmaps to make their services accessible. I also deliver training sessions to empower services to integrate accessibility principles in the early stages of development and help to motivate them to sustain their efforts throughout the process.  What are your interests? I like to cook a lot and enjoy taking my mum’s classics and turning them into veggie friendly versions using my homemade seitan. I also like to keep active by swimming regularly and (occasionally) attempting yoga. I’ve also gotten into crocheting recently and enjoy seeing what I can make.  Top accessibility tip? Don’t feel like you need to know it all! Digital accessibility is such a rich subject and can be difficult to grasp when you are new to it. Just remember to be patient with your learnings, reach out to peers, read about accessibility and try to get involved with the accessibility communities for support. Your small changes can have a huge impact!  Top accessibility resource? The A11y Slack - It’s a great community of accessibility specialists and advocates who are friendly and open to help. It is free and open to all, and you can join at web-a11y.slack.com . Contact information If you have any questions about accessibility or you want to find out more about what services we provide at Solirius please get in touch .

  • Breaking barriers: digital inclusion in government services

    Breaking barriers: digital inclusion in government services In this article, Piya discusses the importance of creating government services that are accessible to everyone . Government accessibility standards exist to ensure that a wide range of people can use government services on both web and mobile applications. Importantly, accessibility is a shared responsibility, and Piya lists resources that offer guidance on integrating accessibility into the development of services. Overview:  GOV.UK requirements Meeting WCAG 2.2 Testing with assistive technology User research with disabled people Accessibility statements  GOV.UK design system DWP resource GOV.UK requirements The government accessibility requirements  state that all services must meet the following criteria to ensure that all legal requirements regarding public sector websites and mobile applications are met: Meet level AA of the  WCAG 2.2  (Web Content Accessibility Guidelines) at a minimum Work on the most commonly used assistive technologies - including screen magnifiers, screen readers and speech recognition tools Include disabled people in user research (including cognitive, motor, situational, visual and auditory impairments) Have an accessibility statement that explains how accessible the service is (published when the service moves to public beta) Reaching these requirements ensures that services meet the legal requirements as stated by Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018 . In addition, we can ensure that we are creating more inclusive digital services for users with diverse needs. Meeting WCAG 2.2 WCAG 2.2 is based on 4 principles, that emphasise the need to think about the different ways that people interact with digital content: perceivable: recognising and using the service with senses that are available to the user. operable: finding and using content, regardless of how a user chooses to access it. understandable: understanding content and how the service works. robust: content that can be interpreted reliably by a wide variety of user agents. For example, users might use a keyboard instead of a mouse or rely on a screen reader to have content spoken aloud. The WCAG 2.2 principles apply to all aspects of your service (including code, content and interactions), which means all members of your team need to understand and consider them. It is important to conduct regular accessibility testing using a range of automated and manual tools as early as possible to ensure your design, code, and content meet WCAG 2.2 AA requirements (all A and AA criteria). Testing with assistive technology  To meet the government service standard, testing should be done across the following assistive technologies and browsers throughout development, ensuring that the most commonly used assistive technologies are tested and work on the service before moving to public beta:  JAWS (screen reader) on Chrome or Edge  NVDA (screen reader) on Chrome, Firefox or Edge VoiceOver (screen reader) on Safari  TalkBack (mobile screen reader) on Chrome  Windows magnifier or Apple Zoom (screen magnifiers)  Dragon (speech recognition tool) on Chrome  Low vision user using a screen magnification tool to increase the text size on a webpage to allow them to see the content clearly. 
Source: Digital Accessibility Centre (DAC) https://digitalaccessibilitycentre.org/usertesting.html   It is a shared responsibility to make sure services are compatible with commonly used assistive technologies as testing across these combinations should be done throughout all stages of development; when planning new features, when designing and building new features, and testing. For more information on how to test with assistive technology, see testing with assistive technologies .  User research with disabled people Inclusive user research is essential for creating user-centred services that meet the needs of all users, including those with disabilities and diverse backgrounds. By involving a varied group of participants early on, teams can identify and address usability and accessibility barriers, enhancing the design, functionality, and content to benefit everyone. This approach encourages continuous improvement, ensuring government services evolve with users' needs. Ultimately, inclusive user research builds trust by showing a commitment to accessibility, making services more usable and welcoming for a broader audience. Accessibility statements   Accessibility statements are required to communicate how accessible a service is. This includes stating the WCAG compliance level, explaining where the service has failed to meet guidelines (and a roadmap of when this will be fixed), contact information and how to report accessibility issues. Government services should follow a standard accessibility statement format  to maintain consistency.  GOV.UK Design System (GDS) The GOV.UK  design system (GDS)  has many reusable components that are utilised across government services. Each component shows an example, an option to view the details on how to implement the component, as well as research regarding the component's usability and what kind of issues users have faced. Any known accessibility issues are also highlighted and based on this research, some components are labelled ‘experimental’ as some users may still experience issues navigating them. Services must proceed with caution when adopting these components, and carry out rigorous manual, assistive technology and user testing to ensure that the implementation is accessible and WCAG guidelines are met.  Example of where to find accessibility research on the GDS details component, under heading ‘Research on this component’.  Source:  Government Design System (GDS) details component - https://design-system.service.gov.uk/components/details/   Summary Overall, government services must ensure they are creating services that are regularly tested and work with users who have a range of access needs or assistive technology requirements including:  Reviewing, understanding, and meeting GOV.UK and WCAG 2.2 standards Implementing accessible components that can be accessed by assistive technology Ensuring accessibility is the whole team’s responsibility when developing a service Regularly testing with users with disabilities Providing an accessibility statement to inform users where the service does and does not meet accessibility guidelines  Accessibility should be considered from the start as retrofitting costs more time and resources, and results in your users not being able to use your service. DWP resource:  The Department for Work and Pensions (DWP) accessibility manual is a great resource for guidance on testing, accessibility best practices throughout service development and details on how each member of the team can integrate accessibility. 
DWP Accessibility Manual home page Source: GOV.UK - Accessibility in Government - https://accessibility.blog.gov.uk/2021/05/27/why-weve-created-an-accessibility-manual-and-how-you-can-help-shape-it/   Contact information If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius,   please get in touch .

  • 6 common accessibility mistakes in design—and how to fix them

6 common accessibility mistakes in design—and how to fix them by Philena Bremner In this article, Philena discusses the importance of designing accessible experiences that cater to a diverse range of users, as well as to temporary or situational challenges. She touches on why accessibility is not just a technical requirement but a design principle that benefits everyone. Philena highlights six common design mistakes that hinder accessibility and provides practical solutions to create more inclusive, user-friendly designs. Why accessibility in design matters Design isn’t just about making things look good—it’s about making sure everyone can use your product or service. Think about it: you’ve probably struggled with low contrast on your phone in bright sunlight or found it hard to navigate a cluttered website when you’re in a rush. Accessible design makes things easier for everyone. But accessibility isn’t just about following guidelines - it’s also about understanding real user needs. That’s why user research and feedback on design decisions are essential to ensure designs truly meet the needs of diverse users. By listening to feedback and testing with people who have a range of abilities and experiences, designers can identify barriers and create solutions that work for everyone. So, let’s look at some common design mistakes and how you can avoid them to create a better experience for all users. Mistake 1: Low contrast text Let’s start with one of the most obvious issues - low contrast. Sure, it might look stylish to have light grey text on a white background, but can anyone actually read it? Now, imagine someone with a visual impairment trying to make sense of that. But here’s the thing: low contrast isn’t just an issue for those with impaired vision. Think of someone trying to read on their phone outside in the sun, with the screen reflecting glare—contrast matters in that scenario too. Don’t example of low contrast text with light grey text on a light grey background, making it hard to read. Do example of high contrast text with dark grey text on a lighter grey background, making it clear and easy to read. How to get it right: Aim for a contrast ratio of at least 4.5:1 for normal text. Use tools like the WebAIM Color Contrast Checker to test your designs (a short code sketch showing how this ratio is calculated appears at the end of this article). Think of contrast as a universal design principle—if it’s easier for someone with a visual impairment, it’s easier for everyone. Mistake 2: Relying only on colour to convey information Think about a form where the only indication of an error is a red outline. For someone who’s colourblind, that red outline might not even register. The same problem happens when colour alone is used to convey important information, like in charts or buttons. Accessibility isn’t just about catering to specific disabilities; it’s also about ensuring clarity for everyone. Whether it’s a person with colour blindness or someone trying to interact with your design in less than ideal lighting, relying solely on colour can be a problem. Don’t example of two forms side by side showing an error relying solely on colour to convey information. On the left, the perspective of a user without colour blindness shows a red border around the email field to indicate an error. On the right, the perspective of a user with colour blindness (Deuteranopia) shows the same form where the red border is not distinguishable, making the error unclear. Do example of two forms side by side showing an improved design where errors are supplemented with icons and text.
On the left, the perspective of a user without colour blindness shows an email field with a red border, an error icon, and the text 'Enter your email.' On the right, the perspective of a user with colour blindness (Deuteranopia) shows the same form where the error icon and text are clearly visible, ensuring the error is understandable without relying on colour alone. How to get it right: Always supplement colour with icons, text, or patterns. For example, instead of just using a red outline for errors, add a symbol and text that clearly explains the issue and how to fix it. Use a colour-blindness simulator  during the design process to ensure your work is still clear without colour.  Be aware that blindness simulators will never replace real user feedback. Ensure you test your designs with diverse users.  Mistake 3: Complex layouts that confuse users We’ve all been there—landing on a website that’s so cluttered and chaotic that we have no idea where to look. For someone with cognitive disabilities or attention issues, this kind of layout can make navigation nearly impossible. But even without a disability, a complex layout can be frustrating. Picture yourself trying to book a flight on a crowded train, with limited time and attention—simplicity and clarity become lifesavers. Don’t example of three pages showing a complex and inconsistent layout. The panels have inconsistent button placements, varied spacing, and misaligned elements, making navigation and readability difficult. Do example of three panels showing a simple and consistent layout. The panels have aligned elements, consistent button placements labeled 'Continue,' and uniform spacing, making navigation clear and easy to follow How to get it right: Use a clear visual hierarchy with headings and subheadings that guide users. Make important information easy to find with a clean layout, such as grouping related elements together to create an intuitive flow. Use consistent spacing, fonts, and alignment to reduce cognitive load. Keep consistency across pages, so users don’t have to relearn how to navigate every time. For example, place the primary action button, like "Continue" or "Submit," in the same location across all pages and use consistent labelling to avoid confusion. Mistake 4: Text that’s too small or difficult to read Tiny text is a big problem. Whether someone has low vision or is trying to read on a small screen in a bumpy car ride, small, illegible text makes for a frustrating experience. Readable text benefits everyone. Imagine you’re trying to skim an article on your phone during your commute—clear, bold text that’s easy to read helps you grasp the key points. Don’t example showing text that is tiny and hard to read, with a decorative font that reduces readability Do example showing text with a larger font size and a clear, easy-to-read typeface for better accessibility. How to get it right: Use a minimum font size of 16px for body text. Keep line length between 45 to 75 characters for better readability. Choose fonts that are easy to read, with good spacing between letters and lines. Some fonts that are considered accessible include: Arial, Calibri, Century Gothic, Helvetica, Tahoma, Verdana, Tiresias, and OpenDyslexic. Again, it is important to get real user feedback to see what works for your users. Mistake 5: Missing image descriptions For someone using a screen reader, images without descriptions are a black hole of information. They can’t see what the image is trying to convey, so they miss out on key content. 
Alternative text, or alt text, provides that context by describing images for users who can’t see them. But alt text isn’t just for screen reader users. What about someone with a slow internet connection? While they’re waiting for the images to load, they can still understand what’s there if you’ve provided alt text. Don’t example showing an unclear alt text description for an image with a purpose. The image of mountains and a sun is labeled with the file name '12344545767.jpg,' which does not provide meaningful context. Do example showing a clear alt text description for an image with a purpose. The image of mountains and a sun is described as 'Simple illustration of mountains and the sun,' providing meaningful context. How to get it right: Always include meaningful alt text for images that convey information. If an image is purely decorative and adds no information, mark it as such by using empty alt text ( alt="" ) so screen readers can skip it. Alt text should reflect the image’s purpose and context in relation to the surrounding content, for example if you use ‘simple illustration of mountains and a sun’: On a page about travel destinations it could be: “Illustration of a mountain range at sunrise, representing a peaceful travel location.” On a page about design inspiration it could be: “Minimalist mountain and sun illustration showcasing simple design concepts.” Think of alt text as part of the story you’re telling—don’t leave users in the dark. How to write good alt text for screen readers Mistake 6: Incomprehensible data graphs Complex data visualisations can be a headache for users, especially those with assistive technology or those who are colourblind. Labels that are too small or graphs that rely solely on colour can make it difficult to understand what’s being presented. But this isn’t just a challenge for users with disabilities. Anyone trying to read a graph on a small screen or in a distracting environment will appreciate clear, easy-to-understand visuals. One simple way to make graphs more accessible is to incorporate patterns or textures in addition to colour. For example, instead of only using red and green in a pie chart, you can add stripes or dots to differentiate between sections for users who struggle with colour perception. Don’t example of two pie charts relying solely on colour to convey information. On the left, the perspective of a user without colour blindness shows sections in orange, purple, and pink labeled 'Pass,' 'Fail,' and 'Not applicable.' On the right, the perspective of a user with colour blindness (Achromatopsia) shows the same chart in grayscale, making it impossible to distinguish between sections. Do example of two pie charts with additional patterns and labels to supplement colour. On the left, the perspective of a user without colour blindness shows the chart with colours, patterns, and text labels indicating '24% not applicable,' '45% pass,' and '31% fail.' On the right, the perspective of a user with colour blindness (Achromatopsia) shows the same chart with patterns and text labels, ensuring the data is still understandable without relying on colour.
For image-based graphs, provide clear alt text or captions that describe the data and key insights, ensuring the information is accessible to screen reader users. Designing for everyone At the end of the day, accessibility is about making sure everyone has equal access to services and products. By avoiding these common design mistakes, you’re not just helping people with disabilities—you’re creating a better experience for anyone who might be in a permanent, temporary or environmental situation where good design means accessible design. Take action When designing services or products, ask yourself: is this accessible for everyone? Start making these changes today, and be sure to conduct user accessibility testing along the way - you may be surprised by small changes that improve the overall user experience for everyone. Additional resources To further enhance your accessibility design skills, explore these valuable resources: Accessibility - Material Design WebAIM: Web Accessibility for Designers Stark - Contrast & Accessibility Checker | Figma Accessible fonts and readability: the basics How to write good alt text for screen readers Contact information If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch .
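Mistake 1 above recommends a contrast ratio of at least 4.5:1 for normal text. For anyone who wants to sanity-check colour pairs in code as well as with the WebAIM checker, here is a minimal TypeScript sketch of the calculation behind that number, based on the WCAG definitions of relative luminance and contrast ratio. The function names and example colours are illustrative only and are not part of any particular design system or library.

```typescript
// Minimal sketch of the WCAG contrast-ratio calculation (illustrative only).
// Channel values are 0-255 sRGB; the thresholds and coefficients come from
// the WCAG definition of relative luminance.

type Rgb = { r: number; g: number; b: number };

function channelToLinear(value: number): number {
  const c = value / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance({ r, g, b }: Rgb): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(foreground: Rgb, background: Rgb): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Light grey (#CCCCCC) text on a white background fails the 4.5:1 threshold...
const lightGreyOnWhite = contrastRatio({ r: 204, g: 204, b: 204 }, { r: 255, g: 255, b: 255 });
console.log(lightGreyOnWhite.toFixed(2), lightGreyOnWhite >= 4.5 ? 'passes' : 'fails');

// ...while dark grey (#333333) text on white passes comfortably.
const darkGreyOnWhite = contrastRatio({ r: 51, g: 51, b: 51 }, { r: 255, g: 255, b: 255 });
console.log(darkGreyOnWhite.toFixed(2), darkGreyOnWhite >= 4.5 ? 'passes' : 'fails');
```

Running this reproduces what the WebAIM checker reports: roughly 1.6:1 for light grey on white (a fail) and about 12.6:1 for dark grey on white (a comfortable pass). For large text, WCAG AA lowers the threshold to 3:1.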

  • Let’s talk accessibility: why we need proxy users

    Have you ever been in a situation where you’re keen to test the accessibility of a service, but your target users haven’t communicated any accessibility needs? Sree (Sreemoyee), our Principal User Researcher, discusses how you can advocate for diverse user needs and ensure inclusive design on your projects. In a recent project, our data-fluent user group did not declare any accessibility needs, which led our team to consider skipping accessibility tests. Recognising the importance of catering to future users with accessibility needs and staying ahead of evolving user requirements, I turned to an ‘Accessibility Lab’, a database of proxy users with accessibility needs curated by our client’s User Centered Design (UCD) team. Who are proxy users in the context of accessibility testing? Proxy users, though not part of the primary user group, share comparable digital skills and accessibility needs that make them useful contributors to inclusive design. For my education-centric project, the Department for Education (DfE) Accessibility Lab was the ideal resource, featuring primarily teachers as proxy users who had signed up to be contacted for accessibility testing. Importantly, these teachers were not users of the service we were testing, ensuring unbiased perspectives without preconceptions. Venn diagram illustrating the intersection of Target users and Proxy users, highlighting shared traits in the overlapping area: comparable digital skills and accessibility needs. How I prepared for accessibility testing with proxy users: hot tips We opted for remote testing to accommodate the preference and availability of the proxy users. This decision necessitated adjustments to ensure effective testing. Clearly communicating the necessary information I communicated with the participants through emails and video calls, reassuring them that no prior knowledge of the service was necessary. Before the remote testing sessions, I provided them with the project background, outlining the goal of evaluating service accessibility. Throughout, I encouraged open communication, emphasising to participants that we are testing the service and not them, encouraging candid and honest feedback. Tailoring the usability tests It was important to familiarise myself with the specific accessibility needs of the proxy users to understand each person’s unique requirements. When testing with a participant with dyslexia who reported finding traditional text-heavy interfaces challenging, I asked them to describe their current environment and any assistive technologies they might use for dyslexia. During the test, I focussed on their interaction with fonts, line spacing, and visual cues to assess their content comprehension. Crafting guided interactions In remote sessions, I asked participants to use their main device and specified the browsers. Recognising potential challenges faced by proxy users who are unfamiliar with the service, I provided extra guidance and prompts, to enhance clarity in task understanding. For example: Original prompt: “Start the data submission journey and go through it as you normally would.” Guided prompt: “Start the data submission journey by selecting option x on the homepage, and if you encounter any difficulties, feel free to ask for guidance.” Observing and enquiring As the remote setting made it more difficult to pick up on non-verbal cues, I used screen-sharing tools to observe participants’ facial expressions and gestures as they navigated through the webpages. 
I encouraged them to think out loud and share their preferences and dislikes. With their consent, I recorded the sessions for later review. I observed closely for signs of difficulty and asked open-ended questions, such as: “How did you feel navigating through that section?” “How would you describe your experience using this feature?” Engaging with empathy Mindful of potential challenges faced by users with cognitive impairments, I approached remote testing with patience and empathy. I gave extra time for understanding, adjusted the testing environment based on their real-time feedback, and strategically built in breaks and buffers within the testing schedule. One participant made what was my favourite request: “Mind if I take a break to cuddle my cat?” Using relevant tools and technologies I facilitated the use of tools and assistive technologies as per user need to make the testing process smoother and more accurate. During a session, noting the need for screen magnification, I provided proxy users with the option to adjust the interface’s font size and contrast settings. Would I recommend accessibility testing with proxy users? Absolutely. The Project Leads observed these research sessions firsthand and described them as “eye-opening” and “fascinating”. But why? The pros of accessibility testing The benefits of conducting accessibility testing with proxy users are nuanced and varied: Tech-debt mitigation In the absence of actual users with declared accessibility needs, accessibility testing with proxy users encourages the adoption of inclusive design and development practices from the outset - the foundation that a truly user-centered service is built upon. In testing, visually impaired users highlighted issues with cluttered screens and excessive scrolling. Their feedback revealed that the approach of cramming information into a small screen made it hard for users with visual challenges to understand the content. Frustrated user staring at a laptop, stating: ‘A busy screen is hell.' The insight from users with accessibility needs, together with feedback from our target users, prompted us to simplify the homepage, making it cleaner and more straightforward, reducing cognitive load. We validated these changes through further testing to ensure enhanced usability. Proxy users, with their unique needs, enable us to spot and fix accessibility issues early, helping avoid the accumulation of technical debt and costly retrofits later in its development journey. Ethical inclusivity Engaging with diverse users is vital for inclusivity. When real users don’t declare accessibility needs, proxy users guide us in understanding diverse experiences. It’s not a checkbox exercise; it’s our ethical duty to ensure digital services are equitable for everyone. During testing, one proxy user emphasised the importance of truly grasping diverse user needs, stating: “I want options, not assumptions… It’s awfully good of you and your team to reach out to understand my experiences.” A proxy user stating “I want options, not assumptions.” Enhancing user experience through unbiased perspectives Proxy users, especially those unrelated to the service or product being tested, bring a fresh perspective to the table. They offer insights without the bias of prior knowledge or experience, helping us see our product objectively. Their feedback acts as a powerful tool to uncover potential blind spots and create a more user-friendly experience. 
Compliance with accessibility standards Conducting accessibility testing, alongside accessibility audits, helps us meet the Web Content Accessibility Guidelines (WCAG) 2.2, which is based on 4 design principles: perceivable, operable, understandable, and robust. A four-piece jigsaw puzzle representing the four design principles: perceivable, operable, understandable, robust. By structuring the guidelines around principles rather than specific technologies, WCAG accentuates the need to understand how people interact with digital content. This helps ensure the service is accessible, identifies areas for improvement, and reduces legal risks, while promoting ethical design and development practices. Specific educational insights In the case of our education-focussed project, testing with the proxy users who were primarily teachers gave us valuable insights into the unique accessibility needs of education providers. Their feedback enabled us to develop and refine our service to align with the real needs of those in the sector. The cons of accessibility testing with proxy users While the benefits of involving proxy users are significant, it’s essential to acknowledge potential risks: Representation gap Proxy users, while sharing comparable accessibility needs, may not fully represent the experiences of the target user group. To address this, it’s essential to complement proxy user insights with targeted feedback from users with disabilities to bridge the representation gap. Availability Finding suitable proxy users for recruitment can be a challenge, potentially causing testing delays. In my project, this risk was mitigated by leveraging the client’s Accessibility Lab, a database of proxy users, which was readily available, preventing potential recruitment challenges and minimising testing delays. Intermediary role Proxy users, as intermediaries, may unintentionally filter or misunderstand information because they might not fully grasp the nuances of the target user group’s experiences. To counter this, I structured testing sessions with extra guidance and prompts to minimise the risk of misinterpretation. In conclusion Effective leveraging of proxy users in accessibility testing requires a balanced approach. While their insights are invaluable for inclusive design and early issue detection, it’s important to supplement their feedback with testing from actual users with disabilities whenever possible. Combining both approaches ensures a thorough evaluation of accessibility and usability. See you folks on the inclusive side! Key takeaways Inclusive design: Proxy users can play a crucial role in ensuring inclusive design for diverse user groups, especially when there are no declared users with accessibility needs in the user research pool. Strategic decision-making: Gaining insights into the accessibility needs of a diverse audience can enable data-driven, informed choices. Communication is key: Clear communication before and during testing sessions, and encouraging open feedback, creates a conducive testing environment. Tailoring testing sessions: Adapting usability tests to address specific accessibility challenges enables a focused assessment of user interactions with the service. Testing with empathy and flexibility: Prioritising users’ needs and conducting tests with patience and empathy are crucial.
Maintaining a balanced approach : While proxy user insights are invaluable, supplementing feedback with testing from actual users with disabilities ensures a comprehensive evaluation of accessibility and usability. Useful resources Understanding WCAG 2.2 WCAG 2.2 Map Testing for accessibility Contact information If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch . This article was originally posted by Sree on medium.com .

  • WCAG 2.2 one year on: Impact on government services

WCAG 2.2 one year on: Impact on government services by Ayesha Saeed More than a year after the release of WCAG 2.2, what should you be doing as a government service? Ayesha, one of our Accessibility Leads, answers some key questions you may have about how to implement WCAG 2.2 if you haven't already started. Overview: What is WCAG? Overview of the changes What are the new guidelines? Key questions on WCAG 2.2 Looking forward Useful resources What is WCAG? The WCAG (Web Content Accessibility Guidelines) (opens in a new tab) are universal guidelines that are used by public bodies to ensure accessibility is built into digital services. The WCAG guidelines are broken down by levels: Level A: Must do, basic requirements (legally required for public sector). Level AA: Must do, removes further significant barriers (legally required for public sector). Level AAA: Specialised support, most comprehensive. Meeting the WCAG guidelines is one part of meeting legal accessibility requirements as a government service (for both public and internal users). Check out Piya’s article on government requirements (opens in new tab) from earlier in our accessibility series for details. You can also see understanding accessibility requirements for public sector bodies (opens in new tab) for a comprehensive breakdown. Overview of the changes The latest official version of WCAG 2.2 was published on 5th October 2023. This replaces the previous version, 2.1, which was published in 2018. WCAG 2.2 builds on and is compatible with WCAG 2.1, with added requirements. One success criterion, 4.1.1 Parsing, was removed in WCAG 2.2 as it was deemed redundant. WCAG 2.2 also addresses aspects related to privacy and security in web content. There are 9 further A, AA and AAA guidelines to be aware of, including focus management, dragging movements, target size, consistent help, redundant entry, and accessible authentication. 6 of the new criteria are at A and AA level, which is what government services are legally required to meet for WCAG 2.2, bringing the total of A and AA guidelines to 55. You can see the full details of the changes on the W3 website for the WCAG 2.2 introduction (opens in new tab). What are the new guidelines? Level A and AA: 2.4.11 (AA): Focus Not Obscured (Minimum): focus states must not be entirely hidden. A graphic of a good example of two popup bubbles overlapping. You can partially see the focus on the popup behind. 2.5.7 (AA): Dragging Movements: functionality must not rely on dragging. Alternatives such as buttons for left and right should be provided. A graphic of a good example of a dragging function, with left and right arrows on either side. A hovering mouse shows how you can use the buttons and the dragging feature. 2.5.8 (AA): Target Size (Minimum): there can only be one interactive target in a 24px by 24px area. A graphic of a good example of icons where there is only one interactive element in a 24px by 24px area. 3.2.6 (A): Consistent Help: help mechanisms must appear in the same place on each page. A graphic of a good example of two screens next to each other, with the help function located in the same top right-hand corner on both. 3.3.7 (A): Redundant Entry: users must not be required to re-enter the same information, unless essential, such as for security purposes. Where the same information is genuinely needed twice, provide an option to auto-populate it.
Level AAA:

2.4.12 Focus Not Obscured (Enhanced) (AAA): no part of the focus indicator may be hidden.
[Image: two popup bubbles that do not overlap, so the focus on each is fully visible.]

2.4.13 Focus Appearance (AAA): the focus indicator must have a contrast ratio of at least 3:1 and be at least 2px thick around the focused item.
[Image: a button with a clear black focus outline on a light grey background, meeting the 3:1 contrast and 2px thickness minimums.]

3.3.9 Accessible Authentication (Enhanced) (AAA): authentication must not require a cognitive test, with no exceptions. As at AA level, supporting password managers means a user doesn't have to remember or transcribe information to authenticate.
[Image: an authentication form with no cognitive test or CAPTCHA to log in.]

Key questions on WCAG 2.2

Q1: Does meeting WCAG 2.2 'break' my accessibility progress?
No: a site that meets WCAG 2.2 will also meet 2.1 and 2.0.

Q2: When do I start building and testing for WCAG 2.2?
Testing your service against WCAG 2.2 should be incorporated as soon as possible if you haven't already started. You should aim to conduct regular accessibility testing (manual, automated and against assistive technologies) so you can maintain an accurate understanding of how compliant your service is and prevent any surprises when it comes to a yearly audit. Do not rely solely on an annual audit, as it is only a snapshot in time and does not reflect ongoing maintenance of accessibility. If it has been at least a year since your service was last audited, or it was audited against WCAG 2.1, you will need to conduct an audit again. You should also continuously conduct usability testing to ensure your service is meeting the needs of real users, not just WCAG.

Q3: Do I need to update my Accessibility Statement?
You should reassess your service for WCAG and other legislative compliance every year, and update your accessibility statement to reflect this. As it is now over a year since WCAG 2.2 was released, all services should be testing against the WCAG 2.2 guidelines and updating their accessibility statements accordingly.

Q4: When will GDS start monitoring?
The GDS monitoring team started testing sites against the new WCAG 2.2 success criteria from 5th October 2024. Find out more at changes to the public sector digital accessibility regulations (opens in new tab).

Q5: When will the GOV.UK Design System be updated?
The GOV.UK Design System team have reviewed WCAG 2.2 (opens in new tab) and updated the design system, with the changes included in GOV.UK Frontend v5.0.0 (opens in new tab). They have also provided guidance on how to meet WCAG 2.2 and which components, pages and patterns are affected.

Q6: How is my automated accessibility testing impacted?
You should continue to use automated tools such as pa11y and axe-core to support testing in build pipelines. With axe-core you can tag which level you want your tests to run against, so make sure you include the WCAG 2.2 tags to cover the new guidelines. Find out more at Axe-core 4.5: First WCAG 2.2 Support and More (opens in new tab). Semi-automated tools such as WAVE and axe can still be used to pick up some accessibility issues. Automated and semi-automated tools do not cover all WCAG 2.2 guidelines, so it is important to continue to test manually, with assistive technology and with real users.
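As a rough illustration of the tagging approach described above, here is a minimal sketch using axe-core in a browser test context. It assumes axe-core 4.5 or later (the first release with WCAG 2.2 rules); the tag names used ('wcag22aa' and the earlier level tags) come from axe-core's own conventions rather than this article, so check them against your installed version.

```ts
// A minimal sketch, assuming axe-core >= 4.5 is loaded in the page under
// test (for example injected by a pipeline test runner).
import axe from 'axe-core';

export async function runWcag22Scan(): Promise<void> {
  const results = await axe.run(document, {
    runOnly: {
      type: 'tag',
      // 'wcag22aa' scopes in axe-core's WCAG 2.2 AA rules; the earlier
      // tags keep the existing 2.0/2.1 checks in the run as well.
      values: ['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa', 'wcag22aa'],
    },
  });

  if (results.violations.length > 0) {
    // Surface the failures so the build step can fail loudly.
    console.error(JSON.stringify(results.violations, null, 2));
    throw new Error(`${results.violations.length} accessibility rule(s) violated`);
  }
}
```

A similar scan can be wired into a pa11y run via its configuration; either way, remember the caveat above that automated tools only cover a subset of the WCAG 2.2 criteria.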
Looking forward

WCAG 3.0 (opens in new tab) is currently a Working Draft. It aims to provide guidance for building for users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. WCAG 3.0 also aims to support a wider range of web content on desktops, laptops, tablets, mobile devices, wearable devices and other web-of-things devices. Content that conforms to WCAG 2.2 A and AA is expected to meet most of the minimum conformance level of the new standard, but since WCAG 3 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance.

Ensuring you factor in regular maintenance is paramount to keeping accessibility up to date. And remember, WCAG does not cover every scenario: test with your users and conduct regular user research.

Useful resources

WCAG 2.2 and what it means for you (Craig Abbott) (opens in new tab)
Obligatory WCAG 2.2 Launch Post (Adrian Roselli) (opens in new tab)
What WCAG 2.2 means for UK public sector websites and apps (GDS - YouTube) (opens in new tab)
Testing for WCAG 2.2 (Intopia - YouTube) (opens in a new tab)
WCAG 2.2 Explained: Everything You Need to Know about the Web Content Accessibility Guidelines 2.2 (opens in a new tab)

Contact information

If you have any questions about our accessibility services or you want to find out more about other services we provide at Solirius, please get in touch.
