

  • AI in action 1: Supporting service teams through the Service Standard

    AI in action 1: Supporting service teams through the Service Standard
    by Matt Hobbs

    As digital public services evolve, so must the tools we use to build them. This series explores how Artificial Intelligence (AI) can responsibly support UK government service teams in meeting the Government Digital Service (GDS) Service Standard. From user research to accessibility testing, performance monitoring to service assessments, we’ll examine where AI can complement human expertise, enhancing delivery without compromising trust, transparency, or inclusion.

    Welcome to a series exploring how AI can support UK government service teams in meeting the GDS Service Standard. As digital public services continue to evolve, so too must the tools and methods used to build them. Applied thoughtfully and responsibly, AI has the potential to enhance delivery, improve user outcomes, and free those working in government to focus on what matters most: meeting real user needs.

    This series will explore how AI can support service teams at every stage of the service lifecycle, from discovery to live, and how it can complement the Service Manual’s practical guidance. Whether through natural language processing, data analysis, accessibility testing, or performance monitoring, we’ll consider both current capabilities and future possibilities. This is not a call to automate everything, nor to substitute human judgement, but to embrace new tools in a way that strengthens delivery and accountability across government. Before we continue, let me cover a couple of important points.

    What is the Service Standard?

    The UK Government Service Standard is a set of 14 points designed to help teams create and run effective, user-centred digital services. Maintained by the Government Digital Service (GDS), it ensures that public services are accessible, efficient, and meet user needs. The standard promotes practices such as understanding users, using agile methodologies, testing services with real users, and making services secure and accessible. It is used throughout the development lifecycle to ensure quality and consistency across UK government digital services.

    What is the Service Manual?

    Some readers may never have heard of the Service Manual, so here is a brief history and overview. The UK Government Service Manual was introduced as part of the Government Digital Service (GDS) initiative, launched in 2011 to improve digital public services. Continuously updated, it reflects evolving best practices and legal requirements, ensuring government services remain effective and accessible for all users. The Service Standard and the Service Manual are the foundations for what you need to complete a Service Assessment.

    What is a Service Assessment?

    A UK government Service Assessment is a structured evaluation process designed to ensure that digital services meet government standards before they go live or progress through key stages of development. UK government services typically go through 3 key Service Standard assessments:

    1. Alpha assessment
       - Conducted at the end of the Alpha phase
       - Focuses on whether the service team has researched user needs, developed and tested prototypes, and has a plan for the Beta phase
       - Core evaluation criteria: user research, design, technology choices, and feasibility

    2. Beta assessment
       - Conducted at the end of the Beta phase
       - Evaluates whether the service has been tested with users, can handle expected demand, and meets accessibility and security standards
       - Some departments may also run a private beta for certain services, testing them with a small group of invited users; in some cases, a service may remain in Beta for an extended period
       - Core evaluation criteria: performance, scalability, accessibility, data security, and readiness for live deployment

    3. Live assessment
       - Conducted before a service moves from Beta to Live (full public availability)
       - Ensures the service is sustainable, meets user needs, and is continuously improved
       - Core evaluation criteria: performance monitoring, governance, data management, and ongoing user feedback integration

    Service Standard criteria

    Each assessment evaluates against the 14 Service Standard points, which include: understanding user needs; designing for everyone (inclusivity); making the service simple and accessible; using open standards and scalable technology; and ensuring security and privacy. To progress to the next stage, service teams must pass these assessments. If unsuccessful, they are expected to resolve the issues highlighted and reapply for a future assessment.

    Wrapping up

    Some people might see using AI in the Service Standard and Service Assessment process as “cheating”: if AI does all the work, what’s left for the service team to do? In reality, AI is a tool to help things run more efficiently and save the UK government time and money; it is not about replacing human expertise. It is also important to remember that AI can sometimes get things wrong (what’s called a “hallucination”), so it is critically important that teams sense-check what AI produces instead of accepting it at face value.
    Now that we’ve outlined the purpose and structure of the Service Standard and the role of service assessments, we’re ready to dive into the practical side: where and how AI can help. In the next post, we’ll begin exploring each of the 14 Service Standard points in turn, starting with what is arguably the most critical: understanding users and their needs. We’ll look at how AI can assist user researchers, support data analysis, and improve how teams gather insights, without losing the nuance or empathy that human researchers bring. So please stay tuned!

    About the author

    My name is Matt Hobbs, Principal Engineer (Frontend) and Guild Lead at Solirius Consulting, currently embedded in HMCTS. Before joining Solirius, I spent six years at GDS, leading on frontend development and shaping strategy across accessibility, performance, and digital best practice. I also wrote a series of blog posts documenting the performance improvements made to GOV.UK, covering everything from HTTP/2 and jQuery removal to Real User Monitoring. They are well worth a read if you’re interested in practical, real-world frontend engineering in the public sector:

    - Why we focus on frontend performance
    - Speeding up GOV.UK with HTTP/2
    - How GDS improved GOV.UK’s frontend performance with HTTP/2 (Case Study)
    - Making GOV.UK pages load faster and use less data
    - How Real User Monitoring will improve GOV.UK for everyone
    - What we’ve learned from one year of Real User Monitoring data on GOV.UK
    - The impact of removing jQuery on our web performance
    - A Request For Comments (RFC) for enabling HTTP/3 on GOV.UK

    Contact information

    If you have any questions about our AI initiatives, Software Engineering services, or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).

  • Solirius Reply attends the Reply Xchange

    Ayan Kar and Hamid Ali-Khan presenting at the Reply Xchange

    Earlier this month, Solirius attended our very first Reply Xchange, a high-energy event designed to explore the latest in technology, innovation, and digital experience. Hosted by Reply, the day brought together clients, partners, and teams from across the network for a packed programme of expert talks, interactive demos, and collaborative discussion. The goal: to connect people and ideas, share what’s working, and inspire bold thinking for the future.

    Solirius was proud to contribute by presenting on the role of AI in delivering complex data migrations, a critical enabler for transformation programmes across government. We showcased how AI can enhance the accuracy, speed, and scale of migrations, reduce manual effort, and improve long-term data quality and governance. Our talk, delivered by Ayan Kar (Data Engineering Lead) and Hamid Ali-Khan (Head of Engineering), focused on some key AI themes:

    - The critical importance of modernising applications
    - How AI can significantly enhance data migration accuracy and efficiency
    - The use of AI tools to improve development productivity
    - The importance of decommissioning legacy systems to achieve the best outcomes in data migrations

    The Xchange left a strong impression on the Solirius team in attendance, leaving them not only energised by what’s possible, but more connected to the broader Reply community. It’s clear there’s real momentum, and we’re motivated to accelerate our AI capabilities, deepen collaboration across the network, and adapt innovative Reply solutions to better serve the needs of our public sector clients. From intelligent data services to AI-assisted delivery and decision support tools, we see a huge opportunity to unlock value and deliver lasting impact through thoughtful, human-centred innovation.
    Members of the Solirius Reply team at the Reply Xchange

    Contact information

    If you would like to see the full presentation or speak with our Engineering or AI practitioners on how we can support your transformation efforts, please reach out to Ayan Kar or Hamid Ali-Khan via our contact form here (opens in a new tab).

  • Lessons in accessibility: A day at the DfE Accessibility Lab and conversations with the experts

    At the DfE Accessibility Lab, our colleagues Sree (User Researcher) and Claire (UX Designer) explored how assistive technologies are used, and where they can fall short when services aren’t designed with everyone in mind.

    One crisp spring morning, as the sun finally pushed through the grey weight of winter, a user researcher, Sree, travelled from Newcastle and an interaction designer, Claire, journeyed from London, converging in Sheffield. Their destination: the Department for Education’s (DfE) Accessibility Lab. Their goal: to understand how digital services function for those who navigate the world differently.

    Inside the Accessibility Lab: Where digital barriers become visible

    From left to right: Claire, Sree and Jane at DfE’s Accessibility Lab, Sheffield

    We expected a technical demonstration: a run-through of tools and accessibility best practices. What we got was something much more human: a window into the lived experience of those who rely on assistive technologies daily. Guided by Jane Dickinson, an accessibility specialist at DfE, we explored tools like Dragon, JAWS, ZoomText, and Fusion. Jane not only explained how they work but showed us how easily they can fail when services aren’t built with accessibility in mind.

    Insights from testing with assistive tools

    Dragon: Voice recognition for hands-free navigation

    Dragon voice control lets users navigate computers hands-free. But if clickable elements aren’t properly coded as buttons, Dragon can’t find them. Jane demonstrated how Dragon struggled with buttons on a DfE service and the BBC homepage because they weren’t coded as such. Dragon couldn’t recognise the “click button” command because the button was invisible to the tool, highlighting a major gap between design and code.

    JAWS: Screen reader for non-visual navigation

    JAWS relies on well-structured content: heading levels, labelled buttons, and descriptive links.
    Jane showed how generic links like “Read more” or “Download” confuse JAWS users: without individual distinction or ARIA labels, browsing becomes chaotic and frustrating. As Jane put it: “If a page isn’t structured properly, it’s a nightmare to navigate.”

    ZoomText: For low vision users

    ZoomText is a magnification tool that helps users navigate visually. However, it requires users to hover over or click on links to have them read aloud, unlike JAWS, which reads automatically. At higher magnification, text can become distorted where the page has not been coded to handle zoom, affecting readability.

    Fusion: Combining JAWS and ZoomText

    Fusion combines magnification of up to 20x with auditory feedback, supporting individuals with partial vision loss. But Jane showed us that even 3x zoom can cause layout issues, like pixelation and clipped content, especially when sites don’t reflow content properly.

    Keyboard-only navigation

    Keyboard navigation is essential for users who can’t use a mouse, relying on the keyboard alone and shortcuts such as the Alt key. But inconsistent implementation makes things harder. Jane pointed out unmarked buttons on the BBC homepage that would leave keyboard users guessing: “If something isn’t labelled properly, it just gets skipped over.”

    Captions for hearing impairments

    Captions aren’t just for deaf users; they help everyone. But live captions often lag, making comprehension harder. Testing BBC video content, we saw captions fall out of sync with speech, making it difficult for a user to keep track.

    Experiencing the world through the eyes of others

    Sree and Claire testing visual simulation glasses

    As part of our lab experience, we tested simulation glasses that alter vision, giving a general insight into conditions like:

    - Cataracts: everything looks blurred.
    - Tunnel vision: loss of peripheral vision, reducing situational awareness.
    - Left-sided hemianopia: half the visual field disappears, common after strokes or brain injuries.

    It was an insightful reminder of how much of the digital world can become difficult to use under these conditions, and of how inclusive, thoughtful design can prevent the digital barriers some users face.

    N.B. While simulation glasses offer a glimpse, they can’t replicate the full experience of visual impairment. They’re a starting point for empathy, not a substitute for speaking with and learning from real users who experience visual impairments.

    The Visual Impairment North-East (Vine) Simulation Package

    In conversation with Accessibility Experts

    To deepen our understanding of accessibility, we interviewed Jane Dickinson and Jake Lloyd, two key accessibility specialists at DfE, to hear their insights. Jane’s biggest frustration? Accessibility being bolted on at the end. “It’s not enough to test for accessibility. Real users need to shape the design from the beginning.” She also highlighted how many users hesitate to disclose their accessibility needs for fear of being seen as difficult. Even when reports are written to improve accessibility, they often go ignored: “I can spend a whole day writing a report, and sometimes nothing changes.”

    Despite these challenges, Jane celebrated the wins, like a blind user who was able to access their payslip independently for the first time: “One of our blind users told me, ‘For the first time, I didn’t have to ask someone to read my payslip. I could do it myself.’ That made all the work worth it.” Even small changes like properly marking up PDFs or labelling buttons have a huge impact and can make a service more accessible.

    Jake emphasised the importance of building for keyboard navigation and screen readers from the very start.
    “There are so many accessibility issues that come from not thinking about keyboard accessibility… It affects focus, visibility, and how well voice and assistive tech tools work.” He highlighted issues like repetitive, unclear links in patterns such as “Check your answers”: “Something like the ‘Check your answers’ pattern has links that just say ‘Change’… If you’re just using a screen reader and you’re navigating through a bunch of links… you’re only going to hear ‘change’. So providing some hidden screen reader text, giving more context to that link, can be really helpful.” This was another thoughtful reminder that different users read pages differently, and not everyone can see the visual context around written content.

    A holistic approach to accessibility

    The accessibility specialists broke down their layered approach to testing the accessibility of services:

    - Automated testing to catch common issues early.
    - Manual testing using only a keyboard or different zoom levels.
    - Assistive tech checks like screen readers and voice controls.
    - Code reviews to ensure correct HTML and component use.

    As Jake put it, accessibility goes beyond the Web Content Accessibility Guidelines (WCAG) standards: “I’ll also record issues that don’t fail WCAG but still create barriers, like having to tab 30 times to reach an ‘apply filter’ button.” Jake warned against treating accessibility as an afterthought: “Where teams haven’t thought about accessibility and inclusive design up front and early on, complex issues tend to come out of that.”

    Not boring. Not optional.

    A myth Jake wants to debunk is that accessible design equals boring design. “You can still be innovative. Your website can look good and be accessible if you plan it that way from the start,” he said. “Unfortunately, some organisations continue to treat accessibility as an afterthought, which remains a cultural issue.”
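    Jake’s point about hidden screen reader text also lends itself to simple automated checking. The sketch below is a minimal, hypothetical Python helper (not a DfE or HMCTS tool): it collects each link’s accessible text, including any visually hidden span, and flags generic labels such as a bare “Change”. The `govuk-visually-hidden` class in the sample markup is the GOV.UK Design System’s convention for text that screen readers announce but sighted users don’t see.

    ```python
    # Minimal sketch of an automated link-text check (hypothetical helper).
    # It gathers the accessible text of every link, hidden spans included,
    # and flags generic labels that give a screen reader user no context.
    from html.parser import HTMLParser

    GENERIC_LABELS = {"change", "read more", "download", "click here"}

    class LinkTextCollector(HTMLParser):
        """Collects the text content of each <a> element."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._in_link = False
            self._buffer = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._in_link = True
                self._buffer = []

        def handle_data(self, data):
            if self._in_link:
                self._buffer.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._in_link:
                self._in_link = False
                # Normalise whitespace in the accumulated link text.
                self.links.append(" ".join("".join(self._buffer).split()))

    def flag_generic_links(html: str) -> list[str]:
        """Return link texts that would read as context-free to a screen reader."""
        collector = LinkTextCollector()
        collector.feed(html)
        return [text for text in collector.links if text.lower() in GENERIC_LABELS]

    # Two 'Check your answers' rows: the first link is just 'Change'; the
    # second adds visually hidden text, so it is announced as
    # 'Change email address' and is not flagged.
    sample = (
        '<a href="/name">Change</a>'
        '<a href="/email">Change'
        '<span class="govuk-visually-hidden"> email address</span></a>'
    )
    print(flag_generic_links(sample))  # prints ['Change']
    ```

    A real audit tool would also need to account for `aria-label` and `aria-labelledby` attributes, which override a link’s visible text; this sketch only shows the underlying idea.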
    Our specialists pointed out that advocacy and awareness are key to changing this mindset: “Having people with actual lived experience that can demonstrate the way that they interact with digital content can be really powerful… Here’s someone who is blind. They use a screen reader to navigate your service, and they can’t do it.” They stressed that one in four people have a disability. Can you afford to turn them away with inaccessible services?

    Why accessibility matters for everyone

    Jane and Jake made it clear: accessibility isn’t just for disabled users. It benefits all of us. Captions help on a noisy train. Good contrast helps in bright light. And if zooming to 400% breaks your layout, it’s not just low vision users who suffer. “If it’s not thought about up front, then it affects a lot of people.”

    Accessibility isn’t a task; it’s a mindset

    As user researchers and designers, we focus on how people interact with digital services. But in Sheffield, we were not the experts; we were the students. This wasn’t about checking off accessibility guidelines. It was about understanding what happens when those guidelines aren’t met. A missing label, a broken heading structure, or an unlabelled button: these aren’t small issues. Each one determines who gets to participate and who doesn’t. Accessibility is also never ‘done’; it is an ongoing activity that requires the whole team’s input to maintain.

    As we left Sheffield, catching our trains to opposite ends of the country, we carried more than just knowledge. We carried a quiet but certain resolve to champion accessibility. The best accessibility work doesn’t “help” people. It supports their independence and ensures they don’t have to ask for help in the first place.
    Useful resources

    - Department for Education accessibility and inclusive design training
    - Making your service accessible: an introduction
    - Department for Education accessibility and inclusive design manual
    - W3C: Making the Web Accessible
    - W3Cx: Introduction to Web Accessibility
    - Sara Soueidan: The Practical Accessibility Course

    About the authors

    Sree is a Lead User Researcher specialising in uncovering user needs and delivering data-driven insights. A CDDO-DfE trained Service Assessor, she champions user-centricity and accessibility in government services. When she’s not diving into research, Sree can be found roaming the countryside with her husky, cooking up a storm, or curling up with a good book.

    Claire is a Senior User Experience Designer, specialising in interaction design. She advocates for accessibility and strives to bridge the gap between usability and inclusion. Outside of work, Claire enjoys exploring new places and experimenting with new recipes.

    Contact information

    If you have any questions about our research and design services or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).

  • Driving social impact: Solirius' pro-bono digital and design project with The Talent Tap

    Supporting charities through pro-bono digital and design services
    by Theone Johnson and Sam Smith

    At Solirius, we are committed to using our technical expertise to create a positive and lasting impact on society through our pro-bono work with charities.

    Supporting social mobility through user-centred design

    The Talent Tap is a social mobility charity that supports young people from areas with fewer opportunities, often rural and coastal areas where challenges are greater, by providing access to work placements and other professional opportunities and support that help shape their future careers. They also work with businesses to advocate for industry-wide change, implementing sustainable diversity and inclusion strategies with a focus on improving social mobility.

    As part of our Social Value initiative, we partnered with The Talent Tap to provide pro-bono digital and design support, identifying opportunities to improve the content and design of their website, attract potential corporate partners, and amplify their impact.

    The Design Team: Claire McShane (Designer), Louise Morales-Brown (User Researcher), Anna Rapp (User Researcher), Sam Smith (User Researcher), Lydia Davidson (Designer), Hattie Brash (Service Designer) and Theone Johnson (Project Manager)

    Our Solirius Design Team, an outstanding group of talented design consultants, used their expertise in user-centred design, content strategy and accessibility to make a meaningful impact for the charity while reinforcing Solirius’ commitment to its core social values.

    The aim

    The Talent Tap wanted their website to have a simple and professional feel to attract more interest and funding from potential corporate partners, while still maintaining a friendly undertone that communicated their USP as a youth-led charity.

    Our approach

    We worked closely with The Talent Tap team to enhance their website’s content and design, ensuring it aligned with their goals and vision.
    We took a holistic approach, equipping them with industry insights to refine their online presence and leveraging user research to identify ways the design could better meet user needs and their own strategic goals. This included:

    - Conducting a content audit to identify pain points and improvement opportunities
    - Carrying out an accessibility audit to ensure inclusivity for all users
    - Performing a competitor analysis to identify opportunities for differentiation
    - Mapping and improving the existing information architecture to create clearer user journeys
    - Facilitating collaborative design workshops to inform the new creative direction for the charity
    - Conducting user research and analysis to understand and design for the needs of their target audience
    - Applying industry standards to update the content and ensure clear, impactful messaging
    - Creating high-fidelity wireframes that aligned with The Talent Tap’s new brand kit
    - Adding clear wireframe annotations to guide the development phase

    Working examples

    Screenshots of examples of the project work carried out, including a Talent Tap planning workshop, competitor analysis, accessibility audit, content audit, a corporate partner persona and an information architecture map.

    The outcome

    We are proud to have supported The Talent Tap’s mission by enhancing their digital presence and helping them connect more effectively with their audience. Social mobility is a key focus for us at Solirius, and we’re grateful for the opportunity to help The Talent Tap expand their reach, bringing this important message to more young people and businesses across the UK. Thank you and congratulations to everyone who contributed to this project. The Talent Tap team is thrilled with the new website designs, and their CEO said:

    “Working with Solirius as a not-for-profit was a joy, both in terms of their patience and their incredible enthusiasm and knowledge.
    You have taken what was a clunky and standard not-for-profit website and turned it into a fully functional, user-centric asset. As a charity we simply could not have afforded the service provided.”

    Contact information

    If you have any questions about our design services or you want to find out more about other services we provide at Solirius, please get in touch (opens in a new tab).

  • Delivering a modern analytical platform for DfE

    We supported the Department for Education (DfE) to introduce a new data intelligence platform while focusing on user needs and better ways of working. Linda Souto Maior, service designer on the project, shares how we did this.

    The DfE Analyst Community plays a critical role in helping the department achieve its goal of enabling children and learners to thrive. Over time, analysts had come to rely on increasingly outdated, disparate and costly legacy systems. As the data and analytical sector moves towards cloud-based technologies, DfE wanted to build on existing ways of working and keep pace with emerging trends and new opportunities to support the Strategic Data Transformation programme.

    The vision was to build a new, joined-up service to help analysts find data and access cloud analytical tools, known as Analytical Data Access (ADA), named after the famous pioneer and mathematician Ada Lovelace. A key enabler for any major technology change is to design the service in a way that meets the needs of users while recognising the impact of the change on the organisation and, importantly, on the individuals affected.

    Challenges across the organisation

    There were a number of challenges across business and technology teams:

    - DfE’s Data Directorate had an immediate requirement to replace outdated platforms with high support and maintenance costs.
    - Users of data across the DfE Analyst Community could not always find what data was being stored, who owned it, how to get access and how to use it.
    - Users were limited by capacity and the difficulty of scaling without impacting others, and could experience delays in running data queries.
    - Duplication of the same data assets across different platforms, with no single source of truth, increased the costs, risks and effort of storing, managing and controlling access to data.
    - Experts in particular datasets found it harder to collaborate easily.
    - Daily log-ins to multiple systems were time-consuming.
    The approach to address the problem

    Working in blended teams, Solirius provided service design, user research, interaction design, business analysis, delivery and business change expertise to bring the service to live. The service was designed to bring together three underlying platforms: a data discovery platform to catalogue all DfE data, a data intelligence platform using Databricks on Azure with a Delta Lake for data storage, and a library of reports and dashboards.

    Key improvements included:

    - A single point of entry to reduce sign-ins across platforms.
    - Greater processing power for faster calculations and complex analysis.
    - Governance of data through a single request form to gain access to datasets.
    - Information about the service, support guides and access to tailored training designed in collaboration with analysts.
    - A single homepage to access all services.

    Outcomes and value added

    Alongside the technology challenge of deployment, we worked closely with users to overcome nervousness about the new platform. This included adapting the service but also improving communications:

    - Set up a super user group of analysts who helped with design input, testing and feedback on the service.
    - Worked with the supplier (Databricks) to integrate R Studio (third-party modelling software) based on user feedback.
    - Mapped out all data requirements and user flows to identify common pain points and avoid duplication of data prior to any data migration.
    - Developed a roadmap for data migration and changes to ways of working.
    - Ran frequent show and tells across the organisation and invited our analysts to demonstrate the use of tooling.
    - Engaged with analysts and the supplier to design training and support guides.
    - Co-designed a business change strategy to establish collective ownership of the change.
    - Worked closely across DfE departments and alongside other projects to ensure a joined-up service across all channels.
    - Provided flexible resourcing to meet the needs of delivery and budget constraints.

    The service continues to be rolled out across the organisation and is now used by 300 analysts, with 60 modelling areas migrated for 50+ analyst teams. Meanwhile, we continue to work with the Analyst Community and Data Directorate to improve and adapt the tool as new use cases come up. Long term, the service will save time, costs and effort through better collaboration and faster processing, and will also enable better use and governance of data.

    “As sponsor for the DfE Analytical Data Access (ADA) service I have been impressed with the calibre of the Solirius resources who have supported us in getting this ambitious programme off the ground. They have been key in helping us build multi-disciplinary squads and they have integrated seamlessly with our existing staff. Their expertise has brought shape and rigour to our work and enabled us to deliver a professional service that is growing in demand.” Patrick Healey, Deputy Director | Data Operations | Data

    Contact information

    If you have any questions about implementing new digital technology in your organisation or want to find out more about what services we provide at Solirius, please get in touch.

  • Providing data engineering services that support company growth

    Growth Intelligence specialise in helping companies grow, using AI and uniquely rich SME data to drive more revenue, reduce acquisition costs and increase conversion rates. To support their business model, Growth Intelligence were looking for assistance to:

    - manage existing infrastructure and scale data pipelines to handle ever-increasing amounts of data using contemporary cloud technologies
    - create and maintain bespoke applications to support the day-to-day activities of their data science and customer success teams
    - maintain code libraries, repos, apps and machine learning feature data.

    Setting up the specialist team

    Our specialised resourcing solution gives GI access to experienced, high-quality data engineers at a wide variety of levels. Proponents of agile delivery, our team adopts GI’s hybrid Scrum/Kanban approach, monitoring sprint progress using Kanban boards and running the full range of agile sprint ceremonies (daily stand-ups, retrospectives and sprint planning sessions). We collaborate with GI’s engineering and data science teams, using a stack of Python, Ansible, AWS, Tornado, Flask, Elasticsearch, Docker and Pandas. Working alongside the GI team and key stakeholders, our engineers’ work spans requirements gathering, technical spikes and OKR management.

    2 years of success and growing

    Solirius has worked with Growth Intelligence for over 2 years, and we are proud to continue our association, helping to maintain and improve the leading-edge services that GI provides to companies around the world.

    “The Solirius team are great additions to our engineering team. They are highly professional, effective at working independently (whilst also knowing when to seek clarification on requirements / design / architecture) and proactive in taking on projects / problem solving. They have integrated seamlessly into the team - which is great for us as a start-up as it enables us to have a single cohesive engineering team.
They have a genuine interest in helping us succeed and creating a friendly and enjoyable culture to work in." Prashant Majmudar, CTO at Growth Intelligence

  • Solirius pro-bono partnership with The Talent Tap

    As part of our social value initiative, we are providing pro-bono digital and design support to charities, and we are excited to announce our partnership with The Talent Tap in this endeavour.

    Who is The Talent Tap?

    This remarkable charity supports young people from socio-economically deprived areas, with a specific focus on coastal and rural regions, by providing access to professional opportunities and work placements that help shape their future careers.

    Why are we doing this?

    The social value initiative at Solirius reflects our commitment to using our digital and design expertise to create a positive and lasting impact on society. By offering our services pro-bono, we aim to support charities that might not have access to the resources they need to enhance their digital presence and improve their services for the communities they serve. Through our partnership with The Talent Tap, we are proud to support their mission of equipping young people with the tools and opportunities they need to realise their potential and shape their professional futures. We believe that everyone, regardless of background, should have access to opportunities that can define their careers, and we are excited to contribute to The Talent Tap’s efforts in driving positive change.

    What have we done so far?

    We’ve started working with The Talent Tap to identify opportunities to enhance the content and design of their website. Our approach includes conducting accessibility audits, competitor analysis, content reviews, and user research. This comprehensive strategy will guide future ideation, ensuring the platform meets user needs while highlighting the essential work they do.

    What is next?

    Over the coming weeks, we’ll continue working closely with The Talent Tap to shape the future design and user experience of their website, ensuring that any new concepts make a meaningful impact for the people they serve.
We look forward to sharing more updates on this exciting journey as we continue to work together!

Contact information

If you have any questions about our social value initiative, or you want to find out more about what services we provide at Solirius, please get in touch.

  • HMCTS QA Town Hall Recap: Monday, 14th October 2024

The Testing Centre of Excellence (TCoE) at His Majesty's Courts and Tribunals Service (HMCTS) had the privilege of being part of an incredible day of learning and collaboration at the QA Town Hall, run by Solirius. Our Test Lead Elizabeth Jones explains what we got up to.

Firstly, what is the Testing Centre of Excellence (TCoE)?

The purpose of the TCoE is to establish a centralised hub for standardised testing and quality processes across the organisation. We promote continuous improvement by fostering cohesive testing practices and ensuring that all members have access to the necessary resources and support. Through our dedicated platform, we offer a space for the testing community to connect, collaborate, and share knowledge, with access to a wide range of resources, including document templates, best practices, and training materials, all to enhance testing capabilities.

The three main goals of the HMCTS TCoE are:

Standardisation: to establish a centralised model where testing and quality processes are standardised across the whole of HMCTS. This centralised model will serve as a unified framework that sets clear guidelines, best practices, and standardised methodologies for testing and quality assurance activities.

Efficiency: to streamline workflows, reduce redundancies, and enhance the overall efficiency and effectiveness of our testing efforts.

Community: to connect testers from all jurisdictions and create a testing community. By sharing experiences, knowledge and training, we can bridge the gap in understanding and expertise, ultimately enhancing the quality and effectiveness of our testing processes across the entire organisation.

What did we do at the QA Town Hall?
It was a full day packed with insightful sessions, attended by 47 participants eager to enhance their knowledge of testing practices and get hands-on experience in our workshop.

Keynote speaker Christine Pinto: Playwright & AI in test automation talk

Christine is a respected figure in the QA community who shares her expertise in automation engineering globally through insightful articles and presentations. She delivered an engaging talk, giving us an in-depth look at the capabilities of Playwright and how AI tools can significantly enhance automation frameworks. The session covered practical ways AI can assist with coding, optimising test coverage, and improving the quality, reliability, scalability and security of our tests. We explored various AI tools that streamline development, reduce repetitive tasks, and boost efficiency, empowering teams to stay ahead of the curve. We also discussed essential guidelines to protect sensitive data, learning strategies to ensure that confidential information is never shared with AI tools, thus preventing potential data breaches and maintaining robust data security.

TCoE: Accessibility & screen reader testing workshop

This hands-on session, run by Ayesha Saeed (Accessibility Testing Lead), Philena Bremner (Accessibility/UX Consultant) and Piya Patel (Junior Accessibility Tester), highlighted the importance of accessibility in software testing. Attendees practised live screen reader testing of multiple websites and learned the essential steps to ensuring applications are inclusive for all users, bringing greater awareness about improving product and service accessibility standards. We then covered how to integrate axe-core into automated Playwright tests, a crucial step in ensuring earlier accessibility visibility (shift-left) by automatically detecting WCAG 2.2 violations when the CI/CD build pipelines are run.
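To give a flavour of that integration, here is a minimal sketch of an axe-core scan inside a Playwright test, using the @axe-core/playwright package; the URL and exact tag list are illustrative, not the workshop's actual configuration:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable WCAG 2.2 AA violations', async ({ page }) => {
  // Illustrative URL -- point this at the service under test.
  await page.goto('https://example.com/');

  // Scope the scan to WCAG A/AA rules, including the 2.2 additions,
  // so CI failures map directly onto the standard being reported against.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
    .analyze();

  // Failing the build on any violation is the shift-left gate.
  expect(results.violations).toEqual([]);
});
```

Run as part of the normal Playwright suite (e.g. npx playwright test) so the pipeline fails fast whenever a new accessibility violation is introduced.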
Attendees learned how automation can help to scale accessibility testing and build more inclusive and user-friendly digital services.

Showcasing the Testing Centre of Excellence (TCoE) website

Abigail Smith (QA Engineer) and I (Elizabeth Jones, Test Lead) showcased the HMCTS TCoE website, which has over 230 users and is designed to be a one-stop resource for QA professionals. We demonstrated how the platform can support testers with access to tools, templates, and expert advice, promoting efficiency and continuous learning.

Show and tell by the HMCTS Crime department: improving testing processes

We also had an insightful session from James Widdowson (Crime Test Practice Lead) from the Crime team at HMCTS, who showcased their journey to enhancing testing efficiency and reducing defects through a transformative approach. They shared valuable, actionable strategies for creating a streamlined, collaborative testing environment, detailing how they developed a reliable automation pack compatible with the existing Serenity framework. Their transition from Jenkins and Gerrit to Azure and GitHub, alongside the integration of API checks to verify message processing, substantially increased stability. As a result, they now run over 600 automated UI tests, achieving a pass rate consistently in excess of 90%, highlighting the positive impact of robust automation practices on quality outcomes.

A big thank you

A special thanks to the day's organisers and session hosts: Elizabeth Jones, Tom Bowker, Gareth Davies, Nimesh Patel, Josh Gray, Christine Pinto, James Widdowson, Abigail Smith, Ayesha Saeed, Piya Patel, Philena Bremner and Jake Gowler. Your hard work truly made this an engaging and impactful experience. Finally, we would like to extend a big thank you to PA Consulting for generously hosting the event! We truly appreciate your support.
The QA Town Hall was an incredible day of learning, collaboration, and growth for all involved. Thank you to all the participants for attending and actively contributing to the success of the event. We can’t wait for the next one!

Contact information

If you have any questions about our Quality Assurance services, or if you want to find out more about what other services we provide at Solirius, please get in touch.

  • Discovery to Beta: putting users at the centre of the design for new digital services in education

The Education and Skills Funding Agency (ESFA), sponsored by the Department for Education (DfE), brings together the former responsibilities of the Education Funding Agency (EFA) and the Skills Funding Agency (SFA) to create a single agency accountable for funding education and skills training for children, young people and adults.

Aligning with the department’s data strategy

Funding for educational institutions is delivered through various funding streams. The processes used to gather data for calculating the value and allocation of funds were time-consuming and complex. Teams had developed their own processes, on varying technology stacks, with limited consistency between teams. The production of datasets was one of the steps in the process to be digitally transformed, contributing to the department’s data strategy of reducing complexity and improving consistency. The objective of the Funding Data Service (FDS) project was to align the preparation of data with this strategy, whilst enhancing functionality from legacy technology solutions that were being decommissioned.

Introducing agile ways of working - Discovery and Alpha

For Discovery we deployed an agile multi-disciplinary team consisting of a user researcher, business analyst, service designer, data analyst and developer. In Alpha the team adopted Scrum methodologies and ceremonies (sprint planning, stand-ups, show and tells and retrospectives). A large amount of user research, business analysis and service design was needed during Alpha. User research consisted of interviews, contextual enquiries and user surveys to help develop user personas and to map user journeys. Interviews were conducted with small focus groups, starting with single teams and then moving on to larger multi-team meetings. This helped encourage richer discussion and alignment between different groups. The business analysis covered stakeholder analysis, process mapping and backlog development.
This work demonstrated that the service had a relatively small number of users, but the majority were experts in their discipline. Our focus, therefore, was to absorb as much of their domain knowledge as possible and to ensure that we had a good understanding of their current pain points.

“I always appreciated the FDS team following up on the feedback we provided as users because it felt like you were keen to build something that would work for us.”

Choosing the right technology

During Alpha the technical team carried out data modelling and technical spikes, and developed prototypes to prove our riskiest technical assumptions. For example, our first major technical challenge was to securely transfer data from a SQL Server instance behind an internal firewall to cloud-hosted MS Azure data storage. A technical spike was conducted to investigate the use of Azure services to do this, and desk research was conducted to understand all relevant security frameworks.

The technical stack consisted of:

Front end: Angular 12, HTML, JavaScript, CSS
Back end: .NET Core v5, microservices, Azure web services, Azure Functions
Data stores: SQL Server, Azure SQL Server, Azure Blob Storage, SQL SSIS packages, Azure Data Factory

At the conclusion of Alpha the team had:

validated that a digital service would help resolve the problem
identified the people and process change necessary for the new service
agreed the tech stack and developed an approach for delivering a Minimum Viable Service (MVS) in Beta

Developing the Minimum Viable Service

The MVS would encompass a digital system for sourcing, managing and publishing provider data, including integrations with other digital services. This would deliver extensive value for the client and enable the decommissioning of legacy functionality. Using Scrum and working in two-week sprints, the team established a regular delivery cadence that supported dependency and risk management at a programme level.
We adopted a behaviour-driven development (BDD) approach across the team (development, quality assurance, analysis and design) to refine the understanding of user needs and pain points. Early-stage wireframes were iterated to high-fidelity, ‘development ready’ designs based on user feedback collected in design working sessions. User stories incorporated Gherkin-syntax acceptance criteria to give both the development and quality assurance teams a clear understanding of the expected user experience. The quality assurance team deployed an ‘automation first’ approach to testing, improving consistency, frequency and efficiency in test execution.

Putting users first

“I did feel like I wanted to put the extra effort in for FDS as it felt you listened to me as a user of the service and actually took on board what we wanted”

Due to the seasonal nature of the users’ workload (peaks around term times), the timing of the MVS go-live date needed to coincide with the start of a new funding year to prevent operational disruption. Before the release of new functionality, the team conducted usability testing sessions with key users. This was critical to the product achieving user acceptance, and the feedback captured in a ‘near-live’ environment was analysed, refined and ultimately added to the product backlog as development-ready user stories. The team worked closely with users in group and 1-to-1 settings, and delivered regular ‘show and tell’ sessions with stakeholder groups including senior leadership, other digital services and potential future users. ‘Show and tells’ were used to drive a common understanding of the project’s progress and the service itself, and to capture input from a wider cohort. This helped to manage expectations and dependencies with other teams.

Growing the service

The goals were to deliver an MVS that would meet user needs, deliver value, prevent operational disruption, and create the foundation for future scaling and enhancement.
The MVS went live after 5 months of intensive work, supporting the delivery of £691 million in annual funding for 16-19 year olds. Following MVS, the team have:

transitioned to a hybrid live-support and development model, supporting day-to-day operations alongside the delivery of new functionality
released new functionality weekly, ensuring value is provided quickly and incrementally
onboarded new funding streams, meaning the service is now supporting the annual delivery of billions in education and skills funding

The team received excellent feedback throughout for their user-centred approach and were widely recognised as an exemplar of agile software development.

  • Supporting gender diversity at Solirius

On International Women’s Day 2024, Charlotte Morphet and Sarah Littlejohn explain why they set up and run the Gender Diversity Group at Solirius, an employee-led initiative to celebrate all genders, share experiences and discuss a wide range of topics related to women in technology.

Why gender diversity is important

The tech industry is known for its lack of gender diversity, and at Solirius we are passionate about making a positive change to grow and sustain diversity at all levels. We recognise that having a diverse range of experiences and viewpoints makes us a stronger, more well-rounded and innovative company. We have great representation of women at Solirius (34% vs the 20% industry average), so we wanted to create a community to empower all of our amazing female, non-binary and trans employees.

What our group does

To do this we launched our Gender Diversity Group in 2021, which has been going from strength to strength ever since! Our community is built to be an inclusive, supportive and positive space, where people of all genders are welcome, as we recognise that change must come from all directions. The focus of the community is to be a safe space where everyone can get together and share experiences. We meet up once a month at our Gender Diversi-tea sessions to discuss a wide range of topics that are important to us - and of course share a cup of tea 🫖. Participation is voluntary; employees are welcome to join (or leave) at any time. Every year we aim to build out a programme of prompts to focus our discussions and get everyone thinking.

Topics to discuss over tea

Over the course of the last year we discussed themes such as ‘Disrupting Stereotypes’, ‘Tackling Imposter Syndrome’ and ‘Intersectional Feminism’, gathering community-driven insights from our diverse opinions and experiences.
As part of our celebration of International Women’s Day, we are launching our next programme of discussion points, which we can’t wait to learn more about and share with the whole of Solirius. Our themes and prompts will include:

Gender & your career journey
The influence of gender on the creation of tech
The impact of tech on our understanding and treatment of gender

As part of these, we’re going to cover topics like menstruation, fertility, and menopause, safety impacts when designing applications, and how technology affects gender roles. As a group we are looking forward to continuing to grow over the next 12 months and helping to balance the gender gap in our industry!

What our members have to say

Here are a few quotes from our members about why they love being part of our community:

“I love the solidarity, community and opportunities to learn from others.” - Phoebe

“The GDG is a community I’m so proud to be a part of” - Sarah

“I love being a part of a non-judgemental, inclusive group, where you can learn and hear some thought-provoking stories and ideas. I think the Gender Diversity Group is a very valuable part of Solirius.” - Claire

Thank you.

Charlotte Morphet, Business Consultant & Sarah Littlejohn, Technical Delivery Consultant

  • The dark side of AI: algorithmic bias and its unintended consequences

By Harry Lloyd - Business Consultant

As AI continues to transform industries and our daily lives, we’re witnessing incredible innovation, but also facing significant ethical challenges. From biased algorithms to privacy concerns, AI’s impact isn’t always positive. In this article, Harry Lloyd explores the challenges of algorithmic bias and ways to mitigate it.

Overview

Introduction
What is algorithmic bias?
Real-world example of the harmful effects of algorithmic bias
Mitigating bias
Looking ahead

Introduction

Artificial Intelligence (AI) has become an integral part of our lives, from personalised recommendations on social media to cars that can practically drive themselves. Many industries, including the UK’s public sector, are improving with these emerging technologies. This is extremely exciting, but biases often occur in AI and, left unchecked, could lead to unintended consequences. In this article, we will explore the importance of addressing AI bias and share strategies for creating fair algorithms.

What is algorithmic bias?

Algorithmic bias occurs when algorithms are trained on biased data and then make decisions that systematically disadvantage certain groups of people. It’s like a hidden, unintended preference that sneaks into AI systems, which can lead to unfair outcomes and perpetuate social inequalities. Just as a teacher’s personal beliefs might influence how they present information, the data used to teach AI can carry its own biases, affecting the decisions it makes. Just because the information comes from a computer doesn’t mean the result is 100% truthful.

Algorithmic bias isn’t just a theoretical concept; it’s a tangible challenge that can impact crucial decisions in areas like education, criminal justice, and social services. For example, this issue could occur if you’re hiring a candidate for a role.
If your algorithm is trained on historical data that is oversaturated with certain demographics, it may then discriminate against applicants from underrepresented backgrounds. We need to take proactive steps to identify and eliminate these biases to ensure the algorithm’s fairness.

Mitigating bias

The very first step to avoiding these problems is awareness. We need to understand that bias is an issue and that it is important to talk about. People come with their own set of biases and limitations, which are influenced by different experiences and views. Bias is something that inherently exists in the human condition; once we understand this, we can begin to mitigate it. There is no easy fix or magic solution for addressing these issues to make AI completely fair and unbiased. It’s a complex challenge that can’t be solved with just technical tweaks. Fortunately, there are some key approaches that represent best practice and offer a path toward fair, morally sound, and beneficial outcomes that treat everyone fairly and justly.

AI transparency

AI transparency is the ability to examine inputs and outputs to understand why an algorithm is giving certain recommendations. Complex AI models, such as deep learning models, can lead to the black box problem. This refers to the difficulty in understanding and interpreting the internal workings of AI models. When the decision-making process is opaque, it becomes challenging to identify, correct, or mitigate biases. There are several techniques and approaches being developed to tackle this problem. One of these is Local Interpretable Model-agnostic Explanations (LIME), which offers a generic framework to uncover black boxes and provides the “why” behind AI-generated predictions or recommendations. You can also use saliency maps to help visualise the outcome. These highlight the regions of an input that most influence the model’s prediction, showing what the model focuses on.
Sound, transparent practice ensures that you can identify particular issues that may be causing problems. It’s like turning on the lights in a dark room filled with hidden obstacles: you can see the issues clearly and can then take steps to remove the bias.

Diverse datasets and development teams

It is important that the datasets we use to train algorithms are diverse and contain a wide array of data types. If we want less biased algorithms, we may need more training data on protected classes. A protected class refers to a group shielded from discrimination under the Equality Act 2010, such as race, gender, age or disability. Checking the algorithm’s recommendations across these classes would be a good indication of any discrimination.

Another key strategy is to prioritise diversity and inclusivity in the teams developing and training AI models. Diverse teams, both in demographics and skills, are vital to detect and combat AI bias. If many people have different perspectives, then issues around unwanted bias are more likely to be noticed and mitigated before deployment. These teams will benefit from establishing clear guidelines and ethical frameworks for AI development. Leading companies in the AI space, such as Google AI and Microsoft AI, have invested in fairness research and put together responsible practices for developing these tools. These guidelines should set the standard, emphasising fairness, transparency and accountability throughout the entire process.

Furthermore, ongoing monitoring and evaluation of AI systems (e.g. via regular audits) can help identify and rectify biases that may emerge over time. It is essential to collaborate with a diverse range of stakeholders, from experts in the field to social scientists and affected communities.

Looking ahead

Artificial Intelligence is a powerful tool, but it needs to be used properly. Algorithmic bias isn’t theoretical; it’s real and impactful.
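The regular audits mentioned above can start very simply, by comparing outcome rates across protected groups. The sketch below (TypeScript for illustration, with invented data; a real audit needs statistical rigour and legal input) computes per-group selection rates and a disparate impact ratio:

```typescript
// One recorded decision: which group the person belongs to, and the outcome.
interface Decision { group: string; selected: boolean; }

// Selection rate per group: selected count divided by group size.
function selectionRates(decisions: Decision[]): Map<string, number> {
  const totals = new Map<string, { selected: number; total: number }>();
  for (const d of decisions) {
    const t = totals.get(d.group) ?? { selected: 0, total: 0 };
    t.total += 1;
    if (d.selected) t.selected += 1;
    totals.set(d.group, t);
  }
  const rates = new Map<string, number>();
  for (const [group, t] of totals) rates.set(group, t.selected / t.total);
  return rates;
}

// Disparate impact ratio: lowest group rate divided by highest.
function disparateImpactRatio(rates: Map<string, number>): number {
  const values = [...rates.values()];
  return Math.min(...values) / Math.max(...values);
}
```

A common heuristic (the US "four-fifths rule") flags ratios below 0.8 for closer inspection; it is a rule of thumb for spotting problems, not a legal threshold under the Equality Act 2010.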
To harness the potential of AI responsibly, ethical considerations must take centre stage. Awareness is key. Collaboration is key. It is vital to foster a culture of continuous learning and improvement. By implementing some of these strategies we can work towards creating AI systems that are fair and free from bias. These technologies can then be used to promote equality and have a positive impact on society.

Contact information

If you have any questions about data, AI and ethics, or you want to find out more about what services we provide at Solirius, please get in touch.

  • How AI-powered chatbots and virtual assistants are enhancing citizen services

By Emily Sato - Software Engineer

Discover how AI-powered chatbots and virtual assistants are revolutionising citizen services in government departments. One of our talented software engineers, Emily Sato, delves into how AI can improve efficiency, responsiveness, and safety in public services.

Overview

What are AI-powered chatbots and virtual assistants?
Key components
The importance of AI-powered chatbots and virtual assistants for government services
Enhancing availability, responsiveness, and efficiency in government services
Safety and data protection
Future prospects

What are AI-powered chatbots and virtual assistants?

Source: Image of Gov.UK chatbot experimental prototype

Picture this: a world where getting help from your government is as easy as chatting with a friend. Thanks to AI-driven chatbots and virtual assistants, that world is becoming a reality. The emergence of AI-driven tools such as chatbots and virtual assistants is reshaping how government departments engage and assist citizens, facilitating a more accessible and responsive approach and empowering the public to obtain timely assistance and information.

In essence, chatbots are typically focused on public engagement, addressing specific queries or performing predefined tasks and providing 24/7 customer support to users. Virtual assistants, on the other hand, aim to provide a more conversational and interactive experience: handling multiple tasks, providing guidance, and offering a broader range of more complex services.

Key components

To mimic human conversation and problem-solving capabilities, both of these transformative tools leverage a variety of AI technologies behind the scenes. At the foundation, Natural Language Processing (NLP) enables AI-powered chatbots and virtual assistants to understand, interpret and generate human language. It helps machines comprehend and extract meaning from users’ input to generate the appropriate response.
To power NLP, these tools must learn from data and user interactions. They must adapt to new information, updates, and changes in user behaviour, ensuring they remain relevant and effective in meeting user needs. This is where Machine Learning (ML) techniques are applied, enabling the tools to make better decisions autonomously by analysing vast amounts of data. They can identify patterns, extract insights, and recommend optimal solutions, improving the quality and efficiency of their assistance.

To build effective AI-powered chatbots and virtual assistants, the following components are essential:

Natural Language Processing (NLP): facilitates the comprehension and production of human language by AI systems
Machine Learning (ML): enhances decision-making by analysing data and identifying patterns
Data integration: ensures seamless access to accurate and current information
User interface (UI): provides a user-friendly platform for interaction between users and the AI
Security protocols: safeguard sensitive data and ensure adherence to regulatory standards
Continuous learning systems: enable the AI to evolve and respond to new information and user behaviour

Source: Solirius

The importance of AI-powered chatbots and virtual assistants for government services

Source: Image of Gov.UK, How can AI help users

AI-powered chatbots and virtual assistants are particularly significant for government services due to their ability to address the needs of a diverse and broad population. Unlike the private sector, where services may be tailored to specific customer segments, government services must be accessible to all. These AI tools ensure equitable access by providing consistent, reliable assistance to everyone, thereby fostering greater inclusivity. The scale and complexity of government operations require a level of efficiency that AI-powered systems can uniquely provide.
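The components listed earlier can be made concrete with a deliberately tiny sketch of a chatbot's intent-matching core (TypeScript for illustration; keyword scoring stands in for the trained NLP and ML layers a real service would use, and the intents and replies are invented):

```typescript
// A named intent: the keywords that signal it and the canned response.
interface Intent { name: string; keywords: string[]; response: string; }

const intents: Intent[] = [
  { name: 'opening_hours', keywords: ['open', 'hours', 'time'],
    response: 'Our offices are open 9am to 5pm, Monday to Friday.' },
  { name: 'renew_passport', keywords: ['passport', 'renew'],
    response: 'You can renew your passport online at GOV.UK.' },
];

// Pick the intent whose keywords best match the message,
// falling back to a safe default when nothing matches.
function reply(message: string): string {
  const words = message.toLowerCase().split(/\W+/);
  let best: Intent | null = null;
  let bestScore = 0;
  for (const intent of intents) {
    const score = intent.keywords.filter(k => words.includes(k)).length;
    if (score > bestScore) { best = intent; bestScore = score; }
  }
  return best ? best.response : 'Sorry, I did not understand. Could you rephrase?';
}
```

Even at this scale, the design shows two ideas the article relies on: mapping free text onto known intents, and a safe fallback response when confidence is low, which in production would hand off to a human agent.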
Governments handle vast amounts of data and interactions daily, and AI technologies can automate routine tasks and manage high volumes of inquiries without disruption. Government departments also operate under strict regulations and ethical standards, which set them apart from other sectors. AI implementations in government must prioritise transparency, data protection, and fairness, helping to build and maintain public trust. This focus ensures that AI applications align with democratic values and serve the best interests of citizens. These chatbots and virtual assistants also offer a cost-effective solution for optimising resource allocation within budget constraints. By automating routine inquiries and administrative tasks, these technologies reduce the need for extensive human resources and allow departments to focus on more complex and strategic areas. This not only improves operational efficiency but also maximises the value of public funds, enhancing overall service delivery.

Enhancing availability, responsiveness, and efficiency in government services

One of the standout advantages of AI-powered chatbots and virtual assistants is their ability to ensure round-the-clock availability. They break free from traditional office-hour constraints, providing citizens with immediate assistance beyond standard working hours. Whether it’s urgent inquiries or time-sensitive matters, these AI systems cater to users’ needs, offering prompt and accurate information or guidance regardless of time zone or day of the week. This eliminates the need to wait in queues or navigate complex phone menus, reducing waiting times and the frustrations associated with delayed or inaccessible services. The immediacy of responses enhances accessibility, allowing individuals to engage with chatbots or virtual assistants instantly, including in remote areas or for those facing challenges in accessing physical government offices.
Moreover, AI-driven capabilities empower chatbots and virtual assistants to handle a wide range of inquiries efficiently, adapting seamlessly to expansions of services or unexpected increases in demand. This ensures consistent service delivery without overburdening resources. However, the accuracy of these systems varies depending on several factors, including the complexity of the queries and the quality of their training data. Modern chatbots and virtual assistants powered by advanced Natural Language Processing (NLP) models (like GPT-4) can achieve high accuracy, typically reported between 85% and 95% for general conversational tasks or simple queries. They excel in automating routine inquiries and administrative tasks, reducing the time and effort spent on repetitive activities while maintaining the scalability to handle high volumes of inquiries simultaneously. Through iterative learning, these tools refine their understanding of citizen inquiries, offering more personalised and tailored experiences, ultimately promoting a user-centric approach.

Safety and data protection

Source: Photo by Pixabay

To safeguard sensitive information shared during interactions, these AI systems are implemented with robust security measures, encryption protocols, and compliance with data protection regulations. Stringent guidelines and frameworks are established to ensure that the data shared with chatbots or virtual assistants remains confidential and protected from unauthorised access or breaches. Strict access controls and regular audits are crucial to monitor and mitigate any potential vulnerabilities or risks associated with handling sensitive information. Some of the efforts to safeguard the public interest as AI advances can be seen in the AI Safety Institute. The state-backed institute focuses on driving safety research, evaluating advances in AI systems and sharing information with the broader public.
The UK has also recently hosted the world’s first major AI Safety Summit, taking a leading role in opening a channel for international discussion and collaboration on frontier AI safety. Teams across government are working closely with the Central Digital and Data Office (CDDO) and No.10 on advancing AI safety, experimenting continuously and producing guidance for government departments. The CDDO has recently published the Generative AI Framework, which sets out principles for using generative AI responsibly and safely.

Future prospects

The integration of AI-powered chatbots and virtual assistants within government services represents just the beginning of a transformative journey. Looking ahead, the prospects for these technologies hold immense promise for rethinking public service delivery, citizen engagement, and administrative efficiency. With ongoing advancements, AI systems will become more adept at customisation, tailoring interactions and services based on individual user preferences, behaviours, and needs. These systems may offer highly personalised and adaptive services, providing users with more relevant and targeted information or assistance.

Future developments will emphasise ethical AI and responsible governance. Governments will continue to prioritise ethical guidelines, transparency, and accountability in AI operations to ensure fairness, non-discrimination, and ethical conduct in citizen interactions. By placing citizens at the centre, advancements in AI will enable governments to deliver more personalised, accessible, and responsive services, catering to diverse needs and ensuring inclusivity across different demographics.

Contact information

If you have any questions about AI-powered chatbots and virtual assistants, or you want to find out more about what services we provide at Solirius, please get in touch.
