Lessons from the Cabinet Office GitHub Copilot Trial
- Cameron Bowne
- Jul 15
- 7 min read

Drawing on lessons from the recent Cabinet Office GitHub Copilot trial, Cameron shares practical advice on how to use AI assistants as a powerful tool for learning and delivery, while ensuring you remain the pilot.
AI assistants like GitHub Copilot are changing the way we work. They can be powerful tools, but they also have their limitations.
I recently participated in the Cabinet Office Trial for GitHub Copilot. The trial was part of a wider government initiative to explore how AI code assistants can support digital delivery teams in organisations like the Ministry of Justice. It marked a shift towards encouraging and promoting responsible AI practices.
As a QA Engineer currently working at His Majesty’s Courts and Tribunals Service (HMCTS), I used Copilot to help write and maintain automated test suites. I was provided with access to the GitHub Copilot AI code assistant in my development environment for 4 months, along with training in prompt engineering. Prior to this trial I had no experience with Copilot, Codex, or large language models (LLMs).
My focus for this article will be providing practical advice for using AI code assistants. I believe this is useful for anyone who is currently trying to navigate the fast-paced world of constantly changing and improving AI assistants. These tips are not only applicable to AI code assistants, but also any AI chatbot you may use, and I believe they will stay relevant as the AI landscape changes. Like any tool, there is a right way and a wrong way to use it.

Tip 1 - AI is a great teacher
Use AI to onboard and learn faster
AI assistants can act like personal tutors — and no question is too simple. For example, you can ask: “Does this project have automated accessibility tests?”
This has been really helpful for me as an early-career QA engineer who recently moved from a project with Java developers to one with Ruby and Python developers. It helps me get up to speed quickly and navigate the project, even when it involves technologies I haven’t worked with before.
To get better answers, set the scene. Give Copilot a role and explain your experience level. For example, “I am a QA with 1 year experience with test automation and 2 months experience in Cucumber, you are a senior dev, teach me how this test suite works”. This tailors the response to your experience level.
Other ideas for tailoring your assistant:
- Ask it to be your pair programmer to help figure out a bug
- Ask it to be your assistant and write documentation for you
- Feed it documentation and ask questions about it
Finally, in the Copilot Chat, you can go back and clarify points. “I understand this, but not this. Explain it to me more simply”.
As a QA engineer, you are constantly exposed to new technologies, so it’s important to keep learning. AI assistants have the potential to accelerate our learning and help us stay up to date as the tech landscape evolves.

Tip 2 - Concise context = Quality responses
Keep prompts focused and remove clutter
Your context is everything you send in your AI request (what the AI sees). The more unnecessary information you send to the chatbot, the more tokens you will use, and the more confused the response is likely to be. It also takes longer to generate your response and it is worse for the environment*.
*The use of large AI prompts can be bad for the environment because running AI models consumes significant energy, contributing to carbon emissions.
Clear and concise prompts lead to better results. I’ve found 2 key ways to achieve this:
1. Limit the unnecessary information you send with your request. When you prompt AI, you want to indicate relevant code:
- Open only relevant code files and close irrelevant ones. Copilot autocomplete uses your open files to understand the context of your work and offer suggestions (see the sketch at the end of this tip).
- Choose the right prompt method for the task. Highlight a section of code and prompt ‘in-line’ when you want a focused response on that specific code or a single file, or use Copilot Chat when your question requires broader context across multiple files. Picking the right method helps control token usage and ensures more accurate results.
- You can use the @project tag in Copilot Chat (see image). This sends your entire project with the request, which is usually more context than you need.
2. Keep the Copilot Chat history relevant:
- Copilot uses your whole chat thread as context, so keep it clean and focused.
- Start a new conversation for new tasks to refresh your context window.
- Delete irrelevant responses within your current chat history (the bin icon, which is also demonstrated in the image).
In short, manage your context well and the quality of responses generated will be better.
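
Here is the sketch mentioned above: a rough, hypothetical illustration of why open files matter (the file names and helper are made up). With a shared helper module open next to a half-written test, and unrelated files closed, the autocomplete suggestion is far more likely to reuse the existing helper than to invent a new one:

```python
# support/api_helpers.py - hypothetical shared helper module, open in the editor
import requests


def create_test_user(base_url: str, role: str = "citizen") -> dict:
    """Create a throwaway user via a test-support endpoint and return its details."""
    response = requests.post(f"{base_url}/test-support/users", json={"role": role})
    response.raise_for_status()
    return response.json()


# tests/test_login.py - the half-written test being worked on
# With api_helpers.py open and unrelated files closed, Copilot's autocomplete
# is much more likely to complete this test by calling create_test_user()
# than by guessing at a helper that doesn't exist.
def test_registered_user_can_log_in():
    user = create_test_user("http://localhost:3000")
    ...
```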

Tip 3 – AI can't read your mind… yet
Don’t expect AI to guess — show, iterate, and refine
While AI assistants are incredibly powerful, they're not mind-readers. Problems tend to arise when you expect AI to just know what you require and let it make assumptions. To get the best out of your AI assistant, you need to be crystal clear about your requirements; here’s how:
Examples are your best friend
Want your AI to write code that matches your team's preferences for readability and maintainability? Show, don’t tell. Whether it's the specific formatting of your tests or the naming conventions for different scenarios, providing examples is a huge time-saver.
You can point Copilot at a file that contains an example, or even paste some example code directly into your prompt. It's much quicker than typing out all your requirements. For instance, instead of a lengthy explanation, you can simply say: "…look at the end_to_end.feature file for examples of the naming conventions to use for different test scenarios".
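
In my case the examples were often Cucumber feature files, but the idea works in any language. As a minimal, hypothetical Python sketch of the kind of snippet you might paste in (the file name and naming convention below are made up), one concrete test written your team's way is usually enough for Copilot to mirror the same structure and names in what it generates:

```python
# tests/test_case_search.py - a made-up example of a team naming convention:
# test_<who>_<does_what>_<expected_outcome>
def normalise_case_reference(raw: str) -> str:
    """Strip whitespace and upper-case a case reference, e.g. ' ab123 ' -> 'AB123'."""
    return raw.strip().upper()


def test_caseworker_searches_by_reference_and_finds_matching_case():
    # Given a case reference typed with stray spaces and lower case
    raw_reference = "  ab123  "
    # When it is normalised before searching
    reference = normalise_case_reference(raw_reference)
    # Then the search uses the canonical form
    assert reference == "AB123"
```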
Open a dialogue and iterate
Think of your interaction with AI as a conversation. Don't just accept the initial response you get. If something isn't quite right, ask Copilot why it made certain choices. If you have a different preference, don't be afraid to prompt further. A prompt like, "I don’t like this, can you structure it this way instead to make it a bit more readable and consistent with the other tests… <insert example>" can work wonders. Iterating with Copilot Chat is a much quicker way to refine your output.
Start with a general request, and then get more specific to improve the results. I find that refining the response using a chain of prompts is a much more productive way to work, rather than trying to strike gold with your first prompt. Often, it's the first AI response that helps you remember things you forgot to include in your first prompt.
Maybe one day Copilot will be able to just read our thoughts, but for now, mastering clear communication, using plenty of examples, and embracing iteration are key to unlocking its full potential.

Tip 4 - You’re the pilot
Stay in control of your code
This tip is perhaps the simplest but most important, and the one that really stuck with me. Remember, the tool is called ‘Copilot’ for a reason; you should be in control. AI assistants in all their forms are great for offering suggestions, but they shouldn’t be making your decisions.
Never hand Copilot a big, complex task and then just copy in the finished code. You can use Copilot to do complex things, but break complex tasks into steps so that you can keep track of each step taken and each decision made. You should fully understand everything you copy from AI, because you’re the one responsible for the changes you make.
While it can be tempting to copy and paste from Copilot without analysing every line of code, ‘vibe coding’ can only get you so far if you don’t understand the changes you’re making. If your AI tool is taken away, you should still be able to do your work.
Use Copilot as a tool, not a crutch. You should still be able to work without it.
Pro tips for staying in the driver’s seat
- Let Copilot help you break down complicated work into smaller steps (see the sketch below), e.g. “I need to increase the coverage of my e2e tests to include a new user journey - break down this task into smaller steps based on my current e2e test coverage.”
- Have Copilot explain its work and help you understand it so you stay in control
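
To make that concrete, here is a deliberately tiny, hypothetical sketch of what working step by step can produce: each step is prompted for, reviewed, and understood on its own before the next one is asked for, so nothing lands in your suite that you can’t explain.

```python
# Hypothetical result of breaking one large task into small, reviewable steps.
def step_register_user(journey: dict) -> dict:
    """Step 1: prompted for and reviewed on its own before moving on."""
    journey["user"] = {"name": "Test User", "registered": True}
    return journey


def step_submit_application(journey: dict) -> dict:
    """Step 2: only asked for once step 1 is understood and working."""
    journey["application"] = {"status": "submitted"}
    return journey


def test_new_user_can_submit_application():
    journey = step_submit_application(step_register_user({}))
    assert journey["user"]["registered"]
    assert journey["application"]["status"] == "submitted"
```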

It’s easy to lose track of ownership when Copilot is doing the typing, but the decisions still need to be yours. You should be involved in each step and understand the changes you make. Otherwise, you’ll spend more time debugging AI-generated code than you would have spent doing the task yourself.

Wrapping up
GitHub Copilot can…
- Teach
- Understand your level of experience
- Follow clear instructions
- Brainstorm ideas
- Debug error messages and find the root of problems
- Iterate on responses
- Speed up your work
GitHub Copilot cannot…
- Keep your context relevant
- Read your mind to know what you want
- Replace you as the pilot
- Take responsibility for its work

Useful resources:
- GitHub Docs, Prompt engineering for Copilot Chat: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat
- GitHub Copilot Chat cheat sheet: https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/github-copilot-chat-cheat-sheet

Contact information
If you have any questions about our AI initiatives or Quality Engineering services, or if you want to find out more about the other services we provide at Solirius, please get in touch.