Interviews
4 Oct 2024

Rebecca Bradley, AI Governance & Ethics Specialist, R People

We sit down with Rebecca Bradley to discuss the evolving role of AI in further education, including the potential of GenAI, best practice guidelines, and how AI can personalise learning, ease administrative burdens, and prepare students for the future.


Where do you think the further education sector currently stands in terms of its interaction with artificial intelligence, including GenAI? Do you think that many have been too hesitant in considering its implementation?

The further education sector is really starting to tap into the potential of GenAI, and there are some great examples of best practice out there. However, for many, there’s still a sense of uncertainty. It’s not a lack of vision - it’s more that these tools, while familiar to some, can still feel complex to navigate. Recent advancements, particularly in large language models, have opened up new opportunities, but they’ve also heightened the sense of overwhelm.
 
This feeling of hesitancy is understandable. Learning how to implement GenAI safely and effectively, while keeping up with such a rapid pace of technological change, can feel daunting - especially without a clear roadmap.
 
What we find makes the difference is when people start to understand how AI works. By showing its history, what it can do, and how it can be responsibly integrated into further education, you start to see the hesitation lift, and initial reluctance often turns into excitement. Suddenly, AI becomes a tool that personalises learning, simplifies administrative tasks, and helps prepare learners for a tech-driven future, rather than something to be feared.

Given the breadth of artificial intelligence's capabilities, what should form the foundations of 'best practice' rules and regulations?
 
When it comes to AI in education, best practice really needs to be grounded in three key areas: transparency, ethics, and inclusivity. Transparency means making sure that everyone - students, educators, and leaders - understands how AI systems work and how decisions are made. Ethics are just as crucial; we need to be confident that AI isn’t reinforcing biases, especially in things like recruitment, assessment, or personalised learning. And then there’s inclusivity: AI should be accessible to all learners, no matter their background or level of tech knowledge.
 
The real challenge now is ensuring that everyone, not just the early adopters, has the opportunity to engage with these tools. It’s not enough to focus on those who already understand GenAI - we need to bring everyone on board to prevent knowledge gaps that could lead to inequalities in education. By ensuring that everyone has access to these tools, we can avoid creating disadvantages and make sure the benefits are shared across the board.
 
Getting these foundations right is key. When AI is implemented properly, it can be a game-changer - making learning more personal and ensuring that everyone, no matter their background, can benefit from what AI has to offer.

To what extent do you believe tackling perceptions about the risks of artificial intelligence could improve uptake in education?
 
Addressing perceptions of risk is one of the biggest hurdles to improving AI uptake in education. Much of the hesitation isn’t because people don’t see the potential; rather, there’s often a misconception that AI is a threat - whether that’s fears about data misuse, job loss, or unintended bias in decision-making. But when we start having open conversations about these risks, and show people that AI can be implemented safely and ethically, it helps put them more at ease.
 
We’ve created the first City & Guilds assured courses in the UK for both AI Governance and Ethics, and Prompt Engineering, to help tackle some of the big challenges in using AI responsibly. The Governance course looks at five core pillars - security, ethics, privacy, transparency, and bias - ensuring AI is used safely and fairly. Our Prompt Engineering course focuses on using AI models to get the best, most accurate outputs while maintaining safety and consistency.

Is there a role for the Government to play in enhancing accessibility and awareness of AI's capabilities in education?
 
Absolutely! The Government can have a huge impact by helping to make AI more accessible and ensuring that everyone in education understands its potential. This could mean introducing national strategies that encourage AI literacy at all levels - ensuring teachers, students, and leaders alike are equipped with the skills and knowledge they need. Providing funding for AI tools and investing in training would go a long way in preparing the FE sector to adopt AI safely.
 
There’s also a real opportunity to support research on AI’s impact in education, which could shape future policy and ensure the technology is used responsibly. By making sure GenAI tools are available to those in more disadvantaged areas, the Government can help ensure that everyone benefits from the opportunities AI can offer in education.
 
Which key areas of further education provision do you believe AI could have the greatest impact in, and how?
 
AI has huge potential to make a real difference in further education, particularly in personalising learning and easing the workload of administrative tasks. Imagine AI systems that adapt to each learner’s progress and strengths - helping learners get the support they need at just the right time. This level of personalisation can make a big impact on student outcomes.
 
Beyond learning itself, AI can also take on routine tasks like marking, scheduling, and other admin jobs that take up valuable time. By streamlining these processes, educators can free up time for what matters most: teaching and supporting students.
 
You can connect with Rebecca on LinkedIn or email rebecca@rpeople.co.uk
