1. Purpose
2. Related policies
3. Scope
4. Definition of AI
5. Acceptable and unacceptable uses of AI
6. Disclosure and transparency
7. Safeguarding and wellbeing
8. Staff and student responsibilities
9. Role of the Compliance Officer
10. Review
1. Purpose
This policy clarifies the impact of Artificial Intelligence (AI) on the Ashbourne community and outlines appropriate usage, responsibilities and safeguarding measures, in line with government guidance and ethical standards.
3. Scope
This policy applies to all members of the Ashbourne College community, including staff, students, administrative personnel and parents. It governs the use of AI within college operations, academic work and digital systems.
4. Definition of AI
AI (Artificial Intelligence) refers to software systems designed to interpret user input and generate responses based on patterns in data. Open systems (e.g. ChatGPT) draw on vast internet-based datasets, while closed systems rely on user-provided or restricted datasets. Generated outputs may include text, audio, video, graphs, diagrams, images and predictions. Examples of AI tools include OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot Studio, Apple’s Apple Intelligence, Grammarly, plagiarism checkers and adaptive learning platforms.
Generative AI (GAI): open and closed systems
Some GAI tools process and store more information than the user inputs. Users should be aware that certain tools collect metadata, such as IP addresses or system information, which may be shared or sold, raising privacy concerns under the UK GDPR.
5. Acceptable and unacceptable uses of AI
The examples below are not exhaustive.
Acceptable uses
6. Disclosure and transparency
Students are encouraged to disclose the use of AI where appropriate, especially where it has been used to plan, edit and/or research assignments. Using AI to correct grammar and spelling is allowed; however, altering tone, style or content using AI must be acknowledged and may be restricted depending on the task.
AI should never replace individual effort and critical thinking. It must be everyone’s ambition to develop as independent learners, which cannot be achieved without independent research, development and expression of ideas.
If using AI-detection tools, staff should ensure that evidence of a student receiving inappropriate assistance is conclusive before making an accusation.
7. Safeguarding and wellbeing
Users should be aware of the risks associated with open AI systems, including potential data harvesting and profiling. As with other internet platforms, AI tools may expose individuals to targeted risks if misused.
All AI-generated information should be independently verified. Users should not rely on AI outputs without cross-referencing credible sources.
8. Staff and student responsibilities
9. Role of the Compliance Officer (data protection lead)
10. Review
This policy is reviewed annually.