Artificial Intelligence (AI) Policy

1.   Purpose
2.   Related policies
3.   Scope
4.   Definition of AI
5.   Acceptable and unacceptable uses of AI
6.   Disclosure and transparency
7.   Safeguarding and wellbeing
8.   Staff and student responsibilities
9.   Role of the Compliance Officer
10. Review

1. Purpose
This policy aims to clarify the impact of Artificial Intelligence (AI) on the Ashbourne community and outline appropriate usage, responsibilities and safeguarding measures in line
with government guidance and ethical standards.

2. Related policies

3. Scope of policy
This policy applies to all members of the Ashbourne College community, including staff, students, administrative personnel and parents. It governs the use of AI within college operations, academic work and digital systems.

4. Definition of AI
AI (Artificial Intelligence) refers to software systems designed to interpret user input and generate responses based on patterns in data. Open systems (e.g. ChatGPT) draw on vast internet-based datasets, while closed systems rely on user-provided or restricted datasets. Generated outputs may include text, sound, video, graphs, diagrams, images and predictions. Examples of AI tools include OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot Studio, Apple Intelligence, Grammarly, plagiarism checkers and adaptive learning platforms.

Generative AI (GAI): open and closed systems
Some GAI tools process and store more information than you input, for example:

  • Location
  • IP address
  • System info
  • Browser info

Users should be aware that certain tools collect metadata such as IP addresses or system information, which may be shared or sold, raising privacy concerns under the UK GDPR.

5. Acceptable and unacceptable use of AI
The examples below are not exhaustive.
Acceptable uses

  • Conducting background research.
  • Transcribing handwritten notes into digital text.
  • Drafting outlines or first drafts under supervision.
  • Summarising or proofreading one’s own work.
  • Teacher use for lesson planning, assessment or resource creation.
  • Learning how to evaluate AI outputs critically.


Unacceptable uses

  • Submitting AI-generated work as one’s own.
  • Using AI to rephrase or rewrite substantial portions without attribution.
  • Using AI for activities that breach academic integrity or ethical guidelines.
  • Generating harmful, offensive and/or misleading content.

6. Disclosure and transparency
Students are encouraged to disclose the use of AI where appropriate, especially if used in planning, editing and/or researching assignments. Using AI to correct grammar and spelling is allowed. However, altering tone, style or content using AI must be acknowledged and may be restricted depending on the task.

AI should never replace individual effort and critical thinking. It must be everyone’s ambition to develop as independent learners, which cannot be achieved without independent research, development and expression of ideas.

When using AI-detection tools, staff should ensure that the evidence of a student using inappropriate assistance is conclusive before making an accusation.

7. Safeguarding and wellbeing
Users should be aware of the risks associated with open AI systems, including potential data harvesting and profiling. Just like other internet platforms, AI tools may expose individuals to targeted risks if misused.

All AI-generated information should be independently verified. Users should not rely on AI outputs without cross-referencing credible sources.

8. Staff and student responsibilities

  • Know who the Compliance Officer is and raise any AI-related concerns with them.
  • Never input anyone’s personal or sensitive data into AI tools.
  • Know the difference between open and closed AI systems.
  • Fact-check all outputs generated by AI.
  • Apply the same ethical, legal and professional standards to use of AI as you would elsewhere.
  • Any plagiarism is unacceptable. Reference any AI tool used in academic work, just as you would cite a source.
  • Understand the benefits and dangers of AI.
  • Consider the potential for AI to reflect human biases and errors.

9. Role of the Compliance Officer (data protection lead)

  • Policy updates and staff training.
  • Conducting data protection impact assessments (DPIAs).
  • Monitoring AI usage for compliance.
  • Managing data access and security.
  • Investigating breaches.
  • Liaising with the ICO.

10. Review
This policy is reviewed annually.

Authorised by: The Principal
Date: June 2025
Effective date of the policy: September 2025
Circulation: Teaching staff / all staff / parents / students on request
Review date: September 2026