
The 'From the Ground Up' Report
Guidelines for Ethical Implementation of AI in Educational Technology: A Framework
This report presents a framework of twelve ethical controls for AI implementation in education, developed through extensive research and stakeholder consultation in 2024-2025. The controls address key concerns, including learning outcome alignment, user agency, cultural sensitivity, critical thinking, transparency, human interaction, impact measurement, ethical training, bias detection, emotional well-being, accountability and age-appropriate implementation. By providing practical, actionable guidance for educational institutions, these guidelines aim to ensure AI serves educational values rather than undermining them.
At Educate Ventures Research, we recognise both the opportunities and the challenges AI brings to education. Through our work we aim to ensure that AI boosts productivity whilst maintaining a strong commitment to ethics and safety.
A key part of this approach is supporting research that tackles important questions about AI within the education sector. One such question is how to ensure AI genuinely benefits students, teachers and parents. To address it, we present a new report, ‘Developing Standard Ethical Guidelines for AI Implementation in Education’, produced by Educate Ventures Research in partnership with Avallain.
The report presents 12 clear and actionable controls that address essential areas, including the protection of learner well-being, the importance of preserving the human element, and the necessity for transparency regarding AI’s capabilities and limitations.
Built on extensive consultation with teachers, EdTech developers, policymakers and researchers, these guidelines are designed to help educational institutions implement AI ethically while fully realising its potential for personalised learning and administrative efficiency. This framework emphasises collaboration, adaptability and continuous evaluation, recognising the rapid pace of AI innovation and the unique needs of diverse learning communities.
Download the Report
Discover how the 12 Ethical Controls can help your institution harness AI’s potential without compromising accountability, equity, or the human heart of education.
By integrating these Ethical Controls, education systems can confidently embrace AI as a tool for innovation, while preserving the values, relationships and integrity that make learning truly transformative.
Key Takeaways
Professor Rose Luckin, CEO & Founder of Educate Ventures Research:
"Our findings make it clear: AI’s promise in education must come with strong ethical safeguards. By foregrounding student well-being and equitable access, these controls help every learner benefit from AI."
Ursula Suter and Ignatz Heinz, Co-Founders of Avallain:
"We believe a research-driven approach is essential to ensuring that technology is never an end in itself but a true enabler of learning. Together with Educate Ventures Research, we aimed to explore how GenAI ought to be implemented into educational technology, not only to mitigate its risks but, more crucially, to support teaching and learning practices."
- Students First – Elevating engagement, critical thinking and well-being as core priorities.
- Stakeholder Collaboration – Sustained input from educators, parents and technologists at every stage of AI adoption.
- Practical Implementation Roadmap – Step-by-step advice for rolling out AI ethically, from pilot phases to long-term impact measurement.
What's Inside the Report
The report is structured around 12 Ethical Controls. Each control includes:
- Definition – A concise overview of the core principle and why it matters.
- Challenges – Key obstacles and stakeholder-specific difficulties (for students, teachers, parents, institutions and developers).
- Mitigation Strategies – Proven approaches for addressing each challenge, informed by real-world examples.
- Implementation Guidance – Best practices from research, industry and other sectors that can be adapted to different educational contexts.
- Stakeholder Relevance – Clear explanations of how each control impacts and benefits students, educators, parents, institutional leaders and technology developers.
These controls are grounded in a systematic literature review of AI ethics guidelines and were validated through a multidisciplinary expert panel workshop to achieve maximum consensus. The panel included educators, school administrators, AI ethicists and EdTech industry experts, ensuring diverse perspectives.
Who Should Read It
- School Leaders & Administrators looking to set a clear vision and robust policies for AI adoption.
- Teachers eager to discover how AI can enrich classroom practice without diminishing professional judgment.
- Policymakers seeking evidence-based, internationally informed guidelines that shape responsible AI legislation and funding.
- Digital Education Stakeholders aiming to align products with recognised ethical standards and stand out in an increasingly scrutinised market.
Executive Summary: From the Ground Up
Introduction and Purpose
The rapid integration of artificial intelligence in educational settings offers promising opportunities for personalised learning, administrative efficiency, and expanded access to resources. However, this technological evolution brings significant ethical considerations that must be addressed to ensure AI serves educational values rather than undermining them. This report presents a comprehensive framework of twelve ethical controls for AI implementation in education, developed through rigorous research and extensive stakeholder consultation.
Ethics in education, particularly concerning AI implementation, is paramount due to the profound impact educational experiences have on shaping individuals and society. As AI systems become more prevalent in educational settings, they influence critical aspects of learning, assessment, and educational decision-making. Without proper ethical guidelines, there is a considerable risk of perpetuating biases, compromising student autonomy, or prioritising efficiency over holistic development. Moreover, as education plays a crucial role in moulding future citizens, the ethical use of AI in this domain sets a precedent for how society at large will interact with and govern these technologies.
Development Process and Methodology
The framework was developed through a multi-phase process over approximately six months in 2024-2025. The approach combined systematic literature review, case study analysis, and extensive stakeholder engagement to ensure the guidelines would be both theoretically sound and practically applicable.
The initial phase involved foundational research, including a systematic literature review of existing AI ethics guidelines and analysis of case studies where educational institutions or adjacent sectors implemented AI controls. By late 2024, researchers formulated an initial draft of AI controls addressing identified gaps in current guidance.
Stakeholder consultation was central to the development process. A teacher focus group reviewed the draft framework, contributing practical insights about classroom realities. Concurrently, a multidisciplinary expert panel was convened, comprising educators, school administrators, AI ethicists, and EdTech industry specialists. This panel participated in workshops using the Delphi process, providing structured feedback on each proposed control. The project also drew on concerns and findings from a landscape study conducted by Educate Ventures Research, which engaged educational leaders from 23 multi-academy trusts encompassing 413 schools and 250,000 students.
The framework underwent multiple iterations based on stakeholder input. Teacher feedback led to clearer definitions and practical use-case examples for each control. The expert workshop resulted in the expansion of existing controls and the addition of two new ones. Throughout the consultations, the team continuously refined the framework, integrating stakeholder suggestions and resolving ambiguities to create a comprehensive and actionable set of guidelines.
The Twelve Ethical Controls
The framework consists of twelve ethical controls, each addressing a distinct aspect of AI implementation in educational settings:
1. Learning Outcome Alignment: Ensures AI tools continuously support the full spectrum of learning goals rather than narrow metrics. This control requires implementing a continuous evaluation system for AI-driven educational interventions that assesses both immediate academic outcomes and long-term educational impact across diverse learning objectives.
2. User Agency Preservation: Designs AI systems to empower users with choice and control, ensuring that AI in education does not undermine student autonomy or teacher professional judgment. The AI should act as a supportive guide rather than an autocratic tutor, with safeguards against over-automation.
3. Cultural Sensitivity and Inclusion: Ensures AI educational tools are culturally responsive and free from bias, providing an inclusive experience for all learners. This entails establishing systematic processes to detect and correct cultural biases in AI content or interactions, with diverse representation in training data and knowledge bases.
4. Critical Thinking Promotion: Embeds opportunities for students to practise critical thinking when using AI-powered tools. Rather than encouraging passive acceptance of AI-generated outputs, the system should prompt reflection and scepticism, encouraging students to question, analyse, and critically evaluate information.
5. Transparent AI Limitations: Clearly communicates what the AI can and cannot do to all stakeholders. This control implements user-friendly explanations about AI systems' capabilities, limitations, and decision processes to manage expectations and prevent misplaced trust.
6. Adaptive Human Interaction Balance: Maintains a healthy balance between AI-mediated learning and human-human interaction. Guidelines establish thresholds for minimum human engagement, ensuring that AI personalisation does not come at the expense of essential teacher-student and peer-to-peer interactions.
7. Impact Measurement Framework: Establishes a framework to measure the real educational impact of AI interventions, both short-term and long-term. This combines quantitative data with qualitative assessments in regular review cycles to gauge how AI affects learning and inform improvements.
8. Ethical Use Training and Awareness: Provides mandatory training for all stakeholders on the ethical and appropriate use of AI in education. These programmes cover topics such as academic integrity, understanding AI bias, privacy issues, and responsible use policies, tailored to different stakeholder groups.
9. Bias Detection and Fairness Assurance: Implements continuous processes to detect, audit, and mitigate bias in AI systems to ensure fair educational opportunities for all students. This includes using specific fairness metrics, conducting regular audits, and establishing clear processes for addressing identified biases.
10. Emotional Intelligence and Well-being Safeguards: Monitors and supports student emotional well-being in AI-mediated learning, with protocols for human intervention when needed. This control balances emotion detection with privacy and non-intrusiveness, establishing protocols for human response when an AI detects possible emotional issues.
11. Organisational Accountability & Governance: Establishes institutional oversight and clear lines of responsibility for AI systems used in education. This control creates governance frameworks—policies, committees, and processes—to ensure AI tools are deployed ethically and in compliance with legal requirements.
12. Age-Appropriate & Safe Implementation: Ensures that AI tools and practices in education are tailored to students' developmental stages and uphold a safe, child-friendly learning environment. This includes configuring content and capabilities suitable for different age groups, implementing content filtering, and prioritising child safety and well-being.