
THE SKINNY
on AI for Education

Issue 8, July 2024

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what it all means for education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.


But first, calling all educational leaders: please complete our EVR AI Benchmarking Self-Evaluation exercise. It's important that we hear from as many UK schools and colleges as possible so that our analysis gives a representative picture of what is happening with AI in UK education right now.

How are you using AI? Compare your progress to other Schools and Colleges across the UK 


Professor Rose Luckin and the EVR Team would like to invite you to participate in a national benchmarking exercise to evaluate current trends in the use of AI in education. Your perspective counts, and all the team needs is for you to complete a simple 10-minute self-evaluation.


Headlines


Missing the human touch?


When I was a young teenager, a friend lent me a book that scared me witless. It was about a top athlete who was kidnapped, tortured and then dismembered so that all that remained was his brain in a vat of liquids that kept it alive. The book recounted his thought processes, absent any sensory experience. It gave me nightmares.

 

I knew the premise was implausible—a brain cannot be kept alive with no body to sustain it—but nevertheless, the thought of being a cognitive system devoid of any connection to the world seemed deeply frightening to me.

 

But isn't that just what our AI systems are? Systems that appear to emulate cognition but with no connection to the world about which that cognition occurs? They're like disembodied brains, processing vast amounts of information but lacking the grounding of real-world experience.

 

This parallel raises profound questions about the nature of AI and its limitations. Can a system that, like most AI, lacks sensory input and physical interaction with the world truly understand or think? How might this fundamental disconnect limit AI's ability to genuinely assist with human tasks, particularly in fields like education where experiential learning is crucial? Is mimicking human emotion the same as feeling it? It is striking, as Ethan Mollick highlights, that AI can outperform humans in tasks requiring empathy and judgement: AI is 87% more likely than the average human to persuade in a debate, and it can provide more effective, novel, and empathetic emotional reappraisal than 85% of humans. Yet in settings where real empathy is required, I suspect AI is not as good at reading the cues or offering genuine human warmth and care.

 

As we continue to develop and deploy AI systems, we must grapple with these questions. The goal should be to create AI that complements human intelligence rather than attempting to replicate it in isolation. Only then can we harness the full potential of AI whilst avoiding the pitfalls of the 'brain in a vat' scenario.

 

As an AI researcher in education, I'm often asked: can AI truly enhance human intelligence (HI)? The answer isn't straightforward, but it's crucial for the future of learning. Consider Vygotsky's theory of cognitive development. He argued that our intelligence is shaped by social interactions and cultural context. We learn through a mix of everyday experiences and formal instruction, with language as the bridge between the two. Now, look at AI. It's trained on vast amounts of text but lacks real-world experience. It can process information but can't truly understand it in the way humans do. A child learns what a "dog" is by petting one, hearing it bark, watching it play. AI only knows dogs through words, images and movies. It's missing the crucial experiential foundation that grounds human understanding.

 

This disconnect matters. In education, we're seeing AI used for personalised learning and automated grading. But are these truly enhancing human intelligence, or just optimising rote learning?

 

The risk is that we're using AI to standardise and atomise education, rather than enriching the social and cultural aspects that are fundamental to human cognitive development.

 

So, how can we better use AI in education? We need to design AI systems that complement, not replace, human interaction. AI should facilitate rich social learning experiences, not substitute for them.

 

Imagine AI-enhanced environments that encourage collaboration and critical thinking. Or AI tutors that adapt not just to a student's knowledge level, but to their cultural context and socio-emotional preferences.

 

The challenge ahead is clear: to harness AI's power while preserving the irreplaceable human elements of learning. Only then can we truly enhance, rather than replace, human intelligence in education.

 

Let's ensure our AI tools are supporting the development of well-rounded, socially adept learners - not just efficient information processors. The future of education depends on getting this balance right.


- Professor Rose Luckin, July 2024

AI regulation and policy

1. Seoul AI Summit Spurs Safety Agreements:

Government and corporate officials from dozens of countries agreed on actions for AI safety at the AI Seoul Summit and AI Global Forum. Agreements include developing risk thresholds, creating shared policies, and establishing international AI safety research networks.

 

Implications for educators: These agreements could shape future AI use in education, potentially affecting how AI tools are implemented in schools. Educators may need to stay informed about these international agreements and adjust their use of AI tools accordingly. It also presents an opportunity to discuss global AI governance with students.

 

2. Can California fill the federal void on frontier AI regulation?:

California proposed the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". The bill includes provisions for safety demonstrations, emergency shutdown protocols, and a subsidised public computing cluster to democratise access to AI training.

 

Implications for educators: This could influence how AI is developed and used in educational settings, potentially leading to new standards for AI in education. Educators in California may have access to new resources for AI education, and the state's actions could set precedents for other regions.

 

3. OpenAI expands lobbying team to influence regulation:

OpenAI is growing its global affairs team from 3 to 50 staff members, positioning them in key locations worldwide. They're engaging with policymakers on issues such as responsible AI model release and misinformation concerns related to elections.

 

Implications for educators: This could influence AI legislation globally, potentially affecting how AI is used in schools and universities. Educators should stay informed about these policy developments and consider incorporating discussions of AI policy into their teaching.

 

4. Apple set to be first Big Tech group to face charges under EU digital law:

The European Commission is preparing to charge Apple over allegedly stifling competition on its mobile app store, under the new Digital Markets Act. Brussels has since formally accused Apple of breaking the EU's 'gatekeeper' rules.

 

Implications for educators: This could impact how educational apps are distributed and accessed on Apple platforms. Educators may need to reassess their use of Apple-based educational tools and consider the potential for a more diverse app ecosystem.

 

5. US regulators prepare antitrust investigations into Microsoft, Nvidia, and OpenAI:

The U.S. Department of Justice and Federal Trade Commission are preparing to investigate Microsoft, Nvidia, and OpenAI for potential antitrust violations, focusing on market dominance and partnerships in the AI industry.

 

Implications for education: Educators and educational institutions should stay informed about these regulatory developments, as they may influence the availability and use of AI tools in educational settings. There may be opportunities to participate in discussions about AI governance in education and to help shape policies that balance innovation with safety and ethical considerations.

AI security and misuse

1. Hackers 'jailbreak' powerful AI models in global effort to highlight flaws:

Security experts and hackers are finding ways to bypass safety measures in AI models, revealing vulnerabilities. These efforts are often aimed at highlighting shortcomings in AI models that have been rapidly released to the public.

 

Implications for educators: Be aware that no AI is risk-free and that even widely used AI models can produce worrying and potentially dangerous responses. This also emphasises the need to teach cybersecurity and ethical AI use in technology curricula. Educators should consider incorporating lessons on AI vulnerabilities and the importance of robust security measures in AI systems. This could also serve as a case study for discussing the ethics of 'white hat' hacking.

 

2. Political deepfakes top list of malicious AI use, DeepMind finds:

Google's DeepMind research found that AI-generated deepfakes impersonating politicians and celebrities are the most prevalent form of malicious AI use. The creation of fake images, video, and audio is almost twice as common as the next highest misuse: generating misinformation using text-based tools like chatbots.

 

Implications for educators: This underscores the importance of teaching digital literacy and critical evaluation of online content. It also highlights the risk that young people will be exposed to plausible misinformation from which they need safeguarding. Educators should consider incorporating lessons on identifying deepfakes and understanding their potential impact on public discourse. This could be integrated into media studies, civics, and digital citizenship curricula.

AI in finance and business and the future of work

1. IMF warns of 'profound concerns' over rising inequality from AI:

The IMF stresses the need for adapting education and training policies to prepare workers for an AI-driven job market. They estimate that AI will affect almost 40% of jobs worldwide, emphasizing the importance of lifelong learning opportunities.

 

Implications for educators: This highlights the importance of continuously updating curricula to prepare students for an AI-driven job market. Governments should consider incorporating AI education across all subjects and emphasising adaptable skills that will remain relevant in an AI-dominated workforce, such as learning how to learn.

Environmental impact of AI

1. Google emissions jump nearly 50% over five years as AI use surges:

Google's greenhouse gas emissions have risen by nearly 50% over the past five years, driven largely by the energy demands of the data centres that power its AI services.

Implications for educators: This highlights the need to address the environmental impact of AI in both technology and environmental education. Educators could consider incorporating discussions about the environmental costs of AI into computer science, environmental studies, and sustainability curricula.

AI technology developments

1. Audio Generation Clear of Copyrights:

Stability AI released Stable Audio Open, a text-to-audio generator trained on openly licensed data to avoid copyright issues. The tool can generate music or sound effects up to 47 seconds long, offering potential for educational content creation without licensing concerns.

 

Implications for educators: This tool could revolutionise the creation of audio resources for lessons. Teachers could generate copyright-free sound effects and music for educational content, enhancing multimedia learning experiences without concerns over licensing. It also presents an opportunity to teach students about AI-generated content and the ethical considerations surrounding its use.

 

2. Apple's Gen AI Strategy Revealed:

Apple unveiled "Apple Intelligence", a suite of generative AI features integrated into iOS, iPadOS, and macOS. This includes semantic search capabilities, which can analyse context to understand user routines and relationships; generative media functions for tasks like text refinement and image creation; and an enhanced Siri that can interact with multiple apps simultaneously.

 

Implications for educators: This development could transform how students and teachers interact with Apple devices in the classroom. It offers new tools for personalised learning, content creation, and information retrieval. Educators might need to adapt their teaching methods to incorporate these AI-enhanced features, potentially leading to more interactive and personalised learning experiences. However, the features are likely to be available only on top-end iPhones and Macs, meaning that many students and schools will not have access to these tools because of the high price.

 

3. Nvidia and Alibaba released high-performance open-source language models. Nvidia's Nemotron-4 340B family includes a 340-billion-parameter base model, while Alibaba's Qwen2 family ranges from 500 million to 72 billion parameters. Stability AI also released a slimmed-down version of its text-to-image generator, Stable Diffusion 3 Medium.

 

Implications for education: All these advancements could revolutionise content creation in educational settings, enabling more personalised learning experiences and accessible materials. Educators might leverage these tools to create custom audio-visual aids, generate localised content in multiple languages, or develop interactive learning materials tailored to individual student needs.

AI company developments

1. Apple to join OpenAI's board in observer role

 

Implications for educators: This collaboration between major tech companies could lead to new developments in AI that affect the tools available for teaching and learning.

 

2. Nvidia tide is lifting the tech sector:

Massive investments in AI infrastructure, predicted to reach $1 trillion over the next 4-5 years, are affecting various components of IT systems, from chips to software.

 

Implications for educators: This trend confirms that AI is here to stay: students will increasingly be using it, and educators and education systems will need to respond as more and faster AI products appear on the market.

 

3. Most stocks hyped as winners from AI boom have fallen this year:

This trend highlights the volatility of AI-related investments and the gap between hype and reality in the AI sector.

 

Implications for educators: This highlights the concerns investors are likely to have about the return on their investments, and it reminds us that AI is now a consumer product, including the AI used for teaching and learning. Educators could also consider using this as a case study to teach students about market dynamics, the tech industry, and the importance of looking beyond hype when evaluating investments.


4. OpenAI co-founder Ilya Sutskever announces rival AI start-up:

Safe Superintelligence Inc will focus on AI safety, potentially influencing future AI development in educational tools.


Implications for educators: This highlights the growing importance of teaching AI safety and ethics.

AI in education and research

- Scale AI introduced new leaderboards to test AI models' abilities in coding, language skills, instruction following, and math problem-solving. These proprietary benchmarks aim to provide more reliable evaluations of AI models' capabilities.

 

- Researchers are developing benchmarks for agentic behaviours in AI, testing capabilities like tool use and planning. These include tests like WorkBench, which evaluates an AI's ability to use software tools to operate on simulated workplace databases.

 

- A Google study showed an AI chatbot called Articulate Medical Intelligence Explorer (AMIE) outperforming human doctors in diagnostic ability and bedside manner during simulated patient conversations.

 

Implications for education: These developments suggest potential for AI to enhance assessment methods, enabling more sophisticated evaluation of student skills. They also point towards new areas of study in computer science and AI ethics curricula. The medical AI study highlights possibilities for enhancing medical education and potentially democratising access to medical knowledge.

Further Reading: Find out more from these free resources

Free resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionize the field of education. From personalized learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalized instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.
