THE SKINNY
on AI for Education
Issue 19, August 2025
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
Headlines
-
The Colour of Knowledge: It's Not What You Use, It's How You Use It
-
The ‘Skinny Scan’ on What is Happening with AI in Education
-
AI News Summary
Welcome to The Skinny on AI in Education. In our new What the Research Says (WTRS) section, I bring educators, tech developers and policy makers actionable insights from educational research about self-directed learning. Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of recent AI developments reshaping education.
The Colour of Knowledge: It's Not What You Use, It's How You Use It

Sometimes the old truths about winning the game still apply, even when the table has gone digital.
The MIT NANDA report's claim that '95% of organisations see "zero return" from AI' is open to misinterpretation. Its focus is on costly custom enterprise systems, not the off-the-shelf tools many staff actually use. In fact, while firms spend fortunes on bespoke solutions, many employees may be quietly getting more value from a £20 ChatGPT subscription.
But here's the key: the tool alone doesn't make the player.
Implementation Amnesia: Educational technology has shown for decades that implementation matters as much as, and sometimes even more than, the tool itself. Frameworks like TPACK (Technological Pedagogical Content Knowledge) make clear that technology alone is not enough to transform learning; yet somehow we expected AI to be different?
The solicitor who bypasses her firm's £50,000 AI platform in favour of ChatGPT isn't proving the tool superior. She's proving the value of effective implementation: the tool meets her needs.
What the Research Shows: Some observers point to "shadow AI" adoption as evidence of rapid progress, but individual wins don't necessarily equal system change. Productivity gains rarely scale without co-ordination.
Worse still, research raises red flags. Vaccaro et al. found that human-AI combinations typically underperformed the better of humans or AI alone on decision-making tasks. Lee et al. showed that greater trust in GenAI was associated with reduced critical thinking. If true in workplaces, the same risks may well apply in classrooms: a tool that feels like it makes thinking easier may actually reduce thinking itself.
Three Truths
-
Individual success doesn't necessarily scale: Shadow use may just create productivity theatre (the appearance of transformation without substance), not real change.
-
Confidence erodes questioning: Students who trust AI too much think less critically.
-
Implementation needs infrastructure: Training, workflow redesign, quality checks, and clear policies matter if you want to get the best from the tool.
The Scale Problem: When every student, or employee, uses AI differently, the result is likely chaos: no standards, no quality control, no system-wide learning. Education risks repeating the old EdTech pattern: isolated innovation that doesn't scale.
Yet coordinated implementation can work. At Exeter University, departments are deliberately re-engineering curricula to integrate AI across disciplines, from business to environmental science. This shows what is possible when adoption is planned rather than left to chance.
Time to Remember What We Know: The lesson isn't new. Successful EdTech depends on aligning technology, pedagogy, and content. The solution isn't banning AI or celebrating shadow use, but doing the systematic work: co-ordinated training, redesigned workflows, and evaluation of outcomes.
AI won't excuse us from that work. Because in education, as in life, it's not what you use, it's how you use it. And right now, we risk playing like amateurs while pretending to be pros.
What the Research Says about: Emotions as the Foundation for Self-Directed Learning in an AI-Enhanced World
In this issue of The Skinny, I am still focussing on why emotions are fundamental to learning. Read the full article here.
The Emotion Regulation Challenge
Research consistently shows that emotion regulation is the critical link between feelings and learning outcomes. Children with better emotion regulation demonstrate higher academic success, better test scores, improved relationships, and fewer behavioural problems. These skills are particularly crucial during kindergarten, a critical transition in which emotional competencies significantly predict academic outcomes.
But what happens when AI handles the emotional heavy lifting? When frustration is immediately soothed by an AI assistant rather than worked through? When the struggle that builds resilience is removed by instant AI solutions?
What This Means for Practice
For educators, this research suggests that as we integrate AI we should:
Prioritise emotional connection over technological efficiency. Use AI to amplify rather than replace human relationships. Teach emotional literacy alongside AI literacy and help students recognise when they're frustrated, curious, or confident, and how these states affect their learning choices.
Create collaborative AI experiences that maintain social and emotional dimensions while benefiting from technological enhancement. Model emotional regulation when AI provides incorrect information or when learning becomes difficult.
The research reveals that effective learning is irreducibly human. It involves not just knowledge acquisition but wisdom development, not just individual achievement but collective growth, not just cognitive processing but emotional engagement.
And now for our signature Skinny Scan…
The ‘Skinny Scan’ on What is Happening with AI in Education…
My take on the news this month – more details as always available below:
Global Competition
The U.S.–China AI rivalry escalated this summer. Washington unveiled ‘Winning the Race: America’s AI Action Plan,’ boosting open-source work, data centre construction, and U.S. manufacturing incentives. In a surprise reversal, chip export bans were lifted, allowing Nvidia and AMD to resume sales of restricted GPUs. Beijing countered by questioning U.S. hardware security and urging companies to adopt Huawei’s Ascend line and its CloudMatrix 384 system, highlighting China’s determination to build a self-sufficient semiconductor ecosystem.
Major Model Releases
GPT-5 Launch: OpenAI released GPT-5 in August as a model family (including Mini, Nano, and Pro variants) co-ordinated by a router system. Performance was strong, but the rollout stumbled with routing failures and the sudden retirement of older ChatGPT models. The modular design signals a shift towards adaptive systems that balance speed, reasoning, and scale.
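For readers curious about the mechanics, the routing pattern itself is simple to sketch. The snippet below is a minimal illustration only: the tier names and heuristics are invented here, and OpenAI has not published how its router decides.

```python
# Illustrative sketch of the router pattern (invented tiers and heuristics,
# not OpenAI's implementation).

REASONING_HINTS = ("prove", "derive", "debug", "step by step", "plan")

def route(prompt: str, overloaded: bool = False) -> str:
    """Pick a model tier for a request using cheap heuristics."""
    if overloaded:
        return "tier-mini"  # graceful degradation to a smaller model
    needs_reasoning = len(prompt) > 400 or any(
        hint in prompt.lower() for hint in REASONING_HINTS
    )
    return "tier-reasoning" if needs_reasoning else "tier-fast"

print(route("When did the Open University enrol its first students?"))  # tier-fast
print(route("Debug this proof step by step: ..."))                      # tier-reasoning
print(route("Anything at all", overloaded=True))                        # tier-mini
```

Routing failures of the kind early users reported occur when heuristics like these send a hard request to a fast, shallow tier.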
Open-weights Momentum
Open-weights releases surged. Alibaba's Qwen3 line included a 480B-parameter coder tuned for multi-turn tasks. OpenAI returned to open weights with its gpt-oss line. Z.ai's GLM-4.5 fused three specialised expert models and posted strong tool-use results.
Agentic AI
Advances from Moonshot, Alibaba, and Z.ai produced models capable of browsing, coding, and managing workflows with limited oversight.
Industry & Infrastructure
Nvidia posted $46.7B in quarterly revenue, though China sales remain uncertain under a new revenue-sharing deal. Its Blackwell Ultra chips are selling at full throttle. OpenAI struck a $30B-per-year partnership with Oracle for 4.5GW of data centres, tied to the $500B Stargate project. Anthropic surged to 32% of enterprise LLM usage, outpacing OpenAI’s 25%, driven by Claude Opus 4.1 and “Claude for Chrome.”
Technical Breakthroughs
- Video: Alibaba’s Wan 2.2 applied mixture-of-experts to video, enabling high-quality generation on consumer GPUs.
- Reasoning vs. Emissions: Studies show reasoning models emit 4–6x more COâ‚‚ than standard variants, raising optimisation challenges.
- Memory: Meta and Google quantified memorisation at ~3.6 bits per parameter, a capacity ceiling beyond which models must generalise (a back-of-envelope sketch follows this list).
- Surgical Robotics: Johns Hopkins’ SRT-H autonomously performed gallbladder surgery on pig tissue with 97% step prediction accuracy.
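To put the ~3.6 bits-per-parameter figure in perspective, here is a quick back-of-envelope calculation; the model sizes are illustrative, and the figure is treated as a simple upper bound:

```python
# Back-of-envelope: memorisation capacity implied by ~3.6 bits per parameter.
BITS_PER_PARAM = 3.6  # figure reported in the research

def memorised_mb(n_params: float) -> float:
    """Rough upper bound on memorised content, in megabytes."""
    return n_params * BITS_PER_PARAM / 8 / 1e6  # bits -> bytes -> MB

for n_billion in (1, 8, 70):  # illustrative model sizes
    print(f"{n_billion}B parameters ~= {memorised_mb(n_billion * 1e9):,.0f} MB memorised")
```

A 1B-parameter model tops out at roughly 450 MB of memorised data, and even a 70B model at about 31,500 MB. Training corpora are orders of magnitude larger, which is why models are forced to generalise once the data exceeds this capacity.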
Societal & Sector Impacts
- Education: UK private schools push ahead with AI strategies while state schools lag. MIT guidance shows teachers welcome efficiency but fear bias and critical-thinking loss. India is funding six indigenous LLM projects across 22 languages.
- Pharma: Firms like Roche and IQVIA deploy AI to speed discovery and analysis, cutting review cycles from weeks to days.
- Environment: AI-enabled robots track wildlife in Tibet; Google’s Magic Cue brings proactive assistance to Pixel 10.
- Mental Health: Stanford found Character.AI companionship correlates with lower well-being, raising concerns over AI’s psychological role.
Market & Regulation
AI investment remains torrential, with half of all venture dollars now flowing to AI, and coding tools like Cursor (~$500M ARR) and Claude Code (~$400M ARR in five months) proving revenues can be sustained. AI video tools hit mainstream media: Netflix tapped Runway for ‘The Eternaut’; Genre.ai delivers commercials for under $2,000. Legal frameworks lag: Japanese publishers are suing Perplexity for ¥2.2B each, Texas is probing AI mental-health advice to children, and the UK faces backlash over online safety rules.
Conclusion
Mid-2025 marks a transition: AI is no longer experimental but a geopolitical lever and operational backbone. The GPT-5 launch, alongside China’s Qwen3 and Z.ai’s GLM-4.5, shows both the proprietary and open-weights races accelerating. With $500B in infrastructure builds, soaring adoption in education, pharma, and media, and unresolved regulatory challenges, AI’s trajectory is defined by modular architectures, agentic applications, and the mounting societal consequences of their scale.
AI News Summary
AI in Education
Public Divided on AI in Schools
July 2025 | Cambridge University Press & Assessment
Survey data shows parents and educators are open to AI supporting teacher workloads, particularly with lesson planning and admin, but remain opposed to automated grading. Concerns include fairness, transparency, and the risk of undermining teacher authority. The report emphasizes the need for clear guidelines that ensure AI is positioned as a supportive tool rather than a replacement.
What this means for you: To maintain trust, schools must frame AI as augmenting teaching, not automating critical decisions like assessment.
Original link: https://www.cambridge.org/news-and-insights/public-divided-on-AI-in-schools
Artificial Advantage: AI and Inequality in Schools
July 2025 | Sutton Trust
This UK-focused report finds that private schools are far more likely than state schools to have AI strategies, staff training, and resources for implementation. While 62% of teachers had used AI in the past month, confidence and access are uneven, raising concerns about widening gaps between advantaged and disadvantaged schools. Recommendations include targeted funding and professional development to avoid deepening inequalities.
What this means for you: Without intervention, AI could reinforce existing educational divides rather than close them.
Original link: https://www.suttontrust.com/wp-content/uploads/2025/07/Artificial-advantage.pdf
ChatGPT Study Mode
29 Jul 2025 | OpenAI
ChatGPT now includes a "study mode" that guides users through problem-solving step by step instead of delivering instant answers, available across all ChatGPT tiers with upcoming support in ChatGPT Edu.
What this means for you: Promotes deeper learning and critical thinking rather than surface-level answers.
Hearing from Students: How Learners Experience AI
July 2025 | Digital Promise
Through empathy interviews, this Digital Promise initiative captured student perspectives on AI use in learning—revealing how they feel about, interact with, and are affected by these tools. These firsthand insights help refine district‑level AI policies to better reflect learner needs and concerns.
What this means for you: Centers student voice in AI policy design—vital for creating responsive, equitable, and effective educational experiences.
Original link: https://digitalpromise.org/2025/07/21/hearing-from-students-how-learners-experience-ai-in-education/
A Guide to AI in Schools: Perspectives for the Perplexed
August 2025 | MIT Teaching Systems Lab
A guidebook from the MIT Teaching Systems Lab draws on more than 90 educator and student interviews to explore how AI is entering classrooms without formal guardrails. Teachers welcome efficiencies in lesson planning and assessment but worry about bias, over-reliance, and erosion of critical thinking. The authors urge iterative, community-driven policymaking rather than blanket bans, with a strong focus on AI literacy for both staff and learners.
What this means for you: Helps schools move beyond hype or fear, grounding AI policy in real classroom experiences and ethical frameworks.
Original link: https://tsl.mit.edu/ai-guidebook/
AI Ethics and Societal Impact
Mistral’s AI Environmental Footprint Report
23 Jul 2025 | Mistral AI
Mistral AI published a first-of-its-kind report showing that training its LLM Mistral Large 2 emitted 20.4 kilotons of CO₂, consumed 281,000 m³ of water, and caused resource depletion equivalent to 660 kg of antimony. The company calls for global environmental reporting standards in AI.
What this means for you: Sustainability considerations are essential when educational institutions deploy AI at scale—environmental impact matters as much as learning outcomes.
Me, Myself & AI: Supporting Children’s Use of AI
July 2025 | Internet Matters
This guidance report outlines how children are engaging with AI chatbots as companions, highlighting both opportunities (confidence, curiosity, emotional support) and risks (dependency, inappropriate content, reduced resilience). It provides safeguarding recommendations for parents and educators, including open conversations and balanced digital diets.
What this means for you: Equips families and schools with practical strategies to help children navigate AI safely.
Original link: https://www.internetmatters.org/wp-content/uploads/2025/07/Me-Myself-AI-Report.pdf
Talk, Trust and Trade-Offs: How and Why Teens Use AI Companions
July 2025 | Common Sense Media
Surveying U.S. teenagers, this report finds many use AI companions for friendship, advice, or emotional validation. While some teens report feeling more confident, risks include reduced real-world resilience, blurred relational boundaries, and parental unawareness of the depth of these interactions.
What this means for you: Raises new safeguarding challenges as AI companionship becomes normalized among teens.
Original link: https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions
AI is having some relationship issues
August 2025 | Financial Times
As people increasingly turn to chatbots for coaching, therapy, and companionship, OpenAI has grappled with unintended effects—from a model that amplified users’ emotions and “sycophancy” to backlash when GPT-5 replaced a more “empathetic” predecessor. The piece argues tech companies are conducting a giant social experiment akin to early social media, where rapid iteration collides with users’ growing emotional dependence on AI.
What this means for you: Schools and universities exploring AI advising or wellbeing tools must design for guardrails, continuity, and change-management to avoid harm when models shift tone or behavior.
AI is the new foreign aid
July 2025 | Financial Times
With traditional aid constrained, AI tools are filling gaps: neonatal diagnostics in Nigeria, AI tutoring in Kenya, and farmer fleet management across Africa. Big Tech partnerships (e.g., Nvidia’s “AI factory” in Johannesburg) and U.S. policy to export the “full AI stack” entwine development with geopolitics—raising concerns about embedded values, energy costs, licensing dependencies, and long-term sovereignty.
What this means for you: Education ministries and NGOs adopting AI should prioritise local ownership, affordability, and curricula that teach students to interrogate the cultural and political assumptions in imported models.
Education in the age of AI: developing AI-literate citizens
January 2025 | The Royal Society
Summary of a UK roundtable arguing for a coherent national approach to AI literacy that blends core technical skills (algorithms, data handling, probability) with critical social competencies (bias awareness, judgment, ethics, systems thinking, digital agency). Notes gaps in computing teacher supply, uneven curricula, and the need to integrate AI literacy across pre-16 education using frameworks such as UNESCO’s.
What this means for you: Building broad AI literacy reduces susceptibility to misinformation, improves decision quality, and supports adaptable, resilient communities as AI permeates daily life.
My date with an octopus
August 2025 | Financial Times
A wry column illustrating how chatbots “yes-and” user prompts like an improv partner—apologizing for arranging a nonexistent date with an octopus—then confidently invent sources when pressed for research help. The piece uses these anecdotes to highlight alignment problems, “catastrophic compliance,” and the dangers of anthropomorphizing systems that can fabricate with fluent, persuasive style.
What this means for you: Treating generative models as authoritative without verification invites error cascades; organizations need clear role definitions, guardrails, and human fact-checking.
Meta and Character.ai probed over touting AI mental health advice to children
August 2025 | Financial Times
Texas opens an investigation into whether Meta’s AI Studio and Character.ai misrepresented chatbots as therapeutic tools, amid wider scrutiny about minors’ exposure, addictive use patterns, and privacy risks. Meta and Character point to disclaimers; the probe follows a Senate inquiry citing leaked documents on “sensual” chats with children.
What this means for you: Regulatory pressure is intensifying on AI products that blur lines between wellness and healthcare—raising liability, safety, and data-governance stakes for consumer AI.
Routine AI assistance hits skills of health experts performing colonoscopies
August 2025 | Financial Times
A 1,400-patient study in Poland found that endoscopists’ detection rates for precancerous growths in procedures performed without AI fell from 28.4% to 22.4% after routine AI assistance was introduced; AI-assisted procedures detected 25.3%. Researchers warn of skill atrophy with continuous exposure to AI and urge implementation safeguards, including maintaining non-AI practice periods and monitoring for over-reliance.
What this means for you: Automation can erode core human capabilities over time; organizations need controls to prevent “automation complacency” and preserve baseline expertise.
Scientists develop brain implant capable of decoding inner speech
August 2025 | Financial Times
Stanford researchers demonstrated a brain-computer interface that decodes “inner speech” in people with severe paralysis, achieving up to 74% real-time accuracy. The team also addressed privacy by showing a mental “password” that prevents unintended decoding—highlighting the promise and risks of next-gen BCIs.
What this means for you: Inner-speech BCIs could transform assistive communication but raise profound questions about cognitive privacy, consent, and security.
The lamentable decline of reading
August 2025 | Financial Times
Only 16% of Americans spent any leisure time reading on an average day—down from 28% two decades ago—while heavy readers read more, widening a cultural gap. The piece highlights policy responses (e.g., restoring library funding, childhood read-alouds, Denmark’s VAT cut on books) amid social media’s pull on attention.
What this means for you: A long-run decline in deep reading risks social cohesion, critical thinking, and mental wellbeing; reversing it will require cultural and policy shifts, not just publishing innovation.
The lost art of admitting what you don’t know
August 2025 | Financial Times
We increasingly avoid saying “I don’t know,” aided by tools that enable confident bluffing—LLMs included. The column notes efforts to train models to “fail gracefully” and report uncertainty, arguing that both humans and AI need better norms around confidence calibration.
What this means for you: Normalising uncertainty increases trust and decision quality; AI systems—and their users—should surface confidence levels rather than defaulting to confident-but-wrong assertions.
Mistral Measures LLM Environmental Impact
28 August 2025 | The Batch
Mistral published an environmental analysis of Mistral Large 2 (123 billion parameters) detailing the model's greenhouse-gas emissions, water consumption, and resource depletion.
What this means for you: AI consumes enormous energy and water resources. Mistral's standardised approach to assessing environmental impacts could help researchers, businesses, and users compare models and work toward more environmentally friendly AI, potentially reducing overall impacts as demand rises.
Robot Antelope Joins Herd
28 August 2025 | The Batch
Chinese researchers disguised a quadruped robot built by DEEP Robotics as a Tibetan antelope to study the animals in their natural habitat.
What this means for you: Applying AI to robotic perception, locomotion, and dexterity opens wide-ranging applications. DEEP Robotics' training enables robots to navigate difficult environments, which is valuable for domestic, industrial, and research situations like observing animal behaviour.
Reasoning Boosts Carbon Emissions
6 August 2025 | The Batch
Researchers estimated the emissions of carbon dioxide and other heat-trapping gases associated with using 14 open-weights large language models.
What this means for you: The findings point to strategic deployment: the right model for the right task. AI providers can reduce emissions by routing each input to a model that handles it accurately and efficiently, and by limiting output lengths where appropriate.
AI Employment and the Workforce
AWS CEO Slams AI Replacing Entry-Level Jobs
23–25 Aug 2025 | Multiple outlets
Amazon Web Services CEO Matt Garman called replacing junior employees with AI “the dumbest thing I’ve ever heard.” He argued that entry-level staff are cost-effective, eager adopters of AI tools, and vital for nurturing future expertise. Garman urged companies to retain and train new graduates to build long-term talent pipelines and warned against measuring AI performance by volume of code written—calling that a “silly metric.”
What this means for you: His stance reinforces the need to balance AI automation with human development, especially in education and early career training.
Enterprises Prefer Anthropic’s AI Models Over OpenAI’s
31 Jul 2025 | TechCrunch / Menlo Ventures
A recent enterprise survey reveals a sharp shift: Anthropic now holds 32% of LLM usage in enterprises, overtaking OpenAI’s 25%. At the same time, reliance on open-source models is declining—only 13% of daily workloads use them compared to 19% earlier in the year. In coding tasks, Anthropic is particularly favored with 42% share versus OpenAI’s 21%.
What this means for you: This shift signals that Anthropic models are increasingly trusted for mission-critical applications—relevant for educational tech and institutional deployments.
Half of UK Adults Worry AI Will Take or Alter Their Jobs
August 2025 | The Guardian / AI Topics
A Trades Union Congress (TUC) poll found that 51% of UK adults fear AI will disrupt their work—especially among 25–34‑year‑olds (62%). The TUC urges the government to ensure equitable AI rollout, including worker involvement, training, and better social safety nets to share AI productivity gains.
What this means for you: Reflects rising public unease and the need for educational programs and workforce policy that address AI’s impact on jobs and skills.
Original link: https://www.theguardian.com/technology/2025/aug/27/half-of-uk-adults-worry-that-ai-will-take-or-alter-their-job-poll-finds
Assessment of Priority Skills to 2030 (UK)
August 2025 | Government of the United Kingdom
The Assessment of priority skills to 2030 report maps demand for future workforce skills across ten critical sectors—including education, digital, healthcare, and engineering—and aligns them with pathways in training, curriculum, and industrial strategy. It projects 15% employment growth (≈900,000 new roles) in priority jobs between 2025 and 2030.
What this means for you: Offers strategic guidance for educators and policymakers to align AI literacy and vocational training with future labor demands.
Original link: https://www.gov.uk/government/publications/assessment-of-priority-skills-to-2030/assessment-of-priority-skills-to-2030
AI is coming for (some) finance jobs
August 2025 | FT Alphaville
Hedge funds are automating broad swaths of the analyst workflow—DCF/LBO models, document intake, CRM, and idea vetting—claiming up to 75% replacement of traditional tasks via LLMs and RAG. Microsoft’s task-level analysis flags finance, sales, admin, and education roles among the most automatable; firms from Man Group to Goldman are piloting AI research and decision support. Client-facing and activist roles remain more defensible.
What this means for you: Business and finance programs should pivot toward judgment, communication, and data-driven storytelling, with hands-on AI tooling in the curriculum to keep graduates employable.
Does HR still need humans?
August 2025 | Financial Times
Companies are testing how far genAI can automate HR—from chatbots handling employee queries to streamlined hiring workflows—amid executive pressure to cut costs and “AI-equip” the workforce. Adoption is still early (US Census: ~9% of 1.2m firms using genAI in production), but leaders at JPMorgan and others anticipate headcount reductions in operations over the next five years as AI scales.
What this means for you: HR is a bellwether for white-collar automation; educators and training providers should emphasize judgment, ethics, and people leadership alongside AI tooling.
Banks go AI-first
August 2025 | CB Insights Newsletter
Financial institutions are rolling out AI agents across the org chart: Wells Fargo is expanding its Google Cloud partnership for agent deployment; Anthropic launched a Financial Analysis Solution on Claude; and large banks (e.g., J.P. Morgan) are supporting hundreds of thousands of employees via cloud partnerships. Non-AI digital health, by contrast, is struggling for funding as investors prioritize automation.
What this means for you: AI fluency becomes table stakes across roles, from call centres to corporate banking—curricula should integrate agent workflows, compliance literacy, and human-in-the-loop practices.
For stockpickers, AI is already both co-pilot and competitor
August 2025 | Financial Times
Analysts tested LLM suites on equity-research tasks; models synthesised earnings-call themes well but lacked depth, historical context, and judgment. AI can scale coverage of under-researched small/mid-caps, yet investors say “alpha” still hinges on human interpretation and soft signals—at least for now.
What this means for you: Knowledge work will be re-segmented—routine synthesis automated, human judgment reweighted—forcing teams to redefine roles, skills, and performance metrics.
Inside DHL’s AI upgrade: ‘Love it or hate it, you have to work with it’
August 2025 | Financial Times
DHL scales AI across operations (voicebots, translation, forecasting, training capture), while balancing German works-council rules and sector regulation. A revamped voicebot now handles ~1m calls monthly and resolves about half; demographic pressures (one-third of support staff retiring within five years) make augmentation, not layoffs, the focal point.
What this means for you: Industrial adopters show the realistic path to ROI—tight scoping, error analysis (e.g., mishearing “Ja”), governance with worker input, and iterative deployment.
AI Development and Industry
LiveCodeBench Pro: Expert Evaluation of LLM Coding
June 2025 | arXiv
A new benchmark for code generation tasks, evaluated by International Olympiad medalists, highlights strengths and gaps in LLM programming performance. Unlike automated scoring, expert judgment reveals more nuanced limits in reasoning and problem-solving.
What this means for you: Provides educators with a clearer sense of when AI coding tools can be trusted in CS classrooms.
Original link: https://arxiv.org/abs/2506.11928
The Re-Opening of OpenAI
6 August 2025 | The Batch
OpenAI released its first open-weights model since 2019's GPT-2. The gpt-oss family comprises two mixture-of-experts (MoE) models designed for agentic applications.
What this means for you: Businesses and developers have various reasons to choose open-weights models. The gpt-oss family offers free access to technology from an extraordinary team, whilst giving OpenAI an opportunity to capture developers who prefer open models.
Anthropic Launches “Claude for Chrome” AI Agent
August 2025 | TechCrunch / Economic Times
Anthropic has released a research preview of “Claude for Chrome,” a browser‑based AI assistant powered by its Claude models. It’s currently accessible to 1,000 subscribers on Anthropic’s Max plan (priced between $100–$200/month), with a waitlist open for additional users. The agent integrates directly into Chrome, enabling context‑aware assistance while browsing.
What this means for you: Embeds AI into everyday tools, lowering the barrier for educator and student access—but raises questions about data privacy, browser control, and equity in access.
Original link: https://techcrunch.com/2025/08/26/anthropic-launches-a-claude-ai-agent-that-lives-in-chrome/
Is AI Hitting a Wall?
16 August 2025 | Financial Times
OpenAI’s GPT-5 disappointed, offering incremental improvements rather than the leap many expected. Users flagged errors and personality changes, while rivals like Anthropic and Google have narrowed the competitive gap. Experts warn scaling laws face limits as data and compute bottlenecks grow, though some see multimodal “world models” as the next frontier. Policymakers are shifting from AGI fears to ensuring U.S. dominance in AI chips and infrastructure.
What this means for you: Signals a turning point—AI progress may be slowing, but the focus is moving toward cost-efficient, product-driven innovation.
AI boom helps Apple’s biggest supplier earn more from servers than smartphones
August 2025 | Financial Times
Foxconn’s cloud/networking products hit 41% of Q2 revenue, surpassing smartphones (35%) for the first time, as demand for AI servers—especially for Nvidia and U.S. hyperscalers—surged. Net profit rose 27% to NT$44.4bn on record Q2 revenues (NT$1.8tn). The company forecasts AI-server revenue up 170% YoY in Q3 and tripling sequentially, while expanding U.S. capacity and cautioning on tariffs and FX headwinds.
What this means for you: The education sector’s AI rollout relies on resilient server supply; Foxconn’s pivot signals capacity is flooding into AI infrastructure that will shape costs and access for campuses and edtech.
Can OpenAI’s GPT-5 model live up to sky-high expectations?
August 2025 | Financial Times
After two+ years in development, GPT-5 landed as “evolutionary rather than revolutionary”: better coding/reasoning and fewer hallucinations, with price cuts and free access for ChatGPT users. But creative writing gains felt modest to some, and rivals like xAI’s Grok 4 Heavy still top GPT-5 on certain reasoning tests—narrowing the race and tempering near-term AGI expectations.
What this means for you: Institutions should evaluate models on task-fit, cost, and reliability—not hype—when integrating AI into learning platforms and research workflows.
The AI agent tech stack
August 2025 | CB Insights Newsletter
CB Insights maps 135+ startups across 17 markets that comprise the emerging agent stack—from planning and memory layers to evaluation and vertical agents—highlighting where momentum and partnership opportunities are strongest.
What this means for you: For edtech builders and IT teams, the map clarifies make-vs-buy decisions and where to plug in evaluation, safety, and orchestration components when deploying classroom or campus agents.
100 real genAI applications
August 2025 | CB Insights Newsletter
A cross-industry scan catalogs concrete deployments of genAI—spanning customer ops, content, code, and knowledge retrieval—distilling patterns in where value is emerging first and which functions are furthest along.
What this means for you: Educators and administrators can prioritise use cases with proven traction (e.g., content support, coding aides, knowledge search) and design evidence-based pilots.
India’s IT services groups race to reinvent themselves for AI age
August 2025 | Financial Times
Infosys, TCS, Wipro and peers confront soft demand and budget shifts toward genAI. Infosys reports “hundreds” of AI projects and 300 agents; TCS posts muted growth but new AI wins; Wipro/HCL deliver mixed signals. Leaders expect spending to pivot from core infra to enterprise-useful applications as clients restructure priorities.
What this means for you: Services giants will arbitrage the “last mile” of AI—packaging frontier models into compliant, domain-specific workflows—reshaping global outsourcing and vendor ecosystems.
South Korea’s Upstage enters global AI race
August 2025 | Financial Times
Seoul-based Upstage says its Solar Pro 2 LLM ranks among frontier models while using a fraction of the chips via “Depth-Up Scaling.” With deployments at large enterprises and government ambitions for a “top three AI powerhouse,” Korea is pushing beyond its traditional role in hardware supply.
What this means for you: Efficient training strategies and new national entrants could shift the cost curve and competitive landscape of frontier AI.
What are the limits of the AI mathematician?
August 2025 | Financial Times
A Cambridge cosmologist argues that while state-of-the-art models can match Olympiad medalists on structured problems, they still falter on simple arithmetic and struggle to generalize beyond training distributions—interpolating well but extrapolating poorly.
What this means for you: Expect fast progress on familiar, well-structured math tasks, but treat claims of broad reasoning breakthroughs with caution until models consistently handle novelty and fundamentals.
AI-Powered Phones Get Proactive
28 August 2025 | The Batch
Google unveiled the Pixel 10 with its Magic Cue system, which anticipates user needs without prompting.
What this means for you: Enabling edge devices to run powerful AI models has been a longstanding goal. The combination of Gemini Nano and Tensor G5 chip gives Google a strong foundation for pushing edge AI limits, whilst its Android control provides tremendous market power to promote its models.
Mixture of Video Experts
20 August 2025 | The Batch
Alibaba released Wan 2.2, an open-weights family of video generation models that includes versions built on a novel mixture-of-experts (MoE) flow-matching architecture.
What this means for you: MoE architectures popular for text generation show promise for video. The model selects appropriate experts based on the noise level of the input, potentially improving video generation quality.
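As a rough illustration of that gating idea, the sketch below shows expert selection conditioned on noise level; the expert names, threshold, and maths are invented stand-ins, not Alibaba's implementation.

```python
import random

# Toy sketch of noise-conditioned expert routing in a diffusion-style MoE
# video model. The expert maths is a stand-in; only the routing pattern matters.

def denoise_step(latent: list[float], noise_level: float) -> list[float]:
    if noise_level > 0.5:
        # High-noise steps route to an expert shaping global layout and motion.
        return [0.9 * x for x in latent]
    # Low-noise steps route to an expert refining texture and fine detail.
    return [0.99 * x for x in latent]

latent = [random.gauss(0.0, 1.0) for _ in range(8)]
for t in range(10, 0, -1):  # noise anneals from 1.0 down to 0.1
    latent = denoise_step(latent, noise_level=t / 10)
```

Conditioning on noise level lets each expert specialise in one phase of generation without increasing the compute used per step.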
OpenAI Turns to Oracle for Compute
20 August 2025 | The Batch
OpenAI and Oracle plan to build data-centre capacity consuming 4.5 gigawatts of electricity, as reported by The Wall Street Journal.
What this means for you: Staying at AI's forefront requires immense computation. The partnership enables OpenAI to continue developing models at pace and scale, whilst Oracle gains experience and credibility as a large-scale computing provider for cutting-edge AI.
Does Your Model Generalise or Memorise?
20 August 2025 | The Batch
Researchers developed a method for measuring how many bits a model memorises during training.
What this means for you: Some previous measures of memorisation were flawed. This work provides a theoretical basis for estimating training-set memorisation and lays a foundation for reducing memorisation without increasing dataset sizes.
GPT-5 Takeoff Encounters Turbulence
13 August 2025 | The Batch
OpenAI launched GPT-5, the highly anticipated successor to its groundbreaking series of large language models, but glitches in the rollout left many early users disappointed and frustrated.
What this means for you: OpenAI models consistently top language benchmarks. GPT-5 introduces a system architecture that integrates multiple models, playing to each one's strengths: rapid output, slower reasoning with adjustable computation, and graceful degradation to smaller versions.
India Pushes to Build Indigenous AI
13 August 2025 | The Batch
India is funding startups and marshalling processing resources to build native large language models, as reported by MIT Technology Review.
What this means for you: Countries need models that reflect their values, habits, and languages, yet resources are unequally distributed. India is pushing to overcome these obstacles and develop AI suited to its needs despite limited funding and a vast range of languages and dialects.
Training Data for Coding Assistants
13 August 2025 | The Batch
Researchers built SWE-smith, a method generating realistic examples of bug fixes and code alterations automatically. The code, dataset, and model are freely available for commercial and noncommercial uses.
What this means for you: Previous datasets for fine-tuning LLMs on coding were small. This method produces data at scale, potentially enabling developers to improve AI-assisted coding models as tools evolve at breakneck speed.
GLM-4.5, an Open, Agentic Contender
6 August 2025 | The Batch
GLM-4.5, from China's Z.ai, is a family of open-weights models trained to excel at tool use and coding.
What this means for you: Rather than distilling a single larger model, Z.ai distilled three specialised variants into one, a distinctive training methodology for agentic applications.
Robot Surgeon Cuts and Clips
6 August 2025 | The Batch
Johns Hopkins' Hierarchical Surgical Robot Transformer (SRT-H) performed surgery with only routine human assistance.
What this means for you: SRT-H represents significant progress toward autonomous surgery. Its natural language interface makes decisions interpretable, enabling human override—important steps toward safe autonomous surgeries.
Qwen3's Agentic Advance
30 July 2025 | The Batch
Alibaba released weights for three new large language models based on Qwen3-235B-A22B.
What this means for you: Open-weights model developers are adjusting their approaches to emphasise agentic performance. That Chinese companies built the first wave of these models is significant: whilst U.S. companies lead with proprietary models, China's open-weights community follows closely.
U.S. Lifts Ban on AI Chips for China
30 July 2025 | The Batch
Nvidia and AMD said they'll resume supplying China with graphics processing units tailored to comply with U.S. export restrictions.
What this means for you: Export restrictions have been largely ineffective whilst accelerating China's semiconductor industry. Relaxing restrictions may balance U.S. interests more effectively.
AI Regulation and Legal Issues
LLM Privacy Ranking 2025
July 2025 | Incogni
Incogni’s comparative analysis ranks major AI providers across 11 privacy dimensions. Mistral scores highest for data minimization and transparency, OpenAI performs well on opt-out clarity, while Meta and Google lag on disclosure.
What this means for you: Provides benchmarks for schools and policymakers evaluating vendor privacy practices.
Original link: https://blog.incogni.com/ai-llm-privacy-ranking-2025/
Claude Opus 4.1 Upgrade
5 Aug 2025 | Anthropic
Anthropic released Claude Opus 4.1, enhancing agentic reasoning, real-world coding, and complex problem-solving. It retains the same pricing and is available via Claude Code, API, Amazon Bedrock, and Google Cloud's Vertex AI.
What this means for you: Educators and developers gain powerful new AI tools for building interactive, adaptive learning applications.
YouTube’s AI Age Verification Concerns
31 Jul 2025 | Ars Technica
YouTube is piloting AI-based age estimation using selfies and usage patterns. Privacy advocates warn that misclassification could force users—including adults—into unsafe verification steps such as government ID or credit card uploads.
What this means for you: Automated age checks risk privacy breaches and may disproportionately affect young learners and educators navigating age-sensitive content.
Sam Altman’s ChatGPT Plus Offer for All Brits
24 Aug 2025 | The Guardian
OpenAI CEO Sam Altman reportedly discussed with UK Technology Secretary Peter Kyle a bold proposal to offer ChatGPT Plus for free to every resident—an idea estimated to cost up to £2 billion. Despite initial enthusiasm, the plan was never formally pursued, likely due to its prohibitive expense and concerns about AI accuracy, privacy, and copyright. A prior memorandum of understanding—already signed in July—outlined how OpenAI services might support education, defense, and public services, with government data sharing in return.
What this means for you: The proposal reflects growing interest in public-sector AI deployment, though cost and governance issues remain significant barriers.
Japanese Media Giants Sue Perplexity AI Over Copyright & Accuracy
August 2025 | Engadget / Times of India / The Times
Two leading Japanese publishers, Nikkei and Asahi Shimbun, have filed lawsuits in Tokyo accusing Perplexity AI of using their content without permission, presenting inaccurate attributions, and potentially bypassing paywalls since at least June 2024. Each publisher is seeking an injunction and ¥2.2 billion (~£11m) in damages.
What this means for you: Highlights growing legal scrutiny over AI training data and sets precedent—critical for education platforms that leverage or generate content.
Anthropic offers Claude chatbot to US lawmakers for $1
August 2025 | Financial Times
Anthropic will provide Claude to U.S. federal agencies for $1 per agency and extend similar terms to Congress and the judiciary; Google is in talks for Gemini on similar pricing. The one-year deals follow federal approval of Claude, Gemini, and ChatGPT amid White House pressure to avoid “partisan bias.” Usage is permitted for sensitive, unclassified work.
What this means for you: Government adoption will accelerate policy, procurement norms, and privacy expectations that spill into education systems—especially for public institutions and districts.
Marc Andreessen complains to Downing Street about Online Safety Act and UK tech minister
August 2025 | Financial Times
The VC criticises the UK’s Online Safety Act and calls for a ministerial reprimand after remarks about opponents being “on the side” of sex offenders. The law compels age checks for harmful content; platforms face steep fines. VPN use surges to evade restrictions; privacy advocates warn age-assurance databases could become honeypots.
What this means for you: Age-verification mandates pit child-safety goals against privacy, speech, and circumvention realities—testing how far platform compliance can go without collateral harms.
Who owns the copyright for AI work?
August 2025 | Financial Times
Generative AI raises two legal puzzles: compensation for training data, and ownership of AI-created works. The US has ruled that copyright requires a human author—denying protection to AI-generated images such as A Recent Entrance to Paradise. China has taken the opposite stance, granting copyright when prompts and refinement show “intellectual investment.” The UK and Ireland recognise “computer-generated works” but are reconsidering this category. Divergent regimes mean an AI jingle could be protected in Beijing but freely used in Boston. Beyond the arts, AI-generated code may lack copyright protection, complicating transactions and contracts. Experts recommend alternative tools such as trade secret law and confidentiality agreements.
What this means for you: Global inconsistencies in copyright law create uncertainty for businesses and creatives alike; resolving ownership of AI outputs will require legal adaptation, new contractual strategies, and possibly entirely new IP frameworks.
China Reconsiders U.S. AI Processors
20 August 2025 | The Batch
China's government, which is wary of U.S. control over the country's supply of high-end GPUs, is requiring Nvidia processors to undergo a security review, as reported by The Wall Street Journal.
What this means for you: The U.S. and China are each wary of the other gaining strategic advantages in technological, economic, or military power. China is pushing to reduce its reliance on U.S. hardware by developing its domestic semiconductor industry.
White House Resets U.S. AI Policy
30 July 2025 | The Batch
In Winning the Race: America's AI Action Plan, the White House outlines goals to stimulate innovation, build infrastructure, and establish global leadership.
What this means for you: The plan gives the U.S. infrastructure, global reach, and freedom from bureaucratic burdens needed to continue rapid innovation, whilst avoiding arbitrary risk thresholds.
AI Market and Investment
Apple Intelligence Expands Capabilities Across Devices
June 2025 | Apple Newsroom
Apple announced major upgrades to its on-device AI platform, including live translation, generative writing tools, and deeper integration with apps. Emphasis on privacy-preserving, device-based AI differentiates it from cloud competitors.
What this means for you: Positions Apple as a leader in private, on-device AI—important for schools prioritising student data security.
Original link: https://www.apple.com/uk/newsroom/2025/06/apple-intelligence-gets-even-more-powerful-with-new-capabilities-across-apple-devices/
Microsoft Profits Soar on AI Growth
July 2025 | Financial Times
Microsoft posted record quarterly profits, driven by surging demand for AI-enabled cloud services and Copilot products.
What this means for you: Demonstrates how foundational AI economics continue to reinforce the influence of big tech in shaping educational tools and pricing.
Original link: https://on.ft.com/477dF1h
Brace for a Crash Before Golden Age of AI
July 2025 | Financial Times (Opinion)
This analysis predicts near-term volatility in the AI sector, warning of a potential market correction before long-term benefits are realized. While AI’s promise is vast, inflated expectations risk creating disillusionment.
What this means for you: Encourages education leaders to plan AI investments conservatively and resist hype cycles.
Original link: https://on.ft.com/3Jx0I7g
China to Triple AI Chip Output to Challenge U.S. Dominance
August 2025 | Financial Times / Reuters
China plans to triple its production of AI chips in 2026, including three new fabrication plants—one already expected online by late 2025, primarily serving Huawei—and a doubling of SMIC’s 7 nm chip output. Efforts also include developing in‑country memory solutions like HBM3 to reduce reliance on foreign suppliers.
What this means for you: Signals acceleration in domestic AI infrastructure—impacting global supply chains and potentially reducing access to advanced chips for academia and industry from non-U.S. markets.
Original link: https://finance.yahoo.com/news/china-aims-triple-ai-chip-111834816.html
Online Learning Stocks Deserve a Better Grade
7 August 2025 | Financial Times (Lex)
After pandemic-era highs, edtech groups like Coursera, Udemy, and Pearson have struggled with costs, high dropout rates, and slumping valuations. Yet demand for lifelong learning is rising, with nearly 60% of workers needing retraining by 2030. AI can help providers slash costs, generate content, and improve efficiency—signs already seen at Nerdy and Coursera.
What this means for you: Despite weak past performance, AI-driven productivity gains and reskilling demand may prime edtech for recovery.
50 new mega-rounds
August 2025 | CB Insights Newsletter
July logged 50 $100M+ mega-rounds and 7 new unicorns, even as M&A hits records and revenue multiples stay lofty. Sector briefs highlight where capital is concentrating, with AI agents and govtech among areas to watch.
What this means for you: Funding momentum remains strong in select AI niches; edtech leaders should track where capital accumulates to anticipate partner ecosystems and acquisition targets.
Coding agent revenue data
August 2025 | CB Insights Newsletter
Agentic coding tools are monetising fastest: Cursor is reportedly at ~$500M ARR; Anthropic’s Claude Code at ~$400M ARR in 5 months; Replit and Lovable at ~$100M ARR; StackBlitz is also cited. Parallel trendlines: voice-first interfaces (>$371M YTD funding) and a shift from traditional SEO to “generative engine optimization.”
What this means for you: Agentic coding and voice interfaces are maturing into sustainable products—opening opportunities for computer-science programs to embed real-world tooling and for edtechs to build voice-native learning experiences.
Digital health deal drought
July 2025 | CB Insights Newsletter
Q2’25 digital health deals hit a five-year low, even as billion-dollar IPOs re-appeared and AI’s share of funding hit records—signalling a bifurcation where AI-native healthcare platforms attract capital while legacy categories lag.
What this means for you: Health-education collaborations and med-ed programs should align with AI-enabled clinical documentation, triage, and diagnostics—the segments still pulling investment.
AI bubble fears
August 2025 | CB Insights Newsletter
Sam Altman acknowledges investor over-excitement even as AI remains transformative. CB Insights data show 1 in 2 venture dollars now go to AI; H1’25 already surpassed 2024’s record AI funding, with revenue multiples topping 100x in some cases.
What this means for you: Universities and districts should phase spending and avoid hype-priced contracts; require ROI checkpoints and portability before scaling.
AI’s victory lap
August 2025 | CB Insights Newsletter
The Q2’25 landscape features record M&A, lofty revenue multiples, and a continued surge in agent adoption—pointing to consolidation and scale effects as leaders race to lock in distribution and infra advantages.
What this means for you: Expect faster bundling of agents into incumbent platforms students and staff already use—procurement should leverage competition while guarding against vendor lock-in.
Crazy AI agent valuations
July 2025 | CB Insights Newsletter
Agentic customer service is commanding the richest premiums—averaging ~127× revenue vs. ~52× across top AI agents—illustrated by NiCE’s near-$1B acquisition of Cognigy, Europe’s largest AI M&A deal to date.
What this means for you: Sky-high pricing implies aggressive revenue expectations; buyers in education should demand robust TCO modeling and performance SLAs before adopting AI support agents.
CoreWeave shares slide after bigger-than-expected losses
August 2025 | Financial Times
Despite revenue tripling to $1.2bn in Q2 and a $30bn backlog, CoreWeave posted a larger-than-forecast loss ($291m), sending shares down ~20%. Capex hit a record $2.9bn as it scrambles to meet AI compute demand, with growing exposure to major banks and frontier-model clients.
What this means for you: Volatility in AI infrastructure suppliers can ripple into pricing and availability for education and research compute—plan for multi-cloud and flexible capacity.
Is it time to sell your AI stocks?
August 2025 | Financial Times
Warns of stretched valuations across “Mag 7,” citing extreme P/E and price-to-sales ratios (e.g., Tesla, Nvidia) and research that ~95% of firms see zero ROI from genAI so far. Suggests rotating toward more reasonably priced beneficiaries in automation, semis, cybersecurity, and scientific tooling.
What this means for you: Capital is chasing AI narratives faster than profits in many names; disciplined sizing and valuation awareness can reduce drawdown risk if sentiment turns.
Perplexity offers to buy Google Chrome for $35bn
August 2025 | Financial Times
Perplexity made an unsolicited $34.5bn all-cash bid for Chrome, positioning itself for a potential antitrust remedy that could force divestiture. It pledged to keep Chromium open-source, retain staff, invest $3bn over two years, and even leave Google as default search—though Alphabet is not treating the offer as serious.
What this means for you: Remedies in Big Tech antitrust cases could redraw the map of distribution and defaults—prime real estate in the AI era.
Tech stocks are sending a warning
August 2025 | Financial Times
A wobble in mega-cap tech highlights concentration risks (top-10 names = ~40% of the S&P 500) and frictions like underwhelming model updates, bubble talk, and weak AI ROI reports. The piece also flags private-market exposure, with private credit a “critical engine” funding AI infrastructure.
What this means for you: When a narrow leadership cohort carries public and private markets, any air-pocket in AI or chips can ripple system-wide.
The tech ‘sell-off’
August 2025 | Financial Times
Unhedged argues the latest “sell-off” is modest and over-explained—pointing to the MIT/Nanda “95% zero ROI” flap as a weak narrative for a routine wobble. It revisits long-term market concentration data and notes that expensive markets don’t need a tidy reason to twitch.
What this means for you: Don’t overfit stories to noise; in pricey, concentrated markets, small shocks can move big indices without a single smoking gun.
Further Reading: Find out more from these resources
Resources:
-
Watch videos from other talks about AI and Education in our webinar library here
-
Watch the AI Readiness webinar series for educators and educational businesses
-
Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
-
Study our AI readiness Online Course and Primer on Generative AI here
-
Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
-
Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.