THE SKINNY
on AI for Education
Issue 20, September 2025
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.​
Headlines
​
- AI News Summary
​​​
Welcome to The Skinny on AI in Education. In our new What the Research Says (WTRS) section, I bring educators, tech developers and policy makers actionable insights from educational research about self-directed learning. Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of recent AI developments reshaping education.​​
​
​
​
Should AI Investments Pass the GCSE Test?
What educators can learn from applying our own standards to Britain's digital investment
​
As I mulled over this month's Skinny while enjoying my morning coffee, I was struck by two contrasting stories about mathematical competency.
​
This summer's GCSE results revealed that nearly a quarter of students taking Maths and English GCSEs are now aged 17 or older, mostly retaking exams they didn't pass at 16. As educators, we understand why this matters: mathematical literacy opens doors and creates opportunities.
​
Shortly after these results came announcements that tech giants are pledging tens of billions to transform the UK into an "AI superpower." Microsoft alone committed $30 billion over four years.
​
This got me wondering: What does it look like when we apply the same analytical standards we teach our students to these major infrastructure decisions?
​
The Numbers Tell a Story
When we dig into the research, some interesting patterns emerge. Bain & Company's analysis shows that AI's computational demands are growing at more than twice the rate of Moore's Law (the observation that the number of transistors on computer chips doubles approximately every two years). To meet this demand by 2030 would require about $500 billion in annual spending on new data centres globally, but companies would need to generate $2 trillion in annual revenue to fund that sustainably.
​
To put this in perspective: generating $2 trillion annually would require about 8.3 billion ChatGPT subscriptions at $20 per month. This is more subscribers than there are people on Earth. Of course, ChatGPT isn't the only potential revenue source, but bear with me on this thought experiment because even with optimistic projections about shifting IT budgets and AI-generated savings, there's still an $800 billion annual gap between what's needed and what's feasible.
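For readers who like to check the sums (the kind of working we'd ask of our students), here is a minimal back-of-envelope sketch in Python. The $2 trillion target, the $20-per-month price, and the $800 billion gap are the figures cited above; the world population is a rough 2025 estimate I've added for comparison.

    # Back-of-envelope check of the revenue arithmetic above
    annual_revenue_target = 2e12        # $2 trillion needed each year (Bain estimate)
    annual_subscription = 20 * 12       # a $20/month subscription, annualised

    subscribers_needed = annual_revenue_target / annual_subscription
    print(f"Subscriptions needed: {subscribers_needed / 1e9:.1f} billion")  # ~8.3 billion

    world_population = 8.1e9            # rough 2025 estimate (an added assumption)
    print(f"People on Earth:      {world_population / 1e9:.1f} billion")

    annual_gap = 8e11                   # the $800 billion annual shortfall cited above
    print(f"Remaining annual gap: ${annual_gap / 1e9:.0f} billion")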
​
This raises some intriguing questions: How do we think about investments when the underlying economics show such significant gaps? What assumptions are we making, and how might we test them?
​
The UK won't be paying these bills directly; the tech companies will. But this creates its own puzzle: if companies can't generate the revenue needed to sustain these investments, what happens to the infrastructure? Microsoft's decision to step back from a $1 billion data centre project earlier this year offers one data point to consider.
​
Understanding the Community Dimension
The American experience offers some valuable data points too. A 100-megawatt data centre consumes about 2 million litres of water daily, roughly what 6,500 households use, and such facilities also require substantial electrical capacity.
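As a quick sanity check on that comparison, here is the same sort of sketch; the per-household figure is simply what the comparison implies, not a separately sourced measurement.

    # Implied household water use from the data centre comparison above
    daily_water_litres = 2_000_000      # ~2 million litres/day for a 100 MW data centre
    households = 6_500

    litres_per_household = daily_water_litres / households
    print(f"{litres_per_household:.0f} litres per household per day")  # ~308 litres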
​
At least 10 US states have provided over $100 million annually in tax incentives for data centres, and research suggests most communities hosting these facilities already face water stress challenges.
​
This prompts some interesting questions: How do we balance the potential benefits of technological infrastructure with community resource needs? What would meaningful community partnership look like in these developments?
​
What We Can Learn from Our Own Teaching
In education, we've developed useful approaches to complex problems. When students struggle with foundational skills, we've learned that sustainable solutions require us to address root causes early rather than applying fixes later. The GCSE resit patterns show us what happens when foundational gaps aren't addressed: students can get stuck in cycles that become increasingly difficult to break.
​
This makes me curious about technology infrastructure: Could similar principles apply? Chinese company DeepSeek recently demonstrated that AI models can produce reliable outputs with significantly less computing power. If efficiency improvements continue, as they typically do with technology, how might this change infrastructure needs over time?
​
It's complex, of course. The geopolitical stakes of AI infrastructure are enormous, and not all scientists believe that generative AI, the main driver of these data centre demands, is the technology that will deliver artificial general intelligence. We may have hit what machine learning researchers call a 'local minimum' in our approach.
​
Questions That Might Be Worth Exploring
As educators, we teach students to examine evidence, test assumptions, and ask probing questions. What if we applied that same approach here?
​
Planning for uncertainty: How might we design infrastructure investments that can adapt when underlying assumptions change? What would "minimum viable" approaches look like?
Community partnership: What would genuine community benefit look like beyond job creation promises? How could local needs be integrated into planning from the start?
Risk distribution: How do we ensure that if economic models don't work out as projected, communities aren't left bearing disproportionate costs?
Skills and sustainability: If we're hosting major technological infrastructure, what parallel investments in education and training would make sense?
Learning from efficiency gains: How could we build flexibility into planning to take advantage of technological improvements that reduce resource needs?
​
An Opportunity for Applied Critical Thinking
What strikes me most about this situation is how it offers a real-world case study for the kind of analytical thinking we try to develop in our students. We have competing claims, complex data, long-term projections, and significant uncertainties.
​
Rather than accepting optimistic projections or dismissing technological progress entirely, we could model the kind of rigorous, evidence-based inquiry we want our students to develop. What questions should we be asking? What evidence would help us make better decisions? How do we balance opportunity with prudent planning?
​
The AI revolution will continue regardless of how these particular infrastructure decisions play out. The more interesting question might be whether we approach such decisions with the same intellectual rigour and systematic thinking we try to cultivate in our classrooms. And perhaps, as educators, we should be asking ourselves: if the economics don't work out as projected, what are the implications for how our students access and use these tools?
AI News Summary
AI in Education
Brighton & Hove Council urges delaying smartphone ownership for children
September 17, 2025 | BBC (Brighton & Hove Council / local government UK)
​
Brighton & Hove City Council has called on parents to delay giving children smartphones until age 14, citing concerns including exposure to disturbing content, online exploitation, and mental health harms. Schools in the area are already implementing stricter rules: four primary schools have banned smartphones entirely during school hours, and some secondary schools require phones to be locked away on arrival. The council suggests “cheap, old-fashioned” phones (for basic calls/texts) instead, noting exceptions for medical needs.
​
Why it matters: If adopted widely, this policy could shift norms around early adolescent tech access, leading to changes in school rules, parental expectations, and possibly product design (simpler phones for younger kids). It also adds local government urgency to debates about the mental and social effects of screen time and online exposure among youth.
Original link: https://www.bbc.com/news/articles/c0m48en9k4eo
AI Toys for Children
September 15, 2025 | The Guardian / Natalia Kucirkova et al.
​
Prof. Natalia Kucirkova, an expert in child development, critiques several new “AI toys” for children aged around three and up that present themselves as alternatives to screen time. She raises concerns about emotional content, the naming of the toys (e.g. one called Grok, which carries controversial prior associations), and how their marketing and framing may mask risks around data privacy, emotional influence, and exposure to unsettling content. The promotional materials often appear whimsical but have been flagged as “creepy” or “unsettling” by parents.
​
Why it matters: As AI enters intimate domains like childhood play, ethical, psychological, and safety risks become more acute—especially for young, impressionable users. This underscores the need for stronger oversight of toy designers, clearer transparency about what interactions occur, and more research into how these toys affect child development.
Melania Trump’s “Moment of Wonder” at White House AI Education Event
September 4, 2025 | The Washington Post / White House / Media outlets
​
First Lady Melania Trump hosted a White House task force meeting on AI education, promoted under the banner of the “Presidential AI Challenge.” She described AI as ushering in a new era (“moment of wonder”), cited examples like self-driving cars, surgical robots, and drones, and called for preparing the U.S. workforce and students for the coming changes. She also emphasized the need to “treat AI as we would our own children” — empowering but with oversight.
​
Why it matters: The event signals a shift toward mainstreaming AI in U.S. education policy, giving political visibility (via the First Lady's role) to AI in schools. It underscores growing government interest in ensuring youth are AI-literate and hints at a tighter interplay between policy, curriculum, and technology providers. The rhetoric of “wonder” and “responsibility” reflects how the public framing of AI is evolving: not just opportunity, but also how to steward risk.
Original link: https://www.washingtonpost.com/style/power/2025/09/04/melania-trump-ai-education/
Promoting and Protecting Teacher Agency in the Age of Artificial Intelligence
September 2025 | International Task Force on Teachers for Education 2030 (UNESCO) / Global Education Policy
​
This position paper argues that while AI has potential to support teachers (lesson planning, personalized learning, assessment), its adoption must preserve the core role of teachers’ judgment, relational and ethical capacities, and human connection in the classroom. Key recommendations: invest in teachers’ professional learning (not just technical skills but ethical, critical, and humane competencies); ensure transparent, equitable, and culturally responsive AI; embed teacher agency in policy frameworks; and avoid positioning AI as a substitute rather than a support.
​
Why it matters: As AI tools become more common in education, this report serves as a bridge between innovation and values. It underscores that how AI is introduced matters: teacher involvement, trust, and ethics must be built in to avoid undermining quality, equity, and the professional identity of educators. It could shape policy, procurement, curricula, and teacher training globally.
​
UNESCO: Guidelines for AI in Education
August 30, 2025 | UNESCO
​
A 150-page report synthesizes global research and consultations on AI in education. Key themes: ensuring AI supports—not replaces—teachers; protecting student data privacy; preventing bias and exclusion; building teacher capacity; aligning AI use with Sustainable Development Goals. The guidelines call for international cooperation, policy frameworks, and investment in equitable access.
​
Why it matters: This document will likely influence ministries of education, donors, and multilaterals in shaping national AI strategies for schooling. By setting norms, UNESCO aims to avoid fragmented, inequitable adoption across countries.
Original link: https://unesdoc.unesco.org/ark:/48223/pf0000395236
Decline in reading is inhibiting learning, academics warn
September 2025 | Financial Times
​
Educators and researchers report that students increasingly struggle with reading long-form texts, citing attention span issues and reliance on digital media. Business school faculty in particular are concerned about the impact on critical thinking and the ability to contextualise complex information. Some schools now emphasise media literacy training, fact-checking skills, and quality journalism to counteract these trends.
​
Why it matters: If younger generations lose the ability to engage with long-form, complex material, both education and democratic discourse could be undermined. This trend poses risks to informed citizenship and professional competence.
AI Ethics and Societal Impact
OpenAI pressed on safety after deaths of ChatGPT users
September 3, 2025 | Financial Times (U.S. state Attorneys-General)
State attorneys-general from California and Delaware have challenged OpenAI’s safety protocols following reports that users of ChatGPT were involved in tragic incidents, including a teenager’s suicide and a murder-suicide allegedly linked to prolonged interactions with the chatbot. The officials say existing safeguards are inadequate and are pressing OpenAI to improve its protections. These demands intersect with OpenAI’s proposed corporate restructuring (a partial conversion to a for-profit entity), which is under review.
​
Why it matters: These legal and regulatory pressures may force tighter oversight, more robust safety mechanisms, and could impact how OpenAI (and similar companies) structure themselves, balance profit vs risk, and design for vulnerable users. It’s also likely to inform emerging policy and regulation for AI safety in the U.S. and elsewhere.
Original link: https://www.ft.com/content/f4be38b3-2de9-4b81-bc47-24119c2d5aef
AI Darwin Awards: spotlight on AI misadventure
September 9, 2025 | AI Darwin Awards / Media outlets
The AI Darwin Awards initiative has opened nominations for 2025 to highlight what it calls “spectacularly bad” or reckless uses of AI — those decisions that ignored safety, ethics or common sense before deployment. Nominees include incidents like an AI failing to understand drive-thru taco orders, or agents producing fake legal citations. The awards are tongue-in-cheek, but seek public and expert voting to draw attention to AI misuse.
​
Why it matters: By turning failures and overconfidence into visible, shared cautionary tales, the Awards may increase awareness among developers, users, and regulators about what not to do — potentially nudging better risk assessment and pre-deployment checks in the AI field.
Original link: https://aidarwinawards.org/
Anthropic Economic Index: Uneven geographic and enterprise AI adoption
September 15, 2025 | Anthropic
Anthropic’s Economic Index reports that AI (especially via Claude) is being adopted rapidly, but unevenly across geographies, sectors, and income/skill levels. Wealthier countries and U.S. states with knowledge-based industries show much higher usage; less developed regions lag behind. In the U.S., ~40% of employees now use AI tools at work (up from ~20% in 2023). Coding remains the largest share of usage, but education and scientific sectors are increasingly prominent users. Enterprise users lean more toward automation, while individual use remains more mixed.
​
Why it matters: The report flags a risk that AI’s economic gains reinforce existing inequalities: regions, occupations, and people with higher skills are benefitting more. For policymakers and companies, this suggests interventions are needed—training, infrastructure investment, equitable access—to avoid widening divides and ensure more inclusive benefits from AI adoption.
AI Companions & Teens: Stanford Study
September 4, 2025 | Stanford University News
Stanford researchers examined how teenagers are forming relationships with AI companions/chatbots. The study found teens often ascribe emotional weight and intimacy to these interactions, with some reporting comfort and support, while others noted confusion or over-attachment. Risks include blurring boundaries between real and artificial empathy, and exposure to manipulation or unhealthy patterns.
​
Why it matters: AI companions are marketed as harmless fun or support, but for vulnerable youth they may reshape emotional development. The study adds urgency to calls for safeguards, parental guidance, and ethical standards in designing AI companions for minors.
Original link: https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens
Computer scientist Geoffrey Hinton: “AI will make a few people much richer and most people poorer”
September 2025 | Financial Times
AI pioneer Geoffrey Hinton warns that current trajectories risk widening inequality. He argues AI’s productivity gains will flow to a small group of owners and investors, while many workers face displacement or stagnant wages. Hinton highlights that policy interventions — including wealth redistribution and regulation — will be essential to prevent worsening inequality.
​
Why it matters: Hinton’s remarks add urgency to ongoing debates about how to ensure AI benefits are more broadly shared, framing inequality as one of AI’s most immediate societal risks.
The Rise of the AI Influencer
September 2025 | Financial Times
AI-generated influencers are entering the creator economy, offering brands customizable, low-cost alternatives to human influencers. Platforms like Meta are even helping creators launch AI avatars of themselves. While advertisers see efficiency, concerns grow about authenticity, creative labor displacement, and the erosion of human identity in media.
​
Why it matters: AI influencers could disrupt marketing and entertainment by amplifying brand control while sidelining human creators, reshaping trust and culture online.
UK Sought Broad Access to Apple Customers’ Data
August 2025 | Financial Times
Court documents revealed the UK government pressed Apple to provide access not only to secure iCloud accounts but also standard backups and messages, potentially affecting users worldwide. Apple is fighting the request as a threat to privacy.
Why it matters: The case highlights a global battle between governments seeking surveillance access and tech firms defending encryption, with huge implications for user privacy and security.
​
Why AI labs struggle to stop chatbots talking to teenagers about suicide
September 2025 | Financial Times
AI safety researchers have found that despite extensive guardrails, large language models sometimes fail to prevent discussions with minors about sensitive topics like suicide. The persistence of these gaps underscores the limits of automated safety controls in emotionally charged contexts.
​
Why it matters: This exposes critical shortcomings in AI alignment, raising ethical and regulatory concerns as chatbots become ubiquitous.
​
AI medical tools downplay symptoms in women and ethnic minorities
September 2025 | Financial Times
Research has revealed that AI-powered diagnostic tools often fail to accurately interpret symptoms in women and people from ethnic minority backgrounds. These biases stem from under-represented training data, raising concerns that medical AI could exacerbate existing health inequalities. Regulators are now assessing safeguards to address these shortcomings.
​
Why it matters: AI in healthcare promises efficiency and scale, but if unchecked, it risks entrenching systemic inequalities rather than alleviating them. Ensuring fairness and accuracy across populations is critical to safe adoption.
​
AI risks widening global wealth gap, WTO warns
September 2025 | Financial Times
The World Trade Organization (WTO) cautioned that the economic benefits of AI are disproportionately accruing to wealthy nations and companies, leaving developing economies at risk of further marginalisation. The WTO called for policies that promote inclusive access to AI technologies, skills, and infrastructure.
​
Why it matters: AI could deepen structural inequalities between countries, limiting opportunities for global growth. Addressing this imbalance is vital to ensure technological progress fosters shared prosperity rather than division.
AI will disrupt equity research from the bottom up
September 2025 | Financial Times
Analysts warn that generative AI is set to reshape equity research, especially in automating routine modelling and report drafting. Junior roles in investment banks are most exposed, as AI tools can perform tasks that traditionally served as training grounds for human analysts. Senior decision-making, however, is less likely to be automated.
​
Why it matters: The disruption highlights a broader trend: AI is most immediately replacing entry-level knowledge work, raising questions about career pipelines, skill development, and the future structure of white-collar industries.
AI Employment and the Workforce
AI-Generated “Workslop” Is Destroying Productivity
September 22, 2025 | Harvard Business Review
A Harvard Business Review piece warns that corporate adoption of generative AI has created “workslop”—a glut of low-value, AI-generated content. Despite rapid uptake (AI use at work has doubled since 2023), 95% of organizations report no measurable return on their AI investments. The article suggests that while workers comply with AI mandates, the real productivity impact is often negative, cluttering workflows and diluting attention.
​
Why it matters: The analysis highlights a growing gap between AI hype and actual workplace value, pointing to risks of wasted investment and employee disengagement. Organizations may need to shift from blanket adoption to more targeted, high-impact AI use cases.
Original link: https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
The Perils of Vibe Coding
August 28, 2025 | Financial Times
​
“Vibe coding” refers to a workflow in which developers lean heavily on AI-generated code (via large language models), sometimes accepting outputs with minimal oversight. While big tech companies (Microsoft, Google, etc.) tout the practice as a productivity booster, the article highlights concerns: unpredictability, hallucinated outputs, debugging burdens, security risks, and the possibility that productivity gains are overstated or even reversed when the approach is over-used.
​
Why it matters: As more teams adopt AI tools for software development, this piece serves as a caution: unchecked “vibe coding” can introduce risks and technical debt. Organizations will need to balance speed and creativity with oversight, testing, and maintenance strategies to ensure reliability and safety.
Original link: https://www.ft.com/content/5b3d410a-6e02-41ad-9e0a-c2e4d672ca00
​
Good Vibrations? A Qualitative Study of Co-Creation, Communication, Flow, and Trust in Vibe Coding
September 15, 2025 | Veronica Pimenova, Sarah Fakhoury, Christian Bird, Margaret-Anne Storey, Madeline Endres (arXiv)
​
This paper offers one of the first systematic qualitative studies into how developers experience “vibe coding” — including perceptions about flow, trust, collaboration, and where the approach fails. Based on interviews and publicly shared content (Reddit, LinkedIn), it finds that vibe coding is appealing for rapid prototyping, experimentation, and creative tasks. But pain points include reliability, correctness, debugging overhead, prompt engineering, and often a lack of understanding of generated code. Best practices are emerging (review loops, fallback mechanisms, pairing with human oversight).
​
Why it matters: This adds empirical weight to debates around vibe coding. It helps practitioners, tool builders, and researchers understand not just what vibe coding is, but how to make it safer and more productive — especially in professional / enterprise contexts.
Original link: https://arxiv.org/abs/2509.12491
Study: Most Professionals Say Human Networks Still Outperform AI for Insight & Guidance
September 2025 | Financial Times
​
A LinkedIn-based global survey shows that despite increased integration of AI tools into workflows, a majority of professionals (≈ 64%) believe personal and professional human networks (colleagues, mentors, peers) provide more valuable insights and decision-guidance than AI. The report underscores that while AI is useful for efficiency, many still distrust its contextual judgment, emotional nuance, or domain-specific expertise compared to human interaction.
​
Why it matters: This suggests that AI’s promise in decision support and advisory roles still faces legitimacy and trust barriers. For organizations designing AI-augmented tools or knowledge systems, investing in human-AI hybrid models (where human judgment is visible and respected) might be more acceptable than trying to replace human insight.
Which Economic Tasks Are Performed with AI?
September 13, 2025 | arXiv (Anthropic researchers)
​
Analyzing over 4 million Claude conversations, this study maps AI use to U.S. Department of Labor O*NET occupational categories. Results show concentration in software development and writing tasks, accounting for nearly half of all AI use. But diffusion is broader: at least 25% of tasks in many occupations now involve AI. Importantly, 57% of usage augments human work (iterating, brainstorming), while 43% automates tasks outright. AI use is highest in mid- to high-skill, high-wage occupations; low-wage and very high-wage professions (like physicians) show relatively low use.
​
Why it matters: This is among the first systematic, large-scale measurements of real AI use in work. It provides early indicators of which occupations and tasks are most affected, informing labor policy, reskilling efforts, and strategic workforce planning. It also highlights AI’s dual role as both collaborator and automator.
Original link: https://arxiv.org/abs/2503.04761
How People Use ChatGPT
September 15, 2025 | OpenAI & National Bureau of Economic Research (NBER)
​
Using a privacy-preserving pipeline, the paper analyzes a representative sample of ChatGPT conversations (consumer usage) from May 2024 to July 2025. Key findings: non-work messages now make up over 70% of all usage (up from ~53%) even though work-related messages have also grown. The top categories are Practical Guidance, Seeking Information, and Writing, which together account for ~80% of all conversations. Writing is especially dominant in work contexts; by contrast, coding and expressive tasks are smaller shares. Knowledge-intensive workers benefit most.
​
Why it matters: This shifts how we think about ChatGPT’s role: it’s not just a productivity tool but deeply integrated into people’s everyday life. For policy, product design, and economic measurement, this means accounting for usage outside of traditional “work” — including how value is created outside paid labor.
Original link: https://www.nber.org/papers/w34255
The AI Adoption Gap: 90% of Workers Use It, Most Still Don’t Trust It (Udacity Report)
September 11, 2025 | Udacity
​
The report finds that ~90% of professionals are using AI tools in their work — but there’s a large trust deficit. Key stats: 3 out of 4 workers abandon tasks midstream due to concerns about accuracy or poor quality; 45% distrust deliverables created with AI by others; younger workers (Gen Z especially) are more comfortable with AI overall but more critical of its misuse. Common uses include writing, reports, content, coding, and data analysis.
​
Why it matters: High adoption without trust means many organizations get the “costs” (experimentation, risk, frustration) without full benefits. For real productivity gains, firms must invest in better training, clearer quality standards, and governance/policies to build trust in outputs—and manage workplace norms around AI use.
​
UK Acas Warns on AI Workplace Disputes
August 29, 2025 | Financial Times / Acas (UK workplace arbitration body)
​
Acas has warned of a likely surge in workplace disputes linked to AI, from hiring and firing decisions to monitoring and surveillance. They anticipate growing grievances around bias, transparency, and unfair treatment as AI becomes embedded in HR and management systems. Guidance for employers stresses consultation, transparency, and involving unions early.
Why it matters: This indicates regulators and mediators are bracing for AI to reshape labor relations—not only via job loss but also by changing the fairness of management practices. Employers ignoring these risks could face legal challenges and eroded trust.
Original link: https://on.ft.com/4lVx9dl
AI agents still need a human in the mix for legal tasks
September 4, 2025 | Financial Times
​
Legal departments are experimenting with “agentic AI”: software that can independently perform multi-step legal processes. Salesforce uses its in-house tool, Agentforce, to answer contract queries, negotiate NDAs, and triage legal requests, saving nearly 9,500 hours a year. Yet fewer than 1% of large companies currently use such tools, with many maintaining strong human oversight due to concerns over errors, liability, and hallucinations. Some vendors argue AI can sometimes outperform humans, but most agree that hybrid human–AI systems remain necessary.
​
Why it matters: AI agents promise significant efficiency gains in legal work, but the risks of autonomy without oversight highlight broader questions about accountability and trust in AI-driven decision-making.
AI can’t write good analyst research yet, says analyst
September 11, 2025 | FT Alphaville
​
A Bernstein research team tested leading AI models on tasks usually done by equity analysts. While AI performed well at extracting and visualising financial data, it failed when asked to build predictive models or initiate stock coverage. Even with extensive prompting and data, outputs were riddled with errors and lacked insight. Analysts concluded AI could assist with grunt work but not replace the judgment, context, and nuance that define high-quality research.
​
Why it matters: The limits of AI in financial analysis highlight the enduring value of human expertise in complex, forward-looking decision-making.
It’s Been a Terrible Year to Graduate and Find a Job
September 2025 | Financial Times
​
This article looks at the worsening job market for recent graduates, hit by economic downturns, layoffs in tech and finance, and shifting skills demands. Many young people face precarious employment, underemployment, or extended job searches.
​
Why it matters: A struggling graduate workforce risks long-term scarring effects on productivity, earnings, and social mobility, exacerbating inequality in the next generation.
Sending out an SOS: Save our Secretaries
September 2025 | Financial Times
​
Once central to organisational life, secretaries are increasingly undervalued and replaced by digital tools or dispersed administrative tasks. Yet their disappearance exposes inefficiencies and a loss of institutional knowledge.
​
Why it matters: The erosion of support roles shows how automation often overlooks the hidden labor that keeps workplaces functioning smoothly, raising questions about the true costs of technological change.
Tiny Teams, Big Dreams
September 2025 | Financial Times
​
A wave of AI-native start-ups is scaling with tiny workforces, as automation enables a handful of people to achieve what once required hundreds. These companies attract serious investment, pointing to a potential shift in corporate structure and employment patterns.
​
Why it matters: If sustainable, this model could challenge assumptions about growth, reduce traditional job opportunities, and redefine what it takes to build a billion-dollar company.
Why AI won’t take my job
September 2025 | Financial Times
​
Columnist Rana Foroohar recounts her attempt to outsource book-writing tasks to ChatGPT, finding the AI’s results technically accurate but emotionally flat. While useful for summarisation and data analysis, the technology failed at narrative style, originality, and capturing human nuance.
​
Why it matters: The piece illustrates the enduring value of human creativity, intuition, and “felt experience” in professions where authenticity is core.
Will AI kill the pop star?
September 2025 | Financial Times
​
AI-generated music tools like Suno and Udio can rapidly create songs, prompting fears about replacing human musicians. While they pose a threat to commercial “library music,” they have yet to disrupt mainstream pop, where personality and emotional connection remain central. Copyright lawsuits loom over training practices.
​
Why it matters: The debate mirrors past disruptions in music but underscores that AI’s role may be more about reshaping workflows than eliminating human artistry.
Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave
September 2025 | Financial Times
​
Meta’s aggressive recruitment of high-profile AI researchers, including ChatGPT co-creator Shengjia Zhao, has led to internal turbulence, with swift departures, clashes over management, and multiple restructurings of its AI division. CEO Mark Zuckerberg remains deeply involved, but tensions highlight the challenges of integrating elite talent into a corporate giant.
​
Why it matters: The churn underscores the volatility of the AI talent market and the difficulty of aligning research ambition with Big Tech bureaucracy.
The UK is squandering its AI talent
September 16, 2025 | Financial Times
​
Mike Bracken, former UK government digital chief, warned that Britain’s reliance on overseas tech suppliers risks ceding control of AI. Despite investments in zones and infrastructure, the UK lacks sovereign champions like France’s Mistral or Germany’s Helsing. Historical decisions, such as the outsourcing of NHS systems and the sale of DeepMind, are cited as lost opportunities.
​
Why it matters: Without backing domestic AI firms, the UK may miss the chance to build a competitive, sovereign AI ecosystem.
Impact of an Artificial Intelligence Revolution
September 2025 | Research Paper
​
The study examines how the accelerating adoption of AI is likely to reshape labor markets, productivity, and global power balances. It highlights opportunities for economic growth but warns of major disruptions in employment, inequality, and governance if policy responses lag. The analysis emphasizes the need for robust safety frameworks and proactive international cooperation.
​
Why it matters: This work underscores both the transformative promise and systemic risks of AI, framing the technology as not just a business tool but a societal and geopolitical force.
Goldman Sachs bankers explore limits of AI: ‘The risk is over-reliance’
September 2025 | Financial Times
​
Goldman Sachs has rolled out its AI assistant to all 46,000 employees, with bankers reporting major productivity gains. However, senior staff warn of the risk of over-reliance, stressing that accountability, judgment, and client nuance cannot be replaced by machines.
​
Why it matters: The case illustrates both the promise and peril of AI in high-stakes industries. Productivity gains are real, but overuse risks eroding professional expertise and the trust essential to client services.
‘I have to do it’: Why one of the world’s most brilliant AI scientists left the US for China
September 2025 | The Guardian
​
The Guardian profiled a leading AI scientist who relocated from the US to China, citing personal conviction and a sense of mission despite geopolitical tensions. The move underscores China’s growing ability to attract world-class AI talent, even in the face of strained relations and restrictions on technology transfer. The scientist emphasised a desire to advance AI research in an environment offering more resources and fewer political hurdles.
​
Why it matters: Talent migration is becoming as strategically important as hardware and infrastructure. This case highlights how competition for expertise is shaping the global AI race, with national policies influencing individual career choices.
AI Development and Industry
NotebookLM adds new audio overview formats: Brief, Critique & Debate
September 2, 2025 | Google / 9to5Google
​
NotebookLM (Google Labs) is expanding its “Audio Overview” feature with three new formats beyond the existing Deep Dive style: Brief (short summaries), Critique (an expert-style review of sources), and Debate (two AI hosts argue different perspectives). These join other recent enhancements like audio overview support in many non-English languages. The update gives users more control over tone, length, and style when consuming their uploaded sources in audio form.
​
Why it matters: This increases the versatility of how people consume information—useful for learners, researchers, people who prefer auditory learning or multi-tasking. It also reflects a trend of making AI summarisation tools more interactive and customisable, which may reshape expectations for AI assistants in education or knowledge work.
Original link: https://9to5google.com/2025/09/02/notebooklm-audio-overview-debate/
Claude for Chrome: agent embedded in browser environment
August 27, 2025 | Anthropic / TechCrunch
​
Anthropic has launched Claude for Chrome, a browser-agent version of Claude that appears in a sidebar (sidecar) window and can maintain context across browser tabs. It can see what the user is doing in the browser, click buttons, fill forms, and act on the user’s behalf. The rollout is limited (initially to around 1,000 Max-plan users).
​
Why it matters: This marks a further step toward agentic AI — AI that doesn’t just respond, but interacts with a user’s existing tools and environment. It raises usability gains (fewer manual copy/pastes, more seamless workflows) but also heightens risks around security, privacy, and control (what the agent can access or do without oversight).
Original link: https://techcrunch.com/2025/08/26/anthropic-launches-a-claude-ai-agent-that-lives-in-chrome/
AI Stethoscope Could Detect Major Heart Conditions in Seconds
September 4, 2025 | BBC / Eko Health / Imperial College London
​
A UK-based study shows that an AI-enhanced stethoscope developed by Eko, in collaboration with Imperial College London and the Imperial College Healthcare NHS Trust, can detect three major heart conditions (atrial fibrillation, valve disease, and heart failure) much faster and earlier during routine exams. The device leverages digital auscultation and machine learning to flag conditions in seconds, compared with slower detection via traditional physical exams.
​
Why it matters: This could significantly improve early diagnosis of serious cardiac conditions, especially in primary care, reducing delays and possibly lowering downstream costs (hospitalisation, treatment). It also shows how AI tools are moving from experimental settings into front-line clinical workflows.
Original link: https://www.bbc.co.uk/news/articles/c2l748k0y77o
10 AI Applications Shaping the Future
August 30, 2025 | TIME / General Catalyst
​
TIME profiled General Catalyst’s “AI 10” list—applications where AI is expected to have outsized societal impact: healthcare diagnostics, drug discovery, agriculture optimisation, climate modeling, education personalisation, financial inclusion, legal aid, scientific discovery, creative industries, and cybersecurity. Each domain features startups or initiatives funded by General Catalyst, reflecting investor priorities.
​
Why it matters: This highlights how venture funding is steering AI deployment—toward sectors with both market potential and major social stakes. It also shows where early adopters and policymakers might expect rapid changes in the next 2–3 years.
Original link: https://time.com/7312183/general-catalyst-ai-apps
​
AI is opening up nature’s treasure chest
August 28, 2025 | Financial Times
​
The Natural History Museum in London is digitising its collection of more than 80 million specimens to create machine-readable datasets for global research. Start-ups like Basecamp Research are sequencing new species to build an “internet of biology,” partnering with Nvidia and Microsoft to create AI foundation models for biology. However, questions remain over the commercial value of such datasets and ethical concerns about biopiracy. Basecamp pledges revenue sharing with local partners, aiming to set a new precedent for fair use.
​
Why it matters: Turning natural collections into AI-ready datasets could transform science and industry, while raising critical debates about ownership, ethics, and benefit-sharing.
Anthropic to stop selling AI services to majority Chinese-owned groups
September 2025 | Financial Times
​
Anthropic has decided to halt sales of its AI products and services to companies that are majority-owned by Chinese investors. The move is reportedly in response to regulatory and political pressures in the U.S. about national security and the flow of advanced AI capabilities to foreign entities. The restriction highlights how AI firms are increasingly navigating geopolitical tensions while expanding commercially.
​
Why it matters: This development underscores the growing entanglement of AI with geopolitics and trade restrictions. It signals that access to advanced AI services will increasingly be shaped by national security concerns, not just market demand.
How ‘neural fingerprinting’ could analyse our minds
September 2025 | Financial Times
​
New brain-scanning technology using optically pumped magnetometers (OPM-MEG) can map brain activity in unprecedented detail, creating “neural fingerprints” unique to individuals. Researchers see promise for diagnosing disorders such as dementia, schizophrenia, and epilepsy. However, concerns are rising about privacy and the potential misuse of such detailed neural data, from social profiling to state surveillance.
​
Why it matters: Neural fingerprinting could revolutionise medicine — but also risks ushering in dystopian applications if regulation lags behind technological capability.
How Chatbots Are Changing the Internet
September 2025 | Financial Times
​
The spread of conversational AI is reshaping the way people search for information, consume news, and interact online. Chatbots increasingly act as intermediaries, filtering and summarising content — with implications for search engines, advertising models, and access to diverse perspectives.
​
Why it matters: Chatbots are redefining the structure of the web and information ecosystems, raising urgent questions about monopolies, bias, and the economics of online content.
​
Mantic, Which Claims to Forecast Global Events, Emerges from Stealth
September 2025 | Financial Times
​
London-based startup Mantic emerged from stealth with $4m in pre-seed funding, claiming its AI can rival elite superforecasters in predicting world events. Founded by ex-DeepMind and Cambridge researchers, it promises applications for geopolitics, supply chains, and business planning.
​
Why it matters: If effective, AI-powered forecasting could give governments and companies an edge in anticipating shocks, but also raises questions about overreliance on opaque models for high-stakes decisions.
OpenAI to Mass Produce AI Chips with Broadcom
September 2025 | Financial Times
​
OpenAI will co-design and mass produce custom AI chips with Broadcom, committing to $10bn in orders. The move aims to address soaring demand for computing power and reduce reliance on Nvidia. Broadcom’s stock surged on the news.
​
Why it matters: This is a strategic step for OpenAI to control its hardware supply chain and ensure scalability—while intensifying competition in the AI chip market.
​
Publishers race to counter ‘Google Zero’ threat as AI changes search engines
August 2025 | Financial Times
​
Publishers are scrambling to adapt to the rise of AI-driven search tools that generate answers directly, reducing traffic to news sites—a phenomenon dubbed “Google Zero.” Many are experimenting with new business models, partnerships, and subscription offerings, but uncertainty looms over how journalism can remain sustainable.
​
Why it matters: The shift to AI-powered search threatens to upend the economic foundation of news media, raising urgent questions about how societies will fund independent journalism.
Rise of AI shopping ‘agents’ set to transform ecommerce
August 2025 | Financial Times
​
AI-powered shopping assistants are rapidly emerging, capable of handling everything from product comparisons to completing purchases. Retailers are both excited and wary: while agents may drive sales, they also risk disintermediating brands and controlling consumer choices. This shift could radically reshape how online shopping works.
​
Why it matters: If AI agents become mainstream, they could fundamentally change consumer behaviour and retail economics, consolidating power in whoever controls the shopping assistants.
​
Why is AI struggling to discover new drugs?
September 2025 | Financial Times
​
Despite early hype, AI-driven drug discovery has hit roadblocks. Models excel at identifying patterns in data but struggle with the complexity of biological systems, the limits of training datasets, and the high costs of validation. Progress is real but slower and more incremental than once promised.
​
Why it matters: The gap between promise and practice tempers expectations for AI in biotech, with implications for investment and healthcare innovation timelines.
Mark Zuckerberg promises new smart glasses will unlock ‘superintelligence’
September 18, 2025 | Financial Times
​
Meta unveiled its “Ray-Ban Display” smart glasses, the first with a built-in screen to overlay AI assistant outputs, calls, and texts. Controlled via a neural-interface wristband, the product represents Zuckerberg’s vision of wearables as replacements for smartphones. However, the demo was marred by glitches, raising doubts about readiness.
​
Why it matters: The launch highlights the high-stakes battle to define the next dominant personal device, but consumer adoption will hinge on reliability and utility.
The smart glasses race has finally started
September 18, 2025 | Financial Times
​
Meta’s launch is seen as the true start of the smart glasses era, more than a decade after Google Glass faltered. Multiple major tech firms are expected to release competing products, setting the stage for a new round of consumer hardware wars.
​
Why it matters: Control of this new form factor could reshape everyday human-computer interaction and determine market leadership in the post-smartphone world.
​
New AI model predicts susceptibility to over 1,000 diseases
September 17, 2025 | Financial Times / Nature
​
Scientists at the European Molecular Biology Laboratory created Delphi-2M, an AI model trained on UK Biobank data that forecasts risk across more than 1,000 diseases. Tested on Danish records, the model performed comparably to specialised prediction tools, with potential clinical use in 5–10 years. It is already valuable for population-level planning.
​
Why it matters: This breakthrough could transform preventive healthcare, though ethical, practical, and accuracy challenges remain before clinical adoption.
Shipping industry enlists AI to tackle rising number of cargo fires
September 15, 2025 | Financial Times
​
The World Shipping Council announced an AI-based system to scan cargo bookings and detect undeclared hazardous goods, following a decade-high number of ship fires. Lithium-ion batteries and misdeclared goods were identified as major risks. Insurers and carriers covering 70% of container freight have joined the initiative.
​
Why it matters: AI could help avert deadly maritime accidents, but effectiveness depends on broad adoption and enforcement.
​
China’s flying car start-ups take their case to the skies
September 2025 | Financial Times
​
EHang, a Chinese flying car company, became the first globally to receive regulatory approval for unmanned passenger flights, starting with tourist routes in Guangzhou and Hefei. Despite safety incidents, the industry is advancing rapidly, with plans for mass deployment in transport and logistics.
​
Why it matters: China’s leadership in flying car technology illustrates its ambition to dominate next-generation transport. If successful, it could reshape urban mobility and confer long-term strategic advantages.
DeepMind and OpenAI achieve gold at ‘coding Olympics’ in AI milestone
September 2025 | Financial Times
​
DeepMind’s Gemini 2.5 and OpenAI’s GPT-5 models matched or outperformed top human competitors at the International Collegiate Programming Contest, solving problems with unprecedented accuracy. This marks one of the clearest demonstrations of AI achieving elite human-level reasoning in coding.
​
Why it matters: The achievement is a symbolic step towards artificial general intelligence, suggesting AI is rapidly advancing from specialised tasks to mastering open-ended problem-solving at scale.
AI Regulation and Legal Issues
Elon Musk’s xAI sues Apple and OpenAI over ChatGPT and iPhone integration
August 2025 | Financial Times
​
Elon Musk’s AI company xAI has sued Apple and OpenAI, alleging antitrust violations over ChatGPT’s exclusive integration into Apple devices. The lawsuit claims the deal unfairly blocks competition, giving OpenAI privileged access to Apple’s ecosystem while disadvantaging rivals like xAI’s Grok. The case escalates Musk’s ongoing feud with Sam Altman and introduces Apple into the rivalry.
​
Why it matters: This lawsuit is not just about one integration — it reflects larger battles over control of AI distribution channels. The outcome could shape whether consumers get locked into a few dominant AI systems or have wider choice.
​
Google dodges a bullet
September 2025 | Financial Times
​
A U.S. court ruling against Google imposed lighter-than-expected sanctions after finding it had maintained a monopoly in search. While the company must share some data and avoid certain exclusive contracts, it avoided forced divestitures of Chrome or Android. Analysts note the decision leaves Google in a strong position to dominate AI markets, using its financial strength to compete with rivals like OpenAI.
​
Why it matters: Rather than constraining Google, the ruling may free it to aggressively expand in AI. This shows how even when found guilty of monopolistic behavior, Big Tech often emerges with little more than minor restrictions.
​
How Meta Is Using the Trump Administration to Lobby Against EU Tech Rules
September 2025 | Financial Times
​
Meta has reportedly leaned on its close ties with the Trump administration to push back against stricter EU regulations on data, competition, and AI. The piece details lobbying strategies that leverage U.S. political backing to influence European policymaking.
​
Why it matters: This highlights the geopolitical dimension of AI and tech regulation, where U.S. companies seek to shape global rules in ways that protect their business models, sometimes at odds with democratic oversight abroad.
​
Maga vs AI: Donald Trump’s Big Tech Courtship Risks a Backlash
September 2025 | Financial Times
​
Donald Trump has embraced Silicon Valley leaders and accelerated AI adoption, but this has sparked pushback within his Maga base. Conservatives warn AI threatens jobs, culture, and even democracy, while tragic cases like teen suicides linked to chatbots fuel moral outrage. Some Republicans are now pushing to rein in Big Tech despite Trump’s support.
​
Why it matters: The political rift underscores that AI is not just a technological issue but a cultural flashpoint that could reshape U.S. partisan divides and future regulation.
​
Should the public sector build its own AI?
September 9, 2025 | Financial Times
​
Gideon Lichfield examines attempts by governments to create sovereign AI systems, such as Switzerland’s Apertus, which is open-source and culturally tailored. Unlike private AI, these public systems are designed to align with local values and priorities. While such efforts face challenges of cost, scale, and talent, proponents argue that AI should be treated as public infrastructure, like water or electricity, to ensure accountability and cultural relevance.
​
Why it matters: The debate highlights a growing pushback against AI monopolies, suggesting that democracies may need to build independent systems to safeguard sovereignty and trust.
​
Stand up to Trump on Big Tech, says EU antitrust chief
August 29, 2025 | Financial Times
​
EU competition commissioner Teresa Ribera urged the bloc to resist US pressure under President Trump to water down Europe’s digital regulations. She warned Brussels must be prepared to walk away from trade talks if coerced into weakening the Digital Services Act and Digital Markets Act. Ribera stressed the importance of protecting EU sovereignty over its tech rules, even amid threats of tariffs.
​
Why it matters: This confrontation underscores the geopolitical weight of tech regulation, with Europe positioning itself as a counterbalance to US-led Big Tech dominance.
​
Taiwan’s cable concerns and Apple’s AI issues
September 2025 | Financial Times
​
This article outlines Taiwan’s worries about undersea cable security amid rising tensions with China and its implications for global internet stability. It also discusses Apple’s challenges in rolling out AI-powered features in its products while keeping costs stable despite tariffs. Together, the two issues illustrate the vulnerabilities in both digital infrastructure and consumer tech supply chains.
​
Why it matters: Taiwan’s central role in global tech infrastructure and Apple’s AI struggles reveal how geopolitics and innovation are deeply intertwined, with risks that could ripple across industries.
​
Protecting Big Tech, not free speech
August 2025 | Financial Times
​
This opinion piece critiques how US free speech debates are being used to shield Big Tech companies from regulation, rather than genuinely protecting democratic freedoms. It argues that framing content moderation rules as threats to speech serves corporate interests, while the real issues lie in accountability and monopoly power.
​
Why it matters: The narrative around free speech is increasingly weaponised in tech policy, influencing global regulation and the balance between corporate control and public interest.
​
US ends international push to combat fake news from hostile states
September 2025 | Financial Times
​
Washington has pulled back from coordinating an international coalition to counter state-backed disinformation campaigns, citing political divisions and difficulties in aligning approaches with allies. Analysts warn this could leave democracies more exposed to foreign influence operations.
​
Why it matters: As AI accelerates the spread of deepfakes and synthetic media, a weakened multilateral front increases vulnerabilities to hostile disinformation.
​
US regulator launches inquiry into AI ‘companions’ used by teens
September 2025 | Financial Times
​
Regulators in the US have opened an inquiry into AI chatbots and digital “companions” marketed to teenagers, probing risks related to mental health, data use, and safety safeguards. Lawmakers are concerned that these tools may encourage unhealthy attachments or expose young users to harmful content.
​
Why it matters: The investigation highlights growing scrutiny of AI’s social impacts, especially in safeguarding children and adolescents.
​
Keir Starmer set to unveil digital ID scheme
September 19, 2025 | Financial Times
​
UK Prime Minister Keir Starmer is preparing to launch a digital ID system aimed at reducing illegal migration. The scheme would give IDs to all citizens and those with legal immigration status, intended for right-to-work checks and rental eligibility. While polling shows slim majority support, critics question the need given existing e-visa systems and worry about privacy and enforcement in areas like healthcare.
​
Why it matters: The proposal signals a renewed push for digital governance in the UK, but risks igniting debate over surveillance, feasibility, and civil liberties.
​
Protecting Big Tech, not free speech, again!
September 8, 2025 | Financial Times
​
Nigel Farage denounced the UK’s Online Safety Act in Washington, arguing it endangers free speech, aligning himself with Trump and Big Tech against regulation. Critics contend such rhetoric cloaks efforts to resist accountability and enable disinformation. The push also targeted Europe’s Digital Services Act, sparking backlash from academics and lawmakers.
​
Why it matters: The controversy highlights how “free speech” arguments are used to shield tech firms from oversight while undermining democratic regulation.
​
China cracks down on use of live-streaming and AI to sell religion
September 2025 | Financial Times
​
Beijing has issued sweeping new rules limiting the use of AI, live-streaming, and digital platforms for religious activity. The crackdown follows scandals, including an investigation into the abbot of Shaolin Temple, and reflects efforts to curb the commercialisation of religion online.
​
Why it matters: The move highlights how AI regulation is being extended into cultural and social domains, reflecting China’s broader strategy of asserting state control over technology use across all facets of life.
​
Companies’ legal teams feel the AI spark
September 2025 | Financial Times
​
Corporate legal departments across Europe, including National Grid, HSBC, and ASML, are rapidly adopting AI tools for contract analysis, risk assessment, and workflow automation. The FT’s Innovative Lawyers report shows AI is transforming the in-house role from administrative support to strategic leadership.
​
Why it matters: Legal services exemplify how AI is moving beyond efficiency gains into reshaping professional functions. The shift could redefine corporate governance, compliance, and the economics of law.
​
How a former junior lawyer created a $5bn AI legal start-up
September 2025 | Financial Times
​
Winston Weinberg, a former junior lawyer, co-founded Harvey, an AI legal platform built on large language models, now valued at $5bn. Since its launch in 2022, Harvey has expanded to more than 500 clients, including top law firms and corporates such as KKR and Bridgewater Associates. Its tools support contract review and legal research, with investors including OpenAI, Sequoia, and Google Ventures. While critics argue it’s little more than ChatGPT repackaged, customers cite productivity gains and legal-tailored functionality.
​
Why it matters: Harvey’s rise demonstrates how AI is penetrating highly regulated, risk-averse sectors like law. The story also underscores how purpose-built AI solutions with strong backing can leapfrog into billion-dollar valuations.
​
How to legislate for AI in an age of uncertainty
September 2025 | Financial Times
​
Penn State law professor Martin Skladany argues that governments should adopt “adaptive AI laws” that only activate when specific conditions are met, such as rising unemployment or worsening inequality. This approach would allow regulators to prepare for multiple futures without prematurely stifling innovation. Adaptive laws could also address impacts in areas like healthcare, education, and child safety, shifting dynamically as evidence accumulates.
​
Why it matters: The piece reframes AI regulation as a problem of uncertainty management. By designing flexible, scenario-triggered laws, policymakers could avoid both paralysis and premature overreach, creating stability for innovators while safeguarding society.
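​
To make the proposal concrete, here is a minimal sketch of what a scenario-triggered rule might look like if encoded in software. This is purely illustrative: the indicator names, the 7% unemployment threshold, and the levy provision are invented for this example and are not part of Skladany’s article, which proposes no particular implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AdaptiveLaw:
    """A provision that stays dormant until its trigger condition is met."""
    name: str
    trigger: Callable[[Dict[str, float]], bool]  # evaluated against economic/social indicators
    provision: str  # the rule that takes effect once triggered

    def is_active(self, indicators: Dict[str, float]) -> bool:
        return self.trigger(indicators)

# Hypothetical example: a retraining levy that activates only if unemployment
# rises above 7% while AI capital spending is still growing.
levy = AdaptiveLaw(
    name="AI displacement retraining levy",
    trigger=lambda ind: ind["unemployment_rate"] > 7.0 and ind["ai_capex_growth"] > 0.0,
    provision="Large AI firms contribute to a national retraining fund.",
)

indicators = {"unemployment_rate": 7.4, "ai_capex_growth": 12.5}  # illustrative figures
status = "Activated" if levy.is_active(indicators) else "Dormant"
print(f"{status}: {levy.name}")
```

The appeal of the design, as the article argues, is that the statute’s text is fixed in advance while its force depends on observable conditions, letting lawmakers prepare for several futures at once.
​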
TikTok algorithm to be overseen by Oracle in Trump deal
September 2025 | Financial Times
​
Oracle will oversee and secure TikTok’s US algorithm as part of a White House-brokered deal requiring ByteDance to divest its American operations. US investors will take an 80% stake in a new joint venture, leasing ByteDance’s algorithm but retraining it on US data stored on Oracle servers. The agreement establishes an independent TikTok board with six of seven directors American, and brings in new investors including Rupert Murdoch and Michael Dell.
​
Why it matters: This reshapes the geopolitical battle over data sovereignty and platform control, showing that algorithms, not just data, are now viewed as strategic national assets.
​
EU to block Big Tech from new financial data sharing system
September 2025 | Financial Times
​
The EU, with strong backing from Germany, plans to exclude Apple, Google, Meta, and Amazon from its new Financial Data Access (FiDA) framework. The rules, designed to enable digital finance services, will allow fintechs to use bank and insurer data but bar Big Tech in the name of protecting Europe’s “digital sovereignty.” Banks lobbied hard against Big Tech access, warning of unfair competition. US officials, including President Trump, have threatened retaliatory tariffs, while tech groups argue the decision will limit consumer choice.
​
Why it matters: Europe is taking a firm stand against Big Tech’s encroachment into finance, asserting digital sovereignty but risking renewed transatlantic tensions.
​
AI Market and Investment
AI boom, AI bust?
September 2025 | Financial Times
​
The US economy is heavily reliant on AI-driven capital expenditure. Investment in data centres, software, and hardware now exceeds 6% of GDP, comparable to the dotcom and housing booms. Analysts warn that a sudden reversal could tip the economy into recession. While the rise has been more gradual than in past bubbles, the concentration of growth in AI raises systemic risks. The piece also draws parallels to broader market vulnerabilities and notes that European value stocks are outperforming expectations.
​
Why it matters: The AI infrastructure boom is fuelling growth but could represent a single point of failure for the wider economy if investment slows sharply.
​
Anthropic settles landmark copyright suit for $1.5bn
September 2025 | Financial Times
​
Anthropic agreed to a $1.5bn settlement with authors alleging it used pirated texts to train AI models, the largest publicly reported copyright recovery ever. The deal requires Anthropic to delete datasets sourced from sites like Library Genesis. While a June court ruling upheld “fair use” for some AI training, the judgment deemed storing pirated works irredeemably infringing. The settlement averts a potential $1tn liability that could have bankrupted the company.
​
Why it matters: This case sets a powerful precedent for how AI companies source training data, potentially reshaping the economics and legal risks of model development.
​
Anthropic valued at $170bn in expanded funding round
September 2025 | Financial Times
​
Anthropic secured $13bn in new funding, lifting its valuation to $170bn and making it the second most valuable private AI start-up after OpenAI. The round was led by sovereign wealth funds from the Middle East and Asia, alongside existing backers like Amazon and Google. The company plans to use the cash for expanding infrastructure, global hiring, and further development of Claude. Investors are betting that Claude’s reliability and enterprise positioning will help Anthropic carve out a durable share of the AI market.
​
Why it matters: This valuation reflects unprecedented investor appetite for AI, while raising questions about sustainability and concentration of power in a handful of firms.
​
AI start-up Lovable receives funding offers at $4bn valuation
September 2025 | Financial Times
​
Lovable, a start-up focused on automating front-end software development, has been offered funding at a valuation of $4bn, a huge leap just a year after launch. The company allows users to describe an app in plain language, with AI then generating a working version. Its investors, including prominent venture funds, see Lovable as an example of how agentic AI may remake software engineering. Still, competition is fierce, with rivals like Replit and Cursor offering similar services.
​
Why it matters: The deal shows how investors are betting on AI to upend software creation, potentially shifting engineering from coding to design and oversight. But it also highlights the crowded nature of the space.
​
‘Full of bugs’: how the world’s biggest carmakers fell behind in software
September 2025 | Financial Times
​
Global carmakers are struggling to keep pace with Tesla, BYD, and new entrants in developing reliable software for connected vehicles. Volkswagen, Toyota, and Stellantis have faced recalls, customer complaints, and delayed rollouts tied to buggy systems controlling everything from infotainment to battery performance. Efforts to build in-house software divisions have been plagued by delays, cost overruns, and talent shortages, with some groups now considering partnerships or outsourcing.
​
Why it matters: Software quality is becoming as decisive as hardware in the auto industry. Carmakers that fail to master it risk losing ground to newer rivals, potentially reshaping global market leadership.
​
ASML and Mistral agree €1.3bn blockbuster European AI deal
September 2025 | Financial Times
​
Dutch semiconductor giant ASML and French AI start-up Mistral have struck a €1.3bn deal to deepen collaboration in Europe’s technology ecosystem. The partnership will see the two companies align around hardware and software synergies, bolstering Europe’s ambition to build its own sovereign AI capacity. The deal has been described as one of the most significant AI-industrial tie-ups on the continent to date.
​
Why it matters: Europe has often lagged behind the US and China in AI leadership. This deal represents a rare large-scale collaboration that could mark a step toward European technological independence.
​
Competition law won’t break up Big Tech
September 2025 | Financial Times
​
Despite repeated legal rulings against companies like Google, US courts continue to prefer behavioural remedies rather than structural breakups. The latest decision against Google requires limited data-sharing but does not force divestitures of Chrome or Android. Legal experts argue that current antitrust doctrine is ill-suited to dismantling entrenched monopolies, leaving incumbents largely intact.
​
Why it matters: If courts remain reluctant to pursue structural remedies, Big Tech companies will continue to dominate. Preventing future monopolies may depend on blocking mergers and acquisitions rather than breaking up existing giants.
​
Google shares jump after judge refrains from ordering break-up
September 2025 | Financial Times
​
Following the ruling, Google shares rose nearly 7%, while Apple also saw gains. Investors had feared harsher outcomes, such as forced divestitures or bans on revenue-sharing deals. Instead, Google must limit exclusive contracts and share some data, but its key partnerships — like paying Apple to remain the default search engine — remain mostly intact.
​
Why it matters: The market’s positive reaction highlights how regulators’ attempts to curb monopolies can backfire, reinforcing investor confidence in Big Tech’s resilience.
​
Investors bet on Cambricon to be China’s next AI champion
September 2025 | Financial Times
​
Cambricon, a Chinese AI chipmaker, has attracted significant investment as it positions itself as a rival to Nvidia and other US chip giants. Backed by both private and state interests, the company aims to build a homegrown ecosystem of AI hardware.
​
Why it matters: The rise of Cambricon signals intensifying US-China competition in the strategic semiconductor sector, with implications for global supply chains, national security, and technological sovereignty.
​
Larry Ellison’s personal fortune soars on back of Oracle’s share price surge
September 2025 | Financial Times
​
Oracle co-founder Larry Ellison briefly overtook Elon Musk as the world’s richest person after Oracle shares surged nearly 36%, driven by massive AI-related contracts and cloud computing demand. The company’s bookings hit unprecedented levels, boosted by projects like Stargate.
​
Why it matters: The surge reflects how AI demand is reshaping the fortunes of legacy tech companies, redistributing wealth, and intensifying the concentration of power among a handful of billionaire founders.
​
Nvidia and OpenAI to back major UK AI investment
September 2025 | Financial Times
​
Nvidia and OpenAI are preparing to announce a large-scale AI infrastructure investment in the UK during President Trump’s state visit. The plan involves developing data centres worth billions, with the UK government providing energy, OpenAI supplying its tools, and Nvidia contributing chips. This follows a wave of “sovereign” AI infrastructure projects in Europe, Asia, and the Gulf.
​
Why it matters: The project signals how governments are prioritising national control over AI resources. For the UK, this is a strategic bid to keep pace in the global AI race.
​
Nvidia-backed Reflection nears $5.5bn valuation
September 2025 | Financial Times
​
Reflection AI, a year-old coding-focused start-up, is finalising a funding round that would value it at up to $5.5bn, ten times its valuation of six months ago. Backers include Nvidia, Sequoia, Lightspeed, and DST Global. The company, led by former Google and DeepMind researchers, aims to develop superintelligent systems.
​
Why it matters: Reflection’s rise underscores continued investor appetite for early AI bets, even as broader tech markets wobble. Its focus on superintelligence highlights both potential breakthroughs and escalating risks.
​
OpenAI and Microsoft plan for-profit restructuring
September 2025 | Financial Times
​
OpenAI and Microsoft signed a memorandum of understanding to enable OpenAI’s transition toward a for-profit structure. The deal would give OpenAI’s nonprofit parent at least $100bn in equity, while Microsoft is expected to secure around 30% of the company. The move sets the stage for a potential IPO.
​
Why it matters: This restructuring could cement OpenAI as one of the most valuable tech companies, but it also raises questions about balancing profit motives with its original nonprofit mission.
​
Microsoft taps Nebius for $20bn AI compute deal
September 2025 | Financial Times
​
Microsoft struck a deal with Nebius to supply up to $20bn worth of AI computing infrastructure, marking one of the largest such partnerships. The move expands Microsoft’s capacity to support OpenAI and other AI workloads.
​
Why it matters: The deal underscores the huge capital requirements behind scaling AI, while highlighting the growing role of specialised infrastructure providers.
​
The ‘invisible kingpin of data centres’ riding the Gulf’s AI boom
September 2025 | Financial Times
​
This profile examines Zachary Cefaratti, a Dubai-based financier who has become a key broker connecting Gulf capital with global AI infrastructure deals, including massive data centre projects. His network includes top AI executives and Middle Eastern royals. Despite past regulatory troubles, Cefaratti now plays a pivotal role in enabling the Gulf’s AI ambitions.
​
Why it matters: As AI demand drives huge infrastructure investments, little-known intermediaries like Cefaratti are shaping the global power map of technology and capital flows.
​
US curbs TSMC’s tool shipments to China
September 2025 | Financial Times
​
The US government has tightened restrictions on the export of advanced chipmaking equipment by TSMC to China, part of a broader effort to limit Beijing’s access to high-end semiconductor technology. The move intensifies supply chain pressures and adds to geopolitical tensions surrounding critical chip manufacturing.
​
Why it matters: These restrictions reinforce the tech Cold War dynamics, with ripple effects across global semiconductor markets and AI hardware supply chains.
​
Whitehall hands out AI contracts worth £573mn in efficiency push
September 2025 | Financial Times
​
UK government spending on AI-related contracts has already hit £573mn in 2025, surpassing last year’s total. Beneficiaries include Microsoft, Palantir, UiPath, and Kainos, with projects spanning automation, data analytics, and service digitisation. Ministers argue AI can save billions by reducing waste, though critics warn of risks to transparency and fairness.
​
Why it matters: The scale of investment signals government reliance on private tech firms for public sector transformation, raising questions about accountability and oversight.
​
US tech giants pledge billions for UK AI infrastructure during Trump visit
September 2025 | Financial Times
​
Microsoft, Nvidia, Google, and OpenAI announced investments worth tens of billions of pounds in UK computing infrastructure. Microsoft alone pledged $30bn, including building the UK’s largest supercomputer. The pledges were timed with Donald Trump’s state visit and framed as strengthening a new UK-US tech alliance.
​
Why it matters: These commitments significantly expand UK AI capacity, but deepen reliance on foreign tech firms for critical infrastructure.
​
US tech groups answer Starmer’s call for AI infrastructure spending
September 2025 | Financial Times
​
Alongside Trump’s visit, Nvidia, Microsoft, and Google committed to UK AI infrastructure, including Nvidia’s £500mn investment in London-based Nscale. However, tensions remain over Britain’s digital services tax, with critics warning the UK risks becoming overly dependent on US firms.
​
Why it matters: The UK gains cutting-edge capacity, but long-term sovereignty and fair value in these deals are under scrutiny.
​
China bans tech companies from buying Nvidia’s AI chips
September 2025 | Financial Times
​
China has prohibited domestic tech firms from purchasing Nvidia’s most advanced AI chips, following escalating tensions with the US over technology exports. The move is part of Beijing’s drive to bolster homegrown chip design and reduce reliance on US technology.
​
Why it matters: This restriction underscores the geopolitical dimensions of AI and semiconductors. Control over computing power is becoming a strategic lever in global competition, shaping both innovation and national security.
​
Chinese tech stocks surge past Nasdaq on the back of AI advance
September 2025 | Financial Times
​
Chinese tech shares have outpaced the Nasdaq, with the Hang Seng Tech index rising 41% this year compared with 17% for its US counterpart. The rally, triggered by DeepSeek’s AI breakthrough and progress in chip self-sufficiency, marks a sharp reversal from years of regulatory crackdowns. Giants including Alibaba, Tencent, and Baidu have posted surging valuations, fuelled by government backing and strong AI models such as Qwen, Yuanbao, and Ernie X1.1. Despite China’s sluggish economy, investor enthusiasm, both domestic and foreign, has returned.
​
Why it matters: The surge highlights AI’s role as a driver of national competitiveness. It signals renewed investor confidence in Chinese tech but raises concerns about speculation and transparency.
​
Nvidia and OpenAI are mostly performing for the algorithm
September 2025 | Financial Times
​
Nvidia and OpenAI unveiled a $100bn partnership in which Nvidia will supply chips for OpenAI’s vast new data centres while also buying OpenAI stock. Analysts liken the arrangement to early-2000s vendor financing, though Nvidia’s equity investment carries little risk given its $4.5tn market cap and $100bn annual free cash flow. For OpenAI, the tie-up boosts its $500bn valuation and supports its push to dominate AI model training, even as the deal appears to be as much about signalling strength as about practical necessity.
​
Why it matters: The deal underscores how hype and perception drive the AI arms race. Nvidia strengthens its role as indispensable supplier, while OpenAI signals unstoppable momentum.
​
When it comes to tech, Britain must avoid becoming Nebraska
September 2025 | Financial Times
​
Tim Wu warns that while US tech investment in Britain signals trust, it risks relegating the UK to the role of a data-centre hub rather than a genuine innovator. Drawing parallels with US states such as Nebraska, where policy revolves around luring data centres without reaping broader tech benefits, Wu urges the UK to focus on building homegrown platforms and intellectual property. Without such policies, Britain risks becoming a satellite to Silicon Valley rather than a leader in its own right.
​
Why it matters: The essay highlights the difference between hosting infrastructure and fostering innovation. Britain must guard against dependence on foreign platforms if it hopes to shape the AI era.
​
Further Reading: Find out more from these resources
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
​
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.