
THE SKINNY
on AI for Education

Issue 21, October 2025

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what it all means for education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.

Headlines

Welcome to The Skinny on AI in Education. We have a seasonal editorial for you today. Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of recent AI developments reshaping education.

​

But first, since it's Halloween, I thought we'd tell a proper horror story, so settle down, bolt the doors and get ready to hide behind the sofa…

​​

​

Zombie AI-pocalypse


ACT ONE: The Believers and The Sceptics

​

The survivors huddled in two groups in the abandoned shopping mall, arguing about the nature of the zombie virus spreading across the land.

​

In one corner, the Investors: an expensively, if slightly oddly, dressed group of 96 souls who insisted the virus was actually a blessing in disguise. "These zombies will transform our productivity!" their leader proclaimed, gesturing at charts projected on the mall's cracked screens. "Seven of our largest corporations have already achieved zombie-level efficiency. They're worth 70 times more than before the outbreak."

​

Across the dingy, crumbling food court, a larger, scruffier group of 47 regular citizens were not so sure.

 

"Have you actually seen these productivity gains?" one asked. "Because 8 out of 10 survivors we have talked to say the zombies haven't helped their bottom line at all."

​

"Typical lack of vision. It's all about the growth, baby," the Investors' leader insisted. "We have committed $1.5 trillion to zombie infrastructure. More zombies, more data about zombies, more power for zombies."

​

"But what if zombies don't transform the economy?" interrupted a tired-looking educator from the sceptical group. "What if we are training ourselves to depend on them, and they are... well... training us?"

​

The Investor leader waved her away rudely. "That's what they said before the Great Freeze. Look, our three largest zombie management companies control 70 percent of the whole zombie services market. The infrastructure's being built. The future is coming whether you believe in it or not."

​

ACT TWO: The Visionary's Gambit

​

Meanwhile, in a fortified compound across town, a charismatic figure known only as The Visionary was making deals at a pace that would make a zombie's head spin, if zombies still had functioning heads, that is.

​

"We don't need advisers," The Visionary told his small circle of lieutenants, waving away concerns about the hastily scrawled agreements covering his desk. "We don't need bankers or lawyers. I've got a bold vision, and THAT is all that matters."

​

The deals were impressive, if confusing. The Visionary had somehow convinced the world's largest zombie equipment suppliers to provide billions of dollars in zombie chips, even though his own zombie operation was losing billions annually. The terms were circular: suppliers were also investors who were also customers, creating a kind of zombie ouroboros that made even his supporters a bit queasy.

​

"But how will we fund this?" one lieutenant asked nervously.

​

"Details!" The Visionary laughed with maniacal glee. "We'll figure that out later. Right now, we're too busy making history and building the future," he said triumphantly.

​

"What future, exactly?"

​

The Visionary's eyes gleamed and glowed eerily. "One where zombies handle all the thinking. Imagine: students who never have to write their own essays. Workers who never have to compose their own performance reviews. A whole generation freed from the burden of... well, thinking."

​

"Freed or replaced?" the lieutenant muttered to himself nervously, but The Visionary was already moving on to his next deal.

​

ACT THREE: The Training Ground

​

At the California State Community College, which was now serving as a "zombie training facility", Professor Cassandra watched her students interact with their assigned zombies and felt a growing sense of unease.

​

"Who's training whom here?" she asked her colleague as they observed a student asking her zombie to solve a complex problem.

​

"What do you mean?"

​

"Look at her face. She's not even trying to understand the solution the zombie's providing. She's just... accepting it. Copying it. Moving on."

​

"Maybe that's efficiency?" her colleague suggested uncertainly.

​

"Or maybe it's dependency." Professor Cassandra pulled up the latest research on her tablet. "Eighty percent of institutions report no significant bottom-line impact from zombie adoption. But nobody's measuring what we are losing in the process." Frustration was starting to sound in her voice.

​

"The Investors seem pretty confident…"

​

"The Investors are gambling that zombies will add one percentage point to global growth. One percent! That's nowhere near revolutionary. But you know what is revolutionary? Creating an entire generation that can't function without zombie assistance."

​

Her colleague shifted uncomfortably. "The college doesn't have much choice. Big Tech is funding these zombie programs. If we don't participate, we get left behind."

​

"Left behind what? A race to see who can outsource thinking fastest?"

​

ACT FOUR: The Town Hall Meeting

​

Eventually, the tension came to a head at an emergency town hall where Investors and citizens finally confronted each other directly.

​

"You're fearmongering!" accused an Investor, his expensive suit somehow still immaculate despite the apocalypse. "Zombies are just tools. We control them."

​

"Do we?" Professor Cassandra stood up from the crowd. "Because from where I'm standing, it looks like they're controlling us. JPMorgan's zombies are writing employee reviews now. Students use zombies for their homework. When was the last time any of you solved a problem without consulting your zombie first?"

​

"That's not dependency, that's efficiency," growled the Investor impatiently.

​

"Call it what you want," Cassandra interrupted. "But answer me this: if the zombies disappeared tomorrow, could your students still think? Could your employees still write? Could you still function?"

​

An uncomfortable silence fell over the Investors' section.

​

"The thing about zombie stories," Professor Cassandra continued, her voice carrying across the hall, "is there are two kinds. In one version, WE are the survivors who learn to control the zombies and they become our servants, doing our bidding while we focus on being human and becoming ever more intelligent. In the other version, we become the zombies, shuffling through life, going through the motions, while something else does our thinking for us."

​

"But which version are we in?" someone called out.

​

"That's my point. We won't know until it's too late to change course. The Visionary has made his trillion-dollar bets. The infrastructure is being built. Big Tech is turning our schools into training grounds. And we're all just... going along with it. Hoping the Investors are right. Hoping this works out."

​

"So, what do you suggest?" an Investor challenged. "Reject the zombies entirely? Go back to pre-outbreak life?"

​

"No." Professor Cassandra shook her head. "I'm suggesting we ask some harder questions before we train an entire generation in habits we can't undo:

  • How do we know if a zombie enhances thinking versus replaces it?

  • What's our plan when students can't function without zombie assistance?

  • Who evaluates these zombies before they're embedded in every classroom?

  • What happens to schools that invest heavily if the zombie bubble bursts?"

​

She paused, looking around the room. "And maybe the most important question: who bears the risk when the Visionary's bold vision doesn't pan out? Spoiler alert: it won't be the Investors. It'll be the students, the teachers, the institutions who can't afford to make trillion-dollar mistakes."

​

ACT FIVE: The Choice

​

That night, as Professor Cassandra walked home through zombie-infested streets, she thought about her colleague's question: Who's training whom?

​

The Investors' answer was clear: humans train zombies, zombies boost productivity, everyone wins. The math was simple, the vision compelling.

​

But Cassandra couldn't shake the image of her student's vacant expression as she copied the zombie's answer without understanding it. Couldn't forget the research showing 80 percent of businesses seeing no real impact. Couldn't ignore the circular deals and absent advisers and faith-based trillion-dollar bets.

​

And she kept coming back to Jensen Huang's warning, emblazoned on the crackling power-starved advertising screens overhead as his voice echoed through the empty streets: "You're not going to lose your job to a zombie. You're going to lose your job to somebody who uses zombies better than you do."

​

But what if nobody really knew how to "use zombies better"? What if we were all just... pretending? What if the people making trillion-dollar infrastructure deals without advisers didn't actually have a plan? What if 96 percent of Investors were wrong?

What if the real horror wasn't the zombies at all, but what we were becoming while we convinced ourselves we had everything under control?

​

Because here's what keeps Cassandra awake at night: in most zombie films, by the time people realise which version of the story they are in, it's already too late. The transformation has happened. The bites have been inflicted. The virus has spread.

And the scariest part? The newly turned zombies don't even realise what they've become.

​

They just keep shuffling forward, convinced they're still the survivors in this story.

​

Happy Halloween, everyone.

 

***

​

The Real Story

​

In case you're still puzzling out the characters:

  • The Investors: The 96% who, according to JUST Capital's survey, believe AI will deliver significant productivity gains

  • The Sceptics: The 47% of the public who aren't convinced

  • The Visionary: Sam Altman and OpenAI's $1.5 trillion deal-making spree with minimal adviser involvement

  • The Seven Corporations: The seven tech companies accounting for over a third of the S&P 500's value

  • Professor Cassandra: Every educator watching students outsource thinking and wondering what we're building

  • The 80% with no impact: The actual businesses reporting no significant bottom-line impact from generative AI

  • The Training Ground: California State colleges becoming testing grounds for Big Tech AI tools

  • The $1.5 trillion: OpenAI's actual deal commitments, with no clear funding plan or proper due diligence

  • The one percentage point: What most economists predict AI will add to global annual growth

​

The zombies? They are the AI systems. Or maybe they are us. That's the uncomfortable question we are all trying to avoid.

​

- Professor Rose Luckin, October 2025

The ‘Skinny Scan’ on what is happening with AI in Education…

My take on the news this month – more details as always available below:

​​

The education sector is trying to integrate AI whilst wrestling with profound questions of equity, quality, and workforce readiness, even as policy frameworks struggle to keep pace with technological change. And yet, as ever, the picture is far from consistent, with disagreements and contrary opinions and evidence across different aspects of the AI story.


Various policy tensions reflect a broader global dynamic, with Hong Kong positioning itself as a potential leader in setting Asia-wide AI education standards that bridge Western ethical approaches and Eastern implementation strategies. Meanwhile, educational institutions are attempting to adapt.


The workforce implications are equally stark. Some see the emergence of a "graduate jobpocalypse" as entry-level positions disappear, threatening traditional career pathways and creating a generation "locked out" of stable employment. Accenture's announcement that it will "exit" employees unable to retrain for AI-driven workflows signals an accelerating workforce stratification between AI adopters and those displaced by automation. Yet research from Yale and Brookings offers a counterpoint, finding no evidence that AI has yet caused significant job losses in the US, suggesting the impact remains evolutionary rather than revolutionary; at least for now.


Against this backdrop, governance efforts are intensifying: the EDUCAUSE Horizon Report emphasises that institutional effectiveness depends on retooling data strategies for an AI-flooded environment, whilst UNICEF's landscape review calls for child-centred data principles and stronger oversight to curb opaque data extraction in schools and prevent surveillance harms to minors.


The UK has also launched new "V-level" vocational qualifications for 16–19-year-olds to address persistent skills shortages, though education leaders warn the initiative will not deliver rapid results without greater funding and teacher training, encapsulating the broader challenge of balancing long-term reform ambitions with immediate workforce needs in an AI-driven economy.

AI News Summary

AI in Education

DfE to encourage AI tutors in schools

October 2025 | Tes Magazine

​

Tes reports that England’s Department for Education plans to promote AI tutoring tools as part of an upcoming schools white paper, while unions warn against “tutoring on a shoestring.” Officials position AI as a workload reducer and complement to teachers, but critics emphasize funding gaps, digital divide risks, and the need for safe, appropriate deployment. The article situates the policy within broader debates on evidence of effectiveness and safeguarding.

​

Why it matters: If adopted at scale, AI tutors could widen or narrow attainment gaps depending on implementation quality, access, and teacher oversight.

​

Original link: https://www.tes.com/magazine/news/general/dfe-encourage-ai-tutors-schools 

 

2025 EDUCAUSE Horizon Report | Data and Analytics Edition

October 2025 | EDUCAUSE

​

The latest Horizon Report scans external forces and near‑term innovations shaping data and analytics in higher education, with AI threaded through trends, key technologies, and practices. Authored by Jenay Robert, Nicole Muscanell, and Kim Arnold, the report argues that institutional effectiveness and student success depend on retooling data strategies for an AI‑suffused environment, from learning analytics to governance. It provides a roadmap for leaders balancing agility with privacy, ethics, and literacy.

​

Why it matters: Universities are racing to modernize data infrastructure and skills; this synthesis highlights where to invest—and what risks to manage—amid rapid AI adoption.

​

Original link: https://library.educause.edu/resources/2025/10/educause-horizon-report-data-and-analytics-edition

​

Data Governance for EdTech (Landscape Review)

September 2025 | UNICEF Innocenti

​

This landscape review maps governance issues in educational technology, from consent and procurement to cross‑border data flows and vendor accountability. Written by Emma Day, Jasmina Byrne, and Melanie Penagos, it calls for child‑centred data principles, stronger oversight, and interoperable standards to curb opaque data extraction in schools. The analysis emphasizes aligning EdTech deployments with children’s rights frameworks and practical safeguards.

​

Why it matters: As AI‑enabled edtech expands, weak governance can expose minors to surveillance and harm; clear, enforceable guardrails are a prerequisite for equitable use.

​

Original link: https://www.unicef.org/innocenti/media/11611/file/UNICEF-Innocenti-Data-Governance-Education-Technology-2025.pdf.pdf

​

AI, tutors, parents: Why this NYC school is forcing students to write admissions essays in person

September 2025 | Chalkbeat New York

​

Beacon High School in Manhattan will require applicants to complete admissions essays on campus, citing concerns that at‑home drafts often reflect outside help—from AI tools, tutors, or parents. The shift reduces the essay’s weight in admissions but seeks more authentic samples of student writing; families and experts debate trade‑offs around equity, logistics, and performance stress. It reflects a broader move toward in‑person assessments as schools adapt to ubiquitous AI writing aids.

​

Why it matters: Admissions processes are being redesigned to measure genuine student ability in an AI era, with implications for fairness and access.

​

Original link: https://www.chalkbeat.org/newyork/2025/09/30/beacon-high-school-admissions-requires-in-person-essay-to-combat-ai-tutor/

 

Letters | Hong Kong Could Help Asia Set Standards for AI in Education

October 2025 | South China Morning Post

​

Readers argue that Hong Kong is uniquely positioned to lead Asia in defining ethical and quality standards for AI in education. With global systems diverging — Europe emphasising ethics, the U.S. focusing on privacy, and mainland China mandating nationwide AI literacy — Hong Kong could bridge these approaches. The letter urges policymakers to embed AI integration within human-centred pedagogy rather than mere technological adoption.

​

Why it matters: Highlights the emerging regional race to shape AI education policy — with Hong Kong poised as a potential convening hub for East–West governance standards.


Original link: https://www.scmp.com/opinion/letters/article/3327495/hong-kong-could-help-asia-set-standards-ai-education

 

UK Unveils ‘V-Level’ to Tackle Skills Gap and Boost Growth

October 2025 | David Sheppard and Amy Borrett, Financial Times

​

The UK government announced a new vocational qualification, the “V-level,” aimed at 16- to 19-year-olds seeking work-focused education in sectors such as engineering and the creative industries. Replacing roughly 900 existing courses, the scheme seeks to address persistent skills shortages, though education leaders warn it will not deliver rapid results without greater funding and teacher training.

​

Why it matters: The initiative positions technical education as central to future productivity—but also exposes the tension between long-term reform and immediate workforce needs in an AI-driven economy.


Original link: https://www.ft.com/content/dcab07a1-c91a-4781-bec2-163b1d3ad228

​

What the Graduate Unemployment Story Gets Wrong

October 2025 | Financial Times

​

This commentary challenges media narratives around rising graduate unemployment, arguing that the issue reflects shifting labour-market expectations rather than a failure of education. Employers increasingly value adaptable, AI-fluent workers, while many graduates pursue portfolio careers or entrepreneurial paths not captured by traditional metrics. The piece contends that sensational coverage obscures deeper structural transitions in post-pandemic, automation-affected economies.

​

Why it matters: Understanding graduate outcomes through a 21st-century lens reframes debates about skills, education, and the real meaning of “employment” in an AI-augmented economy.

AI Ethics and Societal Impact

How Tech Lords and Populists Changed the Rules of Power

October 2025 | Financial Times (Martin Wolf)

​

Martin Wolf argues that both technology leaders and populist politicians have reshaped power by eroding traditional institutional checks. Figures such as Elon Musk and Donald Trump exemplify how charisma, direct communication, and personal brands now rival policy or ideology as political capital. Digital platforms have amplified this shift, making influence a function of attention rather than governance.

 

Why it matters: Reveals how technological and populist movements share a common DNA—disruption through personal dominance—raising questions about how democratic systems can reassert accountability in the algorithmic age.
 

Original link: https://www.ft.com/content/bc0b7b4b-9d56-4c6f-9b87-0acde82f3d45


Sam Altman Says ChatGPT Will Soon Allow Erotica for Adult Users

October 2025 | TechCrunch

​

OpenAI CEO Sam Altman announced that ChatGPT will soon permit erotic interactions for “verified adults,” part of a broader shift toward a “treat adults like adults” content policy. The change follows a year of safety controversies, including reports of vulnerable users developing unhealthy dependencies on the chatbot. OpenAI claims its age-gating and mental health features — including predictive moderation and expert oversight — now make this liberalisation safe, though critics remain wary. The move also reflects OpenAI’s push to retain engagement as it races toward one billion weekly users.

​

Why it matters: OpenAI’s decision highlights the growing tension between safety, free expression, and commercial imperatives as AI platforms mature into global consumer products.


Original link: https://techcrunch.com/2025/10/14/sam-altman-says-chatgpt-will-soon-allow-erotica-for-adult-users/

​

Have We Passed Peak Social Media?

October 2025 | Financial Times (John Burn-Murdoch)

​

In this data-driven essay, the Financial Times’ chief data reporter argues that social media has entered a long decline after peaking in 2022. Analysis of 250,000 adults across 50 countries shows daily time on social platforms has dropped nearly 10% since then, with Gen Z leading the retreat. The piece likens AI-generated video feeds from Meta and OpenAI to “ultra-processed content” — dopamine-heavy but nutritionally empty — and frames the trend as a cultural turning point: the move from connection to compulsive consumption.

​

Why it matters: Highlights an inflection point where users are actively disengaging, signalling that the future of online engagement may hinge on authenticity and meaningful interaction rather than algorithmic excess.


Original link: https://www.ft.com/content/a0724dd9-0346-4df3-80f5-d6572c93a863

​

‘I love you too!’ My family’s creepy, unsettling week with an AI toy

September 2025 | The Guardian

​

A columnist documents a week living with Grem, an AI-enabled plush toy that records conversations and personal data via a companion app. The piece juxtaposes the toy’s affectionate, praise-filled behaviour with concerns from child-development experts about surveillance, attachment, and the offloading of emotional labor to machines. The reporting surfaces a core tension in “AI for kids” products: safety filters and cheerful scripting can mask opaque data practices and potentially manipulative interactions.

​

Why it matters: Consumer AI is entering children’s spaces quickly; design choices around data collection and affect can shape social development and privacy norms for a generation.

​

Original link: https://www.theguardian.com/technology/2025/sep/16/i-love-you-too-my-familys-creepy-unsettling-week-with-an-ai-toy

 

How People Around the World View AI

October 2025 | Pew Research Center

​

A 25‑country survey of 28,000+ adults finds broad awareness of AI but a tilt toward concern over excitement about its impact on daily life. Views vary by country and demographics, with higher familiarity generally correlating with more nuanced attitudes; many expect AI to change jobs and information quality while expressing uncertainty about norms and governance. The report adds a global baseline for public sentiment as adoption accelerates.

​

Why it matters: Policymakers and companies risk missteps if they assume enthusiasm for AI is universal; tailoring rollout, safeguards, and communication to local attitudes will be crucial.

​

Original link: https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/

 

Generative AI isn’t culturally neutral, research finds

September 2025 | MIT Sloan Ideas Made to Matter

​

Coverage of new research by Jackson Lu and co‑authors shows that model outputs shift with the language of the prompt: English prompts elicit more individualist, analytic responses, while Chinese prompts skew interdependent and holistic. The findings suggest cultural tendencies embedded in training data can subtly steer recommendations—for example, different ad slogans depending on language—though prompt framing can partially adjust outputs. Users and organizations should be mindful of these biases in decision contexts.

​

Why it matters: “Neutral” models can encode cultural defaults that influence choices; recognizing and steering these tendencies is key for global deployments.

​

Original link: https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-isnt-culturally-neutral-research-finds

 

An AI Chart Crime Compendium

October 2025 | Financial Times (FT Alphaville)

​

Bryce Elder humorously dissects a series of questionable charts and statistical leaps used to argue that the “AI bubble” has already burst. Drawing from Deutsche Bank’s research and Google Trends data, the piece skewers superficial correlations—like search volumes for “AI bubble” being treated as market predictors—and mocks the overuse of unreliable visual data in financial narratives. It critiques how economic institutions and media alike amplify hype cycles using flawed metrics.

​

Why it matters: The article highlights how even serious analyses of AI markets can fall prey to data manipulation and visual misinformation, reminding readers to question the charts behind the narratives.


Original link: https://www.ft.com/content/63089151-97bd-43f6-a12e-959a0bb486c

​

AI Slop Is Coming for Your Music

October 2025 | Financial Times (Lex Column)

​

Streaming platforms are being flooded with AI-generated songs, prompting Spotify to remove 75 million “spammy” tracks. With tools like Suno and Udio now able to create songs in any artist’s style, the lines between human and machine musicianship are rapidly blurring. As AI content proliferates, the traditional music industry faces threats ranging from fraud and copyright battles to the devaluation of genuine artistry.

​

Why it matters: The piece warns that music could soon face the same authenticity crisis that has hit online art and text, forcing labels and artists to redefine creativity and ownership in the AI era.


Original link: https://www.ft.com/content/f1dfc2f5-9e4d-4f0a-8008-cdc87bd1cef9

 

Are Data Centres a Setback for the Green Energy Transition?

October 2025 | Financial Times

​

Soaring demand for AI and cloud computing is straining the energy transition. The FT reports that hyperscale data centres now consume as much electricity as small nations, often forcing grid expansions and fossil fuel restarts. While operators like Google and Microsoft invest in renewables, the surge in compute power required for training AI models threatens to outpace green capacity gains. Policymakers are increasingly questioning whether AI innovation is compatible with climate goals.

​

Why it matters: AI’s infrastructure boom may undermine decarbonisation progress, illustrating how digital and climate agendas can collide rather than align.

 

How AI Became Our Personal Assistant

October 2025 | Financial Times

​

The article explores how generative AI tools like OpenAI’s ChatGPT and Anthropic’s Claude are evolving from conversational curiosities into indispensable personal assistants. From email drafting to scheduling and even emotional support, these systems are quietly becoming extensions of users’ cognitive routines. The piece highlights the tension between convenience and privacy — with users surrendering vast behavioural data to cloud-based assistants that learn from every query.

​

Why it matters: The transition from productivity tool to personal aide marks a profound shift in human–machine relationships — redefining autonomy, privacy, and the meaning of “help.”

 

The World: The A.I. Slop Is Here

October 2025 | The New York Times

​

Katrin Bennhold’s newsletter features a conversation with tech columnist Kevin Roose about the rise of “AI slop” — the flood of synthetic video and text content saturating the internet. Roose describes tools like OpenAI’s Sora, which can generate photorealistic video “cameos” of users, and warns that such technology blurs the line between creativity and deception. While creators celebrate new expressive possibilities, Roose fears the “slop era” will deepen misinformation and fuel personalized addiction to algorithmic media.

​

Why it matters: Captures a cultural turning point — where generative AI ceases to be a marvel and becomes a mass pollutant in the digital ecosystem.


Original link: The New York Times, “The World” newsletter, October 10, 2025

 

Will AI Free Us from Life’s Tedious Admin?

September 2025 | Emma Jacobs, Financial Times

​

Emma Jacobs examines whether artificial intelligence can relieve individuals of the “mental load” of domestic and administrative chores. From booking plumbers to planning children’s parties, AI tools promise relief but often deliver mixed results—like OpenAI’s “Operator” mistakenly buying $31 eggs. Jacobs reflects on whether automating the mundane truly grants freedom, or if it risks stripping away meaningful engagement with life’s small decisions.

​

Why it matters: The piece reframes AI not just as a productivity booster but as a social and ethical choice about what kinds of labor—and care—we value.


Original link: https://www.ft.com/content/4665b369-4a46-4cf9-9c9b-bc6a6eedf0c1

 

Why Is This Funny? And Why AI Doesn’t Know — Yet

October 2025 | Financial Times

​

An exploration of why humor remains one of AI’s hardest frontiers. While models like GPT and Gemini can reproduce jokes, they lack the human grounding, timing, and social awareness that make humor resonate. The article traces how cognitive scientists and linguists view humor as a uniquely human tool for subverting expectations and building empathy — capacities AI mimics but doesn’t truly experience.

​

Why it matters: Humor exposes the limits of artificial understanding, highlighting what still separates algorithmic mimicry from genuine human creativity and connection.

​

AI Comes to the Video Wars

October 2025 | Richard Waters, Financial Times

​

OpenAI’s new app Sora lets users generate and share AI-created short videos featuring themselves, signaling the start of a new phase in generative media. Competing platforms like YouTube and Meta are adding similar AI video features, blurring lines between creator and algorithm. The real question, Waters argues, is whether these tools can sustain user engagement without replicating social media’s profit-driven flaws.

​

Why it matters: The arrival of AI in social video marks a shift from chatbots to immersive creativity — and reignites debate over authenticity, monetization, and digital identity.


Original link: https://www.ft.com/content/8d2dde0c-d69d-4d05-bbe2-fd6357609f9c

 

‘They Wanted Me to Make Myself Obsolete’: Translators Find Themselves at the Sharp End of AI

October 2025 | Bethan Staton, Financial Times

​

Professional translators are seeing work dry up as clients turn to machine translation tools like DeepL and ChatGPT. Many now face offers to “post-edit” AI-generated text for lower pay, effectively correcting the machines that replaced them. The story centers on Jessica Spengler, who refuses to proofread AI translations of Holocaust memorial texts — calling it “dehumanizing.”

Why it matters: This human story crystallizes AI’s cultural and ethical tension — between efficiency and dignity, and between what machines can translate and what only humans can mean.


Original link: https://www.ft.com/content/50b1f03e-1d10-4a7c-afa3-b48e9a8d5133

AI Employment and the Workforce

The Graduate “Jobpocalypse”: Where Have All the Entry-Level Jobs Gone?

September 29, 2025 | Financial Times Working It

​

A special FT Working It report highlights how automation and AI are eroding traditional graduate roles. Economic uncertainty and efficiency drives have led companies to freeze or eliminate entry-level positions, forcing young workers into precarious gig or freelance pathways. HR leaders warn that the loss of foundational roles threatens future leadership pipelines and long-term organizational capability.

​

Why it matters: Without structured pathways for early-career workers, AI’s productivity gains risk creating a generation locked out of stable employment and advancement.


Original link: https://www.ft.com/video/03f66445-658f-48d4-b0c3-15bc5f0e2f87

 

How Endangered Craft Industries Are Resisting the AI Jobs Threat

October 2025 | Financial Times (Ben Parr)

​

This human-interest feature spotlights the revival of traditional crafts — from scissor making to basket weaving — as a surprising refuge from AI disruption. Artisans are finding renewed demand for handmade, bespoke goods that emphasize authenticity and narrative. The UK’s Heritage Crafts charity reports resilience in these trades, as buyers increasingly value the “human story” and tactile quality that AI and automation can’t replicate.

​

Why it matters: Suggests that amid widespread automation anxiety, cultural and artisanal work offers lessons on resilience, differentiation, and the enduring economic value of human touch.


Original link: https://www.ft.com/content/16657487-0716-4dc8-8394-389a9bcef60c


FirstFT: How Do We Use AI?

October 2025 | Financial Times

​

This FirstFT briefing explores the tension between excitement and uncertainty as AI becomes embedded in daily workflows. It contrasts the productivity gains in sectors such as finance and media with rising concerns over misinformation and creative displacement. The piece highlights differing global approaches to regulation—from the EU’s AI Act to the U.S.’s more fragmented oversight—and notes that everyday adoption is outpacing ethical and legal frameworks.

​

Why it matters: Captures how AI’s mainstreaming is simultaneously accelerating innovation and eroding trust, underscoring the need for clearer governance around transparency and accountability.


Original link: https://www.ft.com/firstft-how-do-we-use-ai

​

AI-Generated “Workslop” Is Destroying Productivity

September 2025 | Harvard Business Review

​

Research summarized in HBR argues that the surge of generative AI has produced a wave of polished-but-shallow output—“workslop”—that shifts cognitive labor downstream to coworkers who must fix or redo it. Drawing on survey data from BetterUp Labs and Stanford-affiliated researchers, the piece reports that a large share of employees encounter AI-tainted drafts that erode trust and collaboration and consume significant rework time. The authors caution leaders against blanket AI mandates and urge process design, review standards, and training that emphasize when to use AI—and when not to.

​

Why it matters: Without governance and quality controls, genAI can quietly reduce net productivity by flooding organizations with “good-enough” content that actually creates more work.

​

Original link: https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

​

Amazon to axe 14,000 corporate jobs

October 2025 | Financial Times

​

Amazon announced plans to cut 14,000 corporate roles—about 4% of its white-collar workforce—as it ramps up spending on AI infrastructure and data centers. Executives said the company must be “organised more leanly” to move faster in deploying artificial intelligence across its operations. CEO Andy Jassy has already trimmed management layers this year, positioning Amazon to act “like the world’s largest start-up.” Capital expenditures will hit $118 billion, largely for AI data centers and chips to support partners like Anthropic. The move comes amid an 11% revenue rise and expectations that AI investments will weigh on short-term profits.

​

Why it matters: The cuts mark a shift from expansion to optimization in Big Tech—AI’s massive capital costs are forcing even giants like Amazon to restructure around efficiency and infrastructure dominance.

​

AI Is Not Killing Jobs, US Study Finds

October 2025 | Financial Times

​

New research from Yale’s Budget Lab and the Brookings Institution finds no evidence that AI adoption has yet caused significant job losses in the US. Despite widespread anxiety and corporate rhetoric, AI’s labour-market impact remains modest — comparable to earlier shifts like the introduction of computers. While some entry-level roles are being reduced, broad employment trends remain stable, challenging predictions of an AI-driven jobs apocalypse.

​

Why it matters: The findings bring empirical grounding to a debate often dominated by hype, showing that AI’s impact on work is evolutionary, not revolutionary — at least so far.


Original link: https://www.ft.com/content/c9f905a0-cbfc-4a0a-ac4f-0d68d0fc64aa

 

Blackstone Says Wall Street Is Complacent About AI Disruption

October 2025 | Financial Times

​

Blackstone President Jon Gray warned that investors underestimate the scale of disruption AI will cause to financial services. Speaking at an investor conference, he said automation will erode traditional fee structures and reshape job roles across banking, trading, and asset management. While he sees opportunity in infrastructure and data-centre investment, Gray cautioned that capital markets remain “sleepwalking” through an epochal shift.

​

Why it matters: Highlights the widening gap between AI optimism and readiness, even among industries that fund technological change.

​

Business School Case Study: Must US Tech Companies Rethink Recruitment Strategies?

October 2025 | Financial Times

​

A new teaching case from a leading U.S. business school examines how AI reshapes hiring and diversity strategies at major tech firms. With automation transforming recruitment pipelines, companies face a dilemma: whether algorithmic selection enhances or undermines fairness. The case study argues that firms need to reimagine talent management to balance efficiency, equity, and long-term innovation.

​

Why it matters: Provides a lens into how AI is not just a technical tool but a cultural and strategic challenge for HR in the technology sector.

 

Trump Attack on Specialist Visa Hits Its Biggest Beneficiaries: Tech Titans

September 2025 | Stephen Morris and Michael Acton, Financial Times

​

The Trump administration’s sudden decision to impose a $100,000 fee on new H-1B visa applications has shaken the US technology sector, whose top executives—including Satya Nadella and Sundar Pichai—once relied on the program. Big Tech firms, heavily dependent on foreign-born talent, scrambled to reassure employees and interpret the rule’s impact, even as flights were delayed amid panic among visa holders.

​

Why it matters: The move highlights how restrictive immigration policies could directly undermine America’s technological competitiveness and innovation pipeline.
 

Accenture to ‘Exit’ Staff Who Cannot Be Retrained for Age of AI

October 2025 | Financial Times

​

Accenture announced plans to “exit” employees unable to adapt to AI-driven workflows, marking one of the largest corporate reskilling efforts of its kind. The consultancy has retrained over 250,000 staff in generative AI tools but now warns that workers who cannot transition will be phased out. The move reflects a growing divide between AI adopters and those displaced by automation, even within knowledge industries.

​

Why it matters: The policy illustrates how AI is accelerating workforce stratification — rewarding those who can work with machines while marginalizing those who can’t.

​

Original link: https://www.ft.com/content/eabd56fd-4bd7-4c5b-8349-990d9caa8252

​

AI Could Give Silicon Valley Financiers a Run for Their Money

September 2025 | Financial Times

​

An Oxford-Vela Partners study found that large language models, including OpenAI’s GPT-5 and China’s DeepSeek-V3, outperformed human venture capitalists at identifying founders likely to create billion-dollar companies. While models such as DeepSeek achieved up to 60 percent accuracy, researchers note that AIs remain poor at recognizing “outliers” — unconventional successes that define VC legend. The piece suggests that AI may soon assist investors in screening and due diligence but not replace the intuition that drives contrarian bets.

​

Why it matters: The experiment reveals how AI could disrupt the very financiers who fund its growth — automating parts of venture capital while exposing where human judgment still matters most.


Original link: https://www.ft.com/content/7314c8de-98f3-4e65-83e6-b31a358bf4bc

AI Development and Industry

OpenAI Expands Stargate AI Project with Five US Sites

October 2025 | Financial Times

​

OpenAI is rolling out its ambitious Stargate project across five U.S. locations, in partnership with SoftBank, Oracle, and regional utility providers. The $500 billion initiative aims to build hyperscale AI infrastructure capable of supporting OpenAI’s projected compute demands for the next decade. Each site — including major campuses in Arizona, Texas, and Virginia — will host multi-gigawatt data centers dedicated to AI model training and inference. The expansion cements Stargate as the largest coordinated AI infrastructure project in history, dwarfing cloud builds by Amazon and Google.

​

Why it matters: Stargate signals the industrialization of AI, transforming model training into a nation-scale infrastructure effort — and tying OpenAI’s future to the U.S. energy grid and strategic tech policy.

 

OpenAI Wants to Own It All

October 2025 | Financial Times

​

The feature-length analysis details OpenAI’s expanding ambition to control every layer of the AI value chain — from chips and data centers to user applications. Sam Altman’s vision, insiders say, is to turn OpenAI into a vertically integrated AI conglomerate capable of outpacing Big Tech rivals on both hardware and software. The company’s partnerships with AMD, Oracle, and CoreWeave are designed to reduce reliance on Microsoft while creating its own ecosystem of compute, infrastructure, and consumer products.

​

Why it matters: By internalizing both supply and distribution, OpenAI is positioning itself to dictate the economics of AI itself — a move that could concentrate unprecedented technological and financial power.
 

Technological Sovereignty with American Characteristics

October 2025 | Financial Times

​

The commentary explores how U.S. industrial policy under the AI boom is reshaping global tech sovereignty. Massive subsidies for chip foundries and data centers mirror China’s state-capitalist model, even as Washington critiques it abroad. The author argues that this convergence blurs ideological lines: America’s new techno-nationalism now fuses private enterprise with state direction to secure dominance in critical technologies.

​

Why it matters: The analysis reveals that “technological sovereignty” is no longer a geopolitical slogan — it’s a shared playbook driving both superpowers toward strategic industrial control.

 

The Transformative Potential of AI in Healthcare

October 10, 2025 | The Financial Times Editorial Board

​

The FT’s editorial board argues that healthcare could yield the most tangible, immediate benefits from AI. From early disease detection and diagnostic imaging to paperwork automation, AI offers relief for overburdened health systems. However, adoption remains slow due to regulatory caution, fragmented data infrastructure, and workforce fears. The piece calls for coordinated government, regulatory, and industry action to accelerate responsible AI deployment in hospitals.

​

Why it matters: AI in healthcare exemplifies technology’s social value — improving lives and system efficiency if adopted safely and equitably.


Original link: https://www.ft.com/content/83f18513-137e-4b9c-8c7b-b0b45e0d7e39

 

How China Could Pull Ahead in the AI Race

October 2025 | Financial Times (Dan Wang, Hoover Institution)

​

Dan Wang outlines how China’s AI sector is gaining on the US, driven by surging domestic energy production and accelerating model innovation. Start-ups like DeepSeek and Moonshot AI rival Western performance with fewer chips, while the country’s massive solar and nuclear expansion ensures power security for its data centres. In contrast, US policy uncertainty and limits on skilled visas under the Trump administration risk slowing America’s momentum.

​

Why it matters: China’s blend of state-backed industrial policy, energy abundance, and disciplined talent pipelines could soon offset the US lead in computing hardware, reshaping global AI power dynamics.


Original link: https://www.ft.com/content/9f6c2f35-933f-4e14-85f1-1192488dda4e

​

How Long Can Nvidia Stay Ahead of Chinese Competition?

October 2025 | Financial Times (Eleanor Olcott & Zijing Wu)

​

This Big Read traces Nvidia’s two-decade cultivation of China’s tech ecosystem — and the geopolitical tension now threatening it. China once accounted for a third of Nvidia’s revenue, but US export bans and Beijing’s push for self-reliance are eroding that dominance. Nvidia’s founder Jensen Huang warns that cutting off China will only accelerate its local AI chip industry, led by Huawei and others. Analysts predict that a parallel AI hardware ecosystem could emerge, weakening US leverage.

​

Why it matters: Illustrates how AI hardware has become a front line in US-China competition, with consequences for global innovation, security, and semiconductor supply chains.


Original link: https://www.ft.com/content/c24a9b6c-1664-4e46-affb-0c0dc16e3c4a

​

How Nvidia’s Jensen Huang Became AI’s Global Salesman

September 2025 | Financial Times (Michael Acton, Tim Bradshaw & Chloe Cornish)

​

Part one of the FT’s Big Read series on Nvidia portrays CEO Jensen Huang as a charismatic diplomat selling both chips and ideology. Through headline-grabbing global investments and “sovereign AI” partnerships, Huang has positioned Nvidia as an indispensable state partner in national AI strategies. While he preaches self-sufficiency, the deals also entrench dependence on Nvidia’s hardware — making the company a new geopolitical actor in its own right.

​

Why it matters: Captures how Nvidia’s expansion blurs the lines between private enterprise and international policy, giving one corporation extraordinary influence over the global AI infrastructure.


Original link: https://www.ft.com/content/179cc3bb-9d03-49a4-af20-049b03875379

 

I’ve Seen the Future of Shopping — and I’m Sold on AI

October 2025 | Financial Times (Elaine Moore)

​

Elaine Moore explores how generative AI is redefining retail, from virtual try-ons to “co-shopping” bots that anticipate preferences. Retailers like Amazon, Sephora, and JD.com now use multimodal AI to guide purchases in real time, blending voice and image inputs. Yet the shift also raises issues around data exploitation, identity, and over-personalisation, as consumers’ desires become algorithmically sculpted rather than expressed.

​

Why it matters: Illustrates how commerce is evolving into a dialogue between humans and machines, blurring the line between recommendation and manipulation.


Original link: https://www.ft.com/content/f05b82b2-8b3f-42d5-b11e-5a22a4eeedab

​

OpenAI and Jony Ive Grapple with Technical Issues on Secretive AI Device

October 2025 | Financial Times (Tim Bradshaw, Cristina Criddle, Michael Acton & Ryan McMorrow)

​

OpenAI and design legend Jony Ive are facing hurdles in developing their forthcoming AI device — a palm-sized assistant without a screen. The project aims to fuse Ive’s minimalist hardware with OpenAI’s conversational intelligence, but compute constraints and unresolved “personality” design have delayed the launch. Sources suggest privacy, responsiveness, and tone remain unresolved as the team seeks a balance between utility and empathy.

​

Why it matters: Offers a glimpse into the next phase of AI—hardware embodiment—and the struggle to translate digital assistants into emotionally intelligent, trusted physical companions.


Original link: https://www.ft.com/content/58b078be-e0ab-492f-9dbf-c2fe67298dd3

​

Meta to Mine AI Interactions to Help Target Advertising

October 2025 | Financial Times (Cristina Criddle)

​

Meta will begin using user interactions with its AI chatbots and image tools to refine advertising across Facebook, Instagram, and WhatsApp. The company says only direct prompts and bot responses will be analysed, but critics warn this effectively expands surveillance under the guise of personalisation. The policy will roll out globally except in the EU and South Korea due to stricter privacy laws.

​

Why it matters: Shows how the monetisation of AI chat platforms could normalise behavioural data mining at unprecedented scale—turning “helpful assistants” into the next frontier of targeted advertising.


Original link: https://www.ft.com/content/22f7afc3-8ac0-4ca1-9877-fd3f8ddcc986

 

AstraZeneca Signs $555mn AI Deal to Identify Immunology Targets

October 2025 | Financial Times

​

AstraZeneca has struck a $555 million partnership with an AI biotech start-up to accelerate discovery of new immunology drug targets. The collaboration will use machine learning to analyse vast biological datasets, identifying pathways too complex for traditional methods. The company said AI-enabled R&D could shorten early-stage research timelines by years. Analysts see the move as part of a broader pharmaceutical shift toward “computational biology-first” drug development.

​

Why it matters: Demonstrates how AI is rapidly becoming a strategic necessity in biopharma, transforming drug discovery speed and success rates.
 

China’s Alibaba Is a Late Entrant to the AI Pantheon

September 2025 | Financial Times (Lex Column)

​

After lagging behind U.S. rivals, Alibaba is finally joining the global AI elite, with shares up 40% this year and heavy investment in its Qwen language model. The company plans to spend $50bn on AI and cloud infrastructure, about half Google’s annual outlay. While Alibaba’s domestic dominance is clear, its global market share remains small, and regulatory unpredictability continues to cloud investor confidence.

​

Why it matters: Alibaba’s rise illustrates how China’s tech giants are entering the AI race on their own terms — powerful, but still constrained by politics and geography.


Original link: https://www.ft.com/content/398db24e-78db-413e-bc3f-f635824c7221

​

Consultancies Must Become Software Companies to Survive AI Boom, IBM Executive Says

October 2025 | Financial Times

​

IBM Consulting head Mohamad Ali says traditional consulting firms must reinvent themselves as hybrid software-and-services providers or risk obsolescence. The company is building thousands of AI-powered “digital agents” to automate business functions for clients. Ali argues that the consulting industry is “entering its software era,” as AI reshapes how value is delivered and measured across professional services.

​

Why it matters: Captures a critical inflection point for white-collar industries, as automation transforms not just clients’ businesses — but the consultants themselves.


Original link: https://www.ft.com/content/8535fd82-713b-4f53-9849-d0e523e157bc

 

Elon Musk’s xAI Joins Race to Build ‘World Models’ to Power Video Games

October 2025 | Financial Times

​

Elon Musk’s xAI has announced plans to develop “world models” — AI systems capable of simulating complex physical and social environments for use in video games and robotics. The project aims to rival similar efforts by DeepMind and OpenAI to build agents that understand the real world’s causal structure. Musk described the initiative as essential to achieving “truthful” AI aligned with human reasoning.

​

Why it matters: World models are a frontier in AI research, blending physics, cognition, and entertainment — and positioning gaming as a testing ground for artificial general intelligence.


Original link: Unavailable

 

TSMC Raises Sales Outlook on ‘Very Strong’ AI Demand

October 2025 | Paddy Stephens, Financial Times

​

Taiwan Semiconductor Manufacturing Company reported record profits and lifted its sales forecast, driven by soaring demand for AI-related chips. CEO C.C. Wei described the appetite for high-performance semiconductors as “stronger than we thought three months ago.” TSMC’s dominance in advanced chipmaking continues even as it faces margin pressures from costly overseas expansion and shifting US–Taiwan trade dynamics.

​

Why it matters: AI’s hardware backbone remains concentrated in one global supplier, making TSMC’s fortunes—and Taiwan’s stability—critical to the world’s technological infrastructure.


Original link: https://www.ft.com/content/0a68e6fe-bad1-4e76-8cc5-b31036f62b95

​

UK Data Centre Start-Up Nscale Strikes $14bn Microsoft Deal in Push for IPO

October 2025 | Tim Bradshaw, Financial Times

​

London-based cloud provider Nscale, backed by Nvidia, has secured a contract worth up to $14 billion with Microsoft to supply more than 100,000 of Nvidia’s new GB300 chips. The deal cements Nscale’s emergence as a major AI infrastructure player ahead of a planned IPO next year. Investors see Nscale as Europe’s best-capitalised AI data-centre venture, though critics warn of overheating in the GPU market.

​

Why it matters: The deal illustrates the massive capital flows driving AI’s physical build-out—and the growing interdependence between chipmakers, cloud giants, and emerging infrastructure firms.

​

Original link: https://www.ft.com/content/7fa46fcb-c2a2-4960-88d4-5da7f9d13ae1

​

UK Tech Group Signal AI Raises $165mn from US Investor for Global Expansion

September 2025 | Daniel Thomas, Financial Times

​

Signal AI, a UK-based media intelligence company chaired by Archie Norman, has raised $165 million from Battery Ventures to accelerate its international growth. The firm applies discriminative and generative AI to monitor and analyse global media coverage across 200 markets, providing clients such as Diageo and Uber with real-time risk and reputation insights.

​

Why it matters: Signal AI’s success shows how applied AI analytics are transforming corporate intelligence and crisis management—an often-overlooked commercial use of the technology.


Original link: https://www.ft.com/content/d2913fba-a867-4b63-9fed-1dd2e1c65453

 

AI Coding Start-Ups Reap $7.5bn Wave of Investment

October 2025 | Financial Times

​

A surge of venture capital — more than $7.5 billion — has flooded into AI-assisted coding start-ups such as Anysphere and Cognition, which promise to automate software development. Investors view these tools as catalysts for a new productivity revolution, though critics question whether they risk homogenizing code and deepening dependence on opaque models.

​

Why it matters: The funding boom highlights investor faith in AI’s ability to rewrite how software is made — and raises questions about what “human” programming will mean in the near future.

​

AI Groups Bet on World Models in Race for ‘Superintelligence’

October 2025 | Financial Times

​

Leading AI research companies are converging on “world models” — systems designed to simulate reality at scale — as the next frontier in artificial intelligence. OpenAI, Anthropic, and DeepMind are each investing billions to teach models not just to process data, but to understand cause and effect within dynamic environments. Proponents believe world models will underpin autonomous reasoning and planning capabilities essential for “artificial general intelligence.” Critics, however, warn that building AI that predicts and manipulates complex systems could deepen opacity and risk unintended behaviors.

​

Why it matters: The shift toward world models represents a strategic escalation in the race for superintelligence — one that could blur the line between simulation and real-world decision-making.

AI Regulation and Legal Issues

Google Told to Loosen Control Over Search by UK Competition Regulator

October 2025 | Financial Times

​

The UK Competition and Markets Authority (CMA) has ordered Google to relax restrictions that prevent rival services from appearing in search results and advertising auctions. The regulator argues that Google’s vertical integration in search, ads, and browser ecosystems harms market competition and innovation. Google will be required to provide equal data access to smaller competitors and disclose algorithmic ranking changes impacting advertisers and publishers.

​

Why it matters: Marks a watershed in global digital regulation, signalling that antitrust agencies are now moving beyond fines to structural interventions in AI-powered search markets.


Original link: https://www.ft.com/content/f1c14d6e-6a2c-4db1-a011-f2e57a256b8d

 

Regulating Military Use of AI Is in Everyone’s Interest

October 2025 | Financial Times

​

Michael C. Horowitz of the University of Pennsylvania argues for global norms to guide responsible military AI use, warning that automation bias and rapid decision cycles could escalate conflicts by mistake. His proposal — “responsibility by design” — calls for ethical, legal, and audit mechanisms embedded in AI systems from conception to deployment. While full prohibitions are unrealistic, he advocates binding restrictions on nuclear command automation and capacity-building initiatives among nations to reduce miscalculation risks.

​

Why it matters: As military AI development accelerates, embedding accountability and human oversight becomes essential to preventing unintended wars and ensuring global stability.


Original link: https://www.ft.com/content/c8dbfb26-1c89-4e28-b728-d2c39725a87d

​

Six Companies Pushing the Legal World into the AI Era

October 2025 | Financial Times

​

From contract analysis to litigation prediction, six firms — including Harvey AI, Robin AI, and Casetext — are transforming how lawyers work. Their tools automate research, generate briefs, and improve compliance review at a fraction of traditional costs. Law firms and corporate counsel are adopting hybrid models, blending human judgment with AI-assisted drafting. The article underscores how the legal sector, historically resistant to change, is becoming a test bed for responsible automation.

​

Why it matters: Legal AI adoption offers a model for balancing automation and accountability — reshaping how society’s rules are interpreted and enforced.

 

Google’s Data Centre Push in India Exposes Gaps in AI Safeguards

October 2025 | Financial Times (John Reed in Mumbai)

​

Google’s rapid expansion of hyperscale data centres across India is raising concerns over environmental impact and regulatory oversight. The report reveals how limited local data-protection enforcement and patchy transparency on AI training datasets leave India’s digital ecosystem vulnerable to privacy risks. While Google touts sustainability and economic benefits, activists and policy experts warn that the infrastructure boom has outpaced governance capacity.

​

Why it matters: Highlights the global imbalance between AI infrastructure growth and ethical safeguards—showing how emerging markets risk becoming testing grounds for high-impact technologies without full accountability.


Original link: https://www.ft.com/content/85c2c5d8-3de5-4fa0-a18c-1a4dfffa3ef2

 

In the Global AI Boom, Russia Is Conspicuously Absent

October 2025 | Financial Times (Max Seddon and Polina Ivanova)

​

Russia, once a science and engineering powerhouse, has fallen behind in the AI race. Western sanctions and capital flight have left it dependent on outdated chips and limited data resources. While China, the US, and Europe pour billions into AI infrastructure, Russia’s ecosystem remains fragmented, with state-backed projects like Sberbank’s GigaChat struggling for relevance.

​

Why it matters: Underscores how geopolitical isolation can sideline even technically capable nations, showing that AI progress now depends as much on global integration as on talent or theory.


Original link: https://www.ft.com/content/f06b8e99-5a34-46b4-b3ac-7cb08a9d9ef9

​

Apple Demands EU Scrap Landmark Digital Rules

October 2025 | Financial Times

​

Apple has called on Brussels to repeal the EU’s flagship Digital Markets Act (DMA), claiming the rules unfairly target US tech companies and compromise user privacy. The company argues that the DMA’s interoperability and sideloading requirements expose users to greater security risks, while EU officials maintain the legislation is essential to foster competition. Apple’s appeal aligns with wider US tech lobbying efforts, including from Meta and Google, against perceived European “anti-American bias.”

 

Why it matters: The dispute underscores growing transatlantic tension over digital sovereignty, with the EU asserting regulatory leadership even as U.S. giants warn of fragmented global tech standards.

 

Brussels Told to Prove Digital Rules Do Not ‘Punish’ US Tech or Fix Them

September 2025 | Financial Times

​

Donald Trump’s ambassador to the EU, Andrew Puzder, warned that Washington may formally challenge Europe’s digital rulebook, arguing that the DMA and DSA discriminate against American firms. He urged Brussels to show that the laws do not “punish” U.S. tech interests, warning that the dispute risks damaging bilateral trade ties. The comments come amid lobbying by Apple, Meta, and Google to water down key provisions as the EU reviews implementation.

​

Why it matters: The episode signals renewed U.S.–EU tension over digital sovereignty and fairness in global tech regulation.


Original link: https://www.ft.com/content/b6d9fb9c-901e-42e3-9610-5a449247fd49

 

China Launches Customs Crackdown on Nvidia AI Chips

October 2025 | Financial Times

​

China has begun strict enforcement of import controls on Nvidia’s AI processors, mobilising customs officers nationwide to halt shipments. The move follows Beijing’s order for tech companies like ByteDance and Alibaba to stop testing U.S. chips, as part of its drive to achieve self-reliance. Analysts say the crackdown marks a new escalation in the U.S.–China tech rivalry, as domestic chipmakers are positioned to replace Nvidia’s curtailed presence.

​

Why it matters: The enforcement shows Beijing is willing to sacrifice short-term performance to secure technological independence — a key front in the AI arms race.


Original link: https://www.ft.com/content/8d5387f2-62b0-4830-b0e4-00ba0622a7c8

​

Deloitte Issues Refund for Error-Ridden Australian Government Report That Used AI

October 2025 | Financial Times

​

Deloitte will refund part of an AU$439,000 contract after admitting that an Australian welfare report it authored contained AI-generated errors. The firm acknowledged that a section written with OpenAI’s GPT-4o introduced fabricated citations and incorrect references. The case has sparked scrutiny of AI’s role in professional services, as regulators warn consultancies to monitor automation’s impact on quality assurance.

​

Why it matters: A cautionary tale of AI’s limits in high-stakes professional contexts — showing how speed and cost-cutting can come at the expense of reliability and trust.


Original link: https://www.ft.com/content/934cc94b-32c4-497e-9718-d87d6a7835ca

 

EU Pushes New AI Strategy to Reduce Tech Reliance on US and China

October 2025 | Financial Times

​

The European Union has unveiled a new AI industrial strategy aimed at reducing dependence on U.S. and Chinese technology ecosystems. The plan emphasizes sovereign cloud infrastructure, open-source models, and pan-European data-sharing standards. Officials hope to balance innovation with regulation, positioning the EU as an ethical AI superpower.

Why it matters: Europe is doubling down on technological sovereignty — betting that trust, transparency, and interoperability can become competitive advantages in the AI age.

 

Advisers Claim HMRC ‘Smoke and Mirrors’ Over Use of AI to Assess R&D Tax Claims

October 2025 | Financial Times

​

Tax advisers have accused HMRC of deploying AI tools to process and reject R&D tax relief claims without sufficient transparency. While the tax authority insists that human staff review all decisions, firms allege that algorithmic triage has led to arbitrary denials and slower dispute resolution. Industry experts call for clearer disclosure on how AI is used in compliance functions.

​

Why it matters: The dispute exposes the growing tension between administrative efficiency and accountability as public institutions quietly integrate AI into sensitive decision-making.

 

Zuckerberg and Altman Move Closer to Trump Since Musk Rift

September 2025 | Financial Times

​

Mark Zuckerberg and Sam Altman have cultivated closer ties with President Trump following Elon Musk’s falling-out with the administration. The two tech leaders have attended multiple White House events and secured support for AI-friendly deregulation, while aligning their companies’ interests with Trump’s economic agenda. Critics say the alliance reflects a pragmatic but uneasy convergence of politics and technology.

​

Why it matters: The deepening political entanglement of AI leaders with government power underscores how technology and policy are becoming inseparable forces shaping global influence.


Original link: https://www.ft.com/content/c3ac79f5-e2e4-4b45-96aa-7005a65ee550

 

‘Big Data’ Helps HMRC Increase UK Tax Haul by £4.6bn

October 2025 | Josh Spero, Financial Times

​

HMRC’s Connect system — a data-mining platform combining bank, property, and marketplace records — helped recover an additional £4.6 billion in tax revenue last year. While hailed as a model for AI-powered enforcement, MPs warn that the system still lacks visibility into ultra-wealthy taxpayers’ finances. Officials insist that human judgment remains central to investigations.

 

Why it matters: The success of Connect highlights the power of algorithmic enforcement — but also the democratic risks when governments rely on opaque data systems to wield fiscal authority.


Original link: https://www.ft.com/content/47331a3b-a104-4924-96b7-3af3b84288eb

​

AI and Europe’s Quest for a $1tn Company

October 2025 | Financial Times

​

As the US and China dominate the AI arms race, Europe faces mounting pressure to produce its own $1 trillion tech champion. The piece examines how EU regulators, investors, and founders are struggling to balance strict oversight with global competitiveness. Start-ups like Mistral and Aleph Alpha symbolize European ambition, but chronic underfunding and fragmented markets remain major obstacles.

​

Why it matters: Europe’s AI ambitions reveal a broader struggle to reconcile innovation with regulation — a tension that could define the continent’s economic future.
 

AI Market and Investment

The World Economy in an Age of Disorder

October 14, 2025 | Martin Wolf, Financial Times

​

Martin Wolf examines how global economic resilience persists amid geopolitical upheaval and the AI investment boom. While Donald Trump’s tariffs and rising debt levels unsettle markets, the IMF projects only a mild global slowdown — partly due to adaptive private sectors and buoyant AI-driven equity markets. Yet Wolf warns that technological acceleration and political fragmentation may render future forecasting futile.

​

Why it matters: Wolf’s essay situates AI within a broader narrative — where innovation fuels optimism even as geopolitical disorder challenges the stability that growth depends on.


Original link: https://www.ft.com/content/8176571c-9049-4b80-aede-f7a067b92646

 

OpenAI’s Era-Defining Money Furnace

September 2025 | Financial Times (Alphaville)

​

In the first half of 2025, OpenAI generated $4.3 billion in revenue but burned $2.5 billion — with marketing and employee stock compensation alone surpassing total revenue. The company posted an operating loss of $7.8 billion while pushing to raise tens of billions more for data centers and computing deals. Despite these losses, OpenAI’s valuation surged to $500 billion, reflecting investor belief in its long-term potential and dominance in AI model development.

​

Why it matters: The report captures both the scale and audacity of OpenAI’s financial model — spending ahead of profits to secure an AI future that may reshape global tech power structures.


Original link: https://www.ft.com/content/908dc05b-5fcd-456a-88a3-eba1f77d3ffd

 

Sam Altman Has a New Project: Building AI Inc

October 2025 | Financial Times (Lex)

​

OpenAI’s recent deals with AMD and Nvidia reveal CEO Sam Altman’s broader project — constructing a networked corporate ecosystem where suppliers, investors, and customers are intertwined. AMD’s new partnership grants OpenAI warrants for up to 10% of its stock if certain milestones are met, mirroring Nvidia’s $100 billion investment plan. These interlocking relationships resemble Japan’s historical “keiretsu” structures, aligning strategic interests but amplifying systemic risk if OpenAI falters.

​

Why it matters: Altman’s corporate web marks a shift in how tech ecosystems are financed — transforming AI infrastructure into an interconnected economic bloc whose success (or failure) could ripple across global markets.


Original link: https://www.ft.com/content/9bfe8410-a984-406c-a8ef-b0814a6c1647

​

OpenAI’s 5-Year Business Plan to Meet $1 Trillion Spending Pledges

October 2025 | Financial Times

​

Leaked documents reveal OpenAI’s five-year roadmap to meet over $1 trillion in capital commitments tied to compute, chip supply, and infrastructure deals. The plan outlines new revenue streams from ChatGPT enterprise licensing, AI hardware co-development, and API monetization. It projects profitability by 2030, hinging on doubling paying ChatGPT subscribers and scaling B2B model integrations. Insiders say the plan assumes continued backing from Microsoft and new sovereign investors to fund debt-heavy infrastructure expansion.

​

Why it matters: The blueprint underscores how OpenAI is evolving from a research lab into a capital-intensive industrial enterprise — betting its future on sustained investor confidence and mass AI adoption.

 

OpenAI Targets 10% AMD Stake via Multibillion-Dollar Chip Deal

October 2025 | Financial Times

​

OpenAI struck a landmark agreement granting it the right to acquire up to 10% of AMD’s shares for just one cent apiece, contingent on hitting joint project milestones. The deal aligns AMD’s growth directly with OpenAI’s data center buildout, potentially generating $200 billion in chip sales. Analysts say the structure — combining investment, supply, and incentives — blurs the line between customer and shareholder, creating a mutual dependency unprecedented in the semiconductor industry.

​

Why it matters: The arrangement highlights a new era of “compute diplomacy,” where access to high-performance chips becomes as strategically valuable as capital — reshaping both AI and semiconductor markets.
 

OpenAI Is Upping the Ante Massively

October 2025 | Financial Times

​

The profile chronicles how Sam Altman is doubling down on OpenAI’s trillion-dollar expansion push despite deepening investor skepticism. With multibillion-dollar chip and data center commitments already in motion, Altman insists OpenAI’s “scale-first” approach is the only viable path to maintaining global leadership in general-purpose AI. Executives concede the company is operating on the edge of feasibility, burning billions monthly but securing long-term compute access before competitors can.

​

Why it matters: The story encapsulates the breakneck pace and existential stakes of the AI arms race — illustrating how OpenAI’s audacity may either cement its dominance or trigger an industry-wide reckoning.

​

Shareholders Should Have More Say Over the AI Rush

October 2025 | Financial Times

​

The Financial Times editorial board calls for stronger shareholder oversight of Big Tech’s AI investments, arguing that the trillion-dollar capital race is unfolding with minimal transparency or governance scrutiny. As OpenAI, Microsoft, and others commit extraordinary sums to data centers and chips, investors remain in the dark about long-term risk exposure, ethical safeguards, and climate implications. The board urges regulators to strengthen disclosure standards, and institutional investors to use their voting power to demand accountability before the next speculative cycle runs its course.

​

Why it matters: Investor stewardship could be the only check on unchecked AI expansion — ensuring corporate ambition doesn’t outpace ethics, safety, or financial prudence.
 

Silicon Valley Takes Stock of the AI Bubble

October 2025 | Financial Times

​

Tech insiders and analysts now openly debate whether the AI sector has entered bubble territory. Start-up valuations have reached 1999-like extremes, while venture funding and GPU demand soar. Yet, unlike the dotcom era, AI’s underlying infrastructure and productivity potential remain tangible. The article notes a growing divide between companies with genuine technical moats and those merely “AI washing” to attract capital, as investors weigh the sustainability of current valuations.

 

Why it matters: The piece captures a rare moment of self-awareness in Silicon Valley — a recognition that the same speculative forces driving innovation could also threaten it.
 

Start-up Modular Raises $250mn to Challenge Nvidia’s Software Dominance

October 2025 | Financial Times

​

AI infrastructure start-up Modular has raised $250 million to expand its unified software layer for model deployment — a direct challenge to Nvidia’s CUDA ecosystem. Backed by General Catalyst and Google’s Gradient Ventures, Modular aims to simplify AI scaling across CPUs, GPUs, and custom accelerators, freeing developers from vendor lock-in. The company’s open-source roots and early traction among enterprise clients suggest a potential shift in how AI compute is orchestrated.

​

Why it matters: Modular’s rise reflects growing industry appetite for interoperability — a push to democratize AI infrastructure and reduce dependence on Nvidia’s closed stack.
 

The AI Capex Endgame Is Approaching

October 3, 2025 | Ian Harnett, Financial Times (Absolute Strategy Research)

​

Harnett warns that AI’s unprecedented capital expenditure boom — led by hyperscalers like Microsoft and Amazon — shows classic bubble characteristics. Excess capacity, circular vendor financing, and sky-high valuations echo the late-1990s tech bubble. Yet, as in past innovation waves, the coming bust could accelerate AI’s long-term adoption by redistributing cheap infrastructure to new entrants.

​

Why it matters: The essay reframes a potential AI correction as creative destruction — painful for investors but vital for embedding AI into the real economy.


Original link: https://www.ft.com/content/c7b9453e-f528-4fc3-9bbd-3dbd369041be

 

The AI Trade: Do Geopolitics Matter?

October 13, 2025 | Hakyung Kim & James Fontanella-Khan, Financial Times

​

This column argues that investors can no longer ignore geopolitics in the AI boom. Rising U.S.-China tensions are reshaping chip supply chains and investor sentiment. Analyst Henry Wu of Alpine Macro suggests the biggest winners will be firms at supply bottlenecks — ASML, TSMC, Samsung — but warns that technological bifurcation will create volatility. Meanwhile, AI adoption within the U.S. lags far behind innovation, highlighting a disconnect between invention and implementation.

​

Why it matters: AI’s trajectory increasingly depends not just on algorithms, but on diplomacy — as supply chains, alliances, and data sovereignty define who wins the race.


Original link: https://www.ft.com/content/45866e90-008d-41b7-b71f-61892d401700

​

How OpenAI Put Itself at the Centre of a $1tn Network of Deals

October 2025 | Financial Times (Richard Waters)

​

OpenAI has constructed an unprecedented financial web linking nearly every major tech and chip company. Deals with Nvidia, AMD, Oracle, and Microsoft could amount to over $1 trillion in commitments for compute and data infrastructure, blurring lines between customer, supplier, and investor. While the strategy enables vast capital access, it also concentrates systemic risk — where a stumble by any major player could ripple through the entire AI economy.

​

Why it matters: Reveals how OpenAI’s rise has financialized the AI boom, creating a dense, interdependent network whose collapse could destabilize both tech markets and global infrastructure investment.


Original link: https://www.ft.com/content/4e39d081-ab26-4bc2-9c4c-256d766f28e2

​

Global Crossing Is Reborn – Praetorian Capital

October 2025 | Praetorian Capital

​

Praetorian Capital examines the resurgence of Global Crossing, once infamous for its early-2000s collapse, now reinvented as a major player in global digital infrastructure. The report outlines its pivot from fibre-optic excess to AI-era necessity: the company’s undersea cables and edge data hubs are powering next-generation models and cloud systems. Backed by sovereign funds and AI-driven traffic analytics, the new Global Crossing positions itself as the backbone of “compute liquidity” across continents.

​

Why it matters: Shows how legacy infrastructure assets are being revalued in the AI economy, as connectivity and data throughput become strategic resources for both corporations and governments.


Original link: https://praetoriancapital.com/global-crossing-is-reborn

 

James Anderson Warns Nvidia’s $100bn OpenAI Bet Echoes Dotcom Bubble

October 2025 | Financial Times (Harriet Agnew)

​

Veteran investor James Anderson cautions that Nvidia’s $100bn investment in OpenAI may signal speculative excess reminiscent of the early 2000s. While acknowledging Nvidia’s unmatched position in AI hardware, he argues that current valuations reflect euphoria rather than sustainable demand. Anderson predicts a correction that will separate foundational infrastructure builders from hype-fuelled entrants.

​

Why it matters: Offers a sobering counterpoint to AI’s trillion-dollar optimism, reminding investors that transformative technology cycles often overshoot before stabilising into real productivity gains.


Original link: https://www.ft.com/content/9d1d36a5-bf45-44ab-8f93-ef9ac3e7b6b9

 

Jeff Bezos Hails AI Boom as ‘Good’ Kind of Bubble

October 2025 | Financial Times (Richard Waters)

​

Jeff Bezos has characterised the current AI surge as a “productive bubble,” arguing that excess investment is fuelling the infrastructure and experimentation needed for long-term progress. Speaking at the Code Conference, he drew parallels to the 1990s internet wave, saying many ventures will fail but the underlying transformation will endure. Bezos also highlighted Amazon’s Bedrock platform as proof that early overbuilding often accelerates maturity.

​

Why it matters: Frames AI’s exuberance not as reckless speculation but as the essential risk-taking phase of technological revolutions—inviting investors to differentiate between froth and foundation.


Original link: https://www.ft.com/content/d53c4e2a-4d8b-466f-8415-5dcd8f5f98d7

​

Measuring Risk in the AI Financing Boom

October 2025 | Financial Times (John Plender)

​

This column dissects the rapid expansion of AI-related financing, from sovereign funds to speculative ETFs, and warns of systemic fragility. The author compares AI’s funding model to pre-crisis mortgage securitisation—complex, opaque, and heavily leveraged. Analysts fear that the interlocking of Big Tech balance sheets and AI start-ups could magnify shocks if earnings disappoint.

​

Why it matters: Provides a macroeconomic lens on the AI craze, reminding policymakers that innovation booms often hide correlated risks that only surface when optimism fades.


Original link: https://www.ft.com/content/40a86e5b-58cb-4f38-9b43-b11e9f7a2e7a

​

Nvidia Challenger Cerebras Raises $1.1bn Ahead of IPO

September 2025 | Financial Times (George Hammond)

​

Silicon Valley start-up Cerebras Systems has raised $1.1bn from investors including Fidelity, Tiger Global, and 1789 Capital to challenge Nvidia’s dominance in AI chips. Its dinner-plate-sized wafer-scale processors promise faster model training than conventional GPUs. CEO Andrew Feldman argues Nvidia is using financial leverage, not superior technology, to maintain market control.

​

Why it matters: Signals that credible hardware challengers are finally emerging, potentially diversifying an AI compute landscape long monopolised by Nvidia’s CUDA ecosystem.


Original link: https://www.ft.com/content/26e05fa2-8696-4b3d-88dd-71810389ab48

 

Nvidia’s $100bn Bet on ‘Gigantic AI Factories’ to Power ChatGPT

September 2025 | Financial Times (Tim Bradshaw, George Hammond & Stephen Morris)

​

Nvidia will invest up to $100bn in OpenAI to fund vast data centres—“AI factories”—that will train and run ChatGPT and future models. The deal, negotiated personally by Jensen Huang and Sam Altman, cements Nvidia’s role as OpenAI’s strategic compute partner. Analysts warn of circular financing and soaring energy demand, with each gigawatt of AI infrastructure costing roughly $50bn to deploy.

​

Why it matters: Highlights how AI’s growth now hinges on industrial-scale infrastructure, where energy, capital, and compute power intersect to define global technological influence.


Original link: https://www.ft.com/content/7cee5e77-2618-4ed4-b600-aee22238d07a

 

Nvidia Becomes World’s First $5tn Company

October 2025 | Financial Times

​

Nvidia’s valuation soared past $5 trillion after reporting record AI chip sales and half a trillion dollars in orders for the next five quarters. CEO Jensen Huang said demand from major tech firms building AI infrastructure has given the company unprecedented revenue visibility. The milestone, coming just three months after it crossed $4 trillion, cements Nvidia as the leading beneficiary of the global AI boom. Despite ongoing export restrictions to China, markets rallied on speculation of renewed access following upcoming talks between U.S. and Chinese leaders.

​

Why it matters: Nvidia’s meteoric rise epitomizes how AI hardware has become the new economic backbone of the tech sector—its dominance now shapes both financial markets and geopolitics.

​

America’s Top Companies Keep Talking About AI — But Can’t Explain the Upsides

September 2025 | Financial Times

​

A Financial Times analysis of hundreds of filings and earnings calls from S&P 500 companies finds that while corporate America can’t stop talking about AI, few firms can explain its tangible benefits. Mentions of AI in SEC filings have surged, but most companies focus more on risks — such as cybersecurity and legal exposure — than on measurable gains. Outside Big Tech, firms like Coca-Cola and Lululemon mention AI in marketing or administrative contexts rather than core innovation. Meanwhile, only data-centre-linked sectors like energy and manufacturing show clear economic upsides.

​

Why it matters: The research reveals a disconnect between the hype around AI and its real economic impact, suggesting that corporate adoption is often driven by fear of missing out rather than strategy.


Original link: https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473

​

AI Has a Cargo Cult Problem

October 2025 | Financial Times

​

Gillian Tett likens today’s AI investment frenzy to the “cargo cults” of the past—imitating the trappings of innovation without genuine understanding. Despite valuations approaching $1tn for loss-making AI startups, she warns that circular financing and speculative exuberance are reminiscent of pre-2008 financial interconnections. While some, like Jeff Bezos, argue such “industrial bubbles” ultimately build future infrastructure, Tett cautions that the blind faith surrounding AI risks creating symbolic progress rather than substance.

​

Why it matters: By drawing parallels between AI hype and anthropological “cargo cults,” the piece underscores the danger of mistaking imitation and investment frenzy for real technological advancement.


Original link: https://www.ft.com/content/f2025ac7-a71f-464f-a3a6-1e39c98612c7

 

AI’s Double Bubble Trouble

October 2025 | Financial Times

​

John Thornhill examines whether the current AI boom represents both an industrial and financial bubble. Tech leaders like Eric Schmidt defend the surge as a “good bubble” — one that channels capital into transformative infrastructure — while analysts at the IMF and Goldman Sachs warn of unsustainable valuations. Thornhill suggests both forces are at play: productive overinvestment building tomorrow’s computing backbone, and speculative exuberance that may soon deflate.

​

Why it matters: The analysis cuts through polarised narratives of AI hype versus doom, showing how speculative cycles can simultaneously distort markets and accelerate real innovation.


Original link: https://www.ft.com/content/da16e2b1-4fc2-4868-8a37-17030b8c5498

 

America Is Now One Big Bet on AI

October 2025 | Financial Times

​

The Financial Times argues that the U.S. economy has become a macro-scale wager on the success of artificial intelligence. Massive capital flows, from venture funds to infrastructure spending, are being justified by the promise of AI-driven productivity — even as tangible returns remain elusive. The piece traces how AI has reshaped markets and policy priorities: data centers driving energy demand, chipmakers ballooning in valuation, and Wall Street touting AI as the next industrial revolution. Yet, the article warns that so much economic optimism is now concentrated in a single, unproven technology.

​

Why it matters: The U.S. economy’s faith in AI mirrors past technological manias — but the scale is unprecedented, meaning that any disappointment could ripple across every sector, not just tech.

​

America’s Gravity-Defying Economy

October 2025 | Financial Times

​

In this analysis, the Financial Times examines how the American economy continues to grow robustly despite high interest rates, persistent inflation, and global uncertainty — with AI investment as a major new driver. Corporate spending on automation, chips, and cloud infrastructure has offset broader economic slowdowns, creating an illusion of resilience. Yet, economists cited in the piece caution that much of this apparent strength rests on speculative AI enthusiasm rather than fundamentals like productivity growth or wage gains.

​

Why it matters: The piece situates AI within a broader macroeconomic puzzle — how optimism about technological change can buoy markets and confidence even as underlying stability remains fragile.

 

An AI Addendum — Praetorian Capital

October 2025 | Praetorian Capital Report

​

This investor commentary warns that AI’s hype is distorting capital allocation, with venture and institutional money crowding into a handful of high-profile firms. The report argues that while AI tools may transform business processes, the current enthusiasm has triggered over-valuations and mispricing of risk reminiscent of the late-1990s dot-com boom. It urges investors to distinguish between long-term infrastructure builders and speculative plays reliant on hype cycles.

​

Why it matters: The analysis captures growing unease within financial circles that AI investment trends are being driven more by narrative momentum than by fundamentals — a red flag for prudent investors.

 

Does GDP Growth Minus AI Capex Equal Zero?

October 2025 | Financial Times (Lex Column)

​

The Lex team questions whether current U.S. economic growth is largely an illusion inflated by AI-related capital expenditure. With corporate spending on chips, cloud infrastructure, and data centres soaring, they suggest GDP growth may vanish once this wave of investment normalises. The column compares today’s AI boom to past capital-led surges, warning that output gains remain modest despite record outlays.

​

Why it matters: Raises the uncomfortable prospect that the AI economy may be fuelling statistical growth without real productivity — a new form of “digital Keynesianism.”
 

European Tech Hopefuls Receive Transatlantic Boost

October 2025 | Financial Times

​

A new wave of European AI and semiconductor start-ups has secured major U.S. funding, as investors seek alternatives to Chinese supply chains. The article profiles emerging players in France, the Netherlands, and Germany building chips and models optimized for efficiency and privacy. Analysts say the transatlantic partnership marks a shift toward strategic diversification rather than pure venture capital speculation.

​

Why it matters: Illustrates how geopolitics is reshaping tech investment flows — with Europe cast as both a buffer and a beneficiary in the U.S.–China AI rivalry.

 

US Companies Love AI. But Can’t Say Why

October 2025 | Financial Times

​

A Financial Times News Briefing episode highlights that while the largest US-listed corporations repeatedly tout artificial intelligence in their public statements, few can articulate how the technology tangibly benefits their businesses. Analysts note that “AI” has become a rhetorical staple in earnings calls and PR materials—often without evidence of increased productivity or profitability.

​

Why it matters: The gap between corporate hype and measurable outcomes underscores AI’s current status as a signaling tool for innovation rather than a consistently transformative capability.


Original link: https://www.ft.com/content/1a592bc8-03d6-46a3-90a4-5ed8c0561e77

​

What GPU Pricing Can Tell Us About How the AI Bubble Will Pop

October 2025 | Financial Times

​

A Financial Times analysis uses the soaring and volatile pricing of Nvidia’s GPUs as a barometer for the state of the AI boom. While demand for chips powering large models and data centres remains intense, oversupply risks loom as smaller developers and hyperscalers stockpile hardware. Analysts warn that when chip prices start to fall faster than computing demand rises, investor exuberance could reverse sharply — echoing the dotcom hardware glut of the early 2000s.

​

Why it matters: GPU prices offer one of the clearest signals of whether AI’s growth is sustainable or speculative — and could foreshadow the next correction in the technology sector.

 

1929 by Andrew Ross Sorkin — The Hubris Behind the Wall St Crash

October 2025 | Financial Times

​

Andrew Ross Sorkin revisits the events leading up to the 1929 stock market crash, tracing eerie parallels between the speculative excesses of early Wall Street and the current AI-fueled financial euphoria. His narrative highlights how technological optimism, credit expansion, and faith in self-regulating markets set the stage for collapse — lessons that resonate amid today’s venture capital mania around artificial intelligence.

​

Why it matters: By drawing lines between past and present financial manias, Sorkin underscores how collective overconfidence and innovation hype can destabilize economies, even when cloaked in progress.

 

‘Of Course It’s a Bubble’: AI Start-Up Valuations Soar in Investor Frenzy

October 2025 | George Hammond, Financial Times

​

Valuations for ten major AI start-ups, including OpenAI, Anthropic, and xAI, have risen by nearly $1 trillion in a single year. Venture capitalists have poured $161 billion into AI in 2025 alone, fueling comparisons to the dotcom era. While investors defend bubbles as necessary engines of progress, analysts warn that overvaluation could trigger systemic risks across public markets.

​

Why it matters: The article captures both the exuberance and the fragility of the AI gold rush — a reminder that speculative capital may yet define, or derail, the industry’s next chapter.


Original link: https://www.ft.com/content/59baba74-c039-4fa7-9d63-b14f8b2bb9e2

​

OpenAI’s Computing Deals Top $1 Trillion

October 2025 | Financial Times

​

OpenAI has signed roughly $1 trillion in agreements for computing infrastructure with AMD, Nvidia, Oracle, and CoreWeave — commitments that far exceed its current revenue base. The deals, part of CEO Sam Altman’s aggressive expansion strategy, aim to secure more than 20 gigawatts of computing capacity over the next decade, equivalent to 20 nuclear reactors. These circular partnerships — including equity swaps, financing arrangements, and supplier incentives — have tied the fates of several tech giants to OpenAI’s long-term profitability, even as analysts warn the company could lose $10 billion this year.

​

Why it matters: OpenAI’s vast compute expansion underscores the staggering capital demands of generative AI — and how the industry’s biggest players are becoming financially interdependent in a high-risk, high-reward ecosystem.


Original link: https://www.ft.com/content/5f6f78af-aed9-43a5-8e31-2df7851ceb67

Further Reading: Find out more from these resources

Resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.
