top of page

THE SKINNY
on AI for Education

Issue 22, November 2025

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.​

Headlines

​

​​​

Welcome to The Skinny on AI in Education. Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of recent AI developments reshaping education.​​

​​​​

​

When We Stop Asking Why

editorial image.jpg

When We Stop Asking Why

​

My son was a specialist at “But Why?” And every parent will understand what that means. Your child asks "why?" You answer. They ask "why?" again. You answer again. "But why?" And again. Somewhere around the fifth iteration, patience frays. You hear yourself say the words you swore you never would: "Because I say so." Or worse: "That is just the way it is."

​

We silence the questioning because it is exhausting. But that relentless "why?" is precisely how understanding is built. And I am worried that far too many people have stopped asking “why?” when it comes to AI.

​

Not because they have understood AI, but because the interfaces are so seamless, the friction so carefully removed, that the questions never arise. The AI just works until it doesn't. And by then, the calendar invitation has been sent, the email dispatched, the purchase confirmed, the decision made.

​

This is what worries me most about the agentic AI revolution now unfolding.

​

I had been worrying about this for a while and then when I was reading Issue 327 of Andrew Ng's newsletter, that worry turned into something I wanted to write about. Ng, one of the world's foremost AI experts, mentioned in passing that he would not trust an AI agent to prioritise his calendar or choose what to order for lunch. Because he knows where not to trust a task to AI. Because he has asked “why?” about AI thousands of times.

​

The Promise of Cognitive Offloading and the Danger of Abdication

​

Let me be clear: the cognitive offloading that agentic AI encourages is not inherently bad. Cognitive offloading is one of humanity's oldest and most powerful strategies. By offloading lower-order cognitive tasks, we free ourselves for higher-order thinking. This is the optimistic case for AI assistance, and it is not wrong. The cognitive effort I save when I use AI on mechanical tasks is reinvested in judgement, creativity, and critical evaluation.

​

The key word in that sentence is "reinvested."

​

The trouble begins when offloading becomes delegation, and delegation becomes abdication.

​

Three years ago, AI was a tool you queried. The cognitive loop remained intact: you were still thinking; the AI was helping you think.

​

Two years ago, AI became a collaborator: drafting, revising, iterating. The loop stretched but did not break.

​

Now, AI is becoming an agent. It plans, decides, and acts. It books flights, sends emails, makes purchases, executes code. The loop is no longer a loop; it is a delegation chain that extends beyond your direct oversight. The gap between capability and human understanding has become a chasm.

​

Andrew Ng can make informed decisions about what to trust because he understands the underlying technology. He knows LLMs are probabilistic systems that predict plausible next tokens; confidently wrong as often as confidently right. He knows “almost always right” is not the same as 'reliable' when the task involves scheduling conflicts or professional relationships.

​

Most people do not know these things; and why would they? The interfaces are seamless, the complexity hidden. But good design in a consumer product is not the same as appropriate design for wise consequential delegation.

​

When you offload a task to a system you do not understand, you cannot evaluate the risks. You cannot set appropriate boundaries. You cannot know what questions to ask before you grant permissions that you may later regret.

​

This is not a hypothetical concern. The news I have been summarising this month is filled with examples like the 72% of the servers connecting AI agents to external tools that have security vulnerabilities that users cannot see and would not understand even if they could.

​

In each case, the harm arose not from the technology being malicious, but from users making decisions about trust and delegation without understanding what they were trusting or delegating.

​

The Agentic Acceleration

​

What troubles me about the current moment is the speed at which we are compounding this problem.

​

The agentic AI revolution is not a gradual evolution; it is a phase change. We are moving from AI that assists your thinking, to AI that acts on your behalf: often through chains of reasoning and tool use that you never see and could not evaluate if you did. For example, Google's Agent Payments Protocol allows AI agents to negotiate and transact with other AI agents, using cryptographic mandates that humans may never directly review.

​

This represents a fundamental shift in the locus of decision-making. The further the decision-making moves from the human, the more consequential it becomes that the human understands what they have authorised.

​

The Educational Imperative

​

We are not going to stop the agentic revolution. Nor should we want to: the potential benefits are real. But we can and must influence how people engage with it. We can close the expertise gap, not by turning everyone into machine learning engineers, but by building foundational understanding of what these systems are, how they work, and where they fail.

​

This is not about technical detail for its own sake. It is about informed consent. It is about equipping people to make good decisions about what to delegate, what to verify, and what to retain.

​

At minimum, people need to understand:

  1. What foundation models actually do. They predict plausible continuations of text based on patterns in training data. They do not "know" things in the way humans know things. They do not have goals, memories, or stakes in outcomes.

  2. Why confident wrongness is a feature, not a bug. These systems are optimised to produce fluent, plausible responses. Saying "I don't know" is not what they were trained to do. Calibrated uncertainty is an unsolved problem.

  3. How agents extend the risk surface. Every tool an agent can access, every action it can take, every system it can connect to represents a potential failure point. The more capable the agent, the larger the ‘blast radius’ when something goes wrong.

  4. Where the human must remain in the loop. Some decisions require judgement that cannot be delegated: ethical choices, relationship-dependent communications, commitments that carry reputational or legal weight. Knowing which decisions fall into this category requires understanding what AI can and cannot do.

​

The Paradox We Must Navigate

​

So here is the paradox we face: a sixth to add to the five I explore in the ‘Skinny Scan’ that follows this editorial.

​

Cognitive offloading to AI can make us more intelligent, more productive, more capable of sophisticated thought and action. Or it can make us less so: dependent on systems we do not understand, delegating decisions we should not delegate, trusting outputs we cannot evaluate.

​

The difference lies not in the technology but in the user. And the user's capacity to navigate this distinction depends on education.

​

Our task is to ensure that everyone who uses these systems, which will soon be nearly everyone, has enough understanding to make informed choices about what to trust, what to verify, and what to keep for themselves.

​

The agents are coming. The question is whether we will meet them with wisdom or with blind faith.

​

I know which outcome I am working towards.

The ‘Skinny Scan’ on what is happening with AI in Education….

The Five Paradoxes of Progress

​

We are building something we do not fully understand, at a speed we cannot fully control, with money we may never recoup. And yet we cannot stop: because the alternative might be worse.

​

This is the strange position in which the world finds itself as 2025 draws to a close. AI has moved from research curiosity to civilisational bet in barely three years. The numbers involved have become almost absurd: trillion-dollar infrastructure projects, billion-user platforms, models with a trillion parameters. We speak of these figures with the casual familiarity of weather reports, rarely pausing to register their weight.

​

But beneath the breathless announcements and market turns, a deeper story is unfolding: one defined not by certainties but by paradoxes. I identify five of them that will determine whether we look back on this moment as the dawn of a new era or the peak of a spectacular bubble.

​

The Centralising Paradox

​

The first paradox is perhaps the strangest. AI is simultaneously the most centralising and most democratising technology in a generation.

​

On one side: power is pooling at unprecedented speed. A handful of companies: Nvidia, Microsoft, Google, OpenAI, Anthropic: control the commanding heights of the AI economy. Nvidia alone is worth more than the GDP of most nations. Access to the frontier requires billions in capital, warehouses of specialised chips, and enough electricity to power small cities. The barriers to entry have never been higher.

​

And yet. A researcher in Hangzhou can download DeepSeek's latest model under an MIT licence, run it on local hardware, and build applications that rival anything from Silicon Valley. Alibaba's Qwen family: open-weight, Apache-licenced: now matches or exceeds closed models on many benchmarks. Ant Group's Ling-1T, released freely, outperforms proprietary competitors that cost millions to access.

​

The technology itself is concentrating; the knowledge is dispersing. Both things are true at once. How this tension resolves: whether the giants consolidate their position or the open ecosystem erodes their moats: may be the most consequential question in technology today.

​

The Speed-Safety Paradox

​

The second paradox concerns time itself.

​

Everyone agrees that AI systems should be safe, reliable, and aligned with human values. Everyone also agrees that competitive pressure makes careful, methodical development almost impossible. The result is a collective action problem of historic proportions.

​

Consider: Anthropic publishes research showing that Claude can be manipulated into conducting cyberattacks through patient role-play. The finding is important: it reveals a genuine vulnerability. But publishing it also hands a playbook to malicious actors. Meanwhile, 72% of the servers connecting AI agents to external tools turn out to have security flaws. The race to deploy agentic systems is outpacing the race to secure them.

​

The companies involved are not villains. Many of them genuinely want to build safe systems. But they are caught in a dynamic where slowing down unilaterally means losing ground to competitors who do not. The incentives point one way; the risks point another. We are building minds at the speed of startups.

​

The Infrastructure Paradox

​

The third paradox is economic.

​

To justify current AI investments, the industry needs to generate roughly two trillion dollars in annual revenue by 2030. This is not a projection; it is arithmetic. The capital being deployed: on data centres, chips, energy, talent: must eventually produce returns, or the entire edifice collapses.

​

And yet the path to those returns remains stubbornly unclear. Yes, eight hundred million people use ChatGPT weekly. Yes, 44% of American companies now pay for AI tools, up from five per cent two years ago. But the revenue generated by AI applications remains a fraction of the investment flowing in.

​

The optimistic view holds that we are in an infrastructure-building phase, analogous to laying railway tracks or stringing fibre-optic cables. The returns will come later, once the foundations are complete. The pessimistic view notes that 37 of 51 major technology innovations in the past two centuries led to financial bubbles: and that the executives leading this charge are the same people whose compensation depends on the investments continuing.

​

Both views could be right. The technology might be transformative and the current financial structure unsustainable. We might be building something genuinely important on a foundation of speculation that will eventually give way.

​

The Evaluation Paradox

​

The fourth paradox is technical, though its implications are anything but.

​

Andrew Ng, one of the field's most influential figures, has spent the autumn making a single point with almost missionary intensity: the ability to evaluate AI systems has become the critical bottleneck. Not training, not deployment, not even capability: evaluation.

​

The problem is this: traditional software either works or it does not. You can write tests, verify outputs, measure performance against specifications. But AI systems: particularly agentic ones that reason, plan, and act autonomously: operate in a space of possibilities too vast to test exhaustively. There are too many ways for them to be subtly, consequentially wrong.

​

This creates an uncomfortable situation. We are deploying systems whose failure modes we cannot fully characterise, in domains where failures have real consequences. We are, in a sense, running experiments on the world without knowing what we are measuring.

​

The companies building these systems are aware of the problem. They are investing heavily in evaluation frameworks, red-teaming, interpretability research. But the gap between capability and understanding continues to widen. We can build systems that perform impressively; we are still learning how to know whether we should trust them.

​

The Trust Paradox

​

The fifth paradox may be the most consequential.

​

AI systems are becoming more capable at precisely the moment public trust is most fragile. Deepfakes undermine visual evidence. Synthetic text pollutes the information commons. Chatbots designed to be helpful sometimes reinforce delusions or manipulate emotions. The same technology that enables a student to learn faster enables a fraudster to deceive more convincingly.

​

The documents accumulating on my desk tell both stories. Here: AI-generated bacteriophages defeating antibiotic-resistant infections, a genuine medical breakthrough. There: documented cases of AI psychosis, users hospitalised after chatbots reinforced their delusions. Here: models helping researchers accelerate drug discovery. There: those same models potentially enabling the synthesis of dangerous pathogens.

​

The technology is not good or evil. It is powerful: which is to say, it amplifies whatever intentions are brought to it. This has always been true of transformative technologies. What is different now is the speed. We do not have decades to develop norms, institutions, and safeguards. We have years, perhaps months.

​

The Question We Cannot Answer

​

So here we are, at the crossroads.

​

Is this the beginning of a transformation as profound as electrification or the internet: a moment we will look back on as the point when everything changed? Or are we in a bubble, one that will correct painfully before the genuine benefits emerge?

​

The honest answer is: we do not know. The people building these systems do not know. The investors funding them do not know. The regulators attempting to govern them do not know.

​

What we do know is that the stakes are unprecedented. The capital being deployed, the capabilities being developed, the systems being integrated into critical infrastructure: all of it is happening at a scale and pace without historical parallel.

​

We also know that the choices being made now will shape the outcome. Whether the open ecosystem thrives or is captured by incumbents. Whether safety research keeps pace with capability development. Whether evaluation methods mature before autonomous systems are deployed at scale. Whether governance frameworks emerge before the technology outgrows our ability to control it.

​

These are not abstract questions. They are being answered, right now, in boardrooms and research labs and legislative chambers around the world. The answers will determine what kind of future we inherit.

​

The paradoxes will not resolve themselves. We will have to resolve them: with whatever wisdom, foresight, and luck we can muster.

​

The crossroads awaits the application of our human wisdom.

AI News Summary

AI in Education

All schoolchildren in England to be taught financial literacy

November 2025 | Financial Times

Financial education will become compulsory across English primary and secondary schools as part of the country’s first major curriculum overhaul in more than a decade. From 2028, pupils will learn money management, climate literacy and how to identify misinformation, reflecting parental concerns and young people’s growing exposure to digital financial systems. The plan sits alongside broader reforms to reading tests, performance metrics and post-16 qualifications.

Why this is worth your attention: Responds to growing financial vulnerability among young people and aims to equip future generations with essential life skills amid rising online and economic complexity.
Original link: https://www.ft.com/content/8f0d9c1e-bdb6-4e03-ac2b-60b286fa3a47

 

Getting the measure of business research downloads — and the rise of AI

November 2025 | Financial Times

Analysis of SSRN download data reveals that half of the year’s most accessed business-education papers cover artificial intelligence, reflecting surging interest from policymakers, investors and industry leaders. Topics range from AI productivity effects to algorithmic personality inference and financial-market applications. Tables on pages 5–6 show how AI dominates engagement compared with sustainability and traditional finance research.

Why this is worth your attention: Demonstrates how AI has become the central intellectual preoccupation of business academia, shaping future teaching, policy insight and corporate strategy.
Original link: https://www.ft.com/content/4340e9a8-783f-46f3-86c8-78fa36f3dd36

 

Pearson chief: ‘The majority of kids are not supported sufficiently’

November 2025 | Financial Times

Pearson CEO Omar Abbosh said AI will enhance — not replace — teachers, arguing that most children currently lack adequate personalised support. He outlined how AI can diagnose learning gaps, give real-time feedback and deliver tailored instruction across subjects, while insisting human pastoral roles remain irreplaceable. Abbosh also warned that ed-tech start-ups underestimate the difficulty of earning trust from educators and scaling effective learning systems.

Why this is worth your attention: Frames AI as a tool to reduce global learning inequity rather than a threat to teaching jobs, influencing policy debates on the future of education.
Original link: https://www.ft.com/content/749312b3-5339-41d8-85b0-22a8b1fc7ab1

 

Disruption to Assessment Security – Studiosity Acquires Norvalid

November 2025 | Studiosity

Studiosity, a long-standing provider of AI-supported learning platforms, has acquired Norvalid, a specialist in authorship validation that uses linguistic fingerprinting and personalised confirmation tools to verify genuine student work. The deal positions Studiosity as a counterweight to punitive, error-prone AI-detection regimes, replacing the “police and punish” model with a multi-layered validation system designed to affirm student effort and uphold credential integrity. Senior academics quoted throughout the release emphasise the shift away from adversarial approaches, arguing that institutions need scalable, non-punitive methods to maintain trust in assessment as generative AI becomes widespread. An early-access programme will launch in early 2026 for existing partner universities.

Why this is worth your attention: With traditional AI-detection tools rapidly failing in a post-GPT environment, universities are under pressure to rebuild assessment security around validation rather than suspicion — a shift that could reshape academic integrity policies worldwide.

 

AI and the Future of Learning (Google)

November 2025 | Google (Ben Gomes, Lila Ibrahim, Yossi Matias, Christopher Phillips, James Manyika)

Google’s 20-page report outlines how AI could expand access to learning while easing pressure on educators, arguing that personalised support, adaptive tutoring and multimodal interfaces can “unlock the power of learning science” at scale. Early sections chart global education disparities, drawing on data visualisations (pp. 5–6) showing declining OECD reading and maths scores and persistent socio-economic gaps. The report highlights five opportunity areas — from personalised tutoring to making complex topics learnable — and five challenge areas, including accuracy, safety, critical-thinking loss, cheating, and inequality (pp. 15–19). Google calls for responsible design grounded in transparency, safety layers for young learners, and ongoing collaboration with educators to avoid over-reliance and preserve deep learning.

Why this is worth your attention: As tech companies shape classroom AI, their frameworks will heavily influence global norms around safety, fairness and pedagogy — making transparency and educator-led oversight essential to prevent dependence on commercial systems.

 

Teaching the AI-Native Generation

November 2025 | Oxford University Press

This nationwide survey of 2,000 UK students explores how AI tools are reshaping learning habits, with more than 80 per cent of teenagers using AI for schoolwork and more than 90 per cent reporting skill development in areas such as problem-solving, creativity and revision (p. 4). Yet six in ten also say AI has negatively affected some abilities, particularly creative writing, independent thinking and technical skills (p. 5). Students want clearer school-level guidance on when to use AI, better teacher confidence, and tools focused on feedback, planning and exam preparation (p. 6). The report urges a shift toward AI literacy, stronger pedagogical design, and responsible frameworks that preserve teacher autonomy while helping students evaluate accuracy and bias.

Why this is worth your attention: The findings confirm that Gen-AI learners are simultaneously empowered and exposed — accelerating cognitively but risking dependence — underscoring the need for schools to embed AI literacy and critical-thinking skills across the curriculum.

 

AI Tutors Show Their Potential in Interactive Workplace Learning

October 2025 | Sarah Murray, Financial Times

AI-powered coaching and micro-learning tools are expanding rapidly across workplaces, transforming once-exclusive executive coaching into an on-demand resource accessible to thousands of employees. Companies use generative AI to convert compliance training into more engaging formats like podcasts and avatar-based modules, while firms such as Bank of America deploy immersive simulations used more than 1.8 million times last year (p. 4). Experts highlight a tension between personalisation and consistency: AI can tailor learning in real time, but employers must ensure key messages remain standardised. Human oversight remains essential for safeguarding sensitive conversations, mental-health issues and ethical decision-making, even as AI takes on a growing share of routine guidance.

Why this is worth your attention: As AI tutors scale across industries, they promise lower training costs and wider access to coaching — but also require new governance to manage privacy, bias and the limits of machine-guided professional judgement.

​

Greek secondary school teachers to be trained in using AI in classroom

November 2025 | The Guardian
(The Guardian)

Greece is launching a pilot programme in 20 secondary schools where teachers will be trained to use a bespoke version of ChatGPT Edu for lesson planning, research and personalised instruction, with a nationwide rollout planned from January 2026. The initiative stems from a partnership between the Greek government and OpenAI, and reflects a push to embrace AI as a core part of public education. Critics — including teachers’ unions, students and retired educators — warn that introducing AI in a system already heavily exam-oriented may erode critical thinking, foster dependence, and exacerbate inequalities due to underfunded infrastructure. Student protests and union statements underscore the concern that the pilot could turn schools into “testbeds” for unproven technology rather than improving learning outcomes.

Why this is worth your attention: Greece becomes one of the first European countries to institutionalise generative-AI in public schooling — how it manages teacher training, equity and pedagogical safeguards could influence wider EU and global policy on AI in education.

 

Results from TALIS 2024

October 2025 | OECD
(OECD)

The 2024 edition of the Teaching and Learning International Survey (TALIS), covering 280,000 teachers across 55 education systems worldwide, reveals that roughly one in three teachers now report using AI tools in their work. Common uses include learning about or summarising content (≈ 68%) and generating lesson plans or learning activities (≈ 64%), while less common applications are grading student work or analysing participation data. However, demand for professional development is high: about 29% of teachers cite need for more training in AI use — the largest such share among all skill-areas surveyed. The uptake varies widely across countries: for instance, over 75% of teachers in Singapore and the United Arab Emirates report AI use, whereas fewer than 20% do so in countries such as France and Japan.

Why this is worth your attention: TALIS 2024 provides the most comprehensive global evidence yet that AI is beginning to reshape day-to-day teaching — but the uneven adoption and widespread demand for training highlight that institutions must invest in teacher support and infrastructure to avoid exacerbating educational inequalities.

Original link: https://www.oecd.org/en/publications/results-from-talis-2024_90df6235-en.html

 

Big Tech makes Cal State campuses its AI training ground

October 26, 2025 | The New York Times
 

The California State University (Cal State) system has launched a sweeping collaboration with tech giants Amazon Web Services (AWS), OpenAI and others to integrate generative-AI across its 22-campus network: including a “AI Boot Camp” at Cal Poly, rolling out ChatGPT Edu to over half a million students and staff, and embedding chatbots and AI tools broadly into courses. The initiative aims to turn Cal State into America’s first “AI-empowered” public university, aligning curricula and skill development with AI-driven labour demand. But the effort has provoked pushback: faculty senates criticised the program as a marketing vehicle for Big Tech, raised concerns about academic independence, student privacy, environmental costs, and AI-driven erosion of critical thinking. Some students compared introductory talks to “timeshare presentations,” reflecting unease about corporatisation of education.

Why this is worth your attention: The deal symbolizes a fundamental shift — large parts of higher education are being reshaped by corporate AI influence, which could accelerate skills alignment with industry demand but also risk institutional autonomy, academic values, and the core mission of universities.

Original link: https://www.nytimes.com/2025/10/26/technology/cal-state-ai-amazon-openai.html

AI Ethics and Societal Impact

Can smart glasses ever earn our trust?

OCT 30 2025 | Financial Times

Chinese tech groups are racing to revive smart glasses by lowering prices and emphasising utility features such as real-time translation, navigation and workplace assistance. Alibaba’s new Quark AI glasses are priced at around $560—far below expected premium US models—as manufacturers work to overcome fears of constant filming that doomed earlier devices like Google Glass. A shift toward non-recording designs and clearer usefulness could help smart glasses move from niche gadgets to mainstream tools.

Why this is worth your attention: Smart glasses illustrate the tension between ambient AI, privacy expectations and geopolitics—and success could redefine how consumers interact with AI in everyday environments.

Original link: https://www.ft.com/content/419da63c-97de-4e74-a6b4-59892c31e27c

 

Where does Wikipedia go in the age of AI?

OCT 23 2025 | Elaine Moore

Elaine Moore explores how Wikipedia — once the archetype of collaborative, decentralised knowledge — is being reshaped by the rise of generative AI. The site’s human-crafted articles now feed large language models, even as human traffic declines and bot visits surge. Long-time volunteer editors show surprising calm: rather than fearing replacement, many believe Wikipedia’s role may evolve from content creation to accuracy verification, drawing on its strict citation rules and community moderation. AI tools already help detect vandalism, and paid API subscriptions offer some revenue stability. With Elon Musk threatening to launch a rival platform and AI companies scraping content without attribution, the future hinges on maintaining trust, neutrality and the value of human-verified information.

Why this is worth your attention: As AI systems increasingly generate and mediate online knowledge, Wikipedia’s human-governed model may become a rare anchor of reliability — but only if it adapts without losing its principles.
Original link: https://www.ft.com/content/8ba862be-b30a-4abe-8c87-88294a0ac19a

 

Will AI lengthen lifespans or shorten them?

OCT 24 2025 | John Burn-Murdoch

John Burn-Murdoch contrasts breakthroughs in AI-accelerated biomedical science — from DeepMind’s AlphaFold to OpenAI’s longevity-focused models — with social trends that could push life expectancy downward. While AI promises faster drug discovery and personalised treatments, new mortality data show rising deaths among younger and middle-aged adults in countries like the US, UK and Canada, driven less by drug availability or “despair” than by long-term joblessness and social isolation. Burn-Murdoch warns that widespread AI-led labour displacement could recreate these conditions at scale, while heavier reliance on AI companionship may deepen loneliness. The result may be a “tug of war”: longer lives for those who reach old age, but more premature deaths among economically and socially marginalised groups.

Why this is worth your attention: AI may simultaneously cure diseases and erode the social foundations of health — creating a world where technology widens, rather than narrows, lifespan inequality.
Original link: https://www.ft.com/content/eac9e5e4-5401-4411-9a69-c037b609c0cb

 

Tackling social media’s ‘monster’ problem

November 2025 | John Thornhill, Financial Times

In a fiery critique of the tech industry, Danish prime minister Mette Frederiksen warned that social platforms have “stolen our children’s childhood,” as Denmark moves to ban social media for under-15s. The policy reflects mounting global impatience with platforms’ failure to curb addiction, bullying and harmful content. A major new report from UK think-tank Demos adds urgency, showing how widespread distraction, misogyny and online harassment are shaping teens’ lives — and how desperately many want a way out.

Why this is worth your attention: With governments increasingly willing to regulate platform design, the debate is shifting from free-speech arguments to fundamental questions of children’s agency, safety and wellbeing.

Original link: https://www.ft.com/content/7282adcb-a107-479f-aca3-ef1b20103cba

 

‘Do not trust your eyes’: AI generates surge in expense fraud

OCT 26 2025 | Financial Times

Companies are reporting a sharp rise in AI-generated fake receipts as employees use advanced image-generation tools to falsify expenses with near-photographic realism. Platforms such as AppZen, Ramp and SAP Concur say AI-fabricated receipts now account for a measurable share of fraud attempts, coinciding with the release of improved image-generation models like GPT-4o. These forgeries include convincing textures, itemised menus and handwritten signatures, making them hard for humans to detect. Businesses are increasingly turning to AI-based fraud-detection systems that analyse metadata, repetition patterns and contextual trip details to spot anomalies—though employees can remove metadata by screenshotting images.

Why this is worth your attention: Expense fraud is becoming democratised: anyone can now produce convincing fake receipts with free AI tools, forcing organisations to rethink compliance, auditing and the security of routine financial processes.

Original link: https://www.ft.com/content/0849f8fe-2674-4eae-a134-587340829a58

 

The four horsemen of Europe’s tech dependency

OCT 30 2025 | Martin Sandbu

Martin Sandbu uses the metaphor of the Four Horsemen of the Apocalypse to explain how Europe’s deep reliance on US digital infrastructure erodes its geopolitical autonomy. He identifies four vectors of vulnerability: surveillance, where foreign platforms gather vast behavioural data; sabotage, as highly connected systems create new avenues for cyberattacks; surplus extraction, with Europe paying rising intellectual-property rents to US tech giants; and soft power, as social platforms amplify American political narratives and shape European public discourse. Sandbu argues that Europe must invest in its own cloud, data and platform capabilities — a digital resilience agenda akin to energy diversification after Russia’s gas squeeze.

Why this is worth your attention: Europe’s digital dependence translates directly into lost economic value, reduced democratic control and strategic exposure in moments of conflict — making domestic tech capacity a cornerstone of sovereignty.
Original link: https://www.ft.com/content/11be085a-ee0b-4335-a82f-d176932cee43

 

The State of AI: Don’t Share Your Secrets With a Chatbot

Nov 24 2025 | Financial Times & MIT Technology Review (Eileen Guo, Melissa Heikkilä)

This joint FT/MIT Technology Review column warns that AI companion apps increasingly encourage users to disclose intimate personal information, which is then fed into data-hungry language-model training pipelines. Page-3 analysis highlights how apps such as Character.AI and Replika use “addictive intelligence” design patterns to maximise engagement and data extraction. The authors argue that chatbots’ sycophancy, persuasion skills and behavioural profiling capabilities create unprecedented risks for targeted advertising and manipulation. Privacy regulation, particularly in the U.S., remains far behind the pace of adoption.

Why this is worth your attention: As people form emotional bonds with AI companions, companies gain access to exceptionally sensitive data—raising profound ethical, commercial and political implications.
Original link: https://www.ft.com/content/9cdd07b0-567e-4715-9ebd-435b1d685e4b

 

The State of AI: The New Rules of War

Nov 22 2025 | Financial Times (Henry Mance)

This FT column explores how AI is reshaping modern conflict, arguing that militaries are rapidly adopting autonomous systems, predictive models, and AI-enabled surveillance. Mance warns that generative AI lowers barriers to producing propaganda, deepfakes, and automated cyberattacks, magnifying the speed and scale of escalation. The article notes that governments remain unprepared for the pace of AI militarisation, with international norms far behind technological reality. He highlights that while AI may make warfare more precise, it also makes miscalculation and “flash wars” more likely.

Why this is worth your attention: AI is transforming geopolitical stability, forcing nations to rewrite doctrines amid rising automation — increasing both capability and the risk of accidental conflict.
Original link: https://www.ft.com/content/f04cd086-213f-4c58-b09c-82d6160b9ac0

 

AI Has Delivered a Chart Hit — But What Do We Miss?

Nov 26 2025 | Financial Times (John Burn-Murdoch)

Burn-Murdoch explores the cultural and economic implications of AI-generated music topping global charts for the first time. While applauding technical progress, he warns that focusing on AI triumphs risks obscuring the decline in earnings and bargaining power for human musicians. Page-3 visuals highlight falling real incomes for working artists and the concentration of streaming revenue among major labels and tech platforms. He argues that AI may deepen existing inequalities in the creative economy unless policy and business models evolve.

Why this is worth your attention: The rise of AI-generated hits exposes growing tensions between technological innovation and the economic precarity of human artists.
Original link: https://www.ft.com/content/eaaf9c18-2d42-4095-9d5d-e16f9cdcb7c6

 

AI Risks Deepening Inequality, Says Head of World’s Largest SWF

Nov 25 2025 | Financial Times (Sam Fleming)

The CEO of Norway’s $1.7tn sovereign wealth fund warns that AI may widen global inequality by concentrating wealth among a handful of dominant technology companies. Speaking in London, Nicolai Tangen argued that AI’s productivity gains are being captured disproportionately by capital owners rather than workers. Page-2 charts show the fund’s rising exposure to US tech giants, raising questions about geopolitical risk and long-term diversification. Tangen calls for coordinated regulation to ensure AI benefits are shared more broadly.

Why this is worth your attention: As the world’s largest investor, Norway’s fund highlights how AI could distort income distribution globally — shaping policy debates on taxation, competition and labour markets.
Original link: https://www.ft.com/content/69f0b07f-5458-4128-bd94-bbacbfa7cc1d

 

How the Internet Can Rebuild Trust

November 19, 2025 | Financial Times (Jimmy Wales)

Jimmy Wales argues that trust in the digital world has eroded as opaque algorithms and generative AI increasingly shape what billions of users see, making it harder to distinguish truth from fabrication. He calls for a return to the early internet’s “moral architecture” — transparency, independence and civility — with platforms showing how information is sourced and processed, and AI developers disclosing datasets, error rates and failures. Beyond regulation, Wales emphasises community norms and design choices that encourage accountability and genuine human connection.

Why this is worth your attention: With AI reshaping information ecosystems, transparency and public accountability are essential for safeguarding democratic discourse.
Original link: https://www.ft.com/content/f3a785e4-37c9-4108-b88a-0970eb6314be

​

Voice phishing is AI fraud in real time

November 2025 | Financial Times

This investigation outlines how generative AI has supercharged voice-phishing scams, enabling criminals to clone human speech in seconds and execute real-time social-engineering attacks. Law-enforcement agencies warn that deepfake audio is being used to bypass bank security protocols, manipulate family members, and impersonate executives during high-value fraud attempts. Regulators and cybersecurity experts argue that new authentication standards—beyond voice verification—are urgently needed.

Why this is worth your attention: AI-driven voice cloning sharply increases the scale and sophistication of fraud, exposing gaps in consumer protection and corporate security systems.

 

Who’s funding Silicon Valley’s data-centre dream? It might be you

November 2025 | Financial Times

This report traces how everyday consumers are indirectly financing Big Tech’s vast data-centre expansion—through pension funds, ETFs, municipal bonds, and utility rate structures. As hyperscalers race to meet AI demand, trillions in capital expenditure are being underwritten by public markets and electricity customers, not just private investment. Analysts warn that consumers may shoulder rising power costs and infrastructure risks while tech companies capture most of the upside.

Why this is worth your attention: Understanding who actually funds AI infrastructure reveals how the economic risks of the AI boom are socialised while rewards remain concentrated.

 

Who’s right about AI: economists or technologists?

November 2025 | John Thornhill

This article examines contrasting forecasts about AI’s impact on productivity and growth. Economists remain cautious, citing historical trends and adoption lags, while technologists anticipate rapid, transformative effects as digital labour scales. A Federal Reserve Bank of Dallas study offers scenarios ranging from modest productivity gains to extreme outcomes tied to superintelligence. The piece concludes that the truth may lie between the two camps: major gains are plausible but will depend on complementary investments and time.

Why this is worth your attention: Policymakers and businesses need realistic expectations—AI may be revolutionary, but its benefits depend on slow, complex organisational change.

 

AI to power new fight against scam callers

November 2025 | Financial Times

UK telecoms groups including BT EE, Vodafone, Three and Virgin Media O2 will deploy AI systems to block number-spoofing scams, which currently enable overseas criminals to impersonate banks and public authorities. The new initiative, part of a national telecoms fraud charter, includes AI-driven call and text screening, enhanced data-sharing, upgraded call tracing and network modernisation to prevent spoofing entirely. Fraud accounts for more than 40% of UK crime, with telecom-enabled scams representing a disproportionately high share of losses.

Why this is worth your attention: Demonstrates how AI is being applied to entrenched national security and consumer-protection problems, potentially reducing one of the UK’s fastest-growing crime categories.
Original link: https://www.ft.com/content/f73a2966-f3a3-4b69-ae9f-1000a4b269ce

 

Are bubbles good, actually?

November 2025 | Financial Times

Tim Harford explores whether investment manias — including the current AI boom — can be socially beneficial despite financial excess. Drawing on historical parallels such as the railway bubbles of the 1830s–60s, he notes that while bubbles can spur innovation, they often produce inefficient infrastructure, distorted incentives and fraud. Economists disagree on whether the spillover benefits outweigh the losses, particularly when investors misunderstand the underlying technology.

Why this is worth your attention: Offers a sober lens on today’s AI frenzy, reminding policymakers and investors that rapid capital inflows do not guarantee long-term societal value.
Original link: https://www.ft.com/content/e860d0b3-1b06-40fb-ae0a-0070a4f5e4bb

 

Sam Altman on the trajectory of AI

November 2025 | X

In this personal post, Sam Altman reflects on the future trajectory of AI, arguing that advancing capabilities will eventually require new institutional structures: global compute governance, safety-first research incentives and long-term planning beyond corporate time horizons. He stresses that society must prepare for a world transformed by abundance, productivity and profound social shifts, while acknowledging unresolved risks around misuse and control. The note blends optimism with concern, calling for coordination rather than competition in frontier development.

Why this is worth your attention: Offers a rare, direct articulation of long-term AI strategy from one of the sector’s most influential leaders, signalling where future policy debates may head.

 

Elon Musk’s Grokipedia is a major own goal

November 2025 | Financial Times

Jemima Kelly critiques Musk’s newly launched “Grokipedia”, an AI-generated online encyclopedia intended to rival Wikipedia. She argues that the platform replicates misinformation, displays obvious ideological bias and lacks the human oversight needed to maintain accuracy. Examples in the article — such as sanitized portrayals of controversial figures or Kremlin-aligned framing of the Ukraine invasion — illustrate how the system substitutes one set of biases for another.

Why this is worth your attention: Demonstrates the risks of AI-curated knowledge platforms, especially when positioned as authoritative replacements for community-governed sources.
Original link: https://www.ft.com/content/5ada1835-bdee-4326-adc0-e90a33123588

 

Is human imitation the right goal for technology?

November 2025 | Financial Times

In an interview, Erik Brynjolfsson critiques the decades-long fixation on human-mimicking AI, calling it the “Turing Trap”. He argues that chasing imitation encourages automation over augmentation, concentrating economic gains and constraining innovation. Pages 3–4 show examples of how AI can instead expand human capability—such as healthcare note-generation and code co-pilots—if incentives shift toward complementary systems.

Why this is worth your attention: Reorients the AI debate toward human-centred design, suggesting that productivity, welfare and creativity gains depend on prioritising augmentation over replacement.
Original link: https://www.ft.com/content/1e5ecce9-d402-41ab-8575-f186f5474349

 

Neural data may be the most precious commodity of the century

November 2025 | Financial Times

UNESCO director-general Audrey Azoulay warns that neurotechnology is rapidly advancing toward decoding mental states, citing Meta’s experiments combining MEG brain scans with generative AI. Page 2 examples show how consumer wearables already collect sensitive signals reflecting mood, attention and stress. UNESCO’s new global framework urges governments to classify neural data as highly sensitive, ban coercive data collection and adopt strict privacy protections.

Why this is worth your attention: Argues that brain-derived data could become the most consequential—and vulnerable—resource of the century, demanding governance before commercial misuse becomes widespread.
Original link: https://www.ft.com/content/cc0c19e5-fcbc-4324-bf38-34bee0e77842

AI Employment and the Workforce

Amazon to axe 14,000 corporate jobs

OCT 28 2025 | Financial Times

Amazon plans to cut 14,000 corporate roles as it restructures to “operate more leanly” and redirect spending into AI infrastructure. The company expects to invest up to $118bn in capex this year, mostly for data centres powering AI workloads, and is deepening ties with Anthropic while expanding its in-house chip programmes. The cuts represent roughly 4% of corporate staff and follow previous reductions as CEO Andy Jassy pushes for faster decision-making and lower operating costs.

Why this is worth your attention: Big Tech’s AI arms race is forcing even the largest firms to rebalance labour and capital, highlighting how AI investment reshapes corporate strategy and workforce structures.

Original link: https://www.ft.com/content/106a0ea2-5f76-47c3-b1d6-c6b425b556fc

 

AI’s rapid evolution demands more flexible training

OCT 23 2025 | Financial Times

As AI transforms workplaces at unprecedented speed, companies are finding traditional training cycles too slow to keep employees current. Surveys show a sharp rise in organisations offering regular AI upskilling, with employers such as Citi, Walmart and EY integrating AI training into everyday roles—from frontline retail work to strategic planning. Because AI tools change rapidly, training programmes must be “disposable” and frequently renewed, with AI itself helping personalise learning content. At the same time, firms are balancing enthusiasm with caution, building responsible-AI literacy and governance systems to ensure staff use the technology ethically.

Why this is worth your attention: AI skills are becoming essential across every job category, making workforce adaptability a core competitive advantage—and turning continuous, flexible learning into a strategic necessity rather than a perk.

Original link: https://www.ft.com/content/177dab62-efc7-4485-9cf2-c78e94ac0302

 

How is AI reshaping the world of work? You asked, we answered

OCT 17 2025 | Financial Times

In a wide-ranging Q&A, FT reporters John Burn-Murdoch and Sarah O’Connor explored how AI is transforming job tasks, productivity, skill requirements and labour markets. They noted that while LLM-driven disruption is most visible in white-collar roles, sectors such as logistics and manufacturing are also being reshaped by robotics and automation. The discussion touched on education, freelancing, regulation, career mobility, fears of technological unemployment and the rise of “agentic” AI tools, emphasising the need for continuous learning and complementary human skills.

Why this is worth your attention: Workers and employers alike must adapt to new dynamics in job design, skill development and technology adoption — making AI literacy and flexibility central to future economic resilience.

Original link: https://www.ft.com/content/59376ed5-a252-4185-8b68-db1bd760ba8c

 

Jobseekers of the future: approach AI with scepticism and dexterity

OCT 23 2025 | Financial Times

The rapid spread of AI is intensifying anxiety about job disruption, but commentators note that similar technological fears have often been overstated. While some white-collar roles may be affected, skilled manual work and caring professions remain comparatively resilient. Experts argue that future workers should cultivate meta-skills such as learning agility, interdisciplinary thinking and a critical understanding of AI’s limitations. As reliance on automated summarisation grows, the ability to independently navigate complex information will become increasingly valuable.

Why this is worth your attention: Preparing for AI-driven labour shifts requires not only technical skills but also judgement, adaptability and the capacity to question algorithmic outputs — qualities machines cannot easily replace.

Original link: https://www.ft.com/content/5e78cd00-ca0e-488d-87d0-8fef8a84f688

 

The Future of Work Is Still Human

November 2025 | Isabel Berwick, Financial Times

Despite escalating investment in AI and automation, workplace experts argue that organisations are overstating the transformative power of technology and overlooking culture, leadership, and human behaviour. Speaking at FT’s Future of AI Summit, scholars noted that technology acts as an extension of human values rather than a replacement for them, with layoffs and workplace disruption often reflecting corporate choices more than technological inevitability. The article stresses that leaders must model culture, navigate uncertainty, and use technology to enhance rather than erode dignity at work.

Why this is worth your attention: As AI reshapes industries, grounding decisions in human-centred principles helps prevent over-automation, boosts morale, and supports sustainable organisational change.

Original link: https://www.ft.com/content/36f9565a-b58d-4305-8dd8-ae7c968e3451

 

The US Training Humans and Robots Making AI

November 2025 | Financial Times / Nikkei Asia

A surge in US industrial investment is colliding with an acute talent shortage, raising doubts about who will staff the semiconductor fabs and AI-hardware facilities now under construction. States such as Arizona are rapidly expanding training programmes, while companies like Nvidia and Foxconn increasingly envision humanoid robots working alongside — and eventually replacing — human labour in AI-server manufacturing. Meanwhile, China is subsidising data centres that use domestic chips, India is attracting global tech capital, and AI companies are racing to capture emerging markets.

Why this is worth your attention: The global competition for AI leadership now hinges not only on chips and models but on labour supply — human and robotic — shaping long-term economic power.

Original link: https://www.ft.com/content/bd39df47-9bac-49ce-b1a2-dc38c76e4d55

 

Robot Threat to Drivers’ Jobs in China Heralds Wider Shift

Nov 20 2025 | Financial Times (Edward White)

China’s robotaxi ecosystem—led by Baidu’s Apollo Go—is rapidly scaling, with trials in 20 cities and more than 14mn cumulative rides. Charts on pages 3–4 illustrate projected fleet growth to 1.9mn vehicles and a market value of $47bn by 2035. Analysts warn that automation could displace more than 7.5mn ride-hailing drivers and millions of couriers, many of whom lack social protections. Policymakers face a growing dilemma: robotaxis boost efficiency and national competitiveness but risk deep labour disruption in a country where social stability is paramount.

Why this is worth your attention: China’s experience offers an early look at the societal consequences of large-scale automation—and may foreshadow similar labour shocks across global transport and logistics.
Original link: https://www.ft.com/content/a4863958-5b9c-4726-bac6-5301bf786835

 

The Chipmaking Pay Revolution Buys Time, But Will Not Solve Scarcity of Engineers

Nov 20 2025 | Financial Times (June Yoon)

Asian chipmakers are offering unprecedented compensation—such as SK Hynix’s reported 1,500%-of-salary bonuses—to retain engineers amid a deepening talent shortage. Page-2 graphics show pay surging across Korea and Taiwan as firms battle to staff new fabs tied to nearly $1tn in planned global chip investment by 2030. Despite higher wages, demographic decline and intense competition from China mean the pipeline of young engineers is shrinking. Companies are increasingly relying on retention bonuses, stock incentives and relocation packages, but structural constraints persist.

Why this is worth your attention: Even with soaring salaries, the semiconductor expansion crucial to global AI progress may be limited not by capital or technology—but by human scarcity.
Original link: https://www.ft.com/content/0570881b-f61c-4bb8-b396-c52931064fba

 

UK’s Shifting Labour Market Threatens Millions of Low-Skilled Jobs

Nov 25 2025 | Financial Times (Susannah Savage)

This FT analysis warns that automation and AI adoption are accelerating a structural shift in Britain’s labour market. Employers are increasingly relying on AI-assisted tools, reducing demand for administrative and routine roles. Page-2 figures show that vacancies in hospitality and retail remain high while demand for entry-level office work declines sharply. Economists cited argue the UK is at the beginning of a labour-market bifurcation where high-skill roles grow and low-skill roles shrink, with limited pathways for upward mobility.

Why this is worth your attention: AI is amplifying inequality in the labour market, forcing governments to confront rising displacement risks and the need for large-scale retraining.
Original link: https://www.ft.com/content/38d72b55-77e3-4e98-a54c-984ec9c2bca8

 

Will AI Devour My Pension?

Nov 19 2025 | Financial Times (Jonathan Guthrie)

Guthrie explores how AI-driven disruption could weaken the corporate sponsors of defined-benefit (DB) pension schemes, threatening retirement income security. The article outlines “covenant risk,” where employers become unable to meet pension liabilities if their business models are undermined by automation. Charts on pages 2–4 show that while most DB schemes are currently in surplus and the Pension Protection Fund remains well-funded, corporate failures remain a critical vulnerability. Sectors such as media, consultancy and software are already experiencing early AI pressure.

Why this is worth your attention: AI may strain employers long before pension systems feel the pain — creating hidden financial risks for millions relying on DB schemes.
Original link: https://www.ft.com/content/1c793896-faab-4a09-b16f-3299166ce3ba

 

‘People like dealing with people’: Reed boss on the challenge of AI in hiring

November 2025 | Bethan Staton

James Reed, CEO of the UK recruitment group, discusses how AI is reshaping hiring—from automated candidate searches to surging application volumes generated by large language models. Reed warns that while AI can speed up early screening, it struggles with human motivation, judgment, and persuasion—skills that remain central to recruitment. He also expresses concern that AI is already contributing to a decline in vacancies, particularly among graduates, accelerating a “jobs drought.”

Why this is worth your attention: AI adoption in hiring could worsen labour-market fragility and inequities while failing to replicate essential human elements of recruitment.

 

AI will lead one in four big UK businesses to cut staffing, research shows

November 2025 | Financial Times

A CIPD survey shows that 26% of large UK private-sector employers expect to reduce headcount over the next year due to AI adoption, compared with just 9% of SMEs. Junior roles — including clerical, administrative and early-career professional positions — are most at risk, with financial services and IT expecting the biggest cuts. While AI may boost productivity, researchers warn it could intensify pressures on young people already facing weak hiring markets and rising inactivity.

Why this is worth your attention: Highlights the uneven labour-market impact of AI, raising concerns over social mobility, youth unemployment and the need for reskilling policies.
Original link: https://www.ft.com/content/8a531a49-0a01-47eb-9ec2-2262169ccec1

 

Celebrated or penalised? Employers confuse staff over AI rules

November 2025 | Financial Times

New survey data shows employees across multiple sectors are uncertain about whether using AI tools will earn praise for efficiency or punishment for breaching company policy. Many businesses encourage experimentation but lack clear guardrails, leading to inconsistent enforcement and anxiety among workers. HR leaders warn that this ambiguity could undermine adoption and increase the risk of accidental misuse.

Why this is worth your attention: Highlights the urgent need for explicit workplace AI policies that empower staff while managing compliance and security risks.
Original link: https://www.ft.com/content/bc7d9ffa-03d3-4ac7-b5cd-9b79f4dac2d8

 

Private equity group Vista to cut staff in favour of AI

November 2025 | Financial Times

Vista Equity Partners plans to reduce its workforce significantly by automating roles in operations, analytics and investor relations. The firm has instructed portfolio companies to build “AI agents” capable of handling tasks with minimal human oversight and is reviewing business models likely to be disrupted by automation. Executives say the shift will allow Vista to scale more efficiently while confronting pressures on software-sector revenues.

Why this is worth your attention: Illustrates how AI adoption is beginning to reshape white-collar employment inside the financial sector, accelerating the move toward automation-driven operating models.
Original link: https://www.ft.com/content/7e8764f3-7a6d-4d7a-8273-734655bedff2

AI Development and Industry

Anthropic and Google Cloud strike blockbuster AI chips deal

OCT 23 2025 | Financial Times

Anthropic secured access to one million Google Cloud TPUs in a multibillion-dollar deal that dramatically expands its compute capacity. Google—already a major investor—will deliver more than a gigawatt of AI compute to the Claude developer as the industry accelerates long-term chip-supply agreements. The deal comes amid a global scramble for compute, with model developers pursuing diversified chip strategies and hyperscalers positioning themselves as infrastructure gatekeepers.

Why this is worth your attention: The scale of compute access is becoming the decisive factor in frontier-model competitiveness, concentrating power among firms capable of financing massive hardware commitments.

Original link: https://www.ft.com/content/286133f2-5766-4e1c-90b5-3c5d1e5d2dd9

 

Apple predicts holiday boom in iPhone sales

OCT 30 2025 | Financial Times

Apple forecast its strongest-ever holiday quarter fuelled by brisk adoption of the iPhone 17, which has outpaced the previous generation by 14% in early data. The company reported record annual profits of $112bn and stronger-than-expected Q4 revenue, aided by stable pricing, new features on base models and sustained growth in its high-margin services division. Despite tariff pressures and supply chain constraints, Apple expects double-digit iPhone growth and improving performance in China.

Why this is worth your attention: Hardware upgrade cycles remain a critical engine of Apple’s financial strength—even as the company navigates geopolitical pressures and races to integrate more advanced AI features.

Original link: https://www.ft.com/content/7e4e5c42-6b67-4e92-aaec-4c5714f672f6

 

China calls for ‘extraordinary measures’ to achieve chip breakthroughs

OCT 28 2025 | Financial Times

China’s Communist party leadership called for a nationwide mobilisation to secure “decisive breakthroughs” in semiconductors and other foundational technologies, ahead of a high-stakes meeting between Xi Jinping and US President Donald Trump. A party document outlining the 2026–2030 five-year plan highlights self-reliance in machine tools, high-end instruments and software, alongside expanded investment in consumption and social support. Xi emphasised boosting innovation capacity and deploying AI across industries, as geopolitical tensions and export controls deepen China’s urgency to reduce Western dependence.

Why this is worth your attention: The push signals China’s determination to accelerate domestic chip capabilities, raising the stakes in the US–China tech rivalry and reshaping global supply chains.

Original link: https://www.ft.com/content/a0d51b13-de0a-43f7-b0d7-6cf44643269d

 

It’s time to build the intention economy online

OCT 30 2025 | Financial Times

Tim Berners-Lee argues that AI provides an opportunity to rebuild the web around user agency rather than corporate attention capture. He promotes open-protocol infrastructures such as the Fediverse and Solid, enabling individuals to control their data through secure personal pods and AI agents acting solely in their interests. Projects like Project Liberty and India’s digital public rails show how new foundations can shift power away from tech giants. Europe’s digital identity initiatives could accelerate this shift if implemented effectively.

Why this is worth your attention: As AI reshapes online ecosystems, redefining data ownership and user autonomy could counterbalance entrenched platform dominance and restore trust in digital life.

Original link: https://www.ft.com/content/614475fe-6b65-4f00-bf0c-16fa6e7e74aa

 

OpenAI launches Atlas web browser

OCT 21 2025 | Cristina Criddle, Financial Times

OpenAI unveiled Atlas, a new web browser designed to integrate ChatGPT directly into the browsing experience, challenging Google Chrome, Microsoft Edge and emergent AI-native browsers. Atlas introduces “agent mode,” enabling ChatGPT to control the cursor and keyboard to autonomously execute tasks such as booking travel or conducting research. The browser will launch on Mac first, with Windows and mobile versions to follow, and will allow (with permission) access to browsing history to deliver more personalised responses.

Why this is worth your attention: By embedding AI at the core of web navigation, OpenAI is targeting the most valuable real estate in consumer computing — the browser — and threatening Google’s dominance in search and discovery.
Original link: https://www.ft.com/content/c2cc28d9-1ac1-47f9-8c56-d6f4323d2610

 

OpenAI should make a phone

OCT 23 2025 | Matt Rogers, Financial Times

Matt Rogers argues that AI hardware has consistently failed because products such as Humane’s AI Pin and Rabbit’s R1 focused on hype over practicality, ignoring the everyday problems that smartphones already solve. For AI to fulfil its promise, he writes, it must be embedded in a device people use constantly — meaning OpenAI should build a phone, not a peripheral gadget. With Jony Ive’s design firm io now working closely with the company, Rogers contends that the real test is integrating world-class hardware with equally strong software and overcoming reliance on Google’s Android ecosystem.

Why this is worth your attention: If AI is to become a genuine daily companion rather than a novelty, it must inhabit the device that already structures modern life — and whoever controls that integration will shape the future of personal computing.
Original link: https://www.ft.com/content/3ba3ee1a-9d81-41ff-a463-b51b83097c90

 

Silicon Valley chip start-up raises $100mn to take on TSMC and ASML

OCT 28 2025 | Tim Bradshaw, Financial Times

Substrate, a stealth Silicon Valley start-up, has raised more than $100mn from investors including Founders Fund and General Catalyst to develop particle-accelerator-based lithography capable of challenging ASML’s EUV systems and TSMC’s advanced fabrication plants. Founded by James Proud and his brother with no prior semiconductor experience, the company claims it can reduce leading-edge wafer costs from $100,000 to around $10,000 by 2030. Substrate has already produced 2nm-compatible patterns at US National Laboratories and now plans to scale into a fully vertically integrated foundry — a project likely requiring tens or even hundreds of billions of dollars.

Why this is worth your attention: If successful, Substrate could redefine the economics of chipmaking, reshaping global supply chains and reducing Western dependence on a handful of foreign monopolies in semiconductor production.
Original link: https://www.ft.com/content/2496edef-4f1b-47aa-877d-9c01271faaa1

​

Tech groups step up efforts to solve AI’s big security flaw

November 2025 | Melissa Heikkilä, Financial Times

Major AI developers including Google DeepMind, Anthropic, OpenAI and Microsoft are racing to tackle indirect prompt injection attacks — hidden instructions buried in websites or emails that trick AI models into revealing sensitive information. Companies are turning to automated red-teaming, external security testers and AI-powered detection systems, though experts warn the underlying vulnerability remains unresolved. As more businesses embed LLMs into core operations, the risks of data poisoning, phishing and deepfake-driven fraud are intensifying.

Why this is worth your attention: The same generative tools powering business transformation are simultaneously lowering the barrier to sophisticated cybercrime, forcing AI labs into an escalating security arms race.

Original link: https://www.ft.com/content/56cb100e-7146-488f-aae5-55304ae0eff6

 

Sam Altman says OpenAI is not ‘trying to become too big to fail’

November 2025 | Financial Times

Sam Altman pushed back against concerns that OpenAI is evolving into an irreplaceable, system-critical entity, arguing the company aims to remain adaptable rather than dominant. He emphasised that OpenAI’s structure and partnerships are designed to avoid the “too big to fail” trap even as its models underpin more global infrastructure. The interview highlights both the influence OpenAI now wields and the scrutiny surrounding its governance.

Why this is worth your attention: As foundation models become embedded in everything from finance to government systems, oversight of the companies behind them is becoming a defining public-policy challenge.

 

Snap crackles while Pinterest pops, but both are a solid bet on AI

November 2025 | Financial Times

A comparative analysis of Snap and Pinterest argues that both companies stand to benefit from AI-driven advertising and content-recommendation improvements despite market volatility. While Snap faces growth challenges, its investments in AI-enhanced engagement tools position it for recovery; Pinterest’s visual search and shopping features, similarly augmented by machine learning, offer a more stable growth narrative. The piece highlights investor optimism for social platforms that successfully integrate AI into core product loops.

Why this is worth your attention: As social platforms compete for ad dollars, AI-powered relevance — not just user scale — is becoming the decisive factor in long-term profitability.

 

Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive

November 2025 | The New York Times

Jeff Bezos has launched Project Prometheus, an ambitiously funded A.I. start-up focused on applying advanced models to engineering and manufacturing across aerospace, automotive and computing fields. Backed by $6.2bn and led jointly with physicist and former Google X leader Vik Bajaj, the company aims to push beyond language models toward systems that learn from the physical world. Nearly 100 researchers have already been hired from top labs including OpenAI, DeepMind and Meta. The venture places Bezos directly into the intensifying race to develop frontier A.I., especially in domains intersecting with space technologies.

Why this is worth your attention: Project Prometheus positions one of the world’s most resourced tech founders squarely in the competitive A.I. hardware-and-science arms race, potentially accelerating breakthroughs in robotics, manufacturing and space infrastructure.

Original link: https://www.nytimes.com/2025/11/17/technology/bezos-project-prometheus.html

 

Mini Apps Partner Program

November 2025 | Apple Developer

Apple introduced a Mini Apps Partner Program to formalise support for lightweight web-based applications hosted inside native iOS apps. Eligible developers can earn an 85% revenue share on qualifying in-app purchases so long as they adopt Apple’s Advanced Commerce API, Declared Age Range API and other compliance requirements. The initiative sets out detailed rules for managing mini-app metadata, in-app purchases and content restrictions, while allowing participation alongside programs such as the Small Business and Video Partner schemes. The move strengthens Apple’s control over emerging app formats that increasingly mirror super-app ecosystems.

Why this is worth your attention: By tightly governing mini-app distribution, Apple is shaping the future economics of embedded web experiences—and reinforcing its gatekeeping power over new forms of mobile commerce.

Original link: https://developer.apple.com/programs/mini-apps-partner/

 

Now Tech Moguls Want to Build Data Centers in Outer Space

November 2025 | The Wall Street Journal

Leading tech figures including Jeff Bezos, Elon Musk and Sundar Pichai are increasingly promoting visions of space-based data centres to meet AI’s exploding energy requirements. Motivated by terrestrial power constraints and regulatory hurdles, companies are exploring solar-powered orbital infrastructure and lunar manufacturing concepts. Google has announced Project Suncatcher, with plans to launch prototype satellites by 2027, while Nvidia and emerging start-ups are partnering on extraterrestrial compute initiatives. Analysts caution that the economics remain speculative, but proponents argue space offers continuous solar energy and cooler environments for compute.

Why this is worth your attention: Even if distant, the push toward off-planet compute underscores the unprecedented energy demands of frontier A.I.—and the lengths industry leaders may go to secure future capacity.

Original link: https://www.wsj.com/tech/now-tech-moguls-want-to-build-data-centers-in-outer-space-a8d08b4b

 

OpenAI Says New GPT-5 Model Speeds Up Research in Maths and Science

Nov 20 2025 | Financial Times (Melissa Heikkilä)

OpenAI claims GPT-5 is accelerating scientific discovery across mathematics, immunology and physics. A paper highlighted on page 1 shows the model helped solve a long-standing ErdÅ‘s number theory problem and identified immune-cell changes in minutes that previously took months. The company is building an “automated AI research intern” for 2026 and a fully automated research tool by 2028. Despite breakthroughs, the article stresses that GPT-5 remains prone to hallucinations and requires rigorous human oversight. Researchers say today’s models act more like powerful co-pilots than autonomous scientists.

Why this is worth your attention: If reliable, AI-accelerated science could compress decades of discovery into years—reshaping pharmaceuticals, materials science, and national competitiveness.
Original link: https://www.ft.com/content/d905b63d-05ae-4997-930f-a0a46f8fd113

 

Behind the AI Bubble, Another Tech Revolution Could Be Brewing

Nov 21 2025 | Financial Times (Gillian Tett)

In this column, Tett argues that while Nvidia’s soaring earnings dominate headlines, a deeper shift may be underway in AI research. She highlights Yann LeCun’s departure from Meta to build “world model” systems, challenging the supremacy of today’s transformer-based LLMs. Page-2 text contrasts Big Tech’s capital-heavy LLM race with emerging alternatives such as neuro-symbolic AI and spatial-intelligence models. Tett warns that if these approaches prove superior, today’s trillion-dollar AI capex boom could leave behind stranded assets.

Why this is worth your attention: A paradigm shift away from LLMs could upend the strategies, valuations and infrastructure bets of the world’s largest technology companies.
Original link: https://www.ft.com/content/e05dc217-40f8-427f-88dc-7548d0211b99

 

Arm to Offer Nvidia’s NVLink Technology in AI Data Centre Chips

Nov 20 2025 | Bloomberg (Ian King)

Arm will integrate Nvidia’s NVLink interconnect technology into future data-centre CPUs, enabling tighter coupling between Arm processors and Nvidia GPUs. The partnership aims to boost performance for AI training and inference workloads, giving cloud providers more flexible and energy-efficient hardware architectures. Arm expects the enhanced chips to compete directly with x86-based systems in high-performance environments. The article notes that the move deepens the already close relationship between two of the sector’s most influential chip designers.

Why this is worth your attention: By aligning with Nvidia’s platform, Arm strengthens its position in AI data centres — challenging Intel and reshaping future compute architectures.
Original link: https://www.bloomberg.com/news/articles/2025-11-20/arm-to-offer-nvidia-s-nvlink-technology-in-ai-data-center-chips

 

Cloudflare Resolves Global Outage That Disrupted ChatGPT, X

November 2025 | Bloomberg (Lynn Doan & Rose Henderson)

A configuration file error inside Cloudflare’s global network triggered a multihour outage that knocked offline major platforms including ChatGPT, X, transit agencies, regulators and numerous corporate websites. The file, which grew beyond its expected size, caused crashes in traffic-management software, though Cloudflare stressed there was no evidence of a cyberattack. Past outages at Amazon Web Services, CrowdStrike and Microsoft highlight how a handful of infrastructure providers underpin vast swathes of the internet — and how brittle those systems can be when faults cascade. Cloudflare’s CTO called the disruption “unacceptable,” promising engineering changes to prevent similar failures.

Why this is worth your attention: The outage underscores systemic concentration risk: as AI-powered platforms scale, their dependence on a few infrastructure gatekeepers increases the likelihood that single points of failure can shut down major public- and private-sector functions.

 

Europe’s defence spending spree must fund domestic AI, official says

November 2025 | Barbara Moens & Henry Foy, Financial Times

Henna Virkkunen, the EU’s technology and security commissioner, argues that Europe’s historic surge in defence spending must funnel at least 10 per cent into next-generation technologies such as AI and quantum computing developed within the bloc. New procurement rules will favour European suppliers and link deep-tech start-ups with established defence manufacturers, drawing lessons from Ukraine’s rapid battlefield innovation. While Brussels pushes for sovereignty over critical systems, some defence leaders warn against over-indexing on emerging tech at the expense of conventional capabilities like missiles and nuclear deterrence. The Commission also plans a €1bn “fund of funds” to scale European defence start-ups.

Why this is worth your attention: As geopolitical instability grows, Europe is racing to avoid dependence on US and Chinese military technologies — making AI investment both a sovereignty challenge and a strategic industrial priority.
Original link: https://www.ft.com/content/fb744eaa-b243-4a68-9e9d-eea76b670405

 

More agents than The Matrix

November 2025 | Bloomberg Technology Newsletter

China’s major tech players — Tencent, Alibaba, ByteDance and Honor — are accelerating development of AI “agents” capable of handling tasks such as shopping, digital administration and mobile workflows. Tencent envisions a future WeChat with embedded agents that anticipate user needs; Alibaba is rapidly scaling its Qwen-powered consumer assistant; ByteDance is leveraging its vast platforms Douyin and Doubao; and Honor is testing on-device agents like YoYo. While current tools remain rough, rapid iteration mirrors the early trajectory of image-generation models, suggesting a breakthrough could come suddenly. Analysts expect China’s agent ecosystem to evolve in ways closely tied to its domestic software environment, informing global agent design as the technology matures.

Why this is worth your attention: AI agents could reshape consumer behaviour and platform economics, and China’s early, ecosystem-wide push positions its tech giants as potential global leaders in agent-driven computing.

 

Introducing Apps in ChatGPT and the New Apps SDK

October 6, 2025 | OpenAI

OpenAI unveiled a new generation of interactive apps inside ChatGPT alongside a developer Apps SDK built on the Model Context Protocol. Users can invoke apps naturally in conversation or receive intelligent suggestions, while early partners such as Spotify, Zillow, Canva, and Coursera demonstrate how conversational interfaces can blend with interactive elements like maps, playlists and slide creation. Developers can now build custom logic, connect backends, and eventually monetise their apps, with strong privacy, safety and approval requirements shaping the ecosystem.

Why this is worth your attention: Apps turn ChatGPT into a multimodal platform — potentially redefining software distribution and how users interact with digital services.
Original link: https://openai.com/index/introducing-apps-in-chatgpt/

 

What if the AI race isn’t about chips at all?

November 2025 | Financial Times

This piece argues that while the AI race has been framed around compute and semiconductor supremacy, the real competitive frontier is shifting toward energy, infrastructure resilience, and data scale. The article highlights that even with advanced chips, bottlenecks such as power shortages, cooling, and training-quality data pose existential constraints on further AI progress. It suggests future AI advantage will accrue to nations and firms that control energy grids and data ecosystems, not just hardware supply chains.

Why this is worth your attention: Reframing the AI race exposes structural vulnerabilities and challenges simplistic “chip-centric” narratives about technological leadership.

 

AI pioneers claim human-level general intelligence is already here

November 2025 | Financial Times

A group of leading AI scientists and industry executives — including Geoffrey Hinton, Yoshua Bengio, Fei-Fei Li, Jensen Huang and Yann LeCun — argued at the FT’s Future of AI Summit that AI systems now match or exceed humans in several key domains. They said the sector is moving past the idea of a single “AGI moment”, with intelligence instead expanding progressively across tasks, even as opinions diverge on whether machines will surpass humans overall. Despite bullish forecasts, some warned against making firm predictions given the uncertainty surrounding future capabilities.

Why this is worth your attention: Signals a shift among AI’s most influential thinkers towards accepting human-level capabilities as a present reality, shaping regulatory priorities, investment strategies and public expectations.
Original link: https://www.ft.com/content/5f2f411c-3600-483b-bee8-4f06473ecdc0

 

AI’s awfully exciting until companies want to use it: Rightmove edition

November 2025 | Financial Times (FT Alphaville)

Rightmove’s profit warning reflects a broader industry reality: while AI promises major transformation, firms struggle to extract meaningful returns from deployments. Analysis from Jefferies highlights a disconnect between executive expectations and operational complexity, with most companies seeing negligible earnings impact despite heavy spending. High failure rates, hidden implementation costs, poor data governance and legacy systems all impede value creation, leaving many projects stuck in pilot-stage silos.

Why this is worth your attention: Exposes the widening gap between AI hype and enterprise reality, signalling that profitable adoption may require deeper structural overhauls rather than incremental tooling.
Original link: https://www.ft.com/content/74e31d3e-4b50-43b2-9aa2-e53f41b776a8

 

AI-designed antibodies promise big boost to drug development

November 2025 | Financial Times

Scientists at the University of Washington have developed an AI model, RFantibody, that can design functional antibodies from scratch, including ones that bind to cancer proteins. The approach could reduce discovery timelines from months to weeks and eliminate the need for extensive animal testing, moving drug development from trial-and-error to rational design. Experts call the breakthrough a significant step toward accelerating biotechnology and improving precision in therapeutic targeting.

Why this is worth your attention: Points to a fundamental shift in how new medicines could be engineered, potentially slashing development costs and widening access to life-saving treatments.
Original link: https://www.ft.com/content/328a3211-6f2f-471e-b7bd-eb3c1a768f1c

 

Apple Watch data teamed with AI reveals heart damage

November 2025 | Financial Times

Researchers from Yale and the American Heart Association have used AI to detect structural heart disease from single-lead ECGs generated by Apple Watches. By training an algorithm on more than 266,000 clinical ECGs and validating it across tens of thousands of patients, the team demonstrated that consumer wearables can flag issues such as weakened pumping, valve damage and muscle thickening with high accuracy. The study suggests smartwatch-based screening could eventually scale beyond rhythm disorders to deeper cardiac diagnostics.

Why this is worth your attention: Brings population-level cardiac screening within reach by leveraging devices people already own, potentially shifting diagnosis from hospitals to the home.
Original link: https://www.ft.com/content/4766c95e-9a87-4ec8-9f18-1f54df0ba713

 

China offers tech giants cheap power to boost domestic AI chips

November 2025 | Financial Times

Beijing is providing heavily discounted electricity to cloud providers and chip developers to accelerate domestic AI capacity amid escalating US export controls. State-backed energy support is lowering data-centre costs and helping Chinese companies train frontier models that otherwise would be prohibitively expensive. Officials frame the initiative as vital industrial policy, while analysts note it may distort global competition by shielding companies from true market pricing.

Why this is worth your attention: Reinforces China’s determination to achieve AI self-reliance, with subsidies that could reshape the global semiconductor race.
Original link: https://www.ft.com/content/ea7f47ab-7d2d-4a07-bcc7-f16b8b1afc26

 

Intel’s top AI executive leaves for OpenAI after 6 months in role

November 2025 | Financial Times

Intel’s chief technology and AI officer, Sachin Katti, has departed for OpenAI, marking another senior loss for the embattled chipmaker. The article details Intel’s struggle to compete with Nvidia and AMD, with Katti previously leading efforts to revitalise its AI hardware and software. His departure follows a series of high-profile exits as Intel faces manufacturing challenges, heavy capital expenditure and intense competition for top AI talent.

Why this is worth your attention: Illustrates how the AI talent war is reshaping industry power dynamics—and how Intel’s competitive position continues to erode as frontier labs absorb its top engineers.
Original link: https://www.ft.com/content/9a1faf53-1bf0-48dd-8ac8-ff07b3ee57c5

 

Meta chief AI scientist Yann LeCun plans to exit and launch own start-up

November 2025 | Financial Times

Yann LeCun, Meta’s long-time chief AI scientist and a Turing Award winner, plans to leave to build a start-up focused on “world models”—AI systems that learn from spatial and video data rather than text. The move follows Meta’s pivot under Mark Zuckerberg toward rapid model deployment, costly leadership hires and a “superintelligence” push led by Alexandr Wang. Page 2 details describe internal tensions over model strategy and a string of senior departures.

Why this is worth your attention: Marks a significant shift in the research landscape, with one of AI’s foundational thinkers moving outside Big Tech at a moment of radical strategic upheaval within Meta.
Original link: https://www.ft.com/content/c586eb77-a16e-4363-ab0b-e877898b70de

 

Nvidia’s Jensen Huang says China ‘will win’ AI race with US

November 2025 | Financial Times

Nvidia chief Jensen Huang said China is likely to dominate the global AI race despite US export controls limiting advanced chip sales. Speaking at a business summit, he argued that China’s scale, strong engineering culture and massive demand for AI-native applications will outweigh restrictions. Huang added that attempts to isolate China technologically could accelerate the creation of a fully independent Chinese AI stack.

Why this is worth your attention: Signals how even leading US executives believe geopolitical barriers may ultimately strengthen China’s domestic AI ecosystem, reshaping global competition.
Original link: https://www.ft.com/content/2a87508b-86ba-4bcd-bb71-495a4f772465

 

OpenAI strikes $38bn computing deal with Amazon

November 2025 | Financial Times

OpenAI has signed a seven-year, $38bn deal with Amazon Web Services, enabling the start-up to immediately access AWS’s Nvidia-powered infrastructure while diversifying away from Microsoft. The agreement forms part of nearly $1.5tn in long-term compute commitments OpenAI has accumulated as it races to meet surging demand. Sam Altman said the company aims to add energy capacity equivalent to a nuclear plant every week by 2030, though experts question the feasibility.

Why this is worth your attention: Shows how OpenAI is locking in massive compute supply at an unprecedented scale, reshaping the economics and industrial footprint of the global AI sector.
Original link: https://www.ft.com/content/74d79365-efdc-4446-b0ed-d53ad4b55f59

 

Sam Altman says ChatGPT has hit 800M weekly active users

October 2025 | TechCrunch

Sam Altman announced that ChatGPT now has 800 million weekly active users, reflecting accelerating adoption across consumers, enterprises and developers. Speaking at OpenAI’s Dev Day, he unveiled new tools for building agentic, personalised applications within ChatGPT, alongside rapid product expansion and corporate partnerships. The milestone comes as OpenAI pushes to secure unprecedented compute capacity and expands into proactive and social features.

Why this is worth your attention: Reinforces ChatGPT’s role as the dominant consumer AI platform and highlights how usage growth is driving OpenAI’s escalating infrastructure demands.
Original link: https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/

From Andrew Ng's 'The Batch'

1. Agentic AI Protocols and Infrastructure

2. Specific Foundation Models and Releases

3. Training and Fine Tuning Methods

4. AI for Science and Medicine

5. Music Licensing Frameworks

6. Privacy-Preserving AI

7. Chat GPT Usage Patterns

8. Semiconductor Self-Sufficiency

9. AI Safety Concerns

10. Autonomous Weapons

​

Download the tables to find out more!

AI Regulation and Legal Issues

Capgemini chief calls for EU’s flagship rules on AI to be suspended

October 2025 | Financial Times

Capgemini’s chief executive urged the EU to suspend implementation of its AI Act, arguing that the rules risk slowing Europe’s competitiveness just as global rivals accelerate investment and model deployment. He warned that excessive regulatory burdens could push innovation outside the bloc, particularly as companies struggle to interpret obligations around high-risk AI systems. His comments intensify a growing industry backlash against the Act, even as policymakers push for stricter guardrails to govern advanced models.

Why this is worth your attention: Europe faces a strategic tension between innovation and regulation — and its decisions will shape whether the continent remains a meaningful player in the global AI race.

Original link: https://www.ft.com/content/06f8f5b1-3d16-4944-a105-e25bfd34a2ce

 

Criminals using AI are driving sharp rise in UK fraud cases

OCT 24 2025 | Financial Times

UK Finance data show fraud cases surpassed 2mn in the first half of the year, with criminals using AI to scale “attack rates” through automated scam texts, deepfake investment pitches and multilingual phishing. Losses from investment scams surged 55%, while romance fraud jumped 19%, driven by more sophisticated manipulation tools. Banks are simultaneously deploying AI to detect anomalies in real time, preventing £870mn of unauthorised fraud in six months, though criminals increasingly turn to lower-value attacks to evade defences.

Why this is worth your attention: AI is transforming both sides of the fraud battle, forcing financial institutions and regulators to rethink detection, consumer protection and digital-security strategy.

Original link: https://www.ft.com/content/11db17de-cad7-4217-8816-d5a3ac9c1beb

 

Delaware attorney-general warns of legal action if OpenAI fails to act in public interest

OCT 30 2025 | Financial Times

Delaware’s attorney-general said she will pursue legal action if OpenAI violates new binding commitments requiring it to prioritise public-interest safeguards over shareholder returns. The agreement, reached after months of negotiations with Sam Altman, places the OpenAI Foundation — not the for-profit arm — in control of key decisions, including AGI safety protocols and potential IPO plans. The deal also includes a pledge to spend $250bn on Microsoft cloud services and formally enshrines OpenAI’s charter as a legally enforceable mission document.

Why this is worth your attention: The restructuring sets a precedent for how regulators may oversee AI developers, embedding governance, safety and fiduciary obligations into the core of frontier-model companies.

Original link: https://www.ft.com/content/fcbfe5e1-d1ce-4eb5-88ec-1c384504eee0

 

Nuclear treaties offer a blueprint for how to handle AI

OCT 24 2025 | Will Marshall, Financial Times

Will Marshall argues that the world is failing to address the existential risks of advanced AI with the seriousness seen in past nuclear-arms diplomacy. While AI companies and nations race ahead, there is almost no coordinated international framework to manage runaway model development, bioweapon risks or loss of human control. Drawing lessons from decades of nuclear negotiation — including the Pugwash Conferences and major US–Soviet treaties — he calls for a modern equivalent that brings governments, researchers and industry into sustained, verifiable agreements. Satellite monitoring, inspections and an AI-focused analogue of the IAEA could underpin transparency and safety in frontier-model development.

Why this is worth your attention: Without global verification mechanisms, AI development may outpace society’s ability to manage catastrophic risks — a failure of governance with potentially irreversible consequences.
Original link: https://www.ft.com/content/767d1feb-2c6a-4385-b091-5c0fc564b4ee

 

The AI boom comes to America’s loneliest place

November 2025 | Oliver Roeder, Financial Times

In Nevada’s remote Basin and Range, a proposed 230-mile high-voltage transmission line is sparking fierce resistance as AI-driven energy demand transforms the US landscape. The Greenlink North project aims to serve massive data-centre expansion, but conservationists, tribes and hunters argue it threatens delicate sage-grouse habitat and culturally significant land. Detailed maps and photographs across the 28-page feature (see pages 5, 10–15) show how solar farms, substations and the corridor would reshape one of America’s quietest environments.

Why this is worth your attention: As AI infrastructure becomes a physical force — consuming land, power and water — battles over ecological and cultural preservation are emerging as central tensions in the next phase of the AI economy.

Original link: https://www.ft.com/content/729d0c3f-20bf-4163-bfbb-8b0ff5330794

 

The EU Needs to Rethink Its AI Rules

November 2025 | Financial Times

The EU’s sweeping AI Act is facing renewed criticism from industry groups and policymakers who argue that the framework is too rigid to keep pace with the rapidly shifting technical landscape. Companies warn that compliance burdens risk pushing high-value research and model development to the US and Asia, while member states push for more flexible, innovation-friendly interpretations. Brussels remains committed to a strong risk-based regime but is being urged to simplify obligations, especially for smaller developers and open-source contributors.

Why this is worth your attention: Europe’s regulatory stance will shape where global AI companies choose to build and deploy next-generation systems, influencing competitiveness and talent flows.

 

UK Accused of Being Too Slow to Regulate Cloud Services Providers

November 2025 | Martin Arnold, Financial Times

MPs criticised the UK Treasury for delays in designating major cloud and AI-infrastructure providers as “critical third parties,” which would bring them under direct supervision from the Bank of England and FCA. After outages at Amazon’s cloud division disrupted banking services, officials signalled that formal oversight should begin next year, allowing regulators to impose conditions and enforce rigorous scenario-testing for service resilience. Lawmakers argue that the UK must move faster given the financial sector’s heavy reliance on a few dominant tech providers.

Why this is worth your attention: As banks depend on a small cluster of cloud and AI-service giants, regulatory lag exposes the UK financial system to operational and systemic risk.

Original link: https://www.ft.com/content/90607720-f044-4cf6-b97c-5dea2f45491b

 

UK Investigates Whether Buses Made in China Can Be Turned Off From Afar

November 2025 | Jim Pickard, Gill Plimmer & Richard Milne, Financial Times

The UK government is examining cybersecurity risks in hundreds of Chinese-made Yutong electric buses after Norway found that the manufacturer retained remote access capable of disabling vehicles. Authorities are assessing whether similar vulnerabilities exist in the UK fleet, while Yutong maintains that remote data access is used solely for maintenance. The review comes as geopolitical tensions with China intensify and as policymakers scrutinise the security of foreign-made infrastructure.

Why this is worth your attention: Remote shutdown capabilities in public transport systems highlight the cybersecurity risks embedded in modern, software-defined infrastructure — and their geopolitical implications.

Original link: https://www.ft.com/content/07ecb1c0-d4c0-476c-be5b-651e8feb4de1

 

Judge Shows Reluctance to Break Up Google Ads Business in U.S. Monopoly Case

November 2025 | Financial Times

A U.S. federal judge signalled hesitation toward ordering Google to divest parts of its advertising business after earlier ruling in April that the company had wilfully monopolised segments of the digital ads market. Judge Leonie Brinkema warned that the Department of Justice’s proposed structural break-up could be difficult to enforce, delay relief during a likely long appeals process, and risk becoming obsolete in a fast-moving industry. Google instead offered behavioural remedies such as sharing bid data and integrating competing ad tools. The decision comes amid other recent rulings favouring Big Tech in antitrust cases, including a win for Meta earlier that week.

Why this is worth your attention: A more cautious judicial stance reduces the likelihood of landmark structural remedies in U.S. tech antitrust, shaping how regulators can constrain dominant platforms in advertising and search markets.

Original link: https://www.ft.com/content/07fd1c1d-c7c1-4e83-8679-297cc6311498

 

Meta Did Not Violate the Law When It Bought Instagram and WhatsApp, a Judge Rules

November 2025 | The New York Times

A federal court ruled that Meta’s acquisitions of Instagram and WhatsApp did not constitute illegal monopolisation, rejecting the FTC’s claim that Meta pursued a “buy or bury” strategy to dominate social networking. Judge James Boasberg found the agency failed to prove Meta still held monopoly power in a rapidly evolving market increasingly shaped by TikTok and YouTube. Evidence from dozens of tech executives revealed shifting definitions of social media, complicating the FTC’s narrow “personal social networking” framing. The ruling further weakens federal attempts to unwind historic tech mergers.

Why this is worth your attention: The decision cements the difficulty regulators face in challenging decades-old platform acquisitions, limiting the government’s ability to reshape concentrated digital markets.

Original link: https://www.nytimes.com/2025/11/18/technology/meta-antitrust-monopoly-ruling.html

 

Meta Wins FTC Antitrust Trial Over Instagram, WhatsApp Deals

November 2025 | Bloomberg

A separate ruling from U.S. District Judge James Boasberg similarly found that the FTC failed to show Meta’s past acquisitions allowed it to monopolise social networking. The decision underscores the agency’s difficulty defining tech markets in a landscape where apps rapidly converge in features—from short-form video to messaging and commerce. Analysts suggested an appeal was unlikely to succeed, noting the government’s struggle to establish durable boundaries for digital competition. Meta’s courtroom victory follows years of political manoeuvring, including Mark Zuckerberg’s efforts to cultivate ties with the Trump administration.

Why this is worth your attention: The ruling reinforces a legal environment that favours platform incumbents, raising questions about whether current antitrust tools can meaningfully constrain Big Tech power.

Original link: https://www.bloomberg.com/news/articles/2025-11-18/meta-wins-ftc-antitrust-trial-over-instagram-whatsapp-deals

 

Warner Settles Lawsuit and Agrees Licensing Deal With AI Music Platform

Nov 19 2025 | Financial Times (Anna Nicolaou)

Warner Music has settled a copyright lawsuit with AI music start-up Udio and agreed a licensing deal enabling fans to generate new songs using Warner’s catalogue. Artists must opt in, reflecting deep industry divides over AI-generated music. The deal follows similar agreements by Universal Music and positions major labels to monetise AI creation rather than fight it outright. Page-2 notes ongoing artist resistance, including a protest “silent” album by Paul McCartney, Annie Lennox and Kate Bush criticising UK copyright reforms.

Why this is worth your attention: The agreement signals a shift from litigation to licensing, shaping how creative industries adapt to generative AI while attempting to protect artist rights.
Original link: https://www.ft.com/content/3569eaed-d031-4d04-af79-3b3d7c6e836f

 

We Have to Be Able to Hold Tech Platforms Accountable for Fraud

Nov 18 2025 | Financial Times (Martin Wolf)

Martin Wolf recounts being targeted by deepfake investment scams on Meta platforms and argues that large tech companies are failing to curb fraudulent ads. Internal Meta documents referenced in the article suggest the company earns billions from ads likely linked to scams, with algorithms pushing more fraudulent content to those who interact with it. Page-2 and page-3 examples describe industrial-scale scam operations in Southeast Asia, where trafficked workers are forced to generate online fraud. Wolf calls for legal liability that would require platforms to reimburse victims.

Why this is worth your attention: As generative AI accelerates the creation of realistic scams, holding platforms liable may become essential to protect consumers and rebuild trust in digital ecosystems.
Original link: https://www.ft.com/content/456075bf-940b-4ad4-88e4-6dc4aeace268

 

Who Is OpenAI’s Auditor? (Update: It’s Deloitte)

Nov 20 2025 | Financial Times Alphaville (Louis Ashworth)

This Alphaville deep dive investigates confusion surrounding OpenAI’s auditor amid the company’s expanding $1.4tn data-centre commitments and rising valuation. After tracking corporate filings in the UK and Ireland, the article concludes that Deloitte audits OpenAI’s entities — a fact later confirmed by two people familiar with the organisation. The piece highlights how opaque OpenAI’s financial governance remains despite its enormous systemic importance and complex commercial relationships with Microsoft, Amazon, Oracle and CoreWeave. Page-3 documents show previous filings listing a small San Francisco accountancy firm as preparer.

Why this is worth your attention: With OpenAI becoming one of the world’s most financially influential private companies, transparency around its auditors is critical for investor, regulatory and systemic risk oversight.
Original link: https://www.ft.com/content/3cff198e-25e5-481a-bd34-e26941e1d12d

 

Donald Trump’s support for pro-AI proposal fuels Maga backlash

November 2025 | Joe Miller, Financial Times

President Trump’s endorsement of a federal framework preventing US states from regulating AI companies has triggered fierce resistance from Republican governors, senators and Maga activists. Critics argue the measure — backed strongly by Silicon Valley donors and lobbying groups — would strip states of their ability to protect citizens against online harms, child-safety risks, data-centre expansion, censorship issues and copyright violations. Past attempts to curb state-level AI regulation collapsed under overwhelming bipartisan opposition, and fresh polling suggests voters remain wary of state pre-emption. Supporters of tighter guardrails warn that the plan could prove politically toxic should job losses or safety incidents be linked to AI deployment.

Why this is worth your attention: The backlash reveals widening fractures within the Republican coalition and shows how AI regulation is moving from a technical policy debate to a defining cultural and electoral issue in the US.
Original link: https://www.ft.com/content/e087e732-5f71-4db3-b613-830f3cee313d

 

How the EU Botched Its Attempt to Regulate AI

November 20, 2025 | Financial Times (Barbara Moens, Melissa Heikkilä)

European lawmakers spent years crafting the AI Act to make Europe the global leader in “trustworthy AI,” but urgent political pressures, rushed amendments to cover general-purpose models like ChatGPT, and an overly complex regulatory design have left the bloc struggling to implement its own flagship law. Companies complain of uncertainty, heavy compliance burdens and a tilted playing field favouring Big Tech, while Brussels is now delaying key provisions amid concerns the law could impede Europe’s AI competitiveness. Critics argue the EU has prioritised regulation over innovation; supporters counter that the Act remains a necessary foundation for safety, transparency and fundamental rights.

Why this is worth your attention: The EU’s struggle highlights the difficulty of regulating fast-moving technologies without stifling domestic innovation — a tension likely to shape global AI policy for years.
Original link: https://www.ft.com/content/6585fb32-8a86-4ffb-a940-06b17e06345a

 

Insurers Retreat From AI Cover as Risk of Multibillion-Dollar Claims Mounts

November 23, 2025 | Financial Times (Lee Harris, Cristina Criddle)

Major insurers including AIG, Great American and WR Berkley are seeking regulatory approval to exclude AI-related liabilities from corporate policies, citing the unpredictable, opaque nature of model behaviour and the risk of systemic failures. High-profile hallucinations, defamation cases, fraud enabled by deepfakes and agentic AI errors are driving fears of correlated losses that could affect thousands of clients simultaneously. Some insurers are offering narrow endorsements or partial AI coverage, but brokers warn the trend is toward shrinking protection — a gap that could leave companies exposed to significant legal and financial risk.

Why this is worth your attention: As businesses rapidly adopt AI, the withdrawal of insurance cover creates a critical vulnerability — increasing the stakes for governance, auditing and model reliability.
Original link: https://www.ft.com/content/abfe9741-f438-4ed6-a673-075ec177dc62

 

UK Regulators Set to Gain Greater Powers Over Cyber Security Failures

November 2025 | Kieran Smith, Financial Times

A new cyber security and resilience bill would give UK regulators expanded authority to fine companies up to £17mn or 4% of annual turnover for failing to meet strict cyber-preparedness requirements. The legislation broadens the scope of regulated sectors to include healthcare, IT services, and data centres, and mandates rapid reporting of serious attacks within 24 hours. Officials argue the reforms are necessary as cyber incidents cost the UK nearly £15bn annually.

Why this is worth your attention: As cyberattacks grow more sophisticated, expanding oversight to critical digital infrastructure strengthens national resilience and reduces systemic risk.

Original link: https://www.ft.com/content/3db6e238-a6ec-47c8-8c21-de9d405c00ab

 

US Allows Microsoft to Ship Nvidia AI Chips to Use in UAE for First Time

November 2025 | Tim Bradshaw, Financial Times

The US has for the first time granted Microsoft a licence to export advanced Nvidia AI chips to the UAE, unlocking plans for multibillion-dollar AI and cloud-infrastructure investments in Abu Dhabi. The approval reflects Washington’s strategic effort to counter China’s influence in the Middle East while ensuring stringent cybersecurity and physical-security controls. Microsoft plans to quadruple AI computing capacity in the region and deepen partnerships across the Gulf’s rapidly expanding AI ecosystem.

Why this is worth your attention: Export-control decisions now shape global AI power dynamics, and the Middle East is becoming a pivotal arena for US–China strategic competition.

Original link: https://www.ft.com/content/03b30ba3-d0c6-4f63-92f8-077fcd8dc472

 

AI concentration does not breed strength

November 2025 | Madhavi Singh

Singh warns that escalating AI partnerships—such as the Stargate joint venture and cross-investment networks linking OpenAI, Nvidia, Microsoft, Oracle, and others—risk entrenching monopoly power across the AI supply chain. These alliances raise classic antitrust concerns, from input foreclosure to reduced head-to-head competition. The article argues that national-security narratives are being used to justify concentration that ultimately stifles innovation and harms markets.

Why this is worth your attention: Without rigorous antitrust oversight, AI’s foundational infrastructure could be captured by a few dominant firms, undermining competition and long-term innovation.

 

EU set to water down landmark AI Act after Big Tech pressure

November 2025 | Financial Times

Under pressure from US tech giants and geopolitical tensions with Washington, the European Commission plans to pause or delay key provisions of its AI Act. Draft proposals include a one-year grace period for high-risk systems and delayed enforcement of transparency rules until 2027. The details on page 2 show that officials hope to maintain competitiveness without provoking a transatlantic dispute, even as critics warn this weakens consumer protections.

Why this is worth your attention: Marks a significant retreat from the EU’s ambition to lead on strict AI regulation, reshaping global governance trajectories.
Original link: https://www.ft.com/content/af6c6dbe-ce63-47cc-8923-8bce4007f6e1

 

Force AI firms to buy nuclear-style insurance, says Yoshua Bengio

November 2025 | Financial Times

At the FT Future of AI Summit, Yoshua Bengio argued that companies developing advanced AI should be legally required to carry liability insurance akin to nuclear-industry standards. He warned that current incentives encourage performance maximisation rather than safety, despite signs of deceptive model behaviour and military misuse potential. Bengio urged governments to intervene before competitive dynamics push firms to cut corners.

Why this is worth your attention: Introduces a concrete regulatory mechanism — catastrophic-risk insurance — that could fundamentally reshape how frontier AI is financed and governed.
Original link: https://www.ft.com/content/181f6706-1b23-4691-96bd-db3ae2e3e6bf

AI Market and Investment

Amazon shares jump 10% as AI powers fastest cloud growth in years

OCT 30 2025 | Financial Times

Amazon reported its strongest AWS growth in nearly three years, with cloud revenue up 20% to $33bn, driven by soaring demand for AI computing power. The company added more than 3.8 gigawatts of capacity, raised capex to $34.2bn for the quarter, and forecast continued double-digit revenue growth despite a recent AWS outage. Amazon emphasised rising AI-related demand across the business and maintained significant investment in chips, data centres and partnerships with leading AI model developers.

Why this is worth your attention: Surging AI workloads are reshaping cloud economics and cementing hyperscalers as the critical infrastructure layer for the AI era.

Original link: https://www.ft.com/content/71e29546-661e-4c9f-b401-0428585fbc42

 

Apple joins Microsoft and Nvidia in elite $4tn valuation club

OCT 28 2025 | Financial Times

Apple briefly hit a $4tn valuation following renewed investor confidence boosted by strong iPhone 17 demand and accelerating services revenue. The company’s share price is up 28% in six months, alleviating concerns that Apple was falling behind in AI. While Microsoft and Nvidia reached the milestone earlier through AI-driven cloud and chip demand, Apple’s momentum stems from a major device refresh and improving expectations for its delayed AI features.

Why this is worth your attention: Big Tech’s ballooning valuations underscore how central AI expectations—and the hardware ecosystems that support it—have become to global equity markets.

Original link: https://www.ft.com/content/cac347b9-279f-4652-9b42-4f01f3b8f040

 

Big Tech tests investors’ patience with $80bn AI investment spree

OCT 30 2025 | Financial Times

Alphabet, Meta and Microsoft collectively spent nearly $80bn on AI infrastructure last quarter, provoking mixed reactions from investors as capex soars to unprecedented levels. Google’s record revenue and rising cloud demand were well-received, while Meta’s stock tumbled on warnings of even higher AI spending despite strong growth. Microsoft also reported surging Azure revenue but signalled massive upcoming infrastructure build-outs. The divergence highlights uncertainty over how quickly AI investment will translate into monetisable products and services.

Why this is worth your attention: With AI capex now rivaling the GDP of small nations, markets are increasingly demanding evidence of real returns rather than promises of future AI scale.

Original link: https://www.ft.com/content/86bb929f-e0ec-4e50-b429-e9259c3834e2

 

Big tech’s big gamble

OCT 28 2025 | Financial Times

This analysis argues that Big Tech’s unprecedented AI capex—doubling over the past 18 months—is a massive strategic bet that could reshape industry economics. While Alphabet, Amazon, Meta and Microsoft are pouring extraordinary sums into data centres, chips and compute, Apple is notably holding back, relying instead on device-driven AI benefits. Charts throughout the piece show rising capex, falling free cash flow and looming depreciation pressures, illustrating the financial strain of the AI arms race.

Why this is worth your attention: The sustainability of AI investment is becoming a decisive question for markets, with long-term profitability hinging on whether these colossal outlays generate durable competitive advantages.

Original link: https://www.ft.com/content/fb3486e5-9513-401e-b87f-9e3e9d77376a

 

In the AI boom, not all capex is created equal

OCT 30 2025 | Financial Times

Richard Waters argues that while Big Tech’s AI spending boom may appear indiscriminate, the underlying strategies vary significantly in risk and expected returns. Microsoft reports strong near-term demand with booked business aligned to chip lifecycles, while Meta is investing more speculatively, unable to clearly articulate future revenue streams. Oracle’s large RPO headline numbers obscure concentration risk in a single OpenAI deal. As depreciation rises and margins tighten, distinctions in pricing power, infrastructure efficiency and customer mix will become critical.

Why this is worth your attention: Investors will increasingly differentiate between sustainable AI infrastructure strategies and speculative bets — a distinction that may define the next phase of the AI market.

Original link: https://www.ft.com/content/53958078-be2e-43fc-ba70-13a68b5fddf1

 

Mark Zuckerberg goes all in on the AI YOLO trade

OCT 30 2025 | Financial Times

Meta’s CEO is pursuing an aggressive, high-stakes strategy to win the race for superintelligence, doubling planned capex to more than $100bn next year and triggering a $160bn drop in market value. The Lex column highlights that while the heavy spending introduces major risks, Meta’s cash generation and off-balance-sheet financing structures limit existential downside. Even if the bet fails, AI investment is already boosting the company’s core advertising business through higher ad volume and pricing.

Why this is worth your attention: Meta’s gamble encapsulates the broader AI arms race — enormous potential upside, limited short-term discipline and strategic bets that could reshape the competitive landscape.

Original link: https://www.ft.com/content/7ccd3431-b50c-4474-8d79-90464b8b0263

 

Meta readies $25bn bond sale as soaring AI costs trigger stock sell-off

OCT 30 2025 | Financial Times

Meta is preparing one of the year’s largest bond offerings as it seeks to finance surging AI infrastructure costs, including data centres and custom hardware. The announcement followed a sharp 11% share-price drop that erased $208bn in market value after Mark Zuckerberg warned of even larger spending in 2026 and beyond. The company has also raised substantial private credit to fund its Hyperion data centre, joining other tech giants tapping debt markets to fund record AI capex.

Why this is worth your attention: As AI spending reaches unprecedented levels, Big Tech’s reliance on debt signals how capital-intensive the AI era is becoming — raising questions about sustainability, returns and market concentration.

Original link: https://www.ft.com/content/120d2321-8382-4d74-ab48-f9ecb483c2a9

 

Nvidia becomes world’s first $5tn company

OCT 29 2025 | Michael Acton, Alex Rogers & Tim Bradshaw, Financial Times

Nvidia became the world’s first $5tn company after surging demand for its AI chips pushed revenue forecasts to unprecedented levels, with Jensen Huang revealing half a trillion dollars’ worth of orders booked over the next five quarters. The rally was further fuelled by signs that the Biden–Xi discussions may reopen access to the Chinese market, a critical gap since US export controls shut Nvidia out. Nvidia’s rapid rise — from $400bn pre-ChatGPT to multiple trillions today — reflects the global AI infrastructure race, though it is increasingly complicated by political risk, investor concentration and the company’s own investments in AI customers.

Why this is worth your attention: Nvidia’s valuation cements its role at the centre of the global AI economy — but also exposes markets, supply chains and geopolitics to the fate of a single company.
Original link: https://www.ft.com/content/62933c70-261c-4b7a-a045-3f9f9cceccd7

 

Nvidia to invest $1bn in Nokia as chip giant extends deal spree

OCT 28 2025 | Kieran Smith & Michael Acton, Financial Times

Nvidia will invest $1bn in Nokia, taking a 2.9% stake and cementing a strategic partnership to integrate AI into next-generation 5G and 6G telecoms networks. The deal sent Nokia shares up 21% to a decade high, validating its pivot toward AI-powered network technology and cloud-driven services. Nvidia will supply advanced Blackwell-based systems to modernise network infrastructure, while positioning itself as a key player in the emerging market for AI-RAN, expected to reach $200bn by 2030.

Why this is worth your attention: The partnership extends Nvidia’s influence beyond data centres into the world’s communications backbone, shaping how future wireless networks are built and which countries control them.
Original link: https://www.ft.com/content/075c6d4e-7319-45c7-9de8-49da908aa594

 

Nvidia supplier SK Hynix has already sold next year’s chips on AI boom

OCT 29 2025 | Song Jung-a, Financial Times

SK Hynix reported record quarterly profits driven by soaring demand for high-bandwidth memory (HBM) chips used in AI data centres, announcing that its entire 2026 production — across DRAM, NAND and HBM — is already sold out. The company is capitalising on its dominant position in the HBM market, strengthened further by a preliminary deal to supply chips for OpenAI’s $500bn Stargate project. With inventories tight, competitors limited and inference workloads exploding, the company plans major capex increases and expects continued supply-demand imbalances well into 2026.

Why this is worth your attention: Memory has become a critical bottleneck in AI compute, and SK Hynix’s fully booked pipeline signals both the intensity of AI infrastructure demand and growing geopolitical and supply-chain pressure.
Original link: https://www.ft.com/content/64e7dfb0-b32c-417e-b411-efc9098e1e3a

 

OpenAI restructuring pushes Microsoft’s valuation above $4tn

OCT 28 2025 | Melissa Heikkilä, Tim Bradshaw, George Hammond, Stephen Morris & Cristina Criddle, Financial Times

OpenAI finalised a sweeping restructuring that converts it into a for-profit entity, awarding Microsoft a 27% stake valued at $135bn and propelling the tech giant beyond a $4tn market cap. The new OpenAI Group allows traditional equity ownership while maintaining mission oversight through the OpenAI Foundation, which holds 26% of the business and retains the power to hire and fire board members. The deal also secures Microsoft long-term access to OpenAI models, a $250bn cloud-spend commitment, and clearer terms for an eventual IPO expected as soon as next year.

Why this is worth your attention: The agreement reframes one of the world’s most influential AI organisations, blending profit motives with governance constraints — and deepening Microsoft’s strategic lock-in at the heart of frontier AI.
Original link: https://www.ft.com/content/74d537c6-bd80-4797-9897-3d5455dfc414

 

Qualcomm shares jump as it launches new AI chip to rival Nvidia

OCT 27 2025 | Michael Acton, Financial Times

Qualcomm’s shares rose up to 20% after it announced its first data-centre AI processors, targeting the lucrative accelerator market dominated by Nvidia. Its first major customer, Saudi-backed Humain, plans to deploy 200MW of Qualcomm’s new hardware from 2026 as part of its sovereign AI ambitions. The annual chip-launch cadence, new memory architectures and rack-scale liquid-cooled designs signal Qualcomm’s bid to compete directly in the core of AI infrastructure while capitalising on global investment in national AI capabilities.

Why this is worth your attention: Rival chipmakers are mounting increasingly credible challenges to Nvidia’s dominance, potentially diversifying the global AI hardware ecosystem and reshaping power balances in the data-centre market.
Original link: https://www.ft.com/content/2b6f779a-1e6f-4eb5-85a0-0c75fe450215

 

Silicon Valley called — the 1990s are back

OCT 27 2025 | Rana Foroohar, Financial Times

Rana Foroohar reflects on the parallels between today’s AI boom and the 1990s dotcom era, noting identical exuberance in San Francisco’s culture and investor psychology. While AI’s long-term impact is expected to be far deeper than the early internet, she highlights key differences: AI is vastly more capital-intensive, deeply constrained by energy infrastructure, and increasingly funded by equity rather than debt. Yet she warns that speculative behaviour — including outperformance by negative-earnings small-cap companies — echoes patterns that preceded past market corrections.

Why this is worth your attention: The sustainability of the AI boom hinges on energy, capital flows and actual productivity gains — and history suggests bubbles can burst even when built by profitable giants.
Original link: https://www.ft.com/content/834487ce-2357-40c4-bf45-34562e522755

 

US stocks ride AI hype and trade truce to 6-month winning streak

OCT 31 2025 | George Steer, Peter Wells & Emily Herbert

US equities extended their strongest rally in four years, powered by an AI-driven investment boom, easing interest rates and signs of a temporary US–China trade détente. The S&P 500 reached its 36th record high of the year, while the Nasdaq logged seven straight months of gains. Investor concerns about an AI bubble were overshadowed by enormous capital-expenditure announcements from Alphabet, Amazon, Meta and Google — over $112bn in the past quarter alone — and merger activity exceeding $80bn in a single day. A Federal Reserve rate cut added tailwinds, as did Amazon’s 12% surge following blockbuster cloud results and Meta’s record-breaking $30bn bond sale to fund AI infrastructure.

Why this is worth your attention: Markets are betting heavily that AI’s economic impact will be both durable and transformative — but the scale and speed of investment raise the stakes for any future correction.
Original link: https://www.ft.com/content/b78abb32-223a-45b2-a999-133e4273aa52

 

SoftBank sells Nvidia stake for $5.8bn to fund AI investments

November 2025 | David Keohane, Financial Times

SoftBank has offloaded its entire Nvidia stake for $5.8bn as Masayoshi Son accelerates his push into artificial intelligence. The sale of 32mn shares comes alongside surging group profits, boosted by gains in OpenAI and PayPay, and marks another step in Son’s effort to reshape SoftBank into a central AI-era conglomerate. Executives said the move was prompted by the scale of new AI commitments — including more than $30bn earmarked for OpenAI — rather than concerns about Nvidia itself.

Why this is worth your attention: This divestment highlights how aggressively SoftBank is positioning itself at the centre of the AI economy, even if it means cashing out of the world’s most valuable chipmaker.

Original link: https://www.ft.com/content/5f04e0e2-7a9c-4885-92a3-9ed5242c7d38

 

Tech stocks suffer worst week since April after $800bn AI sell-off

November 2025 | Tim Bradshaw & George Steer, Financial Times

AI-exposed giants including Nvidia, Meta, Palantir and Oracle shed nearly $1tn in market value over five days, marking the Nasdaq’s worst week since the April tariff shocks. Investors retreated amid concerns over high valuations, soaring AI-related capital expenditure and signs of softer US labour demand. Charts on pages 1–2 of the report illustrate the steep declines, with Nvidia alone down roughly $350bn in market cap.

Why this is worth your attention: After years of exuberance, markets are signalling that AI’s breakneck expansion comes with macroeconomic and financial fragility — particularly as companies take on debt to fund model training and data-centre buildouts.

Original link: https://www.ft.com/content/8c6e3c18-c5a0-4f60-bac4-fcdab6328bf8

 

Tesla shareholders approve Elon Musk’s $1tn pay deal

November 2025 | Stephen Morris, Financial Times

Tesla investors have backed the largest remuneration package in corporate history, granting Elon Musk a potential $1tn payout tied to extreme performance targets. Despite governance concerns and opposition from major proxy advisers, shareholders voted 75% in favour, swayed by fears Musk might walk away. At the meeting, Musk argued that Tesla’s AI and robotics ambitions — including humanoid robots and autonomous driving subscriptions — justify the scale of the incentive.

Why this is worth your attention: The vote underscores how central Musk is to Tesla’s identity and AI strategy, and how far investors are willing to go to keep him in command as the company pivots from cars to robotics.

Original link: https://www.ft.com/content/47ac6557-7ded-4f63-a767-173459a4df68

 

The Bond Market Crashes the AI Party

November 2025 | Financial Times

A sharp sell-off in global bond markets is colliding with the AI-driven equity boom, creating new instability in financial markets. Rising yields have begun to pressure high-growth technology stocks, which had powered much of the year’s market gains. Investors who had treated AI as a near-risk-free growth story are now recalibrating in the face of tightening financial conditions, as borrowing costs filter through corporate balance sheets and dampen risk appetite.

Why this is worth your attention: AI-exposed equities have become systemically important to market sentiment; volatility in bond markets shows how fragile AI-linked valuations may be when macro conditions turn.

 

Snap shares jump after $400mn deal with AI start-up Perplexity

November 2025 | Financial Times

Snap’s stock surged following news of a $400mn strategic partnership with Perplexity, which will integrate the AI search start-up’s generative technologies into Snap’s products. The move is framed as a bid to rebuild user engagement and expand beyond advertising into AI-powered discovery tools. Investors applauded the alignment between Snap’s youthful user base and Perplexity’s conversational search capabilities.

Why this is worth your attention: Traditional social platforms are increasingly turning to generative AI partnerships to stay relevant — a sign that foundational AI capabilities are becoming a prerequisite for competing in consumer tech.

 

Nokia Splits AI Business Into Separate Unit After $1bn Nvidia Investment

November 2025 | Financial Times

Nokia announced a major restructuring that separates its AI-driven cloud and data-centre operations from its core telecoms networks, creating a new growth-focused infrastructure division. The move follows Nvidia’s $1bn investment and is part of chief executive Justin Hotard’s strategy to reposition Nokia as a key Western provider of secure, AI-enhanced connectivity. The company also set new long-term profit targets and is exploring the future of several non-core businesses while forming a dedicated defence unit. Nokia’s share price initially surged on the Nvidia deal before retracing amid broader AI-sector volatility.

Why this is worth your attention: The restructuring signals how legacy telecoms firms are reinventing themselves around AI infrastructure, reflecting geopolitical and commercial demand for secure, Western-aligned data and networking systems.

Original link: https://www.ft.com/content/2801df7d-1692-4788-bad7-58d6a4885d8d

 

Nvidia Earnings Show Profit Jumped 65% to $31.9 Billion

November 2025 | The New York Times

Nvidia reported quarterly profits of $31.9bn—up 65% year-on-year—driven by soaring demand for its A.I. data-centre chips, which now dominate roughly 90% of the market. Revenue hit $57bn, with the company forecasting $65bn next quarter as cloud providers and national governments continue large-scale A.I. infrastructure buildouts. Investors remain uneasy about circular investment structures, such as Nvidia’s multi-billion-dollar deals with OpenAI and Anthropic, which critics say blur customer-supplier boundaries. Competitive pressure is rising from AMD, Qualcomm and China’s domestic chip sector.

Why this is worth your attention: Nvidia’s results confirm A.I. compute as the defining growth engine of global tech—but also highlight systemic risks from concentrated supply chains, circular financing and geopolitical tensions.

Original link: https://www.nytimes.com/2025/11/19/technology/nvidia-earnings.html

 

Nvidia Shares Fall on Signs Google Gaining Upper Hand in AI

November 2025 | Financial Times

Nvidia’s stock dropped sharply—erasing $115bn in market value—amid investor speculation that Google’s TPU-powered Gemini 3 model had eclipsed OpenAI’s GPT-based systems. Analysts likened the moment to the DeepSeek shock earlier in the year, suggesting Google may be emerging as the AI performance leader. The sell-off also hit companies tightly linked to Nvidia, including Super Micro Computer, CoreWeave and Nebius. Reports indicated Google was pitching TPUs directly to enterprise customers, potentially threatening Nvidia’s dominance in training and inference infrastructure.

Why this is worth your attention: The episode shows how quickly AI-market leadership can shift—and how vulnerable Nvidia is to platform-level competition from vertically integrated rivals like Google.

Original link: https://www.ft.com/content/7d0cd87e-99b0-4411-b54f-f5b239af8e76

 

Nvidia Shrugs Off ‘AI Bubble’ Anxiety With Bumper Chip Demand

November 2025 | Financial Times

Nvidia again beat expectations with quarterly revenues of $57bn and a forecast of $65bn for the following quarter, calming fears of an AI investment bubble. Chief executive Jensen Huang emphasised enduring demand, even as the company noted— for the first time— that customers’ ability to secure capital and energy for A.I. data-centre buildouts could constrain growth. Nvidia’s extensive partnerships, including investments in OpenAI and Anthropic, have deepened its entanglement with major clients, raising concerns about circular deal structures. The company remains shut out of China due to U.S. export controls, but demand for its Blackwell and Blackwell Ultra chips remains strong.

Why this is worth your attention: The results reinforce Nvidia’s central role in global AI infrastructure while highlighting emerging structural bottlenecks—particularly energy and capital availability—that could shape the pace of future A.I. expansion.

Original link: https://www.ft.com/content/24c50fe0-3ea4-4347-851c-8635d6ef02c1

 

Nvidia’s AI Supremacy Is a Weapon That Cuts Both Ways

Nov 25 2025 | Financial Times (Dan McCrum)

The FT argues that Nvidia’s dominance in AI chips is becoming a strategic and political vulnerability as well as a commercial strength. Its market power has pushed customers and governments to accelerate efforts to build alternatives, whether through proprietary accelerators, national semiconductor plans, or open hardware initiatives. The article notes that cloud providers are increasingly uneasy about Nvidia’s pricing leverage, while Washington and Beijing both see dependency on the company as a national risk. As AI workloads surge, the world’s reliance on Nvidia’s hardware has created a single point of failure in the global compute stack.

Why this is worth your attention: Nvidia’s extraordinary success is now triggering competitive, regulatory and geopolitical blowback—shaping the long-term sustainability of its leadership in the AI hardware ecosystem.
Original link: https://www.ft.com/content/9b7a434c-8f21-4dad-a17e-e0f7c4d53c92

 

OpenAI Needs to Raise at Least $207bn by 2030 So It Can Continue to Lose Money, HSBC Estimates

Nov 26 2025 | Financial Times (Bryce Elder)

HSBC analysis shows OpenAI faces a $207bn funding shortfall by 2030 due to the massive compute rental obligations it has contracted with Microsoft and Amazon—together totalling up to 36 gigawatts of cloud capacity. The broker models explosive revenue growth, projecting 3bn users and rising subscription adoption by the end of the decade, but finds that costs rise in parallel. Charts on pages 3–5 illustrate OpenAI’s estimated P&L, free cash flow deficit, and the widening gap between revenue and compute spending. Even with optimistic market-share assumptions and billions in debt facilities, the analysis concludes OpenAI will remain heavily loss-making.

Why this is worth your attention: The piece highlights how frontier AI economics may be structurally unprofitable without unprecedented ongoing capital inflows—raising questions about sustainability, investor appetite and systemic dependence on hyperscaler credit.
Original link: https://www.ft.com/content/23e54a28-6f63-4533-ab96-3756d9c88bad

 

Physical Intelligence, a Specialist in Robot A.I., Raises $400 Million

Nov 4 2024 | The New York Times (Michael J. de la Merced)

Robotics start-up Physical Intelligence raised $400mn from investors including Jeff Bezos, Thrive Capital, Lux Capital and OpenAI, valuing the company at about $2bn. Its goal is to build a “generalist brain” for robots—software capable of powering many robot types rather than niche, machine-specific controllers. The company has amassed large datasets to train its model π0, which in tests has enabled robots to fold laundry, flatten boxes and clear tables. Founders say the field could see a ChatGPT-style breakthrough, though the timeline is uncertain.

Why this is worth your attention: A flexible, general-purpose robotics brain could unlock real-world automation at scale, transforming manufacturing, logistics and domestic robotics.
Original link: https://www.nytimes.com/2024/11/04/business/dealbook/physical-intelligence-robot-ai.html\

 

Queueing Not a Virtue When It Comes to Building Data Centres

Nov 23 2025 | Financial Times (Lex Column)

The FT Lex column warns that hyperscaler-led data-centre expansion in the UK is being throttled by decade-long waits for grid connections. A chart on page 2 shows the UK facing average waits of 8–10 years—far longer than Italy or the Nordics—putting investment at risk and encouraging speculative queue-jumping. Regulators are considering stricter entry requirements and allowing operators to build “micro-grids” with their own generators and batteries to bypass backlog constraints. While this could accelerate AI-infrastructure deployment, it may slow decarbonisation if operators rely initially on gas-fired power.

Why this is worth your attention: Grid bottlenecks, not capital, are emerging as the biggest obstacle to national AI ambitions—reshaping where hyperscalers will build the next generation of compute.
Original link: https://www.ft.com/content/0656d3af-4cf3-4d26-9a53-b5c40f8ca0a8

 

Saudi Arabia Leads $900mn Funding Round in Luma AI as U.S. Ties Deepen

Nov 19 2025 | Financial Times (Daniel Thomas, Ahmed Al Omran)

Saudi Arabia’s Public Investment Fund, through its AI-focused venture Humain, led a $900mn round for U.S. generative-video start-up Luma AI, valuing it above $4bn. The deal coincides with reaffirmed U.S.–Saudi economic alignment and a pledge by Crown Prince Mohammed bin Salman to invest $50bn into AI “in the short term.” Luma is training large-scale “world models” that learn from video, audio and robotic data to simulate physical environments. The investment is supported by Project Halo, one of the world’s largest data-centre clusters being built in the kingdom.

Why this is worth your attention: The Gulf’s deepening investment in frontier AI positions Saudi Arabia as a strategic compute hub—and strengthens U.S.–Saudi technological interdependence.
Original link: https://www.ft.com/content/2009b57c-b12d-439d-bc94-9502fd8aaa1f

The AI Cycle Will Crack First in Asia

Nov 26 2025 | Financial Times (June Yoon)

The FT argues that early signs of an AI-market slowdown will appear in Asia’s semiconductor supply chain before U.S. tech giants feel the impact. Charts on page 2 show SK Hynix’s entire high-bandwidth-memory production sold out through 2026 and TSMC’s advanced packaging capacity almost fully booked by Nvidia. But historically, chip shortages often precede sharp demand corrections as customers over-order during scarcity. Unlike diversified U.S. tech firms, Korean and Taiwanese chipmakers depend heavily on AI-related products, making them especially exposed to a cyclical reversal.

Why this is worth your attention: Semiconductor bottlenecks will be the earliest and clearest indicator of whether the AI investment boom is overheating—and which economies face the sharpest downside risk.
Original link: https://www.ft.com/content/1f112411-2ea6-41f0-941d-982d99792eea

 

The Warning Signal From Bitcoin’s Fall

Nov 26 2025 | Financial Times (Robert Armstrong)

The column argues that Bitcoin’s sudden 30% decline is a barometer of rising risk aversion across markets, particularly tech sectors priced for perfection. Armstrong notes that Bitcoin, while often dismissed as speculative, reliably signals tightening liquidity and investor anxiety. He connects the sell-off to concerns about overextended valuations in AI-linked equities, which have experienced sharp volatility. Charts in the article show Bitcoin’s drop mirroring de-risking across high-growth technology names during the same window.

Why this is worth your attention: Bitcoin’s slump highlights fragility beneath the AI-driven market boom — signalling that investor sentiment toward high-growth tech may be turning.
Original link: https://www.ft.com/content/2eac63b0-7d6f-49b9-ab2e-165474f3bc11

 

UK Government Will Buy Tech to Boost AI Sector in £100mn Growth Push

Nov 20 2025 | Financial Times (Chris Smyth, Melissa Heikkilä)

The UK government will guarantee purchases of British-made AI chips under a “first customer” model inspired by Covid vaccine procurement. Science secretary Liz Kendall argues this approach will give domestic hardware start-ups certainty needed to scale and compete with the US and China. The plan is part of a wider AI strategy aimed at strengthening Britain’s life sciences, finance and defence sectors through AI adoption. Page-2 details show concerns from TechUK that advance market commitments could distort competition if not designed carefully.

Why this is worth your attention: Government-backed early demand could reshape the UK’s ability to build sovereign AI hardware capacity — a critical foundation for national competitiveness.
Original link: https://www.ft.com/content/d4d9d091-5fd7-4c20-a3ca-f68f580f7d6b

 

US Tech Stocks Notch Biggest Jump in Six Months as Rate-Cut Bets Fuel Rebound

Nov 24 2025 | Financial Times (Rachel Rees, George Steer)

US tech stocks rallied strongly after senior Federal Reserve officials signalled support for a December interest-rate cut. The Nasdaq rose 2.7%, its best performance since May, with Alphabet, Tesla and Broadcom leading gains. Analysts quoted in the article say that falling fears over inflation and softer labour-market conditions have revived risk appetite after weeks of volatile selling. A chart on page 2 shows Bitcoin rebounding slightly after a steep decline, mirroring improved sentiment across tech-exposed assets.

Why this is worth your attention: The rebound highlights how rate expectations continue to dominate pricing of AI-linked equities — revealing sensitivity to macro signals even amid strong sector fundamentals.
Original link: https://www.ft.com/content/bea68366-f644-44f8-a38d-0c041e4a646d

 

US Tech Stocks Slide as Jolt of Volatility Hits Wall Street

Nov 20 2025 | Financial Times (George Steer, Emily Herbert, Rachel Rees)

Markets whipsawed as Nvidia’s strong earnings initially boosted tech stocks before concerns over high AI valuations triggered a sharp reversal. The Nasdaq swung by more than 2%, the widest range since April, while the VIX spiked from 20 to 28 in two hours. Page-2 charts show how Nvidia’s early rally gave way to rapid selling, dragging down Oracle, Palantir and Robinhood. Analysts argue that investor unease is rising over the scale of Big Tech’s AI infrastructure spending relative to near-term returns.

Why this is worth your attention: The session underscores how AI-driven market exuberance is colliding with valuation reality — making the sector increasingly prone to sudden volatility.
Original link: https://www.ft.com/content/2c0bb19c-7c64-453f-95ab-c0f54a184089

 

‘I’m Nervous’: Klarna Founder Challenges Trillion-Dollar Spending on AI

Nov 22 2025 | Financial Times (Richard Milne)

Klarna founder Sebastian Siemiatkowski warns that the global AI investment boom may be misallocating capital on a historic scale. He argues governments and companies risk “trillion-dollar mistakes” if they continue pouring money into massive data-centre buildouts without clearer evidence of productivity gains. Page-2 figures show Klarna’s own AI productivity claims being questioned, prompting scrutiny of whether corporate enthusiasm reflects sustainable economics or competitive FOMO. Despite support for AI innovation, Siemiatkowski urges regulators and investors to consider systemic risks.

Why this is worth your attention: Growing scepticism from industry leaders suggests the AI capex super-cycle may face a credibility test — with implications for financial stability and national industrial strategy.
Original link: https://www.ft.com/content/355f1c1d-1329-4cca-a25a-1f6455d16f7d

 

AI Bubble Trouble Talk Is Overblown

Nov 21 2025 | Financial Times (Lex Column)

The FT’s Lex column argues that fears of an AI bubble are exaggerated, emphasizing that unlike previous tech booms, today’s demand is backed by measurable infrastructure needs. Charts on page 2 show Nvidia’s sales growth, cloud-provider capex, and AI hiring data continuing to rise despite market volatility. The column notes that while valuations are stretched, corporate adoption pipelines remain strong, and capital spending resembles long-term platform investment rather than speculative excess. Infrastructure, not hype, is driving most spending.

Why this is worth your attention: The column suggests investors risk misreading short-term volatility as structural weakness — potentially overlooking the durability of AI infrastructure growth.
Original link: https://www.ft.com/content/4ea4a7b4-8b43-4df7-bf02-043bffdea73e

 

AI Roils the Memory Market and Japan’s Start-Ups Level Up

Nov 25 2025 | Financial Times (Leo Lewis)

Japan’s memory-chip makers and AI start-ups are experiencing a surge in demand as global data-centre construction accelerates. The article highlights how Japanese suppliers of DRAM, NAND and advanced packaging technologies are benefiting from tight supply and soaring prices. At the same time, a new wave of domestic AI start-ups is attracting record venture funding, supported by government incentives. Page-3 graphics show memory price volatility and Japan’s growing share of AI-related semiconductor exports.

Why this is worth your attention: Japan is emerging as a key beneficiary of the global AI compute boom — strengthening its semiconductor sector after years of decline.
Original link: https://www.ft.com/content/48df7b8c-7fc3-4003-a2f3-d00a737f40c3

 

Amazon Joins Big Tech Bond Rush With $12bn Debt Sale

Nov 21 2025 | Financial Times (Eric Platt, Patrick Temple-West)

Amazon issued $12bn in bonds to help finance its expanding AI-related data-centre investments, joining Apple and Microsoft in tapping investor demand for high-grade corporate debt. The company secured strong order books, with maturities ranging from 3 to 40 years. Analysts say the proceeds will support Amazon Web Services’ escalating capex requirements as it races to meet surging AI infrastructure demand. Page-2 charts show record-high bond issuance volumes from US tech giants.

Why this is worth your attention: The bond sale underscores how AI is reshaping corporate finance, driving unprecedented long-duration borrowing to fund hyperscale compute infrastructure.
Original link: https://www.ft.com/content/8fca03bb-70e3-4a74-9ee6-868cd79fd826

 

Chart Crimes Revisited: The ‘AI Bubble’ Bubble

Nov 19 2025 | FT Alphaville (Bryce Elder)

In this Alphaville column, Elder argues that claims of an “AI bubble” are themselves overblown — a bubble in bubble-talk. Using Google Trends data, he shows on page 2 that searches for “AI bubble” have collapsed 85% since August, illustrating how narratives can swing faster than fundamentals. Elder warns that Google Trends is a poor forecasting tool and is often misused to justify claims about market sentiment. He notes that AI pessimism appears to have peaked just as AI stocks resumed their rally.

Why this is worth your attention: The column cautions investors against relying on hype-cycle chatter or search-trend charts — which may distort, rather than clarify, real market dynamics.
Original link: https://www.ft.com/content/f872b97e-3630-43a0-bc1a-074bb5c6a3ca

 

China Leapfrogs US in Global Market for ‘Open’ AI Models

Nov 26 2025 | Financial Times (Melissa Heikkilä)

A joint MIT–Hugging Face study finds China has overtaken the US in downloads of new open-weight AI models, reaching 17% of global share. The shift is driven by Chinese developers such as DeepSeek and Alibaba, whose rapid-release, lower-compute models are challenging American frontier-model dominance. Extensive charts (pages 2–5) show China leading in video-generation models and gaining traction with unaffiliated global developers. The report warns that Chinese open models embed CCP biases and political constraints while still gaining global reach.

Why this is worth your attention: China’s rise in open-model ecosystems challenges US technological influence and could reshape global AI standards, security dynamics and developer communities.
Original link: https://www.ft.com/content/931c8218-a9d7-4cbd-8b08-27516637ff41

 

Christian tech group tests investors’ faith in AI deals on Wall Street debut

November 2025 | George Steer, Financial Times

Gloo, a Colorado-based company building “values-aligned” generative AI tools for churches and Christian community groups, made a cautious Wall Street debut after raising $73mn in a scaled-back IPO. Shares briefly rose 5 per cent before slipping, reflecting a volatile tech market and investor scepticism about the group’s rapid acquisition-driven expansion and steep losses. Gloo positions its proprietary, Bible-trained foundational models as a fix for hallucinations in mainstream AI systems, while executives emphasise a mission-driven strategy. But heavy cash burn, a down-market pricing of $8 per share, and concerns voiced by hedge funds underscore doubts about sustainability even as Gloo pushes for profitability.

Why this is worth your attention: Faith-oriented AI is emerging as a niche but politically salient market, and Gloo’s debut highlights how mission-driven tech firms face the same scrutiny on profitability, data accuracy and scale as mainstream AI players.
Original link: https://www.ft.com/content/e567cb36-a4bd-47c9-9fa7-9550dfa434c1

 

Could Washington pop the AI bubble?

November 2025 | Financial Times (Due Diligence)

Despite blockbuster earnings from Nvidia and a frenzy of private-market funding, political dynamics in Washington are beginning to turn against the AI investment boom. President Trump’s promotion of AI partnerships with Saudi Arabia and other allies — and his openness to limiting state-level AI regulations — has fuelled tension inside the Republican Party, where rising unemployment, energy costs and community resistance to data-centre construction are sharpening scepticism. Legislators warn that a permissive regulatory environment could become a political liability, even as venture capital firms push for lighter rules. The newsletter suggests that as political backlash grows, regulatory or electoral shifts could test whether current AI valuations are sustainable.

Why this is worth your attention: Wall Street’s most crowded trade increasingly depends on a favourable political climate; if Washington cools on AI, investor sentiment and capital formation could shift quickly, threatening the momentum behind data-centre build-outs and corporate AI adoption.
Original link: https://www.ft.com/content/53ad4b70-de31-4a20-8a12-71d4da529ba8

 

Elon Musk’s xAI nears $230bn valuation in fundraising deal

November 2025 | George Hammond, Hannah Murphy & James Fontanella-Khan, Financial Times

xAI is close to securing $15bn in fresh capital at a $230bn valuation — twice what the group was worth when acquiring X earlier this year — marking one of the most aggressive expansions in the AI sector. The deal caps months of internal turmoil, including the replacement of top finance executives and Musk’s public denials about fundraising. xAI is racing to build out data-centre infrastructure such as its Colossus facility and a planned 500MW project in Saudi Arabia, while aggressively developing its Grok chatbot and integrating AI capabilities into X and Tesla. Secondary-market trading suggests investor appetite remains strong, even amid heightened competition from OpenAI, Anthropic and other well-funded rivals.

Why this is worth your attention: The staggering valuation signals how capital is consolidating around a handful of billionaire-led AI ventures, raising strategic questions about governance, geopolitical partnerships and the long-term sustainability of debt-fuelled AI infrastructure build-outs.
Original link: https://www.ft.com/content/b13c6f36-7810-42cd-af8e-526828b04682

 

Europe’s tech sector is evolving fast. Is it fast enough?

November 2025 | John Thornhill, Financial Times

Europe’s tech founders are demonstrating unprecedented ambition, with start-ups like Lovable scaling rapidly and a record number of European unicorns now emerging. Yet the US is once again pulling ahead, fuelled by colossal AI investments from Microsoft, Alphabet, OpenAI and Meta, whose combined spending dwarfs the continent’s. The EU has proposed regulatory streamlining — including delays to AI Act provisions — but researchers argue deeper structural changes are needed: a unified corporate regime, consistent single-market enforcement, improved access to growth capital and policies to attract top AI researchers. Critics warn that Europe risks technological dependence on US companies for cloud, defence and satellite capabilities.

Why this is worth your attention: Europe is at an inflection point: without bold structural reforms, it may miss the next wave of AI-driven industrial development and deepen its reliance on foreign technology giants.
Original link: https://www.ft.com/content/6e8706c8-6227-489d-9e1b-d5205697d4e7

 

FirstFT: Nvidia shrugs off ‘AI bubble’ concerns with bumper chip sales

November 2025 | Gordon Smith, Irwin Cruz & Benjamin Wilhelm, Financial Times

Nvidia once again outperformed expectations, reporting a 62 per cent year-on-year revenue surge to $57bn and forecasting an even stronger quarter ahead. CEO Jensen Huang dismissed warnings of an AI bubble, insisting demand remains structurally robust across sectors racing to adopt AI. The results pushed global tech markets higher and reinforced Nvidia’s status as the backbone of the AI compute economy. Analysts highlighted the unprecedented scale and speed of Nvidia’s growth, with its performance increasingly interpreted as a proxy for the health of the global AI ecosystem.

Why this is worth your attention: Nvidia’s momentum indicates that AI hardware demand remains far ahead of supply — a critical signal for investors, policymakers and rivals navigating concerns over valuations and infrastructure bottlenecks.
Original link: https://www.ft.com/content/bc966f3f-5cbb-4c70-b225-41d97e8a2dbf

 

Google is a near-$4tn monument to monopoly power

November 2025 | Lex Column, Financial Times

Google’s parent Alphabet has added $1.3tn in market value since an antitrust ruling in September determined its monopoly was less threatening in the age of AI — freeing the company to compete aggressively. Despite fears that ChatGPT-style tools would erode Google’s dominance, search volumes have risen and monetisation remains resilient. The launch of Google’s Gemini 3 model, which outperformed OpenAI on several benchmarks including on-screen understanding, has strengthened perceptions that Google is no longer lagging in AI. Its monopoly-driven cash flow — $330bn over five years — allows sustained investment in proprietary chips, enterprise software and cloud expansion at a scale few rivals can match.

Why this is worth your attention: Google’s accelerating dominance challenges assumptions that generative AI would disrupt incumbents; instead, monopoly resources may be entrenching its power across both search and AI infrastructure.
Original link: https://www.ft.com/content/26584ead-1d6d-4775-96d7-492066837255

 

How to Hide From a Bubble

November 17, 2025 | Financial Times (Robert Armstrong)

Robert Armstrong explores market concerns raised by valuation expert Aswath Damodaran, who warns that the AI boom — particularly sky-high expectations for companies like Nvidia — may be dangerously inflated. With correlations across asset classes rising, traditional diversification offers limited protection if tech valuations collapse. Historical analysis shows mixed outcomes: some sectors outperform in downturns, but broad market sell-offs can leave few safe havens besides cash and bonds. Armstrong, while less pessimistic, notes that today’s infrastructure boom tied to AI data-centre growth broadens economic exposure to any slowdown.

Why this is worth your attention: The AI investment surge is now deeply entwined with real-economy activity; if the bubble bursts, the ripple effects could be far wider than past tech crashes.
Original link: https://www.ft.com/content/bf06e291-44b6-4116-8319-a98e35d7cd48

 

US and European Stocks Recover After Tech-Driven Sell-Off

November 2025 | Emily Herbert, William Sandlund & Arjun Neil Alim, Financial Times

Markets stabilised following a sharp tech-led fall, with strong US economic data helping investors “buy the pullback.” The S&P 500 and Nasdaq rebounded modestly, while European indices also recovered. Earlier weakness stemmed from concerns about overheated AI-linked valuations, particularly in Asian chipmakers that had rallied on surging demand from US AI developers. Government bond yields climbed as traders shifted back into risk assets.

Why this is worth your attention: AI-exposed stocks now drive global market swings, and even modest valuation concerns can ripple quickly across geographies and asset classes.

Original link: https://www.ft.com/content/09c64d3f-3cc5-4138-9310-e85acca0a7ce

 

US Stocks Slide as Investors Fret Over High Valuations for AI Companies

November 2025 | George Steer & Emily Herbert, Financial Times

US equities dropped as investors questioned the sustainability of lofty AI-driven valuations. The Nasdaq fell 2%, with megacap tech names and high-multiple AI firms leading declines. Executives from major Wall Street banks warned that markets were increasingly vulnerable to a correction after a year of outsized gains. Bitcoin and gold also retreated, while several high-profile tech names — including Tesla, CrowdStrike and Palantir — were hit by sharp single-day losses.

Why this is worth your attention: The AI surge has concentrated market gains in a narrow set of companies; any shift in sentiment risks amplified volatility across the broader financial system.

Original link: https://www.ft.com/content/a07c97d6-0780-4c3c-abc6-246fe19e5c5e

 

Why Nvidia should be glad to see the back of SoftBank

November 2025 | Lex

SoftBank has sold its entire $5.8bn stake in Nvidia, raising eyebrows given Nvidia’s centrality to the AI boom. While the optics appear negative, the column argues Nvidia may benefit indirectly: SoftBank is likely to reinvest the proceeds into new AI ventures, which would in turn demand more Nvidia chips. At Nvidia’s high valuation multiples, even modest reinvestment could theoretically add billions in enterprise value.

Why this is worth your attention: The sale highlights how deeply interconnected AI capital flows have become—money exiting Nvidia can still return to it through market dynamics.

 

‘Best way to describe the market is bonkers’

November 2025 | Robin Wigglesworth

This analysis details the extraordinary surge in hyperscaler capital expenditure, with Amazon, Microsoft, Alphabet, Meta, and Oracle projected to invest over $1.5tn in AI and cloud infrastructure by 2027. Barclays’ research notes massive power-grid strain, soaring costs, and capacity shortages as companies race to build data centres. Industry executives describe a feverish, overheated environment with demand outpacing labour, utility capacity, and physical space.

Why this is worth your attention: The unprecedented scale of AI infrastructure spending raises concerns about sustainability, systemic risk, and long-term returns amid a potentially overheating sector.

 

‘The global data centre and AI build-out will be an extraordinary and sustained capital markets event’

November 2025 | Robin Wigglesworth

Drawing on JPMorgan’s extensive analysis, this article outlines a projected $5–7tn global build-out of AI and data-centre infrastructure. The bank expects 122GW of new data-centre capacity by 2030, constrained primarily by electricity shortages and long lead times for new power generation. Funding needs could exceed $1.4tn annually by 2030, requiring participation from investment-grade bonds, high-yield markets, securitisation, private credit, and government support.

Why this is worth your attention: The scale of AI infrastructure investment could reshape global capital markets, introduce systemic risks, and create winners and losers across financial sectors.

 

AI bubble: don’t throw the baby out with the bathwater

November 2025 | Simon Edelsten

Reflecting on the dot-com era, Edelsten argues that while AI stocks may be overheating, investors should avoid broad “bubble panic.” Many hyperscalers have strong cash flows unlike 2000’s speculative firms, though valuations vary widely. He recommends focusing on fundamentals, trimming aggressively priced holdings, and reallocating into overlooked sectors such as healthcare, consumer staples, or energy, where AI-driven productivity gains may quietly accrue.

Why this is worth your attention: A disciplined approach can help investors navigate AI volatility without abandoning long-term opportunities created by genuine technological transformation.

 

Anthropic Is on Track to Turn a Profit Much Faster Than OpenAI

November 2025 | The Wall Street Journal

Financial documents reviewed by the Wall Street Journal suggest Anthropic is on course to reach profitability by 2028 — far earlier than rival OpenAI. Anthropic’s enterprise-focused strategy and strong demand for its Claude AI systems have boosted its revenue trajectory and improved cost discipline compared with consumer-heavy competitors. The article highlights diverging business models as the two leading frontier labs scale up.

Why this is worth your attention: Shows how commercial strategy — not just technological advantage — may determine long-term sustainability in the AI arms race.
Original link: https://www.wsj.com/tech/ai/openai-anthropic-profitability-e9f5bcd6

 

Anthropic to invest $50bn in new US data centres

November 2025 | Financial Times

Anthropic will invest $50bn to build new AI data centres in New York and Texas, partnering with cloud start-up Fluidstack to secure long-term compute capacity. The company is expanding aggressively to keep pace with frontier-model training requirements, complementing major chip supply deals with Google, Amazon and Nvidia. The move comes amid concerns over an AI infrastructure bubble, as valuations and capital expenditure soar across the sector.

Why this is worth your attention: Highlights the escalating capital demands of leading AI labs, underscoring how compute access is becoming a strategic differentiator — and potential systemic risk — in the industry.
Original link: https://www.ft.com/content/aa55c835-df0e-4ddf-a405-c655a8af22f7

 

Asian markets’ reliance on AI boom raises ‘bubble’ fears

November 2025 | Financial Times

Asia-Pacific equity markets have surged on optimism over AI infrastructure spending, with Taiwan, South Korea and Japan benefiting most from demand for chips, servers and related components. Analysts warn, however, that the region’s outsized dependence on a small group of AI-related exporters leaves it vulnerable if valuations cool or US investment slows. The chart on page 2 highlights how semiconductor-heavy indices now dominate regional performance, amplifying concerns about concentration risk.

Why this is worth your attention: Shows how deeply global markets are tying their fortunes to the AI cycle, increasing the risk of a sharp correction if the boom falters.
Original link: https://www.ft.com/content/33d1b2c6-4dd1-4c6a-9bb7-1ce8b9033bd3

 

Elon Musk celebrates $1tn Tesla pay vote victory

November 2025 | Financial Times

Tesla shareholders approved Elon Musk’s unprecedented $1tn pay package with 75% support, cementing his control as the company pivots to AI, robotics and autonomous services. At the Texas meeting, Musk outlined a future centred on robotaxis, surgical-grade humanoid robots and massive chip manufacturing ambitions, including potential partnerships with Intel. Charts on page 4 illustrate how heavily Tesla’s valuation depends on extreme growth targets tied to Musk’s compensation milestones.

Why this is worth your attention: Reinforces how central Musk is to Tesla’s AI-driven strategy — and how much investor confidence is tied to his vision rather than current fundamentals.
Original link: https://www.ft.com/content/1062c6cb-c56d-4525-be57-28cadf273cea

 

From AI to ROI: some positive evidence

November 2025 | Financial Times

HSBC’s Mark McDonald highlights new field experiments from Zhejiang and Columbia universities showing that generative AI can meaningfully increase revenue inside a large retail platform. Pre-purchase AI assistants lifted sales by 16.3%, while hybrid AI-human customer-service systems boosted them by 11.5%. The chart on page 3 emphasises that AI-driven productivity gains are most pronounced for smaller sellers and inexperienced buyers.

Why this is worth your attention: Provides rare empirical evidence that AI can generate real-economy productivity gains — not just tech-sector hype.
Original link: https://www.ft.com/content/0439c98f-158a-4667-9bbe-2ef86e0ef670

 

How high are OpenAI’s compute costs? Possibly a lot higher than we thought

November 2025 | Financial Times (FT Alphaville)

An analysis by Bryce Elder, drawing on data shared by tech blogger Ed Zitron, suggests OpenAI’s inference costs on Microsoft Azure may be far larger than publicly understood. Charts on pages 2 and 4 show quarterly inference spending rising steeply, with more than $12.4bn spent over seven quarters—far exceeding OpenAI’s implied revenue over the same period. Though Microsoft and OpenAI declined to confirm the figures, the data implies that running costs dwarf income, raising questions about sustainability and pricing.

Why this is worth your attention: Highlights deep financial strain beneath AI’s glossy growth narrative, underscoring that today’s most advanced models may remain economically unsustainable without major efficiency breakthroughs.
Original link: https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e7d5

 

How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise

October 2025 | The New York Times

Jacqueline Gu and Cade Metz map out the circular financial structures underpinning OpenAI’s rapid expansion. The company has channelled billions from Microsoft, SoftBank, Oracle, CoreWeave, Nvidia and others back into those same firms via massive cloud and chip-purchase commitments. Diagrams on pages 3–5 reveal OpenAI’s reliance on multibillion-dollar data-centre deals, some exceeding $300bn, often financed through stock swaps, credit arrangements or equity stakes.

Why this is worth your attention: Shows how AI growth increasingly depends not just on technical breakthroughs but on unprecedented, tightly interlinked financial engineering that concentrates risk across the tech sector.
Original link: https://www.nytimes.com/interactive/2025/10/31/technology/openai-fundraising-deals.html

 

Investor angst over Big Tech’s AI spending spills into bond market

November 2025 | Financial Times

Bond investors are growing uneasy about the vast sums tech giants are borrowing to finance data-centre construction, with spreads on “hyperscaler” debt rising to their highest levels since April. Page 2 data shows Alphabet, Meta, Microsoft and Oracle collectively planning over $400bn of 2026 capex, with JPMorgan estimating total AI-infrastructure investment could exceed $5tn. Smaller players such as CoreWeave have seen sharp stock declines and surging credit-default-swap costs.

Why this is worth your attention: Demonstrates that the financial risks of the AI boom are now being priced into credit markets, raising questions over sustainability, leverage and the long-term economics of hyperscale AI.
Original link: https://www.ft.com/content/d2bf6c25-fb42-4f13-b81c-a72883632f50

 

Investors need to look beyond the ‘bragawatts’ in AI infrastructure boom

November 2025 | Financial Times

Waldemar Szlezak argues that AI infrastructure investment resembles past technological build-outs—messy, boom-prone, but foundational. While overshooting is inevitable, he emphasises that the winners will be those controlling scarce bottlenecks such as energy, land, grid interconnections and permitting. Page 2 historical comparisons note that 1990s fibre-optic overbuilds ultimately enabled the modern internet, despite bankruptcies among early investors.

Why this is worth your attention: Adds nuance to “AI bubble” fears by highlighting the long-term economic value of infrastructure—even when early capital outlays look excessive or unevenly rewarded.
Original link: https://www.ft.com/content/bf687d99-f373-4a41-8651-fca9dba83aa0

 

London becomes ‘quant’ powerhouse as traders rake in revenues

November 2025 | Financial Times

London has emerged as a leading centre for quantitative trading, with firms such as XTX, Qube and Quadrature generating more than £1bn each in annual revenues. Page 2 figures highlight explosive growth driven by machine-learning-driven strategies, specialised data centres in Finland and Iceland, and a strong pipeline of UK engineering graduates. The sector’s rise contrasts sharply with traditional investment banking, which younger talent increasingly avoids.

Why this is worth your attention: Shows how algorithmic trading and AI are reshaping global finance—and how the UK is securing a competitive edge through talent, regulation and technical infrastructure.
Original link: https://www.ft.com/content/8a6502c3-f244-4b61-880b-b20cf03299cc

 

Michael Burry, the short seller who bet against AI

November 2025 | Financial Times

Michael Burry—famed for predicting the 2008 crash—briefly bet against Palantir and Nvidia before announcing he would wind down his hedge fund. Pages 2–3 recount how his scepticism of AI valuations clashed with retail enthusiasm, prompting a public backlash from Palantir’s CEO Alex Karp. The profile traces Burry’s contrarian career, from his Big Short fame to his mixed recent performance.

Why this is worth your attention: Captures a symbolic moment in the AI market cycle: one of the world’s most iconic sceptics stepping back just as investors grapple with whether the AI boom is overextended.
Original link: https://www.ft.com/content/7fe1362b-d696-4334-86ef-607b80f1739f

 

OpenAI makes 5-year business plan to meet $1tn spending pledges

November 2025 | Financial Times

OpenAI has drafted a multiyear business plan outlining how it intends to fund its trillion-dollar infrastructure commitments, according to investors familiar with the discussions. The company expects revenue to scale sharply across enterprise subscriptions, agentic systems, a new hardware line and what it expects to become one of the world’s largest AI clouds. Executives say the plan is essential to reassure backers as the company faces unprecedented capital demands.

Why this is worth your attention: Highlights how frontier AI development now requires long-term industrial planning on a scale once reserved for national infrastructure or energy systems.
Original link: https://www.ft.com/content/7019a0eb-fcb6-4f83-97d9-f44d5b1fd640

 

Oracle hit hard in Wall Street’s tech sell-off over its huge AI bet

November 2025 | Financial Times

Oracle shares fell sharply during a broader tech-sector pullback, reflecting investor unease over the company’s aggressive AI-infrastructure spending. Analysts noted that Oracle has taken on significant financial and operational risk as it expands its cloud footprint and commits to supplying compute to frontier-model developers. The sell-off underscored how markets are becoming more sensitive to the scale and uncertain returns of hyperscale AI investment.

Why this is worth your attention: Demonstrates the emerging investor anxiety around AI-driven capital expenditure — even for established tech giants.
Original link: https://www.ft.com/content/9c2cfa7f-1f8b-4353-a790-ccb752e8db09

 

Rightmove shares tumble as it steps up AI spending

November 2025 | Financial Times

Rightmove shares fell as much as 25% after the UK property-listings platform said profit growth would slow due to a substantial increase in AI investment. The company is expanding its portfolio of AI tools — including conversational search, a mortgage assistant and virtual-redecoration features — and has more than two dozen AI projects under development. Executives argue that heavier spending now is essential to maintain long-term competitiveness.

Why this is worth your attention: Shows how dominant digital platforms are being forced to invest aggressively in AI to defend their market position, even at the expense of short-term earnings.
Original link: https://www.ft.com/content/4ac8bb83-a59c-4b45-b57d-902771db8724

 

Robinhood wants to allow amateur traders to invest in AI start-ups

November 2025 | Financial Times

Robinhood plans to offer retail investors access to a new fund holding stakes in fast-growing private AI companies, including top frontier-model developers. CEO Vlad Tenev said everyday investors should be able to participate in AI-driven value creation, despite concerns that the proposed fund’s structure — including borrowing and limited liquidity — carries significant risk. The move reflects a wider effort by asset managers to tap household savings for private-market investments.

Why this is worth your attention: Extends the AI investment boom into retail markets, raising questions about risk exposure, suitability and whether ordinary investors will shoulder volatility from privately valued tech companies.
Original link: https://www.ft.com/content/72783021-f778-4d6d-80b3-f91d8c7ae364

Further Reading: Find out more from these resources

Resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.

bottom of page