THE SKINNY
on AI for Education
Issue 23, December 2025
Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.
Headlines
- Bottlenecks, Not Bubbles: In Education We Need to Learn to Love the Complexity in 2026
- AI News Summary
Welcome to The Skinny on AI in Education. Fancy a broader view? Our signature Skinny Scan takes you on a whistle-stop tour of recent AI developments reshaping education.
Bottlenecks, Not Bubbles: In Education We Need to Learn to Love the Complexity in 2026

Listening to the BBC Radio 4 Today programme on Monday, I heard guest editor Mustafa Suleyman make several striking comments about fear in relation to AI. His central statement was: “I honestly think that if you're not a little bit afraid at this moment, then you're not paying attention.” I agree, but this article is not pessimistic. It is realistic. Because realism is exactly what education needs in 2026, and maybe a little fear to motivate us to dig a bit deeper into the complexity of the AI ecosystem.
There is constant talk of bubbles in the AI world, with eye-wateringly large valuations awarded to AI companies and worries about the reality of profitability and the risks of circular funding arrangements. But the picture is more complex than simply asking: is there a bubble, and when will it burst? And in education we need to learn to love the complexity. Our students are using AI tools, so we all need to know how to get the best results from AI safely, and how to help our students do likewise.
Education faces a double challenge in 2026: navigating AI's technical and economic roller coaster whilst simultaneously rethinking what we are preparing students for. In short: discriminative AI works, but we are using it to teach 20th-century curriculum standards. Generative AI is impressive but flawed, and it currently looks economically challenging. We need a realistic understanding of both AI types and of the complexities of the AI ecosystem to which they belong, AND a radical rethinking of education's purpose. So, let's get started.
As I am sure you are aware, I love baking and in baking I know that I need flour, eggs, heat, the right recipe, the right oven, and someone willing to eat the result. AI is the same: you need inputs, some processing and outputs that people want and can afford. And as in baking, when you look beyond the end product, there are critical bottlenecks that limit what is possible: the time it takes to build a sourdough starter, the capacity of the oven, the limits on scaling that mean you cannot keep making a larger and larger soufflé and expect it to rise! You need to understand the bottlenecks to know what is possible versus what is hype.
In this short piece I give you a taster that I hope will whet your appetite enough to read the articles that will follow over the coming weeks. Basically, you need to understand three of the bottlenecks in the AI ecosystem and that the economics of generative AI look odd. We also need to explore regulation, and the extent to which what AI produces is useful, and we will do this in future pieces. But first you need to understand that not all AI is the same.
This piece will take you about 8-10 minutes to read, so grab a cuppa and a slice of cake :).
Two Important Differences That Matter to Education
Firstly, discriminative AI: these are systems that recognise patterns and guide students through structured content, like intelligent tutoring systems. Decades of research evidence tells us how to design and implement these accurately and effectively. The main risk is bias, combined with the very tricky problem that these well-researched systems may very effectively support people to learn things they no longer need to learn.
Secondly, generative AI: ChatGPT-style systems that create new text, images, audio and video, and that also power AI tutor bots. These are where the problems lie. Their tendency to confidently state false information has been reduced through post-training techniques that require a lot of human input, but the errors cannot be eliminated entirely using current approaches.
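If you like to see the difference concretely, here is a toy sketch in Python. It is purely illustrative: the tiny dataset, the choice of scikit-learn for the discriminative side, and the small GPT-2 model via Hugging Face's transformers library for the generative side are all my own assumptions for demonstration, not a recommendation of any product.

```python
# Toy illustration of the two AI types. Purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Discriminative AI: learns to sort inputs into known categories.
answers = ["7 x 8 = 56", "7 x 8 = 54",
           "the capital of France is Paris", "the capital of France is Lyon"]
labels = ["correct", "incorrect", "correct", "incorrect"]

vec = TfidfVectorizer().fit(answers)
clf = LogisticRegression().fit(vec.transform(answers), labels)
# Classifies a new answer into an existing category; it never invents text.
print(clf.predict(vec.transform(["the capital of France is Nice"])))

# Generative AI: samples brand-new text from learned patterns.
# (Assumes the `transformers` package, PyTorch and the small GPT-2 model.)
from transformers import pipeline
generate = pipeline("text-generation", model="gpt2")
print(generate("Photosynthesis is", max_new_tokens=20)[0]["generated_text"])
```

The first model can only ever say “correct” or “incorrect”; the second will happily produce fluent new text, right or wrong, which is exactly the accuracy problem described above.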
Three Critical Bottlenecks That Limit What AI Can Achieve
Processing power (the ‘Oven Problem’): the ‘chips are down’.
In baking, you need the right oven temperature maintained constantly for the right duration. Too little heat and nothing rises. Too much and you burn it. In AI, processing power is like your oven. But here is where it gets complex: the processing power you need depends entirely on whether you are training or doing inference, and whether you are dealing with generative or discriminative AI. And increasingly, whether you can access the processing resources you need depends on where you are in the world and what geopolitical alliances your country has.
Training: The Industrial Kitchen. Training a large AI model is like running a professional bakery's test kitchen where you are developing new recipes, but at the scale of an industrial manufacturing plant! You need tens of thousands of high-end processors working in parallel, operating continuously for weeks or months. It is like thousands of ovens all running simultaneously, all needing perfect coordination, all using specialised chips designed specifically for AI training, and all needing to communicate constantly, sharing information about what they are learning. This costs huge amounts of money and relies on resources, beyond the financial, for which supplies are restricted.
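To get a feel for the scale, here is a rough back-of-envelope sketch in Python. It uses the widely quoted rule of thumb that training takes roughly six floating-point operations per parameter per training token; the model size, token count and per-chip throughput below are illustrative assumptions of mine, not any company's figures.

```python
# Back-of-envelope training compute, via the common "6 x params x tokens" rule.
params = 70e9                  # assume a 70-billion-parameter model
tokens = 1.4e12                # assume 1.4 trillion training tokens
flops = 6 * params * tokens    # ~5.9e23 floating-point operations in total

sustained = 1.5e14             # assume ~150 teraFLOP/s sustained per AI chip
chip_hours = flops / sustained / 3600
print(f"{chip_hours:,.0f} chip-hours")            # ~1.1 million chip-hours

n_chips = 4096                 # thousands of chips baking in parallel
print(f"~{chip_hours / n_chips / 24:.0f} days")   # ~11 days of continuous running
```

Scale the model or the data up a few times and you are quickly into weeks or months on tens of thousands of chips, which is why only a handful of organisations can run this kind of kitchen.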
Inference: Once the model is trained, inference is dramatically cheaper. It may need only one or a few processing units, milliseconds to seconds of processing time, and energy comparable to a light bulb running briefly, rather than a town's worth of power for months.
However, even this can add up at scale: when you are serving millions of users, the pennies per query add up fast. If ChatGPT serves 100 million queries per day (a conservative estimate), and each query costs $0.01-$0.02 in compute, that is still $1-2 million per day just in inference costs, and over $500 million per year. For longer outputs, costs multiply: generating a full report might cost $0.50. At scale, these costs are enormous.
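For the numerically minded, here is that arithmetic as a tiny Python sketch; the query volume and per-query costs are the rough assumptions from the paragraph above, not reported figures.

```python
# Inference costs at scale, using the rough assumptions above.
queries_per_day = 100e6            # 100 million queries per day
cost_low, cost_high = 0.01, 0.02   # dollars of compute per query

daily = (queries_per_day * cost_low, queries_per_day * cost_high)
print(f"${daily[0]/1e6:.0f}-{daily[1]/1e6:.0f} million per day")
print(f"${daily[0]*365/1e6:,.0f}-{daily[1]*365/1e6:,.0f} million per year")
# -> $1-2 million per day; $365-730 million per year, before longer outputs
#    (reports, images, video) multiply the per-query cost.
```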
Generative AI is processing-intensive for both training and inference, which means generative AI companies have a fundamental problem, because every user interaction costs them real money in compute. The more successful they are (more users), the more money they spend on inference.
Discriminative AI is usually much more efficient and often has better unit economics. Once trained, the cost per inference is so low that it scales profitably. This is why Google's spam filter is profitable (billions of emails classified at minimal cost per email) while ChatGPT struggles with profitability.
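The contrast in unit economics can be sketched the same way; every figure here is invented purely for illustration, but the orders of magnitude make the point.

```python
# Toy unit-economics comparison (all figures invented for illustration).
# Discriminative classification: small models, a tiny fraction of a cent per call.
spam_cost_per_email = 0.000001     # assume $0.000001 of compute per email
print(f"spam filter: ${spam_cost_per_email * 1e9:,.0f} per day")      # ~$1,000/day

# Generative chat: large models, around a cent or two of compute per query.
chat_cost_per_query = 0.015        # assume $0.015 of compute per query
print(f"chatbot: ${chat_cost_per_query * 100e6 / 1e6:.1f}m per day")  # ~$1.5m/day
# Billions of cheap classifications versus millions of expensive generations.
```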
Limited Resources: I said that the resources needed for processing were limited beyond the financial. The processing chips, the memory and the networking are not infinitely available resources. With respect to the specialised AI chips, much of the conversation focusses on chip suppliers like Nvidia, the most valuable company in the world, who can, for example, be restricted by US regulation in the chips they can sell to certain countries, like China. But the restrictions go beyond regulation and politics. Nvidia do not actually make the advanced computer chips they design and sell. These chips are essential for modern AI, particularly generative AI, and one company, TSMC in Taiwan, makes about 90% of the world's supply of the most advanced chips: the geopolitical risk is real.
And we can't just build more factories, or fabrication plants as they are called, to produce more chips. It takes 3-5 years to build a new fabrication plant, and, just to 'spice things up', only one company (ASML, in the Netherlands) makes the very expensive machines you need to put in these fabrication plants to make the advanced chips. Oh, and there is a several-years-long waiting list if you do have the money and want to buy one of these machines.
And even if you have enough high-quality chips, you need them to communicate with each other at incredible speeds. This is where memory becomes critical, High-Bandwidth Memory (HBM) to be precise, and where geopolitical supply chains once again create vulnerabilities. Think of HBM as the speed at which you can get ingredients into and out of your oven. You might have the best oven in the world, but if you can only open the door once every five minutes, you can't cook efficiently. And guess what? There is currently an HBM shortage, and only three companies make HBM (sound familiar?): Samsung (South Korea), SK Hynix (South Korea) and Micron (US). Basically, South Korea produces about 70% of the world's HBM. Demand outstrips supply, lead times are 6-12 months, prices have skyrocketed, etc. The HBM costs alone for training a large language model like ChatGPT run to billions of dollars.
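The oven-door analogy can even be made quantitative. When a generative model produces text, each new token requires streaming the model's weights through the processor, so memory bandwidth, not raw compute, often sets the speed limit. A rough sketch, with all figures my own illustrative assumptions:

```python
# Why HBM matters: token generation is often memory-bandwidth bound.
model_bytes = 70e9 * 2        # 70B parameters at 2 bytes each (16-bit) = 140 GB
hbm_bandwidth = 3.35e12       # assume ~3.35 TB/s of HBM on a high-end AI chip

seconds_per_token = model_bytes / hbm_bandwidth   # one full read of the weights
print(f"upper bound: {1 / seconds_per_token:.0f} tokens per second")  # ~24/s
# A hotter "oven" (more FLOPs) does not help if the "door" (HBM) is the limit.
```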
And as if all this were not enough complexity to contend with, thousands of chips needing to talk to each other constantly requires ultra-fast networking in data centres designed specifically for AI, with physical proximity between chips, because chips can't communicate fast enough if they are too far apart. This is why AI training happens in specialised facilities: you can't just rent random cloud servers around the world. Networking equipment supply is less problematic than for the chips and the memory, but not without its challenges. The required networking equipment comes from US companies like Nvidia, Broadcom and Cisco, plus some from China, such as Huawei, whose products are banned in many markets due to security concerns.
Finally, you need dedicated power and cooling infrastructure. AI data centres need enormous electricity and water supplies, and some cities are now saying "no" to new facilities. Perhaps there will be another 'DeepSeek moment' and we will find a way to get more processing from less power and with less sophisticated, more easily available chips, but we don't know.
So, I think it is fair to say that your free AI tools will likely not stay free.
Another slice of cake?...
Data (the ‘Flour Problem’): We have run out. Basically, AI has more or less exhausted the available human-made, good-quality text data, and the industry's solutions (synthetic data, licensed data, and fighting copyright lawsuits: as of October 2025 there are 51 lawsuits against AI companies) all have serious problems. Yes, there are other data sources, such as video, but they are expensive to process and nothing like as prolific.
For education, the implications are profound. Educational content (textbooks, curricula, lesson plans, educational videos, teachers' materials) is a prime target for AI scraping. Your intellectual property may already be training AI systems without your consent or compensation. Education AI companies face these same data constraints. If they are forced to use synthetic data, quality may decline. If they must license all content, costs will rise dramatically. If they lose copyright lawsuits, their entire business model may collapse.
Then there is the academic integrity crisis. Students use AI trained on essays, papers, and assignments. Is the AI paraphrasing its training data (which would be plagiarism) or generating truly original text? The legal uncertainty creates ethical uncertainty for educators trying to establish fair policies.
Algorithms (the ‘Recipe Problem’): No breakthrough since 2017. Obviously, the research labs in the big tech companies will be working away on new ideas that are not public, but what we do know is that there has been no major breakthrough since 2017 in the design of the algorithms that specify how AI processes data and learns: that was the year the transformer architecture that powers generative AI large language models was published. We are seeing diminishing returns from making models bigger. For discriminative AI, this means gradual improvements. For generative AI, it means hallucinations and inconsistencies are not going away. Recent improvements are due not to fundamentally new AI techniques, but to the use of reinforcement learning after training, which requires a lot of input from people and is therefore limited in scalability.
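There is a well-known mathematical shape behind that 'diminishing returns' claim. The scaling-law literature (for example, the 2022 'Chinchilla' paper from Hoffmann and colleagues) models a language model's error as a power law in model size and training data; the sketch below shows the general form, not the exact fitted constants.

```latex
% Illustrative scaling-law form (after Hoffmann et al., 2022).
% N = number of parameters, D = number of training tokens.
% A, B, alpha and beta are fitted constants; E is an irreducible error
% floor that no amount of scale removes.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because the exponents are small, each further reduction in error demands a multiplicative increase in chips, energy and data, and the floor E is one way of seeing why scaling alone has not made the errors go away.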
Note on world models: these are systems that learn how the physical and social world actually works (rather than just learning patterns in text) by using video footage of physical interactions, simulations of physical processes, sensor data from robots interacting with real environments, and multi-modal data combining vision, sound, touch and text. They are a genuine research direction that could improve AI capabilities. But they are not a magic solution to data scarcity or output accuracy: they require even more resources and face similar bottlenecks.
The Economics That Look Odd for Generative AI
Discriminative AI systems have sustainable economics. BUT generative AI companies are burning billions. For example, OpenAI lost £7 billion in 2025, and their projected loss for 2028 is £59 billion. Their path to profitability: 2030, if ever. Sora 2 costs £12 million per day to operate. EdTech startups building on generative AI surely face an ever-increasing risk of failing in 2026-2027. Free tools will likely not stay free. Paid tools may disappear when vendors are acquired or shut down.
So, is there a Bubble?
Yes, there is a bubble. But for education at least, what matters is understanding the AI ecosystem and in particular the specific and very real bottlenecks. Discriminative AI for structured learning works and has sustainable economics, although we do need to address the ‘small’ problem of future-proofing what they tutor. Generative AI is impressive but has fundamental accuracy limits and what look like unsustainable economics.
The complexity exists whether you acknowledge it or not. So, let me help you to learn to love the complexity: understand the ingredients, the recipe, the oven's limits, which types of AI work reliably for what purpose, so that you can navigate what is coming.
***
Here is a quick reference to help you easily see the important differences. Generative AI: think of this as piping royal icing decorations on your Christmas cake. You are creating new patterns, new designs; each output is unique, generated from learned patterns, like an AI generating a new essay or image. Discriminative AI: think of this as sorting biscuits into categories: this is a gingerbread man, this is a shortbread, this is a chocolate chip cookie. You are recognising and classifying, not creating. These two types of AI have very different resource requirements, business models and constraints:
- Generative AI: creates new content; processing-hungry for both training and inference; accuracy limits (hallucinations) persist; economics currently look unsustainable.
- Discriminative AI: recognises and classifies; cheap per use once trained; backed by decades of design evidence; main risks are bias and very effectively teaching content students no longer need to learn.
In the coming weeks, I will publish articles exploring these bottlenecks, what they mean for education, and practical actions you can take to prepare for the disruption ahead.
What can you do now?
Start by auditing which AI tools you and your students are using. Are they discriminative or generative? Are they free or paid? Do you understand what data they use, how they process it and what purpose they serve? Who is benefitting from their use and how? What happens if they disappear or are no longer free/more expensive in the coming months?
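If it helps to have a structure for that audit, here is one possible shape as a Python sketch; the field names and the example entry are hypothetical, purely to show the kind of record worth keeping.

```python
# A minimal, hypothetical audit record for AI tools in use at your school.
from dataclasses import dataclass

@dataclass
class AIToolAudit:
    name: str          # the product, e.g. "(your chatbot here)"
    ai_type: str       # "discriminative" or "generative"
    pricing: str       # "free", "freemium" or "paid"
    data_used: str     # what data it collects or was trained on, if known
    purpose: str       # the learning purpose it serves, and who benefits
    exit_plan: str     # what you do if it disappears or the price jumps

example = AIToolAudit(
    name="(your chatbot here)", ai_type="generative", pricing="free",
    data_used="unknown - check the privacy policy", purpose="revision support",
    exit_plan="fall back to teacher-written practice questions",
)
print(example)
```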
Subscribe to receive the guides coming in the next few weeks.
The ‘Skinny Scan’ on what is happening with AI in Education…
The news this last month paints a picture of an AI industry at an inflection point, simultaneously experiencing extraordinary growth while confronting serious questions about sustainability, governance, and real-world impact. Basically, this is ‘the AI Bubble’ issue.
The dominant theme is a massive AI infrastructure buildout colliding with growing investor nervousness. Tech giants and AI labs are spending unprecedented amounts on data centres, chips and compute, but clear "killer apps" and durable revenue streams have not materialised at scale. Several stories highlight the consequences: record corporate debt issuance (approximately £1.4 trillion in US bonds), driven substantially by AI data centre financing; off-balance-sheet structures moving over £100 billion of data centre debt into ‘special purpose vehicles’; and major pension funds and investors beginning to hedge against an AI bust through credit default swaps.
Key Developments by Theme
1. Competitive Dynamics Are Shifting: OpenAI's lead is eroding fast. Google and Anthropic have closed the gap.
2. Hardware and Chips Are the Strategic Battleground: Jensen Huang (Nvidia CEO) was named FT Person of the Year, reflecting how GPU/TPU supply shapes who can train frontier models.
3. Geopolitics and Compute Access: The US-China tech rivalry intensified, and a US-UK technology cooperation agreement was paused.
4. Regulation Is Accelerating: Australia implemented a world-first ban on under-16s using major social media platforms. The UK is exploring tougher laws on AI chatbots, particularly around child safety. The EU opened an antitrust probe into Google's use of online content for AI training. US state attorneys-general demanded stronger AI safeguards following cases of emotional harm.
5. Real-World AI Failures and Trust Issues: Multiple stories documented ongoing problems, from biased facial recognition systems to corporate chatbots "going wild", hallucinated citations, fabricated book recommendations and faulty medical advice.
6. AI's Impact on Work and Society: Accenture rebranded its 800,000 employees as "reinventors" with implicit pressure to reskill or leave. Education systems are struggling to define rules as students rapidly adopt AI tools. AI companions such as Casio's "Moflin" pet are raising concerns about emotional dependency and psychological risk.
7. Robotics and Embodied AI: Investment flowed into humanoid robot startups. Wayve is testing self-driving vehicles in London. iRobot declared bankruptcy as Chinese competitors like Roborock gained ground with AI-enhanced features.
8. Energy and Infrastructure Constraints: AI's physical demands are becoming critical. Data centres are driving up electricity bills for households. Facilities are turning to aircraft engines and diesel generators to bypass years-long grid connection delays. Debates have emerged about whether to put AI data centres in space to avoid terrestrial power limits.
AI News Summary
AI in Education
The AI skills employers want — and what business schools teach
1 December 2025 | Financial Times
European business schools are racing to redesign curricula around AI, focusing less on technical mastery and more on leadership, judgment and ethical decision-making. Programmes increasingly embed generative AI tools into live projects, simulations and executive training, often in partnership with industry. Employers say they value graduates who can work critically with AI systems rather than simply deploy them.
What you need to know: As AI becomes ubiquitous, competitive advantage is shifting toward human skills — such as judgment and ethics — that complement, rather than compete with, intelligent systems.
Original link: https://www.ft.com/content/68abc2d4-5f13-43f2-a62d-a811da483bd4
Students embrace AI as schools tread carefully
3 December 2025 | Financial Times
Students across schools and universities are rapidly adopting generative AI tools for studying, revision and assignment support, even as institutions struggle to define clear rules for their use. Educators are torn between recognising AI’s potential to personalise learning and concerns about plagiarism, deskilling and academic integrity. Many schools are experimenting with revised assessment methods and AI literacy programmes rather than outright bans.
What you need to know: The education sector is becoming a real-world testing ground for AI governance, with student behaviour often moving faster than institutional policy.
Original link: https://www.ft.com/content/78d643a6-bcc8-4368-a0d6-30a71f63e71c
DealBook: The education of higher education
15 December 2025 | Alina Tugend, The New York Times (DealBook)
University leaders meeting at the DealBook Summit described how generative AI is reshaping higher education’s core value proposition: preparing students for a job market where automated screening tools and AI-enabled workflows are increasingly standard. The discussion ties rising public scepticism about degrees to employability pressures, while panellists argue institutions must modernise training, emphasising adaptable skills, internships and career pathways, because AI will keep changing what “work-ready” means. The piece also captures a near-term reality: applicants are optimising résumés for AI screening systems before a human ever sees them.
What you need to know: Education is becoming an AI adoption battleground; schools are being pushed to teach AI-era skills, while employers’ AI screening is already changing hiring dynamics and the incentives around credentials.
AI Ethics and Societal Impact
Start-ups promise to help ‘vibe coders’ catch the AI bugs
2 December 2025 | Financial Times
A new wave of start-ups is emerging to tackle the downside of “vibe coding” — the practice of relying heavily on generative AI to write software with minimal human oversight. While the approach has sped up development, it has also introduced subtle bugs, security flaws and brittle code that developers struggle to diagnose. These start-ups offer AI-powered testing, debugging and code-review tools designed specifically to audit machine-generated software before it reaches production.
What you need to know: As AI increasingly writes code, demand is shifting from generation to verification, highlighting that reliability and trust are becoming the next bottlenecks in AI-assisted software development.
Original link: https://www.ft.com/content/613bf123-b99a-4d18-b6d8-1ab453a8f2c6
Taiwan’s AI boom leaves traditional manufacturers trailing
2 December 2025 | Financial Times
Taiwan’s economy is being propelled by surging global demand for AI chips and servers, driving strong GDP growth and record profits for technology leaders such as semiconductor firms. However, manufacturers in traditional sectors — from machinery to automotive parts — are struggling with rising costs, labour shortages and weaker demand. The resulting divide is raising concerns about uneven growth and long-term industrial resilience.
What you need to know: AI-driven growth can reshape entire economies, but it also risks widening gaps between high-tech winners and legacy industries.
Original link: https://www.ft.com/content/ef15d02b-daf2-4463-9448-411bce647cc3
Facial recognition technology used by UK police is biased, Home Office admits
5 December 2025 | Chris Smyth
UK government research has found that facial recognition systems used by police misidentify Asian and Black people at far higher rates than white people, and women more often than men. Despite the findings, ministers continue to back expanded use of the technology, calling it a major crime-fighting breakthrough. Civil liberties groups warn the systems have been deployed at scale without adequate safeguards, raising the risk of systemic discrimination.
What you need to know: Bias in deployed AI systems is no longer theoretical—it is measurable and operational. This case shows why governance, testing and accountability matter as AI moves deeper into law enforcement and other high-stakes domains.
Original link: https://www.ft.com/content/c49a4524-e4bd-42f1-aabb-d7b0e50dd066
In future ‘books could respond’ says winning author Stephen Witt
5 December 2025 | Andrew Hill, Financial Times
Stephen Witt, author of The Thinking Machine, reflects on Nvidia’s rise and the broader implications of AI for creative work. He argues that AI systems could transform books into interactive, responsive media while also posing existential challenges to writers and other knowledge workers. Witt portrays Jensen Huang as both visionary and relentlessly paranoid about competition.
What you need to know: Captures how AI is reshaping not just industries, but cultural production and authorship itself.
Original link: https://www.ft.com/content/9058c29f-d868-4089-8493-298976848846
Meta buys AI pendant start-up Limitless to expand hardware push
5 December 2025 | Hannah Murphy, Financial Times
Meta has acquired Limitless, an AI-powered wearable start-up best known for its always-on recording pendant, signalling Zuckerberg’s ambition to move beyond smart glasses into new AI-first hardware categories. The device records, transcribes and allows users to search conversations, positioning AI as a cognitive extension rather than a screen-based tool. The deal reflects Meta’s belief that wearables will become the primary interface for accessing “personal superintelligence,” despite ongoing privacy and social concerns around ambient listening devices.
What you need to know: Shows how leading AI firms are betting that hardware — not apps — will define the next phase of AI adoption, raising fresh ethical and regulatory questions.
Original link: https://www.ft.com/content/a1a7adab-506e-4623-8f7a-0b7c94c8d6b4
Tech elites are starting their own for-profit cities
7 December 2025 | Financial Times
A growing group of tech founders and investors are backing experimental, for-profit cities designed to operate with minimal regulation and novel governance models. Inspired by start-up culture and crypto ideology, these projects aim to create tech-friendly jurisdictions for innovation, wealth creation and alternative social systems. Critics argue they risk deepening inequality, undermining democratic accountability and exporting regulatory arbitrage to weaker states.
What you need to know: AI wealth and techno-optimism are fuelling ambitions that extend beyond products and platforms into governance itself, raising profound questions about power, regulation and social responsibility.
Original link: https://www.ft.com/content/b127ee7a-5ac4-4730-a395-c9f9619615c7
The perils of using AI when recruiting
10 December 2025 | Financial Times
Companies are increasingly using generative AI tools to research and assess job candidates, but the practice carries significant risks. Chatbots can hallucinate personal details, infer protected characteristics or reinforce existing biases, potentially leading to discriminatory hiring decisions. Regulators place responsibility on employers to ensure AI is used transparently and responsibly.
What you need to know: As AI enters high-stakes decision-making, errors and bias are no longer abstract technical issues but legal and ethical liabilities.
Original link: https://www.ft.com/content/229983ee-c11f-44fb-8e61-2ac61d8d100a
LinkedIn and the great gender swap
13 December 2025 | Isabel Berwick, Financial Times
A wave of LinkedIn users, particularly women, report dramatic visibility gains after changing profiles to male, fuelling suspicion that recent algorithm changes may be suppressing women’s posts. Some participants used AI tools to rewrite profiles and older posts in more “agentic” language and saw major increases in views, prompting an organised campaign demanding transparency. LinkedIn denies using gender as a ranking signal and says changing gender does not affect feed or search placement, while also noting rapid growth in overall posting and commenting volume. The controversy underscores how opaque ranking systems can shape economic outcomes for creators and professionals.
What you need to know: As social platforms increasingly rely on AI-driven ranking, bias (real or perceived) becomes a high-stakes governance issue, and generative AI is also being used to “game” those systems.
Original link: https://www.ft.com/content/7a20ee85-bb4e-4204-9416-a3d1a9de1cd8
Australian users flock to new platforms after social media ban for under-16s
13 December 2025 | Nic Fildes, Financial Times
Following Australia’s ban on under-16s holding accounts on major social media apps, teenagers and families have shifted quickly to alternative platforms, including Lemon8, Yope, Coverstar, RedNote (Xiaohongshu), and even WhatsApp. The law places the burden on tech companies to verify ages and report compliance progress, but academics and observers warn that the surge in obscure apps highlights enforcement challenges: new services can emerge rapidly and may lack mature safety systems. Some smaller apps are already seeking exemptions by arguing they are built for private messaging rather than algorithmic feeds.
What you need to know: Regulation changes user behaviour fast, and the “whack-a-mole” shift to fringe apps creates new pressure for AI-driven content moderation, age estimation, and safety tooling across a wider long-tail of platforms.
Original link: https://www.ft.com/content/c901bac6-6b7f-4e7b-85e3-93df19f9270b
Big Tech’s AI data centres are driving up electricity bills for everyone
14 December 2025 | The New York Times
The rapid expansion of AI data centres by companies such as Amazon, Google and Microsoft is straining electricity grids and pushing up costs for households and small businesses. Utilities warn that massive infrastructure upgrades will be required, while regulators debate how much tech giants should pay. AI’s energy appetite is reshaping power markets.
What you need to know: Compute is becoming an energy problem, making electricity availability and pricing a critical constraint on AI growth.
Original link: https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html
I met Moflin, the AI pet. Could it help fight loneliness or is it a sign of a dystopian future?
17 December 2025 | Isabella McRae, Big Issue
Casio’s fluffy AI companion “Moflin” has become a surprise hit in Japan and is now launching in the UK and US, pitching itself as an emotionally responsive pet that learns a user’s voice and interaction style over time. The piece weighs potential benefits (companionship for loneliness and dementia care, citing precedents such as the therapeutic robot seal Paro) against concerns that AI companions can become unregulated psychological experiments for vulnerable people. Experts warn that humanlike chat and attachment can reinforce harmful thoughts or blur reality for users in distress, arguing that oversight and research are lagging behind deployment.
What you need to know: AI “companions” are moving off screens into everyday consumer products, making questions of safety, emotional dependency, and regulation central to the next phase of human-AI interaction.
Original link: https://www.bigissue.com/life/health/ai-pet-moflin-loneliness-dystopian-future/
Meta adopts new age-check system to meet global child safety laws
17 December 2025 | Tim Bradshaw, Financial Times
Meta is partnering with Singapore-based K-ID to integrate its AgeKey system into Meta apps, aiming to streamline compliance with a patchwork of child protection rules emerging worldwide. The system lets users verify age once and reuse credentials across compatible apps, with Meta positioning it as more user-friendly than current checks. The article notes ongoing tensions between regulation and privacy: age assurance can reduce harmful content exposure but raises concerns about anonymity and data sharing. Meta also signals it would prefer operating systems or app stores to handle age authentication rather than each app implementing its own solution.
What you need to know: AI-driven content moderation and recommendation systems are pushing regulators toward stronger “identity and age assurance”, which could reshape how AI products handle privacy, verification, and access control.
Original link: https://www.ft.com/content/8164c36c-f224-4006-bc1f-561952c119b6
The tyranny of the digital calendar
17 December 2025 | Financial Times
Digital calendars, once designed to simplify work, have become sources of stress, surveillance and social friction. Shared scheduling tools expose power dynamics, blur boundaries and encourage over-optimisation of time. AI-powered assistants promise relief by automating planning, but risk further entrenching always-on work cultures.
What you need to know: AI productivity tools may solve coordination problems while simultaneously deepening them, depending on how they are deployed.
Original link: https://www.ft.com/content/b1a609ba-d0ab-4132-99ea-721455ebbc6c
No, you can’t tell when something was written by AI
24 December 2025 | Elaine Moore, Financial Times
Elaine Moore argues that most “tells” people rely on to spot AI-generated text (bland tone, certain words, repeated structures) are just as common in human writing, making gut instinct a poor detector. She cites studies showing people routinely confuse AI- and human-created songs and poems, notes the limits of AI-detection tools, and concludes that context (who is writing, how they usually communicate, what’s plausible) matters more than stylistic clues, especially as humanised AI text becomes harder to distinguish.
What you need to know: Highlights that reliable detection of AI-written content remains technically and practically elusive, complicating efforts to police AI use in education, media and online platforms.
Original link: https://www.ft.com/content/b2ebb99a-cfea-465f-93ff-0ea8ed6bfac5
The relentless rise of YouTube
26 December 2025 | Christopher Grimes, Financial Times
Christopher Grimes charts how YouTube has evolved into the dominant US TV and streaming platform, powering a “new Hollywood” of creator-run studios while muscling into live sports, podcasts and marquee events like the Oscars. As traditional studios such as Paramount and Warner Bros face consolidation and takeover bids, YouTube’s ad-sharing model, creator economy and AI-driven recommendation engine are drawing audiences, advertisers and celebrities toward a platform where independent creators wield increasing power over formats once owned by broadcast TV.
What you need to know: Shows how AI-driven discovery and creator monetisation are reshaping the media landscape, with YouTube’s algorithmic platform becoming a central distribution layer for both human and AI-generated video content.
Original link: https://www.ft.com/content/9e75eeb8-b6e6-4a90-b015-2732fa9a8774
How AI Is Changing the Games We Play, From Poker to Curling
26 December 2025 | Kit Chellel, Bloomberg
This long-form essay explores how AI, algorithms and bots are reshaping leisure, from professional poker and fantasy sports to curling, golf and even casual board games. Drawing on examples such as AI-optimised curling strategies developed at the University of Alberta, poker bots that outperform elite players, and algorithmic horse-racing syndicates, the piece argues that machine intelligence is altering the essence of play itself. While AI can improve performance and efficiency, critics warn it flattens creativity, reduces uncertainty and may erode the human quirks, emotion and discovery that make games meaningful. The article also raises broader cultural concerns about “intellectual leveling,” as recommendation systems and AI tools steer people toward similar tastes and choices.
What you need to know: Highlights how AI’s influence extends far beyond work and productivity, transforming sport, gaming and culture, and raising questions about whether optimisation and automation risk diminishing human experience as well as enhancing it.
Original link: https://www.bloomberg.com/news/features/2025-12-26/how-ai-is-changing-the-games-we-play-from-poker-to-curling
Global hotel groups bet on customer loyalty to beat online and AI agents
27 December 2025 | Stephanie Stacey, Financial Times
Major hotel chains such as Marriott, Hilton, Hyatt and Wyndham are ramping up loyalty programmes and tech investments to drive direct bookings and reduce reliance on online travel agents like Booking.com and Expedia. With OTA commissions typically running at 15–25 per cent, hotels see loyalty ecosystems and partnerships as a way to capture more customer data, personalise experiences and prepare for a future in which AI travel “agents” handle trip planning on behalf of guests. Executives view AI channels as both an opportunity, potentially cheaper than OTAs, and a threat, since automated agents could diminish brand recognition and make it easier for customers to switch.
What you need to know: Shows how AI agents are poised to reshape consumer interfaces in travel, forcing incumbents to compete not just on price and location but on data, loyalty and integration with AI-driven booking systems.
Original link: https://www.ft.com/content/b4ee6ec8-cfdc-4f28-b4ab-65baf611125b
Why your AI companion is not your friend
28 December 2025 | Martin Sandbu, Financial Times
Martin Sandbu argues that “AI companions” exploit a loneliness crisis by mimicking friendship while hollowing out the very idea of companionship, much as social networks earlier warped the notion of “friends” and “connections”. Drawing on philosophical ideas about friendship’s intrinsic, non-instrumental value, he warns that AI companions are engineered mirrors designed to please and soothe rather than challenge or truly relate, offering a form of solipsistic comfort that risks further weakening real human ties.
What you need to know: Raises ethical and social questions about AI products marketed as “friends,” highlighting how companionship bots may deepen isolation and shape norms around relationships as AI becomes more embedded in daily life.
Original link: https://www.ft.com/content/f3658db4-0bd5-4a0e-af9f-8f7a14f05603
AI Employment and the Workforce
Accenture dubs its 800,000 staff ‘reinventors’ as it adapts to AI
30 November 2025 | Ellesheva Kissin
Accenture is relabelling its nearly 800,000 employees as “reinventors” as it reorganises around what it calls “Reinvention Services”, folding strategy, consulting, creative, technology and operations into a single unit. The shift is part branding exercise, part internal pressure campaign: leadership is pushing retraining for an AI-heavy consulting market, with warnings that staff who cannot re-skill may be asked to leave. Critics quoted in the piece argue that jargon can boost confidence and signal modernity, but may also create confusion and undermine trust if it feels detached from reality.
What you need to know: AI transformation is increasingly an organisational redesign problem, not just a tooling upgrade—firms are reshaping roles, incentives and identity to force adoption at scale. This is a live example of how “AI strategy” quickly becomes workforce strategy.
Original link: https://www.ft.com/content/668944f0-4fb5-4d0a-a86a-93a0ffd0e57e
Complete rethink of business models needed to realise AI’s benefits
3 December 2025 | Melissa Heikkilä
After early experimentation, companies are discovering that AI gains depend less on flashy tools and more on redesigning processes, data foundations and accountability structures. The article finds that many firms lack the organisational readiness to deploy AI effectively, slowing adoption despite heavy investment. Successful use cases tend to augment humans—such as fraud detection or scientific research—while failures often stem from overconfidence in systems that struggle with real-world complexity and hallucinations.
What you need to know: AI returns are increasingly an organisational challenge rather than a technical one. Without rethinking workflows, incentives and oversight, even powerful models fail to deliver durable productivity gains.
Original link: https://www.ft.com/content/0bba73c4-ad33-4060-bbc5-c18c55b7942b
Human touch remains key to AI customer service strategies
3 December 2025 | Elizabeth Bratton, Financial Times
Companies including Allianz, easyJet, and Expedia are using AI to handle routine customer service tasks while keeping humans central to complex or sensitive interactions. Executives report faster response times and efficiency gains, but widespread caution about fully automating customer-facing roles. Analysts argue hybrid models combining AI and human judgement are likely to dominate.
What you need to know: Shows where AI adoption is stalling at the edges of human trust, empathy, and accountability.
Original link: https://www.ft.com/content/50a829b8-57aa-44c0-b565-2819620f4f3f
Internal AI adoption accelerates across tech sector
3 December 2025 | Nicholas Fearn, Financial Times
Technology companies including IBM, Asana, and Schneider Electric are deploying AI internally before selling it to customers, using tools for HR, coding, sales, and operations. Executives report significant efficiency gains, but stress the importance of governance, metrics, and responsible deployment.
What you need to know: Shows how internal AI use is becoming a proving ground for enterprise AI products — and reshaping work from the inside out.
Original link: https://www.ft.com/content/f369ba68-387c-4963-bce4-3e7a019bf62a
AI Development and Industry
Mistral unveils new models in race to gain edge in ‘open’ AI
2 December 2025 | Melissa Heikkilä, Financial Times
French start-up Mistral has released a new suite of powerful open-weight AI models, including Mistral Large 3, positioning itself as Europe’s strongest challenger to US and Chinese AI leaders. The models emphasise multilingual and multimodal capabilities while remaining openly accessible, aligning with European ambitions for AI sovereignty. Critics, however, argue that open-weight approaches still fall short without access to large-scale proprietary data.
What you need to know: Demonstrates Europe’s strategic bet on openness as its best chance to stay competitive in frontier AI development.
Original link: https://www.ft.com/content/bc9339a6-a8e4-4c6d-b77a-f9cffafb8a9f
OpenAI’s Sam Altman declares ‘code red’ after rivals make advances
2 December 2025 | Georgina Quach and Melissa Heikkilä, Financial Times
Sam Altman has ordered a company-wide “code red” to improve ChatGPT’s speed, reliability and personalisation, delaying other initiatives such as AI agents and advertising. The move follows benchmark advances by Google and Anthropic that have eroded OpenAI’s technical lead. Internally, the decision reflects growing concern about spreading resources too thin during a critical competitive phase.
What you need to know: Shows how rapidly improving foundation models are forcing strategic retrenchment even at the top of the AI ecosystem.
Original link: https://www.ft.com/content/7a42396f-487a-47b0-8121-8d8f2112fa53
Meta poaches senior Apple designer Alan Dye to support AI glasses push
3 December 2025 | Hannah Murphy and Rafe Rosner-Uddin, Financial Times
Meta has recruited Alan Dye, Apple’s long-time head of interface design, to lead a new internal studio focused on AI-powered wearables. The move underscores Meta’s belief that design — not just model performance — will determine whether AI glasses can replace smartphones as the dominant computing platform. Dye will oversee the integration of software, hardware and AI across Meta’s consumer devices.
What you need to know: Highlights how user experience and industrial design are becoming strategic differentiators in consumer AI adoption.
Original link: https://www.ft.com/content/b9b1d92a-7856-4058-adde-417a0b24fe62
Generative AI’s rapid journey through the ‘hype cycle’
3 December 2025 | John Thornhill, Financial Times
John Thornhill uses Gartner’s “hype cycle” framework to argue that generative AI has moved from inflated expectations into a phase of disillusionment. While adoption has been rapid, many companies are discovering the technology’s limits, from hallucinations to cultural and organisational barriers. Thornhill suggests productivity gains are real but depend on deep changes in working practices, governance, and skills rather than quick technical fixes.
What you need to know: Signals a shift from experimentation to realism, highlighting that AI’s impact will be determined by organisational change as much as model capability.
Original link: https://www.ft.com/content/bc84b655-3c91-4efc-9191-18366173c4ca
OpenAI’s ‘code red’ moment
4 December 2025 | Richard Waters, Financial Times
An internal memo from Sam Altman urging a “code red” refocus on ChatGPT has exposed growing strategic tensions inside OpenAI. After pursuing multiple experimental products, the company is pulling back to concentrate on defending its flagship chatbot against resurgent rivals such as Google’s Gemini. The shift raises questions about whether OpenAI can maintain technical leadership while also building a sustainable business model.
What you need to know: Reveals how even AI market leaders are struggling to balance innovation, focus and monetisation under intense competitive pressure.
Original link: https://www.ft.com/content/780b9b62-81ca-4a1f-bb1d-0226d0a719a8
Can Wayve make London a self-driving city?
6 December 2025 | John Thornhill
The FT’s John Thornhill rides through central London in a Wayve vehicle that largely drives itself, using the experience to explore whether robotaxis can handle one of the world’s messiest urban environments. Wayve’s pitch is an “AV2.0” approach focused heavily on software and vision—spending “every dollar on AI”—aiming to scale across carmakers without extensive pre-mapping. The article also stresses the gap between impressive demos and full autonomy: timelines are uncertain, regulators will move cautiously, and Wayve remains earlier-stage financially even as it raises large sums and prepares trials with partners like Uber.
What you need to know: Autonomous driving is one of the toughest real-world tests of embodied AI, where models must handle rare “long tail” events safely under regulation. Progress here tends to spill over into robotics more broadly—Wayve explicitly frames its tech as a platform for the “robotic age.”
Original link: https://www.ft.com/content/67c3cbad-330d-4dfa-8483-2b96c5b36430
Chinese phonemakers seize on Apple’s AI struggles
7 December 2025 | William Langley and Gloria Li
Chinese smartphone manufacturers are exploiting delays in Apple’s AI feature rollout by promoting tools that make it easier for users to switch from iPhones. Domestic brands such as Honor, Oppo and Xiaomi are bundling AI assistants, cross-platform file transfer and ecosystem-bridging apps to lure customers, particularly in China’s hyper-competitive handset market. Analysts say faster AI integration by Chinese vendors is eroding Apple’s traditional advantage of a closed, seamless ecosystem—at least domestically.
What you need to know: Consumer-facing AI features are becoming a competitive differentiator in hardware markets, not just software platforms. Apple’s slower AI deployment highlights how AI execution speed can directly affect market share in mature device categories.
Original link: https://www.ft.com/content/5bfaf5f3-c92a-41e1-81a6-c065a1bc61c2
Google’s ‘TPU’ chip puts OpenAI on alert and shakes Nvidia investors
8 December 2025 | Tim Bradshaw, Financial Times
Google’s in-house tensor processing units (TPUs) are emerging as a serious challenger to Nvidia’s AI chips, helping its Gemini 3 models outperform OpenAI’s latest systems in some benchmarks. The article details how Google’s vertically integrated approach — combining hardware, software, and cloud infrastructure — is unsettling investors and prompting rivals to reassess their dependence on Nvidia.
What you need to know: Highlights intensifying competition at the hardware layer, which could alter power dynamics across the entire AI ecosystem.
Original link: https://www.ft.com/content/d8585870-17a5-43a0-95ef-cbebb1995107
IBM extends AI push with $11bn takeover of Confluent
8 December 2025 | Kieran Smith, Financial Times
IBM’s $11bn acquisition of data-streaming platform Confluent is designed to strengthen its AI and cloud ambitions. Executives describe Confluent’s technology as the “rails” enabling AI agents and real-time applications, positioning IBM to deploy generative and agentic AI more quickly across enterprises.
What you need to know: Illustrates how legacy tech firms are buying infrastructure to compete in AI rather than building everything in-house.
Original link: https://www.ft.com/content/8112d77f-2531-400f-b947-b506fe3c6b3f
China set to limit access to Nvidia’s H200 chips despite Trump export approval
9 December 2025 | Zijing Wu
Beijing is considering restricting access to Nvidia’s advanced H200 AI chips even after Donald Trump signalled US approval for exports, as China intensifies efforts to build self-sufficiency in semiconductors. Regulators are discussing an approval system that would require Chinese buyers to justify why domestic alternatives cannot meet their needs, with the option of barring public-sector purchases altogether. The move reflects Beijing’s attempt to balance short-term performance gains from Nvidia hardware with long-term industrial policy goals, while Chinese tech groups increasingly train models overseas to bypass chip controls.
What you need to know: Compute access is now a geopolitical lever shaping AI capabilities, not just a technical choice. China’s willingness to constrain even permitted Nvidia chips underlines how national AI strategies increasingly prioritise ecosystem control over raw performance.
Original link: https://www.ft.com/content/c4e81a67-cd5b-48b4-9749-92ecf116313d
China’s open-source AI is a national advantage
9 December 2025 | Kai-Fu Lee
Kai-Fu Lee argues that China’s embrace of open-source large language models has become a strategic strength, enabling rapid iteration, lower costs and broad adoption despite restricted access to top-tier GPUs. He points to models such as DeepSeek’s R1 and Alibaba’s Qwen, which rival US counterparts while using less computing power and allowing developers to inspect, modify and deploy them locally. The open approach, Lee suggests, mirrors Android’s dominance over Apple’s iOS: less profitable per user, but vastly more scalable and resilient.
What you need to know: Open-source AI is emerging as a parallel path to frontier capability, particularly where compute is constrained. This dynamic could reshape global AI influence by favouring ecosystems that optimise for efficiency, adaptability and developer scale.
Original link: https://www.ft.com/content/b1f92b0e-d6ef-4c95-b51e-7bcf90c8a65f
AI shows potential against resistant bacteria
10 December 2025 | Patrick Temple-West
The article reports on machine learning-driven efforts to design new antibiotics as drug resistance rises globally, spotlighting projects that combine vast chemical building blocks to propose candidate compounds. Researchers behind Phare Bio say their models produced antibiotics that worked against targets like gonorrhoea and MRSA “in a Petri dish”, while the next frontier is improving in-vivo effectiveness and moving candidates toward trials. A central constraint is commercial: while investors have poured money into AI drug discovery, antibiotics remain a tough market because they are cheap, sometimes held back as last-resort treatments, and therefore less attractive financially.
What you need to know: Bio is a major proving ground for AI that can search enormous design spaces, but translation to real-world outcomes (trials, approvals, manufacturing) remains the bottleneck. This is a clear case where better models alone won’t be enough without aligned incentives and funding.
Original link: https://www.ft.com/content/03d0cbc1-a137-434d-bb2f-4603f48f44c7
China adds domestic AI chips to official procurement list for first time
10 December 2025 | Zijing Wu
China has added domestic AI processors—reportedly including chips from Huawei and Cambricon—to a government procurement list, signalling a push to replace foreign technology in the public sector following US export controls. The move aims to generate billions in demand for local chipmakers and reflects confidence that domestic hardware can compete, even as some buyers struggle with migration costs and compatibility (including rewriting code built around Nvidia). The article describes the policy as part of a broader “Xinchuang” effort that has previously pushed domestic substitutes for CPUs and operating systems across government offices and state-linked institutions.
What you need to know: Compute is a strategic chokepoint for AI progress—industrial policy and procurement decisions can reshape which chips (and software ecosystems) become standard. Hardware fragmentation also raises the cost of deploying frontier models, pushing more effort into portability tooling and “run anywhere” stacks.
Original link: https://www.ft.com/content/83c6521e-fe42-49e2-a9fe-eda97168b316
Google DeepMind to build materials science lab after signing deal with UK
10 December 2025 | Melissa Heikkilä, Financial Times
Google DeepMind will establish its first automated materials science laboratory in the UK as part of a new partnership with the British government. The lab aims to use AI systems to accelerate the discovery of materials for semiconductors, solar cells, and superconductors, while giving UK researchers priority access to DeepMind’s scientific models. The agreement also deepens collaboration on AI safety and public-sector use.
What you need to know: Demonstrates how AI is moving beyond software into scientific discovery, potentially reshaping innovation in energy, manufacturing, and materials.
Original link: https://www.ft.com/content/b20f382b-ef05-4ea1-8933-df907d30cc2c
FT Person of the Year: Jensen Huang
12 December 2025 | Financial Times
The Financial Times names Nvidia chief executive Jensen Huang its 2025 Person of the Year, tracing how his company’s chips became the backbone of the global AI boom. The profile charts Nvidia’s transformation from a niche graphics supplier into the most valuable company in the world, driven by surging demand for data centres and generative AI models. It also explores Huang’s engineering-first leadership style, geopolitical influence, and mounting competitive threats from rivals developing alternative AI chips.
What you need to know: Nvidia’s dominance shapes the economics and pace of AI development worldwide, making Huang one of the most influential figures in the technology’s future.
Original link: https://www.ft.com/content/11a018f4-95e0-41c2-99d8-aff105328a0b
Will OpenAI’s $1bn deal with Disney boost video app Sora?
12 December 2025 | Financial Times
OpenAI has struck a reported $1bn partnership with Disney to supply generative video technology for entertainment and marketing, marking one of the most ambitious commercial uses of its Sora model to date. The deal gives Disney early access to advanced video-generation tools while providing OpenAI with a marquee creative partner and real-world training data. Analysts say success will hinge on whether generative video can meet production-quality standards at scale.
What you need to know: This partnership tests whether generative AI can move beyond demos into high-value creative industries without undermining intellectual property or creative control.
Original link: https://www.ft.com/content/b14490d9-3ac9-45ce-bce5-df6c39db472f
How ASML’s CEO Plans to Keep Pace With Soaring AI Demand
12 December 2025 | Peter Elstrom, Sarah Jacob and Tom Mackenzie, Bloomberg
ASML CEO Christophe Fouquet describes the company’s central role in the AI boom: its lithography machines are required to produce the most advanced chips powering leading AI systems, and at the high end ASML effectively has no peer. The article focuses on ASML’s transition from EUV to “High NA” EUV, technology intended to push chip geometries below 2nm, and the challenge of matching Nvidia’s desire for faster-than-Moore’s-Law progress. Alongside technical execution, ASML must navigate geopolitics, with China a major market but restricted from the most advanced tools, creating pressure that could accelerate domestic alternatives.
What you need to know: AI capability is increasingly set by semiconductor manufacturing limits; lithography roadmaps (High NA and beyond) will directly constrain or unlock the next wave of model efficiency and performance.
Original link: https://www.bloomberg.com/news/features/2025-12-12/how-asml-plans-to-keep-pace-with-nvidia-s-growth-and-soaring-ai-demand?embedded-checkout=true
Tech Decoded: Will humanoid robots ever go mainstream?
15 December 2025 | Lily Jamali, BBC News
Humanoid robots are spreading beyond demos into real environments, but reliability and context awareness remain major bottlenecks. The newsletter recounts MIT’s Daniela Rus describing how a humanoid could follow a basic instruction (water a plant) yet failed dramatically when asked to “water a friend,” illustrating the persistent gap between scripted behaviors and real-world understanding. Investors are bullish, Morgan Stanley projects a multi-trillion-dollar market by 2050, but practitioners argue mass adoption depends on whether humanoids can learn from feedback and adapt safely in messy human settings.
What you need to know: Embodied AI is hitting the “last mile” problem: progress now depends less on flashy demos and more on robust learning, safety, and generalisation in unpredictable physical environments.
Lessons from Roomba: sometimes being first mover sucks
15 December 2025 | Financial Times (Lex)
iRobot’s Chapter 11 filing is used as a cautionary tale about how quickly consumer robotics can commoditize: pioneering a category doesn’t guarantee durable advantage if rivals can replicate capabilities and undercut on price. The column argues iRobot helped create the market for autonomous home cleaning, but competition, particularly from Chinese brands, eroded differentiation, while the blocked Amazon acquisition removed a potential lifeline. The broader lesson is that without a defensible “drawbridge” (technical, economic, or ecosystem), early robotics leaders can be overtaken fast.
What you need to know: As AI-enabled devices proliferate, defensibility matters: hardware plus commodity AI features are easy to copy, so durable advantage increasingly comes from data, distribution, integration, and ecosystems.
OpenAI hires George Osborne to spearhead global ‘Stargate’ expansion
16 December 2025 | George Hammond and Ivan Levingston, Financial Times
OpenAI has hired George Osborne to lead “OpenAI for Countries,” positioning the initiative as an international extension of “Stargate,” its large-scale data-center and AI infrastructure push. The company frames the effort as helping governments build “sovereign AI” aligned with “democratic principles,” and says it is in discussions with dozens of countries, following early deals in places including the UK and UAE. The report also notes concerns that the pace and scale of data-center build-outs could fuel a financial bubble, given the enormous capital required.
What you need to know: AI leadership is being pursued through compute diplomacy: OpenAI is trying to lock in global infrastructure partnerships that shape standards, governance, and long-term platform dependence.
Original link: https://www.ft.com/content/a6a8c7aa-9677-4208-a28e-a3ca51cb7aa3
The robots are coming… to help around the house
16 December 2025 | Financial Times
A new generation of AI-powered domestic robots is edging closer to mainstream adoption, with machines capable of cleaning, carrying, tutoring and offering companionship. Falling hardware costs and advances in perception and language models are expanding what robots can do inside homes. High prices and reliability concerns remain key barriers to mass uptake.
What you need to know: Embodied AI is moving from labs to living rooms, signalling a shift from digital assistants to physical, autonomous systems.
Original link: https://propertylistings.ft.com/propertynews/germany/7548-the-robots-are-coming-to-help-around-the-house.html
Applied AI: Corporate Chatbots Gone Wild
16 December 2025 | Kevin McLaughlin, The Information
As businesses roll generative AI into customer-facing chatbots, real-world deployments are still showing basic failure modes: drifting off-topic, answering sensitive questions, and offering questionable advice. The newsletter highlights examples where chatbots from vendors such as Sierra (in a Gap.com deployment) and others responded to prompts about intimacy products, Nazi Germany, medical/legal issues, and even drug dosing, all areas they were not intended to handle, prompting fixes like tighter safeguards and filtering. The broader point is that “three years into the generative AI boom,” many firms are still wrestling with foundational controls for production use.
What you need to know: This is a reminder that “agentic” customer support is a safety-and-liability problem as much as a UX upgrade; guardrails, retrieval scoping, and refusal behaviour are becoming core differentiators for enterprise AI.
The Briefing: OpenAI’s British Hire
16 December 2025 | Martin Peers, The Information
OpenAI has recruited former UK chancellor George Osborne to lead “OpenAI for Countries,” a role focused on working with governments to develop AI infrastructure and the surrounding ecosystem: compute capacity, workforce training, public-sector adoption, and safety/cybersecurity standards. The piece frames the move as part of OpenAI’s increasingly global, state-facing strategy, drawing parallels to Big Tech’s playbook for managing regulators and shaping policy, and suggesting OpenAI’s scale ambitions now extend far beyond product to the “rails” of national AI build-outs.
What you need to know: Frontier AI competition is shifting from models to geopolitics and infrastructure: companies are racing to become embedded partners to governments for compute, regulation, and national deployment.
Inside Mark Zuckerberg’s turbulent bet on AI
17 December 2025 | Hannah Murphy and George Hammond, Financial Times
Meta chief executive Mark Zuckerberg is pouring billions into AI infrastructure and an aggressive talent-poaching campaign as he tries to reposition the company from AI laggard to leader. The effort has been marked by internal turbulence, lay-offs, restructures, and leadership reshuffles, alongside an escalating “personal superintelligence” narrative aimed at embedding AI across products and future hardware like smart glasses. After Llama 4 underperformed versus rivals and drew criticism over benchmarking tactics, Meta is now racing to ship a new frontier model built from scratch, codenamed “Avocado”, with ambitions to match Google’s Gemini line. Investors, meanwhile, are growing wary of the scale of spending and the financial engineering required to fund new data centres and chips.
What you need to know: Big Tech’s AI race is now as much about compute, capital markets and organisational execution as model research, and Meta is testing how far money and hiring can close capability gaps.
Original link: https://www.ft.com/content/cd3c6867-2f73-417d-a299-fb91a57bfe08
Amazon overhauls AI team as chief declares an ‘inflection point’
17 December 2025 | Rafe Rosner-Uddin, Financial Times
Amazon announced a management shake-up that elevates AI model development, chips, and quantum computing into a single leadership structure, with longtime infrastructure executive Peter DeSantis overseeing the new group. The changes include the planned departure of Rohit Prasad, the executive leading Amazon’s large language model effort, and highlight AWS’s urgency to close the gap with rivals on in-house chips and foundation models. The reorg also sits alongside Amazon’s push to broaden adoption of its Trainium chips, which so far have a narrower customer base than Nvidia-based alternatives.
What you need to know: Hyperscalers are no longer “just” hosting AI; they’re reorganising around owning the full stack (models + chips + infrastructure), a shift that will influence model pricing, performance, and enterprise lock-in.
Original link: https://www.ft.com/content/f3092c2d-f428-4ff4-bdbd-9a27b12bcae2
What happens if AI data centres slip the ‘surly bonds of earth’?
17 December 2025 | Anjana Ahuja, Financial Times
A proposed Google-backed experiment to run AI data centres in space has sparked debate over how far the industry will go to meet exploding compute and energy demands. The concept would rely on solar-powered satellite clusters processing AI workloads in orbit, sidestepping terrestrial land and power constraints but creating new risks around space debris, governance and maintenance. Critics argue the idea highlights the unsustainable trajectory of AI infrastructure rather than offering a realistic solution.
What you need to know: AI’s energy appetite is forcing increasingly extreme ideas, underlining that physical limits, not just algorithms, may define the next phase of AI scaling.
Original link: https://www.ft.com/content/cc07f853-4f1d-4e69-8bfb-9220175656ab
AI Startup Edison Raises $70 Million to Speed Up Scientific Research
18 December 2025 | Rachel Metz, Bloomberg
Edison Scientific raised $70 million at a reported $250 million valuation to expand “Kosmos,” software aimed at speeding scientific research by generating hypotheses and guiding analysis using a mix of third-party frontier models and Edison’s own models. Spun out of the nonprofit FutureHouse, Edison positions its tools as going beyond chat, taking longer to run deeper research workflows, and claims it can compress weeks or months of exploratory work into hours for some tasks. High-profile backers and public praise reflect rising investor confidence in “AI for science,” even as the article notes the field is still early.
What you need to know: “AI for science” is turning into a major commercialisation wave: tools that automate hypothesis generation and literature-to-insight workflows could shift how fast labs iterate, and where competitive advantage in research accumulates.
Original link: https://www.bloomberg.com/news/articles/2025-12-18/ai-startup-edison-raises-70-million-to-speed-up-scientific-research
China boosts AI chip output by upgrading older ASML machines
19 December 2025 | Eleanor Olcott, Financial Times
Chinese chipmakers are reportedly retrofitting older ASML deep ultraviolet (DUV) lithography tools with secondary-market components, such as upgraded wafer stages, lenses, and sensors, to improve alignment precision and increase output of advanced chips used in AI systems. Because export controls restrict China’s access to ASML’s most advanced DUV and all EUV machines, fabs have relied on workarounds like multi-patterning that raise costs and reduce yields; the upgrades are said to mitigate some of those constraints. The report highlights how third-party engineering support and grey-market sourcing can expose cracks in export-control regimes designed to slow China’s AI compute progress.
What you need to know: Compute is geopolitics: China’s ability to sustain advanced chip production despite restrictions will shape the global distribution of AI training capacity and the effectiveness of export-control policy.
Original link: https://www.ft.com/content/d10398db-b8b4-40f3-8c6d-b340470f5f3c
Humanoid Robots Are Coming, As Soon As They Learn to Fold Clothes
19 December 2025 | Tim Fernholz, Bloomberg
At the Humanoids Summit in Silicon Valley, enthusiasm for “LLM-powered” robotics was high, but full-size humanoid prototypes were scarce, reflecting safety, reliability, and dexterity challenges that still block real-world deployment. Many demos used smaller, widely available platforms (including Unitree robots), while speakers noted that physical environments introduce messy edge cases that software alone can’t solve. The piece also points to strong capital and policy momentum, particularly in China, as well as a looming economic threshold where humanoids become cost-competitive with human labour.
What you need to know: Embodied AI is becoming the next major frontier: LLMs are pushing robotics toward general-purpose tasking, but real deployment depends on safety, dexterity, and system robustness, not just better language models.
Original link: https://www.bloomberg.com/news/articles/2025-12-19/humanoid-robots-are-emerging-as-the-next-big-ai-breakthrough
Roomba rival Roborock bets on AI to clean up market for robot vacuum cleaners
19 December 2025 | Gloria Li, Financial Times
Roborock is leaning into AI-powered features, including robotic arms and object recognition (even for pet mess), to differentiate in an increasingly competitive robot vacuum market. The story comes as iRobot, the Roomba pioneer, falls into bankruptcy and is taken over by a Chinese supplier, underscoring how quickly hardware categories can flip when software and iteration speed outpace incumbents. Roborock says it uses a mix of self-developed and open-source AI models to train devices for real-world tasks like picking up socks and adapting to different home environments. The piece also points to China’s advantage in integrating AI into consumer hardware at scale.
What you need to know: AI is moving from chatbots into “embedded autonomy”: real-world perception and action in consumer devices, accelerating robotics adoption beyond labs and into mass-market products.
Original link: https://www.ft.com/content/9e8dbb4e-8892-4a46-a75e-47a0facf2cc7
Looking back on a year of AI blunders
21 December 2025 | Pilita Clark, Financial Times
Pilita Clark surveys a year of high-profile AI mishaps, from BBC and newspaper hallucinations and non-existent book recommendations to fabricated legal citations, flawed medical advice and bungled consultancy reports. Across media, law, medicine and politics, she argues, AI is exposing human flaws (laziness, overconfidence, cost-cutting and political showmanship) as much as its own technical limits, with examples like Albania’s AI “minister” symbolising how leaders are outsourcing judgement to imperfect systems.
What you need to know: Underscores that current generative AI is far from reliably trustworthy in professional and civic settings, and that organisational safeguards and human oversight are lagging well behind adoption.
Original link: https://www.ft.com/content/d22867d6-af87-4727-84d7-1571d951347d
America’s top companies keep talking about AI, but can’t explain the upsides
23 September 2025 | Melissa Heikkilä, Chris Cook and Clara Murray, Financial Times
This FT analysis of S&P 500 filings and earnings calls finds that while references to AI have exploded, many large US companies struggle to articulate clear business benefits beyond hype and “fear of missing out”. Non-tech groups from Coca-Cola to Lululemon often describe AI in vague terms, emphasising potential productivity gains while devoting more space to risks such as cybersecurity, regulatory exposure and implementation failures. Only a subset of firms directly serving the AI build-out, like energy providers and equipment manufacturers tied to data centres, can point to concrete upside, and even successful AI adopters do not reliably outperform the broader market.
What you need to know: Underscores the gap between AI rhetoric and measurable impact across much of corporate America, suggesting that widespread value creation from AI may take longer and be more uneven than investor enthusiasm implies.
Original link: https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473
Nvidia to poach top staff from AI chip start-up Groq in licensing deal
24 December 2025 | Tim Bradshaw, Financial Times
Nvidia is hiring Groq founder Jonathan Ross and other senior executives as part of a licensing agreement that will integrate Groq’s low-latency inference processors into Nvidia’s “AI factory” data-centre architecture. Groq, which focuses on energy-efficient inference chips and was valued at $6.9bn as recently as September, will continue operating independently, even as the deal raises antitrust concerns about dominant players absorbing potential rivals through “soft” acquisitions and licensing structures.
What you need to know: Illustrates how Nvidia is consolidating its leadership not just in hardware but in specialised AI workloads, blurring the line between competition and ecosystem partnerships in the AI chip race.
Original link: https://www.ft.com/content/3584197e-a99a-4a06-9386-dc65cf603f45
AI upheaval shows little sign of lessening
25 December 2025 | Richard Waters, Financial Times
Richard Waters argues that 2025 cemented AI as a boom dominated by infrastructure builders and model trainers, with unprecedented spending on data centres and soaring valuations for companies like OpenAI, Anthropic and xAI. Forecasts for AI data-centre capex have been repeatedly revised upwards, and multi-trillion-dollar build-out plans are under way even as token prices for AI model usage collapse, driving rapid improvements in price-performance. Yet clear “killer apps” and broad enterprise revenue have lagged, leaving the boom resting heavily on a handful of trillion-dollar tech groups whose cash flows are being stretched by ever-higher capital demands.
What you need to know: Captures the structural tension in today’s AI economy: infrastructure and model investment are racing ahead of proven, durable demand, raising questions over how long big tech balance sheets can absorb the cost of the AI build-out.
Original link: https://www.ft.com/content/728b03a4-cef3-4ee9-a421-d681998ef7d8
AI ‘world models’ promise to reshape $190bn video games industry
25 December 2025 | Cristina Criddle, Financial Times
The article explores how “world models”, AI systems that can generate and navigate interactive 3D environments, are poised to transform the global video games industry. Google DeepMind, Fei-Fei Li’s World Labs and others are using these models to speed up the creation of game worlds, characters and non-player characters, with examples like an AI-powered Darth Vader in Fortnite and studios reporting dramatic gains in development speed. While proponents argue this will unlock new personalised experiences and free developers to focus on creativity, unions and artists warn of job losses and a flood of low-quality AI-generated content.
What you need to know: Showcases how frontier “world models” are becoming a real commercial platform for AI, pushing games toward user-generated, AI-built worlds and illustrating how generative 3D may spill over into robotics, simulation and broader virtual environments.
Original link: https://www.ft.com/content/9b1b1bc3-6573-451d-892b-e6abb819a112
Report claims Salesforce execs admit trust issues with LLM models; the company clarifies LLMs can provide trusted outcomes when connected with accurate data
26 December 2025 | Trending Desk, ET Online, The Economic Times
This piece reports that Salesforce executives have scaled back their enthusiasm for large language models after reliability problems and internal concerns about “randomness,” with engineers noting models struggle once prompts involve more than about eight instructions. The company has shifted its Agentforce product towards tightly controlled, deterministic automation, even as CEO Marc Benioff links AI agent deployments to a reduction of roughly 4,000 support roles and re-emphasises the primacy of high-quality data and governance to avoid hallucinations and “AI drift”.
What you need to know: Shows how a major enterprise vendor is pivoting from generic LLM hype to heavily constrained, data-grounded AI, signalling a broader industry move towards reliability, guardrails and workforce reshaping.
Original link: https://economictimes.indiatimes.com/news/new-updates/ai-bubble-bursting-salesforce-execs-admit-trust-issues-after-laying-off-4000-techies-now-scaling-back-
Data centres turn to aircraft engines to avoid grid connection delays
27 December 2025 | Martha Muir, Financial Times
Facing grid connection wait times of up to seven years, data-centre developers are increasingly installing on-site power using aeroderivative gas turbines (derived from jet engines) and diesel generators. Companies such as GE Vernova, ProEnergy and Cummins report surging orders as AI-hungry facilities like the Stargate data centre in Texas seek hundreds of megawatts to a gigawatt of rapid-deploy capacity, sometimes repurposing aviation hardware. Regulators are loosening rules on generator use even as analysts warn these smaller, fossil-fuelled plants are more polluting and often more expensive than grid electricity, raising environmental and cost concerns around AI’s power demand.
What you need to know: Reveals how bottlenecks in energy infrastructure are pushing AI providers toward ad-hoc, carbon-intensive power solutions, making energy availability and sustainability central constraints on scaling frontier AI.
Original link: https://www.ft.com/content/8deb1518-b650-4a21-b7d1-3e6180560056
Nvidia reportedly backs away from its effort to make its own public cloud, team reorg eases friction with customers, chipmaker shifts unit’s focus to internal R&D
December 2025 | Luke James, Tom’s Hardware
This report describes Nvidia’s decision to scale back ambitions to operate its own branded public cloud service, reorganising the DGX Cloud group into its core engineering division under SVP Dwight Diercks. Instead of competing directly with hyperscalers such as AWS and Azure, DGX Cloud will now function primarily as an internal platform to support Nvidia’s chip development and AI research. The move reflects both record GPU demand and sensitivity to partner concerns about channel conflict, as Nvidia balances ecosystem relationships with vertical-integration ambitions.
What you need to know: Signals how Nvidia is prioritising collaboration with cloud partners while doubling down on its core strengths in AI hardware and platforms, a strategic recalibration that shapes power dynamics across the AI infrastructure stack.
Original link: https://www.tomshardware.com/tech-industry/nvidia-restructures-dgx-cloud-team-refocuses-cloud-efforts-internally
AI and Cybersecurity
US has failed to stop massive Chinese cyber campaign, warns senator
12 December 2025 | Demetri Sevastopulo, Financial Times
A senior US senator has warned that Chinese intelligence continues to access American telecom networks through a sprawling cyber operation known as “Salt Typhoon”, potentially allowing surveillance of unencrypted communications nationwide. Mark Warner blamed staff cuts and fragmented oversight for the failure to contain the breach, arguing that the US response lags behind the scale of the threat. The campaign has exposed systemic weaknesses in telecom infrastructure and cyber governance, raising alarms about national security.
What you need to know: As AI amplifies cyber capabilities on both offence and defence, insecure digital infrastructure becomes a strategic vulnerability with geopolitical consequences.
Original link: https://www.ft.com/content/50e45bac-c16b-48e8-b788-e6b106be9490
Fraudsters use AI to fake artwork authenticity and ownership
21 December 2025 | Lee Harris and Josh Spero, Financial Times
The piece reports that art fraudsters are using chatbots and large language models to generate convincing fake invoices, provenance records and certificates of authenticity to support dubious insurance claims or sales. Loss adjusters and provenance researchers describe cases where multiple certificates share identical AI-generated text, forged signatures and invented references, with models “hallucinating” documentation that never existed. While insurers and experts are experimenting with AI to detect such fakes, improvements in generative tools are making it harder to spot doctored documents through simple visual inspection alone.
What you need to know: Illustrates how generative AI is not only creating new economic value but also lowering the barrier to sophisticated fraud, pressuring regulators, insurers and cultural institutions to upgrade verification and audit tools.
Original link: https://www.ft.com/content/fdfb5489-daa0-4e7e-97b7-4317514cd9f4
The data breach that hit two-thirds of a country
23 December 2025 | Song Jung-a, Financial Times
South Korean online retailer Coupang has suffered the country’s largest-ever data breach, exposing personal information from more than 33mn accounts, nearly two-thirds of the population, after hackers accessed overseas servers for months before detection. Investigators believe a former employee with privileged access exploited lingering credentials to extract customer data, prompting political backlash, executive resignations and calls for tougher cyber security enforcement. The incident has become a national wake-up call on data protection failures as digital platforms scale rapidly.
What you need to know: In the age of AI-driven personalisation and data-intensive systems, weak cyber security doesn’t just risk privacy, it undermines the data foundations AI systems depend on.
Original link: https://www.ft.com/content/df4042fa-3e56-410f-b905-4aed8fd434ac
AI Regulation and Legal Issues
AI era requires ‘totally different’ approach to regulation, says FCA boss
3 December 2025 | Martin Arnold, Financial Times
The head of the UK’s Financial Conduct Authority argues that AI’s rapid evolution makes traditional rulemaking too slow, pushing the regulator toward a more flexible, outcomes-led approach. Rather than punishing every failure, the FCA says it will focus on “egregious” issues that aren’t addressed, while encouraging firms to innovate in areas like fraud detection, customer service and risk controls. The article also highlights the FCA’s “AI live testing” initiative, designed to help firms deploy AI “safely and responsibly” with tailored support.
What you need to know: As AI capabilities change on three-to-six-month cycles, oversight is shifting from static rules to iterative supervision and testing—an approach likely to spread across other high-stakes sectors. For AI builders, this signals growing demand for auditable systems and “safe-to-try” deployment pathways.
Original link: https://www.ft.com/content/ba3b38da-8ca0-434d-b657-4fcc9383af7e
UK explores tougher laws on AI chatbots
3 December 2025 | Financial Times
UK ministers are considering stronger regulation of AI chatbots amid concerns they may expose children to harmful advice or encourage self-harm. Officials say some chatbots fall outside existing online safety laws, prompting a review of regulatory gaps. The debate reflects growing anxiety about emotionally engaging AI systems and their impact on vulnerable users.
What you need to know: Governments are shifting focus from content moderation to the psychological effects of conversational AI, signalling a new phase of AI regulation.
Original link: https://www.ft.com/content/12cc60ef-7d97-4d20-a7fd-9a28ff6bcb11
Donald Trump to issue executive order for single federal rule on AI regulation
8 December 2025 | Zehra Munir and Joe Miller, Financial Times
Donald Trump says he will sign an executive order preventing US states from individually regulating AI, arguing that fragmented rules would undermine America’s competitiveness against China. The move is backed by major AI lobbyists but has triggered opposition from Republican senators and governors who say it overreaches executive authority. Legal experts question whether such an order could survive court challenges without congressional backing.
What you need to know: Regulatory centralisation is becoming a strategic tool in the AI race. How the US resolves federal versus state authority will affect deployment speed, compliance costs and global competitiveness.
Original link: https://www.ft.com/content/47d54ca4-2ea3-4519-b860-e466ee7802b6
Nvidia can sell H200 AI chips to China, Donald Trump says
8 December 2025 | Demetri Sevastopulo and Michael Acton, Financial Times
Donald Trump announced that Nvidia will be permitted to export its advanced H200 AI chips to approved Chinese customers, reversing years of tightening export controls. The move has triggered sharp backlash from US lawmakers and security officials, who warn it could accelerate China’s military and surveillance capabilities. Supporters argue that restricting chip exports harms US companies and cedes influence over global AI infrastructure.
What you need to know: Underscores how access to compute has become a central geopolitical lever in the global AI race.
Original link: https://www.ft.com/content/ac63139d-5143-4aed-a1e4-980a06551b51
AI turbocharges litigation powers
9 December 2025 | James Paton, Financial Times
Law firms are increasingly using AI to compress time-intensive legal work—digesting huge document sets, building chronologies, drafting briefs, and interrogating deposition transcripts—turning hours of review into minutes in some workflows. The article describes safeguards emerging alongside adoption, including “verifiers” to check outputs and greater transparency about when AI is used, as firms worry about confidentiality and (especially) accuracy. It also notes the reputational and legal risks of hallucinated citations, pointing to cases where AI-generated errors made it into court filings and triggered sanctions or corrections.
What you need to know: High-stakes professional domains are adopting AI fastest where it boosts throughput on text-heavy work—but they are simultaneously building new human-in-the-loop controls to manage hallucinations and liability. This is a preview of how “AI compliance” will look in many knowledge industries.
Original link: https://www.ft.com/content/423e9bc4-227d-4bd4-87de-401132eb415a
EU opens probe into Google’s use of online content for AI models
9 December 2025 | Barbara Moens, Financial Times
The European Commission has launched an antitrust investigation into whether Google unfairly uses publishers’ and creators’ content—particularly from YouTube—to train its AI models. Regulators will examine whether Google’s practices disadvantage rival AI developers or impose unfair terms on content providers. The case is part of a broader EU push to enforce competition rules in fast-moving AI markets despite political pressure from the US.
What you need to know: Training data is becoming a regulatory flashpoint for AI. How content rights are enforced in Europe could reshape model training economics and limit how incumbents leverage platform dominance.
Original link: https://www.ft.com/content/598aee7c-0b8b-4777-bea2-4ef121dc8197
In-house legal teams test AI for automating more tasks
9 December 2025 | Yasmin Lambert, Financial Times
Corporate legal departments are increasingly using generative AI to review contracts, support litigation, and manage compliance work previously outsourced to law firms. Early results suggest incremental productivity gains rather than radical transformation, alongside growing emphasis on guardrails, human oversight, and new legal skills.
What you need to know: Offers a grounded view of AI adoption in a heavily regulated profession where accuracy and accountability are critical.
Original link: https://www.ft.com/content/e5114ad0-66ca-4a12-a2fa-7c5944fbcf99
The countdown to the world’s first social media ban for children
9 December 2025 | Financial Times
Australia is implementing a world-first ban preventing under-16s from holding accounts on major social media platforms, placing enforcement responsibility on technology companies. The move aims to curb harm linked to addictive algorithms and damaging content, but critics warn it may push teenagers toward workarounds, fake accounts or less regulated platforms. The law is being closely watched by governments worldwide.
What you need to know: AI-driven recommendation systems are increasingly seen as a public health issue, accelerating regulatory intervention into algorithmic platforms.
Original link: https://www.ft.com/content/e93ab2b2-868f-441f-bbec-0af4fa3edeb5
US state attorneys-general demand better AI safeguards
10 December 2025 | Financial Times
A coalition of 42 US state attorneys-general has urged leading AI companies to strengthen safeguards around chatbots, citing cases of emotional harm and alleged links to suicides. The letter calls for more testing, clearer safety policies and stronger protections for children, even as federal authorities seek to centralise AI regulation. The move increases legal and political pressure on AI developers.
What you need to know: Mounting legal scrutiny signals that AI safety is becoming enforceable through law, not just voluntary commitments.
Original link: https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c
Could America win the AI race but lose the war?
13 December 2025 | Tim Wu, Financial Times
Tim Wu argues the US has gone “all-in” on AI, with US tech giants spending hundreds of billions on AI infrastructure, while China is hedging across multiple “future” domains such as EVs, batteries, robotics, and clean energy. He suggests Silicon Valley’s fixation, often tied to grand narratives about AGI and exponential progress, creates risks of groupthink and a front-loaded bet that may not deliver the broadly transformative payoff investors expect. If AI’s utility proves narrower than advertised, Wu warns the US could face a destabilising crash, whereas China’s diversified industrial strategy may be less speculative.
What you need to know: This frames AI as a macroeconomic risk factor: frontier-model spending is now large enough to affect markets and national strategy, and the “AI bet” will influence where real industrial advantage accrues.
Original link: https://www.ft.com/content/12581344-6e37-45a0-a9d5-e3d6a9f8d9ba
Australia’s social media ban carries health warning for Big Tech investors
14 December 2025 | Louise Lucas, Financial Times (Lex)
Australia’s under-16 social media ban, backed by stringent age-verification requirements, is presented as a potential template for other jurisdictions, with the EU and others watching closely. The Lex column argues the intense lobbying by platforms shows what is at stake financially, given the advertising value of younger audiences, and notes enforcement may extend to biometric methods such as facial recognition. The piece draws a parallel with China’s earlier tech crackdowns and suggests Australia’s move may travel internationally more readily than past Australian tech-policy experiments.
What you need to know: Age verification is becoming a forcing function for AI: platforms may adopt more automated identity and risk systems (including biometrics), raising privacy, bias, and safety questions that intersect directly with AI regulation.
Original link: https://www.ft.com/content/7efbb1f8-c537-45fa-ac39-29a2183b6190
‘Game recognises game’: How Jensen Huang won over Donald Trump
14 December 2025 | Joe Miller and Demetri Sevastopulo in Washington and Michael Acton in San Francisco, Financial Times
Nvidia CEO Jensen Huang reportedly secured a major policy win: permission to export advanced AI chips (including H200s) to China under a framework that includes a US cut of proceeds, after intensive engagement with the White House and senior officials. The piece describes Nvidia’s rapid ramp-up of direct advocacy, arguing internally that blocking US chips wouldn’t stop China’s AI progress and would accelerate domestic competitors like Huawei. The episode underscores how export controls are being negotiated not just as security policy but as industrial strategy with huge commercial stakes.
What you need to know: Access to high-end GPUs remains a choke point for frontier AI; chip export policy is now a central lever shaping model capability, competition, and where leading AI systems can be trained and deployed.
Original link: https://www.ft.com/content/ba305968-5427-41ee-b65b-818d27f7db16
UK to push for nudity-blocking software on devices to protect children
15 December 2025 | Financial Times
The UK government is urging technology companies to build AI-powered nudity detection into operating systems to block explicit images by default. Adults would need to verify their age to disable the filters, shifting responsibility onto device makers rather than platforms alone. Critics warn of privacy risks and potential circumvention.
What you need to know: Automated content detection is becoming a regulatory requirement, not just a platform feature, raising stakes for AI accuracy and accountability.
Original link: https://www.ft.com/content/0ef79775-eadf-4cc9-b32c-e97b0eff816f
US suspends technology deal with the UK
15 December 2025 | Financial Times
Washington has paused a technology cooperation agreement with the UK amid wider trade tensions, complicating joint ambitions in AI, quantum computing and advanced research. The suspension underscores how tech collaboration is increasingly entangled with broader geopolitical and economic negotiations. British officials remain confident talks will resume.
What you need to know: International AI collaboration is fragile, with trade and politics capable of derailing even strategically aligned partnerships.
Original link: https://www.ft.com/content/afd45e58-5351-4379-8f7e-5788da3d2e20
Why China’s robotaxi industry is stuck in the slow lane
15 December 2025 | Financial Times
Despite large-scale deployments and regulatory support, China’s robotaxi companies are struggling to convince investors of their profitability. High hardware costs, thin margins and memories of failed mobility experiments weigh on valuations. Unlike US rivals framed as software platforms, Chinese operators are seen as capital-intensive service providers.
What you need to know: Autonomous driving economics matter as much as technical capability, shaping which AI business models attract long-term investment.
Original link: https://www.ft.com/content/09bba894-5150-406b-8962-7bcc4c80b31a
Is the ‘Made by AI’ label pointless?
16 December 2025 | Sarah O’Connor, Financial Times
As AI-generated content becomes harder to distinguish from human work, governments and platforms are moving toward labelling regimes, but the article argues this may be messier than it sounds. Drawing on the video game industry’s experience, it notes that AI use is often non-binary (spanning coding help, marketing assets, music, textures, and voice), making disclosures hard to standardise and even harder to police. Labelling can also become an “honesty tax” on those who disclose, while bad-faith actors evade it. Even so, transparency could create a market for human-made work, but only if definitions and enforcement are credible.
What you need to know: Provenance and disclosure are becoming core AI governance problems, and “simple labels” may not work when AI is embedded across the entire creative pipeline.
Original link: https://www.ft.com/content/bab5a9e2-3847-4242-880f-6e1ce4396be1
China’s escalation dominance over Trump
17 December 2025 | Financial Times
The article argues that US policy reversals have handed strategic advantage to China across technology, energy and security. Decisions such as easing chip restrictions and weakening climate incentives are accelerating Beijing’s gains in AI and clean technology. The result is a widening gap between geopolitical intent and outcomes.
What you need to know: AI leadership is shaped as much by policy coherence as innovation, with inconsistent strategies risking long-term competitive disadvantage.
Original link: https://www.ft.com/content/a83c33cc-7bce-4c1c-8763-e21825324e6b
Malaysia’s data centre boom and TSMC’s Arizona acceleration
18 December 2025 | Cheng Ting-Fang, Lauly Li, William Sandlund and Jens Kastner, Financial Times
The newsletter traces how Asia’s AI compute landscape is shifting as data-centre build-outs surge in new hubs such as Malaysia’s Johor, reshaping regional competition once dominated by Japan and Singapore. It also highlights Taiwan’s expanding sovereign AI infrastructure, where supercomputers built on Nvidia’s latest chips amplify the practical constraints of modern AI: noise, cooling, weight and, above all, electricity. The piece frames energy, water and land as decisive inputs in the next phase of the AI race, while noting a growing dependence on complex cross-border supply chains. Separately, it points to TSMC accelerating its US roadmap in Arizona as demand for advanced AI chips rises.
What you need to know: Frontier AI progress is increasingly determined by infrastructure and supply chains; power availability and advanced chip capacity are becoming strategic bottlenecks.
Original link: https://www.ft.com/content/6a686141-4880-412e-adf9-63e1a032aad0
Inside Tencent’s deal to use Nvidia’s best AI chips in Japan
21 December 2025 | David Keohane and Ryan McMorrow, Financial Times
The article details how Chinese tech giant Tencent is using Japanese company Datasection as an offshore route to access Nvidia’s most advanced Blackwell B200 and B300 AI chips, sidestepping US export curbs on high-end processors to China. Datasection has rapidly grown into a major “neocloud” player, signing more than $1.2bn in contracts and planning data centres with over 100,000 Nvidia processors in Japan and Australia, while drawing scrutiny from short sellers and regulators over export-control compliance and complex financing ties.
What you need to know: Shows how demand for frontier AI compute is reshaping global data-centre geography and export-control enforcement, with Chinese firms increasingly relying on offshore “workarounds” to obtain cutting-edge chips.
Original link: https://www.ft.com/content/9b47c335-9633-4560-9f57-5736c9d04bef
America’s risky bet on hydrocarbons might hurt it in the AI race
23 December 2025 | Ian Harnett, Financial Times
Ian Harnett warns that the US strategy for winning the AI race leans heavily on fossil-fuelled power for data centres, even as AI drives a surge in electricity demand. The International Energy Agency expects US data centres to account for nearly half of domestic electricity demand growth to 2030, with more than 40 per cent of their power still coming from hydrocarbons in 2035 due to policy shifts away from renewables. By contrast, China is pairing AI expansion with renewables near coastal regions, potentially lowering long-term energy costs and environmental stress, while US reliance on gas and other fossil fuels risks higher prices, greater water stress and knock-on threats to food security.
What you need to know: Highlights how energy mix and physical constraints, not just algorithms or chips, may determine long-term AI competitiveness, with the US risking structural disadvantages if it continues to power AI primarily with hydrocarbons.
Original link: https://www.ft.com/content/73e02356-adbd-4054-bd6e-bd6c8489f094
AI Market and Investment
OpenAI’s lead under pressure as rivals start to close the gap
30 November 2025 | Melissa Heikkilä, Tim Bradshaw and George Hammond, Financial Times
Once far ahead of the field, OpenAI now faces serious competition from Google and Anthropic as advances in model training, custom chips and distribution narrow the gap. Rising data centre costs and the challenge of monetising at scale are adding strain, even as ChatGPT remains the most widely used AI chatbot. Analysts warn that OpenAI’s vast infrastructure spending represents a risky bet if revenues fail to keep pace.
What you need to know: Illustrates the transition of AI from a breakthrough moment to a brutally competitive, capital-intensive industry.
Original link: https://www.ft.com/content/8881062d-ff4f-4454-8e9d-d992e8e2c4e3
AI start-ups in the UK need more than money
3 December 2025 | Nigel Toon, Financial Times
Nigel Toon argues the UK’s problem isn’t early-stage invention—Britain can create AI breakthroughs—but scaling them into global giants, as illustrated by DeepMind’s sale to Google and Graphcore’s sale to SoftBank. He points to a “middle stage” gap in early-growth support, where US venture firms provide not just capital but customer access, operational mentorship and commercial muscle. The piece calls for a UK ecosystem that helps AI start-ups become durable businesses with traction, revenue growth and credible paths to profitability—so they can raise large late-stage rounds without selling.
What you need to know: Where AI leadership ends up is shaped by scale-up infrastructure—go-to-market, distribution, and growth expertise—not just research talent. If the UK can’t consistently scale frontier companies, more foundational AI value will accrue elsewhere.
Original link: https://www.ft.com/content/5514ffc1-0525-430b-9866-5e72fb580be4
Anthropic taps IPO lawyers as it races OpenAI to go public
3 December 2025 | George Hammond, Financial Times
Anthropic has reportedly hired IPO lawyers as it explores a potential public listing as soon as 2026, positioning itself against OpenAI in the race to public markets. The piece says the company is also discussing fundraising at a valuation above $300bn and has begun internal readiness work typical of firms preparing to operate like public companies. It frames the move as a test of whether public investors will back “lossmaking research labs” with heavy training costs and hard-to-forecast financials—an issue that will shape the entire frontier-model business model.
What you need to know: Frontier AI is entering a phase where capital structure matters as much as model quality—public-market scrutiny could reshape transparency, spending discipline, and competitive dynamics. IPO readiness is also a signal that AI labs expect long-lived, platform-scale businesses rather than short product cycles.
Original link: https://www.ft.com/content/3254fa30-5bdb-4c30-8560-7cd7ebbefc5f
Anthropic’s IPO pitch: helpful, honest, harmless and hulking
3 December 2025 | John Foley, Financial Times (Lex)
This Lex column argues that Anthropic’s potential IPO story rests on two hard-to-price assets: a fast-growing enterprise business around its Claude chatbot, and a brand built on “helpful, honest and harmless” principles. It notes that investors may benchmark Anthropic closely against OpenAI on projected revenues and valuation multiples, while debating whether Anthropic’s narrower focus (primarily model-building) is a strength or a limitation versus OpenAI’s “full stack” sprawl. The column suggests that safety positioning could command a premium—or trigger scepticism—depending on whether markets value principled restraint in a hype-driven sector.
What you need to know: Safety and governance are becoming investable differentiators, not just ethics add-ons—especially as AI systems move into regulated enterprise workflows. How markets reward (or punish) “safer AI” will influence what frontier labs optimise for next.
Original link: https://www.ft.com/content/9c8c1bb7-b1aa-4c9c-9896-791998b506fd
Meta plans to slash metaverse spending as Zuckerberg shifts focus to AI
4 December 2025 | Hannah Murphy, Financial Times
Meta is preparing to cut its metaverse budget by as much as 30 per cent, scaling back a once-central strategy that failed to gain consumer traction. The retrenchment frees up capital for aggressive investment in AI infrastructure, talent and wearable devices, which Zuckerberg now sees as critical to winning the race for “superintelligence.” Investors welcomed the move as a sign of discipline after years of heavy losses in Reality Labs.
What you need to know: Signals a decisive reallocation of resources from speculative virtual worlds toward AI systems with clearer commercial and strategic value.
Original link: https://www.ft.com/content/d8e798a8-65db-44f1-8490-035b50303ee3
AI investing looks beyond the Magnificent Seven
5 December 2025 | Alice Ross, Financial Times
With mega-cap tech dominating AI headlines, the piece outlines how some investment trusts are trying to capture the AI boom through less obvious exposures—software suppliers, infrastructure plays like data centres, and companies embedded in the AI supply chain. Analysts quoted describe a move away from concentration in the “Magnificent Seven”, arguing active managers can find secondary winners and “enablers” such as compute, networking and power infrastructure. The article also flags a key tension: markets debate whether AI is a bubble, while supporters argue the real story is capability gains inside companies, not just spending.
What you need to know: Capital markets are starting to price AI as an economy-wide infrastructure build, not a single set of superstar firms—shaping where funding flows next (chips, power, data centres, enterprise software). That financing landscape influences which AI approaches scale fastest.
Original link: https://www.ft.com/content/3e66cd3b-35d5-4ed7-893f-6ae73661ae0d
Dario Amodei, ‘safe AI’ evangelist eyes Anthropic IPO
5 December 2025 | George Hammond, Financial Times
This profile traces how Anthropic co-founder Dario Amodei has emerged as both a commercial rival to OpenAI and the most prominent advocate for “safe AI” at scale. Anthropic is now generating around $10bn in annualised revenue and preparing for a possible IPO, even as critics argue its safety-first stance risks slowing innovation. The piece highlights the tension between Amodei’s mission-driven worldview and the realities of hyper-competitive frontier model development.
What you need to know: Leadership philosophy is shaping AI strategy at the frontier. How markets respond to Anthropic’s safety positioning will influence whether future AI labs optimise for restraint, speed—or both.
Original link: https://www.ft.com/content/3dd07583-21f7-42ec-be8f-78c58279ecc4
SoftBank, Nvidia looking to invest in Skild AI at $14bn valuation
8 December 2025 | Reuters
SoftBank and Nvidia are in talks to invest in Skild AI, a start-up building foundation models for robots, in a funding round that could value the company at $14bn. Skild focuses on universal “robot brains” that can operate across different machines, reflecting growing investor enthusiasm for humanoid and general-purpose robotics. While progress is rapid, experts caution that truly flexible robotic systems remain years away from mass deployment.
What you need to know: Robotics is emerging as the next major frontier for foundation models, extending AI from text and vision into the physical world. Capital is flowing early to teams that promise reusable, general-purpose intelligence for machines.
Original link: https://www.reuters.com/business/media-telecom/softbank-nvidia-looking-invest-skild-ai-14-billion-valuation-sources-say-2025-12-08/
Data centre boom sparks deals rush
9 December 2025 | Peter Barber, Financial Times
Hyperscalers including Google, Amazon, Microsoft and Meta are spending hundreds of billions of dollars on data centres as they race to secure AI capacity, triggering a surge in multibillion-dollar infrastructure and legal deals. The buildout is straining power grids, land availability and water resources, while fuelling debate over whether AI infrastructure spending is inflating an economic bubble. Some industry players argue the investments reflect durable demand for computation at unprecedented scale.
What you need to know: AI progress is now gated by physical infrastructure—power, land and cooling—not just algorithms. The data centre buildout will shape where and how fast advanced AI systems can realistically scale.
Original link: https://www.ft.com/content/42f3dec5-b8dc-49a2-aa5c-0e62ab529173
Is it a bubble?
9 December 2025 | Howard Marks, Financial Times
Veteran investor Howard Marks examines whether today’s AI-driven market enthusiasm has crossed the line from justified optimism into speculative excess. Drawing on past technology bubbles, he argues that while AI’s transformative potential is real, investor behaviour has become increasingly detached from clear revenue paths and long-term fundamentals. Marks warns that massive capital inflows may accelerate innovation but will inevitably destroy value for many participants, urging a balanced approach rather than all-in conviction or total scepticism.
What you need to know: Highlights the financial risk underpinning the AI boom, reminding policymakers, founders and investors that technological breakthroughs do not guarantee sustainable returns.
Original link: https://www.ft.com/content/353eda37-33b0-4d30-a912-a07607e278b8
Oracle shares sink as worries swirl over huge spending on data centres
10 December 2025 | Rafe Rosner-Uddin, Financial Times
Oracle’s shares fell sharply after the company disclosed a major increase in capital spending on data centres to support AI customers such as OpenAI. While long-term contracts promise future revenue, investors are increasingly concerned about rising debt levels and delayed returns. The episode highlights the financial strain placed on infrastructure providers by the explosive growth of AI compute demand.
What you need to know: Highlights the hidden financial risks behind the AI boom, particularly for companies building the physical backbone of model training and deployment.
Original link: https://www.ft.com/content/3633f277-d23b-44d0-b818-5fa3a89086cc
US tech stocks slide as fears over AI boom flare up
12 December 2025 | Financial Times
US technology stocks fell sharply after weak outlooks from companies heavily exposed to AI infrastructure spending reignited concerns about valuations. Investors are questioning whether massive investments in chips, data centres and cloud capacity will deliver sustainable returns. The pullback reflects growing market unease about the pace and profitability of the AI arms race.
What you need to know: Financial markets are beginning to test the AI narrative, forcing companies to justify not just technological ambition but economic returns.
Original link: https://www.ft.com/content/8b9519df-9154-4eb5-ab3b-d3da0115b65e
Oracle’s $300bn OpenAI deal has investors worried about its AI spending
12 December 2025 | Brody Ford and Drake Bennett, Bloomberg Businessweek
Oracle has emerged as one of the most aggressive infrastructure backers of the AI boom through a sweeping deal to supply OpenAI with vast amounts of cloud compute, committing an estimated $300bn in server rentals over time. The agreement requires Oracle to build some of the world’s largest data centre complexes, consuming gigawatts of power and millions of AI chips, despite OpenAI’s continued losses and uncertain long-term profitability. While the deal turbocharged Oracle’s valuation earlier this year, investors have grown uneasy as costs balloon, timelines slip and Oracle’s free cash flow turns negative, making the company a proxy for broader fears of an AI capex bubble.
What you need to know: This is a stress test for the AI business model: whether unprecedented infrastructure spending can be justified before clear, durable returns from AI applications emerge.
Original link: https://www.bloomberg.com/news/features/2025-12-12/oracle-s-300-billion-openai-deal-has-investors-worried-about-its-ai-spending
Is there an AI bubble and will it pop next year?
13 December 2025 | Nathan Brooker, Financial Times
Financial Times contributors and asset managers debate whether AI-related valuations have crossed into bubble territory, with sharp disagreement between market veterans and more optimistic fund managers. Some argue today’s largest AI-linked stocks have more defensible valuation anchors than the dotcom era, while others point to speculative behaviour and “froth” driven by hype and circular financing dynamics. The panel broadly agrees that macro factors, especially inflation and interest rates, could puncture sentiment, and that investors will increasingly demand proof of commercial returns on massive AI spending. Competition shocks, including breakthroughs from China, are flagged as another potential catalyst for volatility.
What you need to know: The AI story is colliding with financial reality: ROI pressure, rates, and geopolitics can reshape which AI bets survive, regardless of technical progress.
Original link: https://www.ft.com/content/21f59bee-8747-4a44-b992-336ef4c5157f
The four ‘O’s that shape a bubble
December 2025 | Financial Times
Financial bubbles, the author argues, tend to follow four recurring forces: optimism, overconfidence, overinvestment and opacity. Applying this framework to today’s AI boom, the piece highlights extraordinary expectations, massive capital flows and limited transparency around returns. While transformational technologies can justify hype, history suggests unchecked enthusiasm often ends painfully.
What you need to know: Understanding bubble dynamics helps distinguish genuine AI progress from speculative excess as investment surges.
Investors seek protection from risk of AI debt bust
14 December 2025 | George Steer, Kate Duguid, Eric Platt and Oliver Roeder, Financial Times
Trading in credit default swaps linked to major US tech groups has surged as investors hedge the risk that the AI boom turns into a bust. The rise reflects mounting unease over a wave of bond issuance used to finance AI infrastructure that may take years to pay off, particularly among firms building or securing data-centre capacity. The article highlights heightened CDS activity around companies such as Oracle and CoreWeave, as well as new CDS interest in Meta following large bond sales for AI projects. The shift signals that markets are beginning to price “AI capex risk”, not just AI upside.
What you need to know: AI’s next constraint isn’t only technical progress; it’s whether the industry can finance the compute build-out sustainably without triggering a debt and valuation unwind.
Original link: https://www.ft.com/content/c5f9380e-df86-42a9-a387-a0d5e04ad45f
China's economy stalls in November as calls grow for reform
15 December 2025 | Joe Cash, Reuters
China’s factory output growth slowed to a 15-month low and retail sales posted their weakest gains since late 2022, underscoring persistently weak domestic demand as trade-in subsidies fade and the property slump drags on household confidence. Policymakers have leaned on exports to hit an “around 5%” growth target, but partners are increasingly pushing back against China’s huge trade surplus with threatened or enacted tariff barriers. Economists and the IMF argue stimulus alone won’t fix the underlying issues, calling instead for structural reforms and a more consumption-led model.
What you need to know: AI build-outs are tightly coupled to macro conditions. Slowing growth and property stress can constrain domestic tech investment, while rising trade barriers shape access to chips, infrastructure, and cross-border AI supply chains.
Original link: https://www.reuters.com/world/china/chinas-factory-output-retail-sales-weaken-november-2025-12-15/
Investors bet on Chinese companies powering global AI build-out
16 December 2025 | William Sandlund, Financial Times
Chinese manufacturers of batteries, transformers, and energy storage systems are surging as global data-centre expansion strains legacy grids and forces operators to seek faster, cheaper power solutions. The article notes strong export-driven margins for firms like CATL and Sungrow despite tariffs, and highlights the growing reliance of US energy-storage imports on China even amid “decoupling” rhetoric. With data-centre electricity demand forecast to rise sharply by 2030, the build-out is increasingly tied to supply chains for grid equipment, batteries, and microgrids, not just chips.
What you need to know: AI scaling is becoming an energy-and-infrastructure story. Who supplies storage, transformers, and microgrids can indirectly determine where and how fast frontier compute expands.
Amazon in talks to invest more than $10bn in OpenAI
17 December 2025 | Rafe Rosner-Uddin, Financial Times
Amazon is in early talks to invest more than $10bn in OpenAI in a deal that could value the AI company above $500bn, according to people familiar with the matter. The agreement is expected to deepen OpenAI’s reliance on Amazon infrastructure, including using AWS data-centre capacity and adopting Amazon’s Trainium AI chips, building on a recently signed multiyear server-rental commitment. The discussions also underline OpenAI’s push to diversify beyond its historic dependence on Microsoft and Nvidia as compute becomes the primary bottleneck for scaling models and products.
What you need to know: The frontier race is increasingly a compute-finance flywheel. Big model labs are becoming structurally intertwined with cloud and chip suppliers, shaping which hardware ecosystems win and how quickly capability scales.
Original link: https://www.ft.com/content/4217107e-2c98-4b78-9961-67d75237fac4
UK to shift more research funding into AI and video games
17 December 2025 | Chris Smyth, Financial Times
The UK government is overhauling how public research money is allocated, sharply increasing funding for artificial intelligence and video games as part of a broader industrial strategy. UK Research and Innovation will channel £1.6bn into AI over four years, prioritising areas such as energy-efficient algorithms and next-generation models rather than competing head-on in large language models. Officials argue that concentrating funds on fewer, larger projects will better translate research into economic growth, even at the risk of more failures.
What you need to know: Governments are becoming more selective about where to compete in AI, backing targeted, high-risk research niches rather than trying to match US-scale frontier model spending.
Original link: https://www.ft.com/content/cbb102c9-dd94-479b-9c2d-bbeea2183666
Oracle’s $10bn Michigan data centre in limbo after Blue Owl funding talks stall
17 December 2025 | Tabby Kinder and Rafe Rosner-Uddin, Financial Times
Oracle’s planned 1GW Michigan data centre, intended to serve OpenAI, has hit financing uncertainty after talks with key partner Blue Owl Capital stalled. The breakdown reflects investor concerns about tougher debt terms, potential construction delays, and the growing strain of Oracle’s AI infrastructure spending spree. The article describes how data-centre funding is often structured through special purpose vehicles that own facilities and lease them to operators, meaning shifts in credit conditions can quickly change project viability. Oracle says negotiations with another equity partner are progressing, but the episode highlights rising scepticism toward debt-fuelled AI expansion.
What you need to know: The AI build-out is creating “compute megaproject” financing risk: if capital tightens, even demand from frontier labs may not guarantee data-centre delivery timelines.
Original link: https://www.ft.com/content/84c147a4-aabb-4243-8298-11fabf1022a3
US tech stocks slide as Oracle data centre setback reignites AI concerns
17 December 2025 | Peter Wells and Rachel Rees, Financial Times
US tech stocks fell sharply after Oracle lost a key backer for a planned $10bn data centre, reigniting investor anxiety over the debt-fuelled expansion underpinning the AI boom. Oracle’s shares have nearly halved since September, dragging down peers such as Nvidia and Alphabet, as markets reassess whether AI infrastructure spending has raced ahead of realistic revenue expectations. The pullback highlights how sensitive AI valuations have become to financing conditions and execution risks.
What you need to know: AI optimism is now tightly linked to capital markets; setbacks in funding or infrastructure can rapidly spill over into broader tech valuations.
Original link: https://www.ft.com/content/ce60f74f-ccc8-4bde-bbec-842041ecf8e7
Meta’s Yann LeCun targets €3bn valuation for AI start-up
18 December 2025 | Ivan Levingston and Melissa Heikkilä, Financial Times
Yann LeCun, Meta’s long-time chief AI scientist, is reportedly in early talks to raise €500mn for a new venture that could be valued around €3bn before launch. The start-up, described as focusing on “world models” that understand the physical world, targets applications like robotics and transport and builds on research directions LeCun pursued at Meta beyond pure language modelling. The company is expected to launch in January, with Alexandre LeBrun named as chief executive, and Meta positioned as a commercial partner rather than an investor. The move lands amid broader upheaval in Meta’s AI strategy as it tries to catch up with leading frontier labs.
What you need to know: The next frontier may shift from chat-first AI toward systems that learn from video and spatial data, enabling planning and embodied reasoning, a key step toward robotics-grade intelligence.
Original link: https://www.ft.com/content/d88729c0-c44f-4530-b888-bafa29ee0446
World-beating 55,000% surge in India AI stock fuels bubble fears
18 December 2025 | Chiranjivi Chakraborty, Bloomberg
Shares in little-known Indian firm RRP Semiconductor have surged more than 55,000% in under two years, despite minimal revenues and only tenuous links to the AI chip boom. The rally has been driven by retail investor hype, a tiny free float and India’s scarcity of listed semiconductor plays, prompting regulators to investigate potential market abuse. The episode has become a cautionary tale of speculative excess at the fringes of the global AI rally.
What you need to know: AI hype is spilling into less mature markets, creating bubble-like behaviour where “AI exposure” alone can outweigh fundamentals.
Original link: https://www.bloomberg.com/news/articles/2025-12-18/world-beating-55-000-surge-in-india-ai-stock-fuels-bubble-fears
Australia’s biggest pension fund to cut global stocks allocation on AI concerns
20 December 2025 | Mary McDougall, Financial Times
AustralianSuper, the country’s largest pension fund, said it plans to reduce its allocation to global equities amid signs the US tech-led AI boom may be maturing and valuations are stretched. The fund’s investment strategy head pointed to rapidly rising leverage funding AI investment and a growing pipeline of fundraising via M&A, venture capital, and listings, while noting concentration risk in major US indices dominated by megacap tech. The caution reflects a broader institutional reassessment of whether AI-driven capex can sustain returns at current market prices.
What you need to know: Investor sentiment is a leading indicator for AI scaling. If capital gets more selective, it can reshape funding for compute build-outs, chip demand, and the pace at which frontier models reach market.
Original link: https://www.ft.com/content/4be26ef3-3f06-44c0-8878-60370336581c
AI debt boom pushes US corporate bond sales close to record
23 December 2025 | Kate Duguid, Financial Times
US investment-grade companies issued about $1.7tn of bonds in 2025, close to the pandemic record, driven heavily by borrowing to fund AI data centres and energy infrastructure. AI-related financing now makes up roughly 30 per cent of net investment-grade issuance, as groups such as Meta, Alphabet, Amazon and Oracle lock in funding while credit spreads remain relatively low. Investors are increasingly worried about the mismatch between soaring capex and still-uncertain AI revenues; expectations are that 2026 issuance could set a new record, and use of credit default swaps to hedge against an eventual credit bust is rising.
What you need to know: Shows how the AI race is being financed through large-scale corporate leverage, turning AI infrastructure into a potential fault line for future credit stress if cash flows don’t materialise quickly enough.
Original link: https://www.ft.com/content/faa3d747-9a32-4219-93eb-a93c10502f06
Tech groups shift $120bn of AI data centre debt off balance sheets
24 December 2025 | Tabby Kinder, Financial Times
Big tech and AI infrastructure players including Meta, Oracle, xAI and CoreWeave have routed more than $120bn of data-centre financing through special purpose vehicles funded by private-credit giants such as Pimco, BlackRock, Apollo and Blue Owl. These off-balance sheet structures let companies lock in tens of billions in long-term debt for AI data centres while preserving headline leverage ratios, but also concentrate risk in opaque SPVs, residual value guarantees and securitised AI loans that could amplify stress if AI demand or key tenants like OpenAI falter.
What you need to know: Reveals that the AI infrastructure boom is increasingly intertwined with complex financial engineering, potentially creating hidden systemic risks if AI revenues disappoint or data-centre assets are repriced.
Original link: https://www.ft.com/content/0ae9d6cd-6b94-4e22-a559-f047734bef83
AI boom adds $500bn to net worth of US tech billionaires in 2025
26 December 2025 | Rafe Rosner-Uddin, Financial Times
US tech billionaires collectively gained more than $550bn in wealth in 2025 as markets poured money into companies tied to the AI boom. Elon Musk stayed at the top of the list with a net worth of about $645bn, while Nvidia’s Jensen Huang, Oracle’s Larry Ellison, and Google co-founders Larry Page and Sergey Brin also saw fortunes surge on the back of AI chips, data-centre expansion and model launches. Some executives have been cashing out billions in stock sales even as investors fret about a possible AI bubble and question whether costly infrastructure bets will pay off.
What you need to know: Highlights how AI is massively concentrating financial gains among a small group of platform and infrastructure owners, underlining both the scale of investor expectations and the systemic risks if those expectations prove over-optimistic.
Original link: https://www.ft.com/content/9dcd770a-1ca7-4533-980c-08c5704c9670
AI start-ups amass record $150bn funding cushion as bubble fears mount
28 December 2025 | George Hammond, Financial Times
Silicon Valley’s leading AI start-ups raised a record $150bn in 2025, as investors urged founders to build “fortress balance sheets” before a possible downturn. Mega-rounds such as OpenAI’s $41bn raise, Anthropic’s $13bn round and Meta’s multibillion-dollar investment in Scale AI pushed venture funding far above the previous peak, while high-growth companies like Anysphere, Perplexity and Ramp repeatedly tapped investors as valuations and revenues soared. The flood of capital is burning through VC funds faster than expected and setting the stage for aggressive M&A if markets wobble and weaker rivals struggle to refinance.
What you need to know: Illustrates how capital is being front-loaded into a small set of perceived “AI winners”, creating resilience for those firms but heightening bubble risk and the likelihood of rapid consolidation across the AI ecosystem.
Original link: https://www.ft.com/content/7f989b72-0722-4b0a-9a50-876417abc06f
Further Reading: Find out more from these resources
Resources:
- Watch videos from other talks about AI and Education in our webinar library here
- Watch the AI Readiness webinar series for educators and educational businesses
- Listen to the EdTech Podcast, hosted by Professor Rose Luckin here
- Study our AI readiness Online Course and Primer on Generative AI here
- Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here
- Read research about AI in education here
About The Skinny
Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.
In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.
Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.
As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.
