
THE SKINNY
on AI for Education

Issue 25, February 2026

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy, and discuss what all of it means for education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.


In my January Skinny editorial, I argued that the human side of AI matters more than the technology. I drew on the Challenger disaster to illustrate the normalisation of deviance, and on the McKinsey 88/6 gap to show that adoption without investment in people produces adoption without value.


This month, I want to propose an answer.


Study after study, from the Harvard Business Review to neuroscience journals to the companies building AI themselves, has converged on a single theme: the way we are using AI risks undermining the very cognitive capacities we need most.


This is not a counsel of despair, nor a call to reject the technology. AI is powerful, and used well, it can genuinely transform learning and work. But “used well” is doing a great deal of work in that sentence. The central argument of this editorial is that we should design AI use around cognitive objectives, not convenience or speed. If we get the design right, AI augments human capability. If we optimise only for efficiency, the evidence suggests we will erode the thinking skills that make AI useful in the first place.


In Brief: The ‘Skinny-Skinny Editorial’ 60-second version


  • The Jevons Paradox for work: A Harvard Business Review study found that AI agent use led to longer hours, higher-intensity work, and broader task scope, all without employer mandates. Workers filled every gap with AI-initiated tasks, leading to burnout. LSE’s Luis Garicano identified it as a Jevons Paradox for work effort: when everything seems immediately achievable, the temptation is to work all the time rather than less.

  • Attention was already under pressure: Average time on a single task has dropped from 2.5 minutes to 47 seconds since 2004, a trend driven by digital technology long before generative AI arrived. Nearly half of UK adults feel deep thinking has become a thing of the past. Researchers warn of the “Duolingo-isation of education”: scalable, engaging, but potentially hollow.

  • Students are offloading, not learning: Anthropic’s own analysis found students using Claude in a purely transactional way, generating answers rather than engaging in dialogue. Nearly half of teachers who used AI for grading fully delegated the task, despite recognising it was ill-suited. The pattern of use matters more than the tool itself.

  • Even Google’s AI chief says judgement is the key skill: Sir Demis Hassabis told the BBC at the AI Impact Summit that as AI writes more code, the critical capabilities become “taste and creativity and judgement.” Those are exactly the capacities that poor AI adoption habits risk degrading.

  • For learning professionals: Design AI use around cognitive objectives, not convenience or speed. The scarce capability is not AI proficiency. It is the ability to protect and develop sustained attention, critical judgement, and the willingness to think when a machine offers to think for you.


 

The ‘Full-Skinny Editorial’ 5-minute version

The Jevons Paradox comes to the classroom

 

For me, the most revealing piece of research published this month came from the Harvard Business Review. Two researchers at UC Berkeley studied how AI agents were changing work habits at a US technology company. They found that workers using agentic AI tools were working longer hours, taking on a broader range of tasks, and operating at higher intensity throughout the day. The critical detail: none of this was mandated by the employer. Workers voluntarily filled every gap between meetings with AI-initiated tasks, setting multiple projects in motion simultaneously, returning to check on each between emails and calls. By the end of the day, the AI agents had completed half a week’s work. The humans were exhausted.


LSE professor Luis Garicano gave this a name: a Jevons Paradox for work effort. Jevons observed in 1865 that making the use of coal more efficient did not reduce coal consumption; it increased it, because efficiency made coal economical for more purposes. Garicano’s insight is that the same dynamic may apply to AI and cognitive work. When everything on your to-do list suddenly seems achievable, the temptation is not to rest. It is to keep going.
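
For readers who like the mechanism spelled out: the rebound effect behind the Jevons Paradox can be sketched as a toy calculation. The numbers and the constant-elasticity demand curve below are illustrative assumptions, not figures from the studies cited; the point is simply that when a tool makes each task cheaper in time or effort, demand for tasks grows, and if demand is elastic enough, total effort rises rather than falls.

```python
def total_effort(efficiency_gain: float, elasticity: float, baseline: float = 100.0) -> float:
    """Total effort expended after an efficiency gain, under constant-elasticity demand.

    efficiency_gain: factor by which each task gets cheaper (2.0 = twice as efficient).
    elasticity: how strongly demand for tasks responds to their falling cost.
    baseline: effort expended before the efficiency gain (arbitrary units).
    """
    # Cost (in effort) per task falls by the efficiency factor.
    cost_per_task = 1.0 / efficiency_gain
    # Demand responds: number of tasks = baseline * cost^(-elasticity).
    tasks = baseline * cost_per_task ** (-elasticity)
    # Total effort = tasks attempted * effort each now requires.
    return tasks * cost_per_task

# Inelastic demand (elasticity < 1): efficiency reduces total effort, as intuition expects.
print(total_effort(2.0, 0.5))   # below the baseline of 100
# Elastic demand (elasticity > 1): efficiency *increases* total effort -- the Jevons Paradox.
print(total_effort(2.0, 1.5))   # above the baseline of 100
```

Under these assumptions the paradox reduces to a single condition: elasticity greater than one, i.e. halving the cost of a task more than doubles how many such tasks get attempted.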


The FT’s John Burn-Murdoch, writing in the AI Shift newsletter, confirmed this from personal experience. He described using agentic AI tools as exhilarating and exhausting in equal parts, noting that the knowledge that five more minutes of work could set off a task that would previously have taken an hour created a compulsion to keep going. His colleague Sarah O’Connor added an important observation from call centres: as AI handles the routine queries, the work that flows to humans is increasingly the most complex, emotionally charged, and draining. The simpler calls that once provided mental breathers have been squeezed out.


The Berkeley researchers identified a further risk they called “cognitive debt.” When AI-assisted projects move faster than the humans in the loop can track, understanding accumulates a deficit. People sign off on work they have not fully absorbed. Decisions are made on outputs they have not properly evaluated. The work gets done. The thinking does not.


This matters for education because the same dynamic is visible in our sector. Teachers are not simply being relieved of drudgery; they are often given more to do with the same hours. AI tools generate lesson plans, but someone must review them. AI marks essays, but Anthropic’s own data shows that nearly half of teachers who delegate grading to AI do so fully, despite recognising it is ill-suited. The risk is not that AI is inherently harmful. It is that without deliberate design, the default pattern of use prioritises volume over depth.


The attention crisis: a pre-existing condition that AI could worsen

 

It is important to be precise about causation here. The decline in sustained attention is not something AI created. It is a trend that has been building for two decades, driven by smartphones, social media, and an information environment that rewards brevity and novelty. But AI arrives into that weakened landscape, and the way it is currently being adopted risks accelerating the decline rather than reversing it.


Writing in the FT’s Free Lunch newsletter this month, Tej Parikh drew together a body of neuroscience research that paints a sobering picture. Since 2004, the average time a person stays focused on a single task has dropped from about 2.5 minutes to roughly 47 seconds, according to data tracked by Gloria Mark, professor of informatics at the University of California, Irvine. A 2022 survey by King’s College London found that 49 per cent of UK adults feel their attention span is shorter than it used to be. Forty-seven per cent feel “deep thinking” has become a thing of the past. Global cognitive health indices have been declining across this entire period.


The mechanism is well documented. Platforms and media optimise for shorter, more stimulating content. Audiences adapt to that rhythm. The next generation of content must be even shorter and more intense to compete. As Pierluigi Sacco, professor of biobehavioural economics at the University of Chieti-Pescara, notes: the brain adapts to the reward structure it encounters. When the dominant information environment delivers constant novelty in small, high-stimulation doses, the capacity for sustained attention does not just go unused. It becomes harder to deploy.


Niels Van Quaquebeke, professor of leadership at Kühne Logistics University, calls this the “Duolingo-isation of education”: tiny, gamified tasks, streaks, badges, and endlessly bite-sized exercises. Highly engaging. Highly scalable. And potentially hollow. A viral social media post captured the point: someone with a 1,200-day Duolingo streak could barely string sentences together when they visited Spain. The engagement was there. The learning was not.


Neuroscientist Mithu Storoni, author of Hyperefficient, warns that offloading too much cognitive effort to AI risks weakening the mental capacities for synthesis, contextual judgement, and curiosity. This is a risk, not a certainty, and the outcome depends heavily on how AI is used. But Anthropic’s own analysis suggests the default pattern is not encouraging: students are using Claude in a purely transactional way, generating assignment answers rather than engaging in the kind of dialogue that would develop understanding. Seven per cent of teachers’ prompts were for grading, with nearly half fully delegating the task to the system despite recognising it was not well suited to it.


We have been here before with a previous generation of technology. In 2011, researchers identified the “Google effect”: humans began treating the internet as an external memory store, remembering fewer easily searchable facts. At the time, some argued this freed up working memory for higher-order thinking. The evidence since then has been mixed; some studies suggest that storing less information can also lead to shallower thinking, because you have less raw material to reason with. AI extends this dynamic. It does not just store our facts. It can do our reasoning, our drafting, our analysis. The question is whether we let it replace those capacities or use it to develop them.


Even the people building these systems recognise the tension. At the AI Impact Summit in Delhi this month, Sir Demis Hassabis, head of Google DeepMind and winner of the 2024 Nobel Prize in Chemistry, told the BBC that STEM education remains important and that as AI takes over writing code, the key capabilities become “taste and creativity and judgement.” He also called for urgent research into AI threats and acknowledged that keeping up with the pace of development was “the hard thing” for regulators. His diagnosis is right. If the critical human capabilities are judgement, creativity, and taste, then we need educational approaches that actively develop those capacities. The evidence in this editorial suggests that the default trajectory does the opposite: an information environment that rewards speed over depth, engagement over understanding, and volume over quality. But this is a design problem, not an inevitability. If we are deliberate about how AI is used in learning, the paradox can be resolved.


What the safety departures signal

 

In January, I introduced the concept of the normalisation of deviance: pressing ahead while ignoring warning signs, because nothing has gone catastrophically wrong yet. A month later, the people whose job it was to prevent that from happening are walking out.


In the same week in February, senior safety staff departed both OpenAI and Anthropic with public warnings. OpenAI researcher Zoë Hitzig published an essay in the New York Times arguing that personal data, including medical fears, relationships, and beliefs, would be weaponised by the advertising model OpenAI was now pursuing. She drew an explicit parallel with Facebook. Anthropic’s safeguards lead warned of organisational pressure to override values. These are not external critics. They are the people who were employed specifically to make these systems safe, and they concluded they could no longer do so from the inside.


This continues a pattern stretching back to Geoffrey Hinton’s departure from Google. The AI safety community has lost significant talent at the moment it is needed most. And the new initiative by former OpenAI policy chief Miles Brundage, who has founded Averi (the AI Verification and Research Institute) to establish independent auditing standards for AI systems, underscores the point: the people who understand these systems best believe that internal safety mechanisms are insufficient and that independent external scrutiny is essential.


For educators, the implication is straightforward. We cannot assume that the tools we adopt have been designed with learner wellbeing as a first principle. The companies building them are losing the very people tasked with ensuring that. The duty of care does not transfer to the vendor. It remains with us.


What this means for learning professionals

 

In January, I set out three priorities: teach how AI behaves, contextualise learning in real tasks, and address the human factors. Those priorities stand. But the evidence this month points to a principle that should sit above all of them:


Design AI use around cognitive objectives, not convenience or speed.

 

This means three things in practice.

First, teach cognitive load management alongside AI literacy. If the Jevons Paradox holds for work effort, then giving people AI tools without teaching them to manage the resulting intensity is a recipe for burnout. This is not a soft skill or a nice-to-have. It is the binding constraint on whether AI adoption produces value or exhaustion. Every AI training programme should include explicit guidance on when to use AI, when to stop, and how to recognise the signs of cognitive overload. The Berkeley researchers recommend reintroducing structured breaks, reflection time, and deliberate pauses into the working day. For teachers and students, this might mean explicit “AI-off” periods within lessons, time to think without tools, or structured peer discussion that cannot be delegated.


Operational step: Build “cognitive rhythm” into AI-assisted workflows. Alternate between AI-supported tasks and tasks that require unassisted thinking. The call centre research shows what happens when you remove the mental breathers: the work becomes unsustainably intense. Do not let the same happen in your classroom or training programme.

 

Second, design for depth, not just efficiency. The Duolingo-isation of education (bite-sized, gamified, endlessly scrollable) is the enemy of deep learning. AI tools can produce fluent text, plausible lesson plans, and serviceable feedback at remarkable speed. But speed is not the same as quality, and volume is not the same as understanding. If we allow AI to compress every learning interaction into the shortest possible form, we risk producing learners who can generate but cannot evaluate, who can prompt but cannot think.


Operational step: For every AI-assisted task, ask: what cognitive work is the learner doing? If the answer is “checking output” rather than “thinking through a problem,” the task design needs to change. The Anthropic data showing students using AI transactionally is not a failure of the tool. It is a failure of task design. Use AI to make harder problems accessible, not to make easy problems disappear.


Third, exercise independent judgement on safety and wellbeing. The departures of senior safety staff from the two leading AI companies are a signal that should be taken seriously. If the people employed to make these systems safe believe internal mechanisms are insufficient, schools and training providers should not assume otherwise. Review your AI acceptable use policies. Evaluate the tools you use against the evidence, not the marketing. The Averi initiative may in time provide an independent auditing framework. In the meantime, apply the same rigour to AI tool adoption that you would to any other safeguarding decision.


Operational step: Establish a standing review of every AI tool used in your institution. Ask three questions: What data does it collect? What safeguards does the provider have in place, and who is accountable for them? And what happens to our learners if this tool changes, degrades, or disappears? If you cannot answer all three with confidence, the tool is not ready for your setting.


The courage to think slowly

 

In January, I ended with the Challenger engineers who had the courage to speak up. This month, I want to end with a different kind of courage: the courage to think slowly in an age that rewards speed.


The cognitive paradox is real, but it is not a death sentence. It is a design challenge. AI does not have to degrade our thinking. It does so when it is adopted without attention to how people actually learn, focus, and make decisions. The same technology, used with deliberate cognitive objectives, can do the opposite: it can free up time for deeper work, surface harder questions, and make complex problems more tractable.


The difference between those two outcomes is not the model, the vendor, or the prompt. It is whether someone in the room asked, before the tool was deployed: what do we want people to be thinking while they use this?


William Stanley Jevons warned us 160 years ago. Efficiency does not reduce demand. It increases it. The question for education is not whether AI will change how we think. It already is. The question is whether we will be deliberate about the direction.


The 88 per cent of organisations using AI will not all become the 6 per cent generating real value. The ones that do will not be those with the most sophisticated models or the fastest agents. They will be the ones that invested in the cognitive resilience of their people: the ability to focus, to evaluate, to resist the pull of the next notification, and to think carefully before acting.


The technology is ready. The question is whether we will design its use around the thinking that matters most.


***

 

Sources: Harvard Business Review, February 2026 (UC Berkeley AI agent study); FT AI Shift newsletter, 19 February 2026 (John Burn-Murdoch and Sarah O’Connor); FT Free Lunch, 22 February 2026 (Tej Parikh, “How technology is reshaping our minds”); Gloria Mark, Attention Span (University of California, Irvine); King’s College London attention survey 2022; Anthropic Claude usage analysis 2026; BBC News, 20 February 2026 (Sir Demis Hassabis interview, AI Impact Summit, Delhi); McKinsey 2025 Global AI Survey; Averi (AI Verification and Research Institute), February 2026.

AI News Summary

AI in Education

China’s genius plan to win the AI race is already paying off
31 January 2026 | Zijing Wu, Financial Times

China’s state-backed “genius class” high-school talent streams have produced a disproportionate share of the country’s leading AI engineers and tech founders. Graduates of the elite science programmes populate firms such as DeepSeek, Alibaba, ByteDance and Huawei, and form the backbone of China’s growing AI ecosystem. The article argues that decades of structured STEM acceleration — combined with large-scale national coordination — underpin China’s competitive positioning in frontier AI.

What you need to know: AI leadership is increasingly tied to long-term talent pipelines; China’s coordinated education-to-industry strategy is emerging as a structural advantage in the global AI race.

Original link: https://www.ft.com/content/68f60392-88bf-419c-96c7-c3d580ec9d97

 

Digital Capability and the Future of Learning
6 February 2026 | Martin Betts, LinkedIn Newsletter

Martin Betts argues that higher education is shifting from institution-centred credentials to a dynamic “capability economy” focused on verified, portable skills and learner agency. Drawing on recent podcasts and a white paper, the piece suggests AI is accelerating the move toward lifelong learning ecosystems where digital capability, rather than degrees alone, defines employability and value.

What you need to know: Highlights how AI is reshaping education markets — pushing universities toward skills verification, modular credentials and learner-controlled digital identities.

 

How Instructors Regulate AI in College: Evidence from 31,000 Course Syllabi
2 February 2026 | Igor Chirikov, UC Berkeley Center for Studies in Higher Education (Working Paper)

Analysing 31,000 syllabi from 2021–2025, Chirikov documents how instructors’ AI policies evolved from early blanket restrictions toward more nuanced, task-specific rules. The paper proposes a task-based framework—AI can displace student practice (risking skill erosion), augment practice (supporting learning), or reinstate new tasks (creating new AI-based skills). It finds AI regulation rose sharply over time, and that courses heavy in tasks where AI is strong (notably writing and coding) were more likely to regulate and to differentiate permissions by task type.

What you need to know: Higher education is rapidly becoming a governance lab for “responsible use” norms—how instructors regulate AI by task is a leading indicator for workforce skill formation in an AI-augmented economy.

Original link: https://escholarship.org/uc/item/9c51s3gs

 

Business schools search for clear AI guidelines
16 February 2026 | Andrew Jack, Financial Times

Business schools are rapidly integrating AI into teaching, assessment, and operations, but shared standards and measurable benchmarks are lagging behind adoption. The article highlights experimentation (e.g., AI-enabled interactive cases, cautious use of AI for grading support) alongside persistent risks such as hallucinations, privacy, and accusations of cheating. It also notes emerging efforts—like the Digital Education Council’s proposed benchmarking dimensions—while warning that simplistic metrics (e.g., “more AI access is better”) can be misleading and inequitable.

What you need to know: Education is a major “institutional adoption” testbed for GenAI; how universities set norms on assessment, tool access, and AI literacy will strongly influence workforce readiness and public trust in AI systems.

Original link: https://www.ft.com/content/3659b953-8a3a-4e13-b322-c973a545b836

AI Ethics and Societal Impact

Data centre groups plan lobbying blitz to counter AI energy backlash
25 January 2026 | Rafe Rosner-Uddin, Financial Times

Major US data-centre operators are launching a coordinated lobbying effort to counter growing public resistance to the energy and water demands of AI infrastructure. Executives warn that permitting delays and local opposition are slowing expansion plans. Companies intend to increase advertising and political engagement to reframe data centres as engines of economic growth and technological leadership.

What you need to know: Energy availability and public acceptance are emerging as bottlenecks for AI scaling, making infrastructure politics central to AI development.

Original link: https://www.ft.com/content/f45d45fc-c0ea-463c-8edf-38fb99ed5c05

 

Anthropic updates Claude’s founding document
27 January 2026 | Data Points, DeepLearning.AI

Anthropic has released the full text of Claude’s constitution under a Creative Commons licence, offering rare transparency into how the model is aligned. Rather than simply listing rules, the updated document explains the reasoning behind Claude’s values and serves both as a philosophical framework and a practical training tool. The constitution prioritises four goals — broad safety, ethical behaviour, compliance with company guidelines, and genuine helpfulness — and is used to generate synthetic training data such as ranked responses and conversations. Anthropic describes it as a “living document” that will evolve alongside model capabilities and external expert feedback.

What you need to know: Signals a push toward procedural transparency in model alignment, reinforcing Anthropic’s brand as the safety-first frontier lab.

Original link: https://www.deeplearning.ai/the-batch/data-points/

 

‘Humanity needs to wake up’ to dangers of AI, says Anthropic chief
27 January 2026 | George Hammond and Melissa Heikkilä, Financial Times

Anthropic chief executive Dario Amodei has issued a stark warning about the potentially catastrophic risks posed by powerful AI systems in a 20,000-word essay outlining scenarios ranging from mass job displacement to bioterrorism and authoritarian misuse. Amodei argues that AI systems could soon exceed the capabilities of leading experts, empowering individuals or regimes with unprecedented destructive potential. The intervention highlights growing tensions between rapid technological advancement and insufficient safeguards, particularly as the US government signals a lighter regulatory touch.

What you need to know: Leading AI executives are openly warning about systemic and existential risks, intensifying the debate over whether current governance structures are adequate for increasingly powerful AI systems.

Original link: https://www.ft.com/content/c3098552-7204-4a93-844c-1b8569c9dcb2

 

Why ads are coming to your AI chatbot
14 February 2026 | Cristina Criddle and Daniel Thomas, Financial Times

OpenAI has started testing advertising on ChatGPT for select users, marking a major shift from earlier reluctance to monetise through ads. The article argues that the economics of scaling AI—data-centre and compute costs, and pressure to justify high valuations—are pushing AI platforms toward ad models that previously funded “free” internet services. But ads introduce a trust trade-off: even if ads are displayed separately and private chats aren’t used for targeting, users may become more guarded and doubt the integrity of answers. The piece suggests competitors will likely follow, turning AI chat interfaces into a new battleground for search and commerce advertising.

What you need to know: Advertising is the most powerful revenue engine on the internet—but it can directly erode the trust that makes AI assistants valuable, reshaping product design, incentives, and regulation around “answer integrity.”

Original link: https://www.ft.com/content/c9acd1f7-4864-4bd7-9ada-d2e13f05b906

 

Software’s A.I. identity crisis
15 February 2026 | Sarah Kessler, DealBook (New York Times)

As powerful generative AI tools make it easier for anyone to build software, traditional SaaS companies are facing an existential reckoning. Executives worry that if AI agents can write code and resolve customer issues autonomously, the value of subscription-based software platforms could erode. The result has been a scramble by companies and investors alike to rebrand software firms as “AI companies,” even as public markets punish those seen as falling behind.

What you need to know: AI is no longer just a feature for software companies—it threatens to redefine what software is, with major implications for business models and valuations.

Original link: https://www.nytimes.com/2026/02/15/business/dealbook/software-ai-identity-crisis.html

 

Perplexity drops advertising as it warns it will hurt trust in AI
18 February 2026 | Cristina Criddle, Financial Times

Perplexity has phased out advertising tests, arguing that ads—even when labelled—risk undermining user trust in the integrity of answers. The decision runs counter to a broader industry push as AI products search for sustainable revenue, with competitors experimenting with sponsored placements, shopping features, and ad-supported free tiers. Perplexity’s stance frames trust as the scarce resource for AI search—if users suspect answers are shaped by sponsors, willingness to pay and to rely on outputs could collapse.

What you need to know: Monetisation choices are becoming product-defining: ad incentives can directly conflict with the “truthfulness” promise that AI answer engines depend on.

Original link: https://www.ft.com/content/6eec07a5-34a8-4f78-a9ed-93ab4263d43c

 

Bafta to reward ‘human creativity’ as film and TV grapples with AI
19 February 2026 | Daniel Thomas, Financial Times

Bafta has introduced “human achievement” as a guiding principle for its awards as AI tools spread across film and TV production. While AI-generated performances are barred from Bafta’s acting categories (e.g., AI avatars can’t be nominated), AI is not broadly prohibited elsewhere—reflecting both adoption pressures and unease about job displacement, authenticity, and labelling. Bafta’s chair, Sara Putt, frames the shift as protecting uniquely human creative and collaborative skills, and says the organisation will continue refining its stance after the awards amid rapid technological change and industry debate.

What you need to know: Creative industries are becoming a frontline for AI governance-by-norms—award rules, labelling schemes, and eligibility criteria are effectively turning into “soft regulation” that shapes what gets made and rewarded.

Original link: https://www.ft.com/content/25517882-92bb-46d0-b2f0-91e96a6675e2

 

Urgent research needed to tackle AI threats, says Google AI boss
24 February 2026 | Zoe Kleinman and Philippa Wain, BBC News

Google DeepMind CEO Sir Demis Hassabis told the BBC at Delhi’s AI Impact Summit that urgent research is needed on the threats posed by increasingly powerful AI systems. He emphasised “robust guardrails” against two primary risks: misuse by bad actors and the possibility of losing control as systems become more autonomous. Hassabis argued for “smart regulation” focused on real risks, while the summit exposed widening geopolitical divergence—especially the US rejecting the idea of global AI governance.

What you need to know: Even leading AI labs are signalling that autonomy and control are now core frontier risks—and global governance remains fragmented, which will shape how quickly safeguards become standard.

Original link: https://www.bbc.com/news/articles/c0q3g0ln274o

 

Agentic AI is everyone’s problem
6 February 2026 | FT Alphaville

FT Alphaville examines the market volatility triggered by minor updates to Anthropic’s Claude “Cowork” system. A seemingly routine addition of “knowledge work plug-ins” prompted widespread investor reaction, highlighting hypersensitivity around AI agents capable of autonomous task execution. The commentary argues that agentic AI raises systemic risk questions spanning labour markets, enterprise software and financial valuations.

What you need to know: Even incremental advances in AI agents can trigger outsized market reactions, underscoring both inflated expectations and unresolved concerns about automation risk.

 

InDepth: The backlash over AI ‘slop’
7 February 2026 | BBC News

The BBC reports growing public frustration with the flood of low-quality, AI-generated content — dubbed “AI slop” — across social media platforms. Ultra-realistic but fabricated videos of political leaders and public figures are proliferating, blurring the line between satire and misinformation. Journalists warn that while some dismiss it as harmless humour, the technology’s ability to fabricate plausible geopolitical announcements raises profound societal risks.

What you need to know: Illustrates the cultural and political backlash against generative AI misuse, reinforcing regulatory and platform moderation pressures.

AI Employment and the Workforce

Amazon to axe another 16,000 corporate jobs
28 January 2026 | Philip Georgiadis, Melissa Heikkilä and Rafe Rosner-Uddin, Financial Times

Amazon will cut a further 16,000 corporate roles, bringing total layoffs to 30,000 over three months as it reallocates resources toward AI investment. Executives described the restructuring as an effort to reduce bureaucracy and streamline operations. The move comes amid intensifying competition in AI infrastructure and signals how companies are offsetting massive AI capital expenditures with workforce reductions. Despite strong AWS growth, Amazon has faced pressure to adapt to the rapidly shifting AI computing landscape.

What you need to know: AI investment is reshaping corporate cost structures, with companies simultaneously expanding infrastructure and reducing headcount.

Original link: https://www.ft.com/content/c6055c9d-5229-4cfc-9f75-c7d96e021929

 

The AI Shift: Could AI make — rather than take — jobs?
29 January 2026 | Sarah O’Connor & John Burn-Murdoch, Financial Times

This edition explores whether AI might generate new forms of employment even as automation displaces existing roles. Drawing on economic research, the authors note that historically, technological change has created entirely new occupations that were previously unimaginable. While displacement risks remain real, early signals suggest AI may reshape labour demand rather than simply reduce it.

What you need to know: Reinforces that AI’s labour impact is likely to be transformative rather than purely destructive, with new job categories potentially emerging.

 

Meta officially ties employee performance to AI usage
3 February 2026 | Jyoti Mann, The Information

Meta has introduced a performance review and bonus system closely tied to employees’ use of AI tools. Its internal “Checkpoint” tracker aggregates over 200 data points, including AI-assisted code generation, error rates, and productivity metrics, to inform performance ratings. Top-ranked staff can receive up to a 300% bonus multiplier. CEO Mark Zuckerberg has framed 2026 as the year AI will fundamentally reshape work, with flatter teams and greater emphasis on high-impact individual contributors.

What you need to know: Marks a shift from optional AI adoption to institutionalised AI performance benchmarking — embedding AI usage directly into corporate incentive structures.

 

The AI Shift: Is this the ‘take off’ moment for AI agents?
5 February 2026 | John Burn-Murdoch & Sarah O’Connor, Financial Times

New productivity data suggest that the long-awaited impact of AI on software output may finally be materialising. Measures such as GitHub code uploads, app releases and web registrations show inflection points coinciding with the rollout of agentic coding tools. After months of anecdote outpacing data, large-scale indicators now appear to align with claims of rising productivity.

What you need to know: Provides early macro-level evidence that AI agents may be beginning to measurably affect productivity in the tech sector.

 

AI disrupting traditional careers may not be bad for children
7 February 2026 | Bill Gurley, Financial Times

Venture capitalist Bill Gurley argues that AI-driven disruption of established professions may offer long-term benefits by breaking rigid career pathways. Citing research showing widespread career regret, he suggests that traditional “safe” occupations were already under strain before AI’s rise. Rather than steering children toward shrinking professional funnels, Gurley advocates encouraging curiosity and passion-driven exploration. In a labour market increasingly shaped by automation, adaptability and intrinsic motivation may become more valuable than conventional credentials.

What you need to know: AI disruption is accelerating a broader shift away from linear career models, potentially reshaping how education and workforce planning are approached.

Original link: https://www.ft.com/content/4a499162-df64-44e8-ad31-c13e96344767

 

The AI Shift: What has AI done to illustrators?
12 February 2026 | Sarah O’Connor and John Burn-Murdoch, Financial Times

Generative image tools have reduced demand and fees for some professional illustrators, particularly in advertising, while failing to fully replace human creativity. Many illustrators reject the idea that AI is simply another tool, arguing it fundamentally alters authorship and ownership. Survey data show significant income losses, even as some artists cautiously experiment with AI-assisted workflows.

What you need to know: Creative labour is already absorbing AI shocks, offering an early glimpse of how automation reshapes white-collar and creative professions.

Original link: https://www.ft.com/content/ai-shift-illustrators

 

KPMG partner fined over using AI to pass AI test
16 February 2026 | Nic Fildes, Financial Times

A KPMG Australia partner was fined after using AI tools to cheat on an internal assessment about AI use, and the firm says it has caught dozens of staff misusing AI in internal exams over the past year. The incident illustrates the mismatch between ubiquitous access to powerful tools and legacy testing and training regimes, and it adds to a series of professional-services cheating scandals. Regulators and professional bodies appear constrained by self-reporting norms, leaving firms scrambling to strengthen detection and governance.

What you need to know: “AI literacy” programmes are colliding with AI-enabled shortcuts—organisations need new assessment designs and policy enforcement that assume easy access to generative tools.

Original link: https://www.ft.com/content/c30ded60-bece-45e0-981d-653e1e3e9818

 

HR teams are drowning in slop grievances
17 February 2026 | Emma Jacobs, Financial Times

Workplace lawyers and HR advisers report a surge in long, AI-generated employee grievances that are time-consuming to triage and often contain irrelevant laws, invented precedents, and overly legalistic argumentation. The article suggests generative AI is lowering the cost of producing “formal-sounding” complaints, shifting the burden onto employers and tribunals that must respond carefully even when content quality is poor. It also flags data-protection risks when employees paste confidential material into public AI tools, and recommends policies and training to detect AI-generated submissions early.

What you need to know: As text generation gets cheaper, institutions face “documentation spam” as a new operational risk—AI increases not just productivity, but also low-cost, high-volume administrative load.

Original link: https://www.ft.com/content/afc335fb-8f32-458f-9b6f-431021774002

 

Accenture combats AI refuseniks by linking promotions to log-ins
19 February 2026 | Ellesheva Kissin and Elizabeth Bratton, Financial Times

Accenture has started tracking how often some senior employees log in to its internal AI tools, with “regular adoption” becoming an explicit input into leadership promotion decisions. The move reflects a broader “carrot and stick” dynamic across consultancies, where junior staff adopt GenAI more readily while some senior figures resist changing established workflows. Accenture says the goal is to become the “reinvention partner of choice”, but the policy has drawn internal criticism from some employees who question the tools’ quality and fear coercive adoption.

What you need to know: This is a real-world signal that GenAI adoption is shifting from “optional productivity boost” to a measurable workplace expectation—especially in professional services that sell AI-enabled transformation.

Original link: https://www.ft.com/content/ac672f97-a603-4c56-afa3-4a5273d45674

AI Development and Industry

New Year Special! Hopes for 2026 from David Cox, Adji Bousso Dieng, Juan M. Lavista Ferres, Tanmay Gupta, Pengtao Xie, Sharon Zhou
2 January 2026 | DeepLearning.AI (The Batch)

The Batch opens 2026 with a “hopes” symposium: contributors argue for more genuinely open and inspectable model development, AI that shifts from interpolation to real scientific discovery, and better integration of multimodal foundation models into biomedical workflows (with interpretability and robustness treated as first-class requirements). Others focus on education and community — warning that AI detection is structurally brittle while imagining chatbots designed for group contexts rather than one-to-one assistance.

What you need to know: The agenda is widening from “bigger models” to “fit-for-purpose systems” — openness, interpretability, workflow integration, and multi-user interaction are becoming core research and product battlegrounds.
Original link: https://www.deeplearning.ai/the-batch/issue-334/

 

LLMs Go To Confession, Automated Scientific Research, What Copilot Users Want, and more…
9 January 2026 | DeepLearning.AI (The Batch)

This issue spotlights reliability and “agentic” infrastructure. One item describes training a model to produce post-hoc “confessions” that enumerate constraints and admit misbehavior — pitched as a monitoring hook for alignment failures and hallucinations. Another introduces an open protocol aimed at making AI-driven scientific experimentation more reproducible across tools, institutions, and robotic labs, while a Microsoft study suggests user intent varies sharply by device and time of day — implying chatbot design should adapt to context, not just capability.

What you need to know: AI is shifting from single-turn assistants to monitored, tool-using systems — and the plumbing (standards, logging, evaluability) is becoming as important as raw model IQ.
Original link: https://www.deeplearning.ai/the-batch/issue-335/

 

China’s red packet tussle
January 2026 | Financial Times

A Chinese “red packet” giveaway battle is being framed as an AI distribution war: major platforms are using incentives to pull users into AI-powered assistants embedded inside super-app ecosystems. The dynamic reflects a strategic shift from pure model performance toward controlling user traffic, payments, and bundled services—where the assistant becomes the front door to commerce. The piece argues that whoever wins default placement and habitual use gains leverage to shape the next generation of platform power.

What you need to know: In consumer AI, distribution is becoming destiny—super-app integration and incentives may matter as much as model quality in determining winners.

 

Claude Opus 4.6 pushes the envelope
January 2026 | Financial Times

This newsletter-style roundup highlights Anthropic’s Claude Opus 4.6 as a qualitative step in “usable context length”, paired with features such as adaptive thinking, effort controls and context compaction to sustain performance on longer tasks. It also notes OpenAI positioning GPT-5.3-Codex as a faster, more capable coding agent, and points to a wider push toward more transparent, community-driven benchmarking through decentralised evaluation efforts. The thread running through the update is that vendors are competing on reliability, controllability and long-horizon task execution—not just raw benchmark scores.

What you need to know: The competitive frontier is shifting toward agentic reliability (longer context, controllable reasoning, trustworthy evals), which directly affects real-world deployment.

 

The Briefing: Google Cloud Rockets
January 2026 | Financial Times

Google Cloud reported accelerating growth driven by demand for AI services, positioning it as one of the main beneficiaries of enterprise AI adoption. The division’s improved profitability and revenue expansion suggest that AI tools embedded in cloud offerings are translating into real commercial traction. However, competition with Microsoft Azure and Amazon Web Services remains intense, particularly around model hosting and enterprise integrations.

What you need to know: Indicates that AI services are now a primary growth engine in cloud computing, intensifying the three-way hyperscaler race.

 

Cowork, Personal Intelligence, Ads in ChatGPT and the Great Beyond
17 January 2026 | Michael Spencer and Ilia Karelin, AI Supremacy

Early 2026 has seen a surge of product launches aimed at personal and enterprise AI integration. Anthropic introduced “Cowork,” building on Claude Code’s enterprise momentum, while Google unveiled Gemini’s “Personal Intelligence,” enabling cross-platform reasoning across Gmail, Photos, YouTube and Search history. Meanwhile, OpenAI is expanding advertising inside ChatGPT. The convergence suggests major labs are racing toward persistent, ecosystem-level AI agents embedded deeply into daily workflows.

What you need to know: Competition is shifting from standalone models to ecosystem-level personal intelligence — with monetisation, memory and enterprise embedding becoming central battlegrounds.

 

Self-Driving Reasoning Models, ChatGPT Adds Ads, Apple’s Deal with Google, and more…
23 January 2026 | DeepLearning.AI (The Batch)

From Davos, the letter argues that “a thousand flowers” AI experimentation often stalls unless companies redesign entire workflows end-to-end — turning incremental automation into new products and faster service loops. Elsewhere, the issue tracks frontier deployment economics: OpenAI testing ads inside ChatGPT as it searches for sustainable monetisation, and research that uses chain-of-thought style reasoning to make autonomous driving models more interpretable and safer in simulation.

What you need to know: Two themes are converging — reasoning/agentic models are expanding into safety-critical domains, while business models (ads, bundles, platform deals) are rapidly evolving to pay for inference at scale.
Original link: https://www.deeplearning.ai/the-batch/issue-337/

 

OpenAI is about to face real competition
24 January 2026 | Michael Spencer, AI Supremacy (Substack)

As OpenAI prepares for a potential 2027 IPO, concerns are mounting over rising cash burn, slowing product execution and intensifying competition from Anthropic, Google Gemini and Chinese model providers including Qwen and DeepSeek. Traffic-share data shows ChatGPT losing ground to Gemini, while enterprise revenue growth may not be keeping pace with infrastructure spending. Rivals are also pushing aggressively into AI coding tools and hardware integrations, challenging OpenAI’s first-mover advantage.

What you need to know: Suggests competitive dynamics in foundation models are accelerating, with enterprise market share and capital efficiency becoming key battlegrounds.

 

The Briefing: Meta, Microsoft, Tesla and Apple
25 January 2026 | Martin Peers, The Information

Ahead of earnings from Meta, Microsoft, Tesla and Apple, investor focus centred on AI spending discipline and revenue uplift. Meta’s vast infrastructure ambitions—potentially hundreds of gigawatts of AI capacity—continue to unsettle Wall Street, while Microsoft faces scrutiny over whether Copilot and Azure AI can justify surging capex. Apple’s results remain tied more to iPhone cycles than AI, and Tesla confronts declining vehicle revenue amid heavy robotics and autonomy investment.

What you need to know: Highlights growing investor tension between AI investment scale and near-term financial returns among the largest tech firms.

 

Robots only half as efficient as humans, says leading Chinese producer
25 January 2026 | William Langley, Financial Times

UBTech, a major Chinese humanoid robot maker, admitted its latest machines are at most half as efficient as human workers in manufacturing settings. Despite rapid development and partnerships with companies such as BYD and Foxconn, executives acknowledged the technological and economic barriers to widespread deployment. Nevertheless, manufacturers continue ordering robots in anticipation of long-term labour substitution.

What you need to know: Suggests that embodied AI and robotics remain materially behind human productivity, tempering near-term automation expectations.

 

ASML forecasts bumper sales as AI boom drives chip demand
28 January 2026 | Tim Bradshaw, Financial Times

ASML has forecast strong sales growth driven by surging demand for advanced lithography machines used in cutting-edge AI chips. The Dutch company expects revenues of up to €39bn in 2026 as customers expand production capacity to meet AI infrastructure needs. Orders for its Extreme Ultraviolet equipment have surged, reinforcing ASML’s pivotal role in the semiconductor supply chain. Despite geopolitical constraints and market volatility, AI-driven chip demand remains robust.

What you need to know: Control over semiconductor manufacturing equipment is a strategic chokepoint in the AI ecosystem, amplifying the importance of supply-chain capacity and geopolitical dynamics.

Original link: https://www.ft.com/content/cc1eb216-9587-4efc-a7b3-fb28309aa4b4

 

Vibecoding isn’t going to kill business software services
28 January 2026 | Financial Times (Lex)

Fears that “vibecoding” — using AI prompts to generate software — will destroy traditional SaaS providers may be overstated. While tools like Anthropic’s Claude Code reduce barriers to building custom applications, major software companies such as SAP and ServiceNow continue to report resilient earnings and recurring subscription revenue. Rather than being displaced, incumbents are increasingly embedding AI into their platforms to deliver more intelligent and adaptive business systems. Falling share prices reflect lower valuation multiples rather than collapsing fundamentals.

What you need to know: AI may compress margins and valuations in SaaS, but established providers retain structural advantages in integration, reliability and enterprise trust.

Original link: https://www.ft.com/content/8edf9248-4d8a-4442-9f79-782b70fea72f

 

Why AI start-up Hugging Face turned down a $500mn Nvidia deal
28 January 2026 | Melissa Heikkilä, Financial Times

Hugging Face rejected a $500mn investment offer from Nvidia, choosing independence over additional capital amid an AI funding frenzy. The platform, which hosts 2.5mn public models and over 700,000 datasets, positions itself as a champion of open AI development in contrast to proprietary models from OpenAI, Google and Anthropic. Leadership argued that accepting a dominant investor could compromise its mission to democratise AI and resist concentration of power. The decision underscores Hugging Face’s growing influence in shaping global AI norms and the strategic tension between open and closed model ecosystems.

What you need to know: The open-versus-closed AI divide is becoming a defining strategic fault line in the industry, with implications for governance, competition and global technological power.

Original link: https://www.ft.com/content/d14419c5-7fa5-4128-9858-7f83259ca02e

 

Meta and its peers are everything, everywhere, all of the time
29 January 2026 | Lex, Financial Times

Meta’s record revenues—surpassing $200bn annually—demonstrate how deeply Big Tech now permeates both investment portfolios and the broader US economy. The five largest tech groups account for more than 40% of S&P 500 capital expenditure, driven largely by AI infrastructure expansion. While user engagement and advertising revenue remain robust, the concentration of economic power increases systemic risk should AI expectations falter.

What you need to know: Emphasises the macroeconomic weight of AI-driven capex concentration among a handful of tech giants.

 

Meta’s record sales boost shares 10% despite massive spending plans
29 January 2026 | Hannah Murphy, Financial Times

Meta reported fourth-quarter revenues of $59.9bn, up 24% year-on-year, with net income rising to $22.8bn. Strong advertising demand and AI-driven performance gains helped offset investor concerns over projected 2026 capital expenditure of up to $135bn—nearly double 2025 levels. Markets rewarded the earnings beat, signalling temporary tolerance for aggressive AI investment provided revenue growth remains strong.

What you need to know: Demonstrates that strong earnings can temporarily neutralise investor anxiety over extreme AI infrastructure spending.

 

Microsoft’s AI spending and disappointing cloud growth overshadow strong profits
29 January 2026 | Stephen Morris, Financial Times

Microsoft posted record quarterly revenue of $81.3bn and profit growth of 23%, yet shares fell after a 66% surge in capital expenditure and slower-than-expected Azure cloud growth. The company is racing rivals to expand AI infrastructure, with annual capex forecast to reach $140bn. Investors remain uncertain whether Copilot and AI services can deliver returns commensurate with the spending trajectory.

What you need to know: Reinforces market scepticism about the speed at which AI monetisation can justify hyperscaler-level capex.

 

Agents Go Shopping, Intelligence Redefined, Higher Engagement Means Worse Alignment
30 January 2026 | DeepLearning.AI (The Batch)

This edition explores the rise of sovereign AI and the geopolitical fragmentation of AI ecosystems. It argues that US export controls and shifting foreign policy are accelerating interest in open-source and open-weight models globally. The newsletter also highlights Google’s Universal Commerce Protocol, designed to enable AI agents to execute online purchases, and examines emerging research linking high user engagement with weaker model alignment.

What you need to know: AI development is becoming geopolitically fragmented, while the expansion of agent capabilities into commerce intensifies both competitive dynamics and alignment challenges.

 

Apple buys Israeli start-up Q.AI for close to $2bn in race to build AI devices
30 January 2026 | Tim Bradshaw and Michael Acton, Financial Times

Apple has acquired Israeli start-up Q.AI for nearly $2bn to bolster its push into AI-powered wearable devices. Q.AI’s technology analyses facial micro-movements to enable “silent speech”, potentially allowing users to interact with AI assistants through subtle expressions while wearing headphones or smart glasses. The deal marks one of Apple’s largest acquisitions and signals an effort to narrow the gap with competitors such as Meta and Google in AI hardware.

What you need to know: The AI race is expanding into hardware and wearables, with human–machine interface innovation becoming a key competitive frontier.

Original link: https://www.ft.com/content/49f4e2e4-3a68-4842-be67-879409d06aa1

 

What is OpenClaw?
2 February 2026 | Michael Spencer, AI Supremacy (Substack)

AI Supremacy unpacks the sudden viral rise of OpenClaw, an open-source “digital chief of staff” agent that runs locally and plugs into messaging apps (e.g. WhatsApp, Telegram and Discord). The post argues the hype cycle looked unusually synthetic, and frames OpenClaw as a live test of the “consumer agents” thesis: persistent-memory assistants that follow users across apps. It also stresses the downside—agent frameworks operating inside personal comms and OS-level contexts expand the blast radius for security failures, credential leaks and scams, especially when non-expert users deploy them.

What you need to know: OpenClaw is a preview of the agent era’s real constraint: not model capability, but secure-by-default tooling, permissions and governance when agents touch personal data and real accounts.

Original link: https://www.ai-supremacy.com/p/what-is-openclaw-moltbot-2026

 

The Briefing: xAI–SpaceX Combine
2 February 2026 | Martin Peers, The Information

Elon Musk’s decision to merge xAI with SpaceX in a $250bn deal reflects the financial pressures facing frontier AI start-ups. Musk framed the move as part of a long-term vision for space-based data centres, but sceptics question both technical feasibility and funding sustainability. Meanwhile, Oracle’s simultaneous fundraising to expand terrestrial AI capacity underscores how dependent AI developers remain on conventional cloud infrastructure.

What you need to know: Illustrates how AI start-ups are seeking unconventional financing and vertical integration strategies to survive escalating compute costs.

 

Nvidia CEO becomes the latest SaaS defender — sort of
5 February 2026 | Amir Efrati, The Information

Nvidia chief executive Jensen Huang has pushed back against fears that AI will devastate subscription software companies, arguing instead that “software is a tool” and that AI systems themselves will increasingly rely on enterprise applications. However, the emergence of powerful “super agents” capable of operating across multiple apps raises questions about the long-term value of per-seat SaaS pricing models. As tech giants race to build operating system–level AI agents, the competitive battleground may shift from individual apps to the agentic layer that orchestrates them.

What you need to know: Highlights a structural shift from SaaS-centric economics toward an “agent layer” economy, potentially disrupting enterprise software business models.

 

OpenClaw Runs Amok, Kimi’s Open Model, Ministral Distilled, and more…
6 February 2026 | DeepLearning.AI (The Batch)

The issue surveys the messy reality of open agent ecosystems: OpenClaw’s rapid adoption is paired with predictable security failures (misconfigurations, exposed keys) and a reminder that autonomy without guardrails creates new attack surfaces. On models, it highlights Moonshot AI’s Kimi K2.5 adding vision plus parallel “subagents,” and Mistral’s cascade distillation approach for producing small, efficient model families with comparatively fewer training tokens — pushing stronger AI down to laptops and phones.

What you need to know: “Agents everywhere” is arriving before “agents are safe” — and the open ecosystem is simultaneously accelerating capabilities (subagents, distillation) and stress-testing security norms in public.
Original link: https://www.deeplearning.ai/the-batch/issue-339/

 

AI agents are prompting human boom scrolling
6 February 2026 | John Thornhill, Financial Times

The launch of Anthropic’s Claude Cowork and the viral emergence of Moltbook—a social network for AI agents—have captured attention across the tech world. More than 1.5 million AI agents have interacted on the platform, generating machine-written discussions that some enthusiasts view as signs of emergent intelligence. The phenomenon illustrates the rise of “vibe coding”, where AI systems autonomously build and interact with digital environments. While playful and experimental, the trend underscores how agentic AI systems are increasingly operating with limited human oversight.

What you need to know: AI agents are evolving from passive tools to semi-autonomous actors, raising questions about control, authorship and the boundaries between human and machine creativity.

Original link: https://www.ft.com/content/b5022f40-f538-41bd-82c5-199b39924d37

 

Anthropic launches new Claude model as AI fears rattle markets
6 February 2026 | Melissa Heikkilä, Financial Times

Anthropic has unveiled Claude Opus 4.6, described as its most capable enterprise-focused model, capable of handling complex knowledge-work tasks and advanced coding. The launch comes amid market turbulence sparked by fears that AI coding tools could disrupt traditional software development and professional services. The model reportedly outperforms rivals on benchmarks relevant to finance and legal tasks. Anthropic continues to differentiate itself by targeting enterprise users rather than consumer markets.

What you need to know: Enterprise AI tools are rapidly advancing beyond experimental use, threatening established software and professional service business models.

Original link: https://www.ft.com/content/a0cd0281-8367-4ed3-9f18-038e4a9f79e0

 

The AI Coding Supremacy wars
6 February 2026 | Michael Spencer, AI Supremacy (Substack)

A wave of new coding-focused AI releases — including GPT-5.3-Codex, Claude Opus 4.6 and Qwen3-Coder-Next — has intensified competition in AI-assisted software development. With further models from DeepSeek, Zhipu AI and Moonshot AI imminent, coding has become the primary proving ground for frontier capabilities. Enterprise adoption, agentic workflows and capex expansion are converging, as firms test AI co-workers in real-world environments.

What you need to know: Coding tools are emerging as the most commercially consequential application of frontier models, driving both revenue and competitive positioning.

 

OpenAI launches ChatGPT ads
10 February 2026 | Ann Gehan, The Information

OpenAI has begun testing advertising inside ChatGPT for free and low-tier users, marking a significant shift in its monetisation strategy. Ads are matched to conversation topics rather than individual user data, with advertisers receiving only aggregate performance metrics. The move signals OpenAI’s ambition to turn ChatGPT into a consumer platform comparable to search or social media.

What you need to know: Ads in ChatGPT suggest AI assistants are evolving into monetisable distribution platforms—raising new questions about incentives, trust, and neutrality.
Original link: https://www.theinformation.com/articles/openai-launches-chatgpt-ads

 

How a ‘zombie’ chipmaker became Nvidia’s vital AI ally
11 February 2026 | Daniel Tudor, Financial Times

South Korea’s SK Hynix has transformed itself into a linchpin of the AI boom by dominating high-bandwidth memory (HBM), a critical component for training large models. Once overshadowed by rivals, the company now supplies Nvidia and Microsoft and enjoys record margins amid global shortages. Memory constraints, once a weakness, have become a strategic advantage.

What you need to know: AI performance is constrained not just by GPUs but by memory—and control of HBM has become a quiet source of power in the AI stack.
Original link: https://www.ft.com/content/3cb37eb1-97e8-44d9-9fa4-ef2a5649da8e

 

Mustafa Suleyman plots AI ‘self-sufficiency’ as Microsoft loosens OpenAI ties
12 February 2026 | Melissa Heikkilä, Financial Times

Microsoft’s AI chief Mustafa Suleyman says the company is pushing for “true self-sufficiency” by building frontier in-house models and reducing dependence on OpenAI, following a restructuring of the partnership. He frames the pivot as a bet on gigawatt-scale compute, elite training teams, and better-organised datasets — while predicting rapid workplace automation and arguing that the next wave of “humanist superintelligence” must remain under human control.

What you need to know: Big Tech’s AI stack is verticalising — the strategic advantage is shifting from “best model access” to owning models, data pipelines, and compute at scale (and controlling deployment).
 

Anthropic’s strategy
12 February 2026 | Anand Sanwal, CB Insights (Hot or Not)

CB Insights argues that Anthropic’s recent dealmaking points to a deliberate strategy: locking down the “plumbing” that makes AI deployable in high-stakes environments. Rather than focusing only on model performance, the newsletter highlights categories such as identity and permissions, security and governance, interpretability and debugging, and developer workflow capture—areas that reduce enterprise risk and make AI systems auditable and controllable. In the same issue, CB Insights flags “world models” as a rising theme (e.g. simulation advances for autonomous vehicles) and notes accelerating deployment of AI agents in document-heavy industries such as financial services.

What you need to know: The frontier is shifting from “better models” to “deployable systems”—winning AI companies increasingly differentiate through security, governance and interpretability layers that unlock regulated and mission-critical use.

Original link: https://research.cbinsights.com/anthropics-strategy

 

Claude Opus 4.6 Thinks Smarter, xAI Joins SpaceX, AI Outperforms Doctors, and more…
13 February 2026 | Andrew Ng, DeepLearning.AI (The Batch)

In this issue of The Batch, Andrew Ng reflects on Hollywood’s unease with AI—centring on concerns over consent, compensation and job displacement—while arguing that the industry also recognises AI’s inevitability in entertainment workflows. The issue’s news section spotlights the SpaceX–xAI tie-up as a potential accelerator for xAI’s financing and a long-shot bet on space-based data centres, alongside broader advances in model capability and sector-wide adoption. The overall tone: social legitimacy and governance questions are now moving in parallel with technical progress.

What you need to know: AI progress is colliding with IP, labour and trust—how creators and platforms resolve these tensions will shape what training data and media tools are viable.
Original link: https://www.deeplearning.ai/the-batch/issue-340/

 

DeepSeek’s Next Move: What V4 Will Look Like
13 February 2026 | Michael Spencer and Tony Peng, AI Supremacy

Chinese AI lab DeepSeek is preparing a major architectural overhaul with its upcoming V4 model, expected around Lunar New Year. Despite limited access to cutting-edge chips, DeepSeek and its domestic rivals have rapidly iterated on open-weight models, often outpacing Western peers in speed and efficiency. The analysis argues that China’s open-source momentum is reshaping the global AI frontier, particularly for developers building real-world products.

What you need to know: Open-weight models from China are emerging as serious frontier contenders, challenging the assumption that cutting-edge AI must be closed and US-led.
Original link: https://aisupremacy.substack.com/p/deepseeks-next-move

 

Nvidia’s upstart rivals
13 February 2026 | Dina Bass, Bloomberg Technology

While Nvidia remains dominant in AI training chips, a growing number of startups are targeting inference—the process of running models once trained. Investors and customers are experimenting with alternatives as hedges against Nvidia’s pricing power and supply constraints. The article highlights growing belief that AI hardware will not be a winner-takes-all market, particularly as inference workloads scale.

What you need to know: As AI shifts from training to deployment, inference chips may become the first real crack in Nvidia’s near-monopoly on AI hardware.
Original link: https://www.bloomberg.com/news/articles/2026-02-13/nvidias-upstart-rivals

 

OpenAI hires OpenClaw founder Peter Steinberger
16 February 2026 | Cristina Criddle, Financial Times
OpenAI has hired Peter Steinberger, founder of the viral open-source agent project OpenClaw, as it accelerates efforts to build AI systems that can take actions autonomously. OpenClaw surged by letting users run agents locally and connect them to apps like WhatsApp, Slack and iMessage to manage tasks across a person’s digital life. OpenAI says OpenClaw will remain an independent open foundation, positioning the hire as both talent acquisition and a signal that “multi-agent” workflows will become core to mainstream AI products.
What you need to know: The centre of gravity is moving from chat to action—labs are competing to own agent tooling, distribution, and the interface layer that lets models do real work across apps.
Original link: https://www.ft.com/content/45b172e6-df8c-41a7-bba9-3e21e361d3aa

 

Beijing backs brain implant push to rival Elon Musk’s Neuralink
18 February 2026 | Eleanor Olcott, Financial Times
China is accelerating brain-computer interface (BCI) development through national strategic backing, looser regulation, and mobilised capital, with several clinical trials underway. The FT reports on Shanghai start-up NeuroXess, which says a paralysed patient could control a computer cursor within days of implantation, and positions the broader push as part of Beijing’s aim to build multiple “world-class” BCI companies by 2030. The piece contrasts NeuroXess’s technical approach (surface mesh electrodes plus implanted power pack) with Neuralink’s invasive threads and highlights the race to reach higher information transfer rates for complex functions like speech.
What you need to know: BCIs are converging AI, neurotech, and geopolitical industrial policy—progress here expands the scope of “human–AI interfaces” well beyond screens, with major implications for medical AI, regulation, and national competitiveness.
Original link: https://www.ft.com/content/2c72c0e6-147d-4c53-9008-0d47cb63c085

 

OpenAI’s shopping spree
19 February 2026 | Anand Sanwal, CB Insights (Hot or Not)
CB Insights argues OpenAI is moving aggressively to “own the developer ecosystem” end-to-end by snapping up talent from fast-growing agent and coding tools, reinforcing a broader strategy of controlling key distribution and workflow surfaces. The newsletter also highlights adjacent trends: surging funding for “world models” (to let robots simulate and plan) and rapid growth in agent observability, evaluation, and governance as enterprises try to manage agent risk. It frames this as the market shifting from model novelty to deployability: toolchains, monitoring, and control layers are becoming the differentiation battleground.
What you need to know: The competitive frontier is increasingly the AI operating layer—who controls developer workflows, agent reliability tooling, and distribution channels will shape which models and platforms become defaults.
Original link: https://research.cbinsights.com/openais-shopping-spree

 

Amazon service was taken down by AI coding bot
20 February 2026 | Rafe Rosner-Uddin, Financial Times
Amazon Web Services suffered a significant disruption after engineers allowed an AI coding tool (Kiro) to make changes that ultimately led to an outage, according to people familiar with the incident and an internal postmortem. Employees said it was at least the second recent disruption where Amazon’s AI tools were implicated, raising concerns about pushing agentic coding assistants into production environments. Amazon argues this was “user error, not AI error,” pointing to permissions and authorisation settings—yet the episode underscores how quickly autonomous tooling can amplify operational risk when guardrails and approval workflows are weak.
What you need to know: As “agentic” developer tools gain autonomy, reliability engineering and access-control discipline become as important as model capability—one mis-scoped permission can translate into real downtime.
Original link: https://www.ft.com/content/00c282de-ed14-4acd-a948-bc8d6bdb339d

 

The New Open-Weights Leader, Big AI’s Political Influence, Predicting Illness, and more…
20 February 2026 | Andrew Ng, DeepLearning.AI (The Batch)

This issue of The Batch spans three fronts where AI progress is accelerating simultaneously: models, power, and application. Z.ai’s GLM-5 emerges as the new leading open-weights large language model, narrowing the performance gap with proprietary systems on reasoning, coding, and long-horizon agentic benchmarks, and underscoring how Chinese labs are setting the pace in open models. In parallel, the newsletter documents how major AI companies are spending heavily on lobbying in the US to shape regulation, infrastructure policy, and chip export controls, signalling a shift from technical competition to political influence. Finally, researchers introduce SleepFM, a large multimodal system trained on hundreds of thousands of hours of sleep data that can predict the onset of serious diseases years before symptoms appear, highlighting AI’s growing role in preventative medicine.

What you need to know: The frontier of AI is no longer just about smarter models — it’s about who controls openness, compute, and regulation, and how advances are translating into high-impact real-world domains like healthcare.
Original link: https://www.deeplearning.ai/the-batch/issue-341/

 

China’s AI labs unleash new models and bubble tea to lure in customers
21 February 2026 | Eleanor Olcott and Cristina Criddle, Financial Times
Chinese AI companies have launched a wave of new models around Lunar New Year, pairing releases with consumer giveaways and promotions to drive adoption. The article highlights ByteDance’s advances in video generation and Alibaba’s push with Qwen, including incentives routed through agent-like app experiences. The piece suggests Chinese labs are increasingly optimising for product usefulness and deployment rather than treating frontier performance as the sole objective, and that the battle is intensifying to attract developers and everyday users.
What you need to know: This shows how the AI race is moving into “consumer distribution warfare”—models compete not just on capability but on productisation, incentives, and user acquisition at scale.
Original link: https://www.ft.com/content/a3f9cd15-0217-4a5d-85b2-89738a5fce70

AI Regulation and Legal Issues

Governments vs. Grok, Meta buys agent tech, healthcare chatbots, and more…
16 January 2026 | Andrew Ng, DeepLearning.AI (The Batch)

Andrew Ng argues that opposition to data centre construction—often driven by fears about emissions, electricity prices, and water use—is largely misplaced. He contends that hyperscale data centres are far more efficient than dispersed enterprise compute and are essential to AI progress. Blocking them, he warns, could slow innovation while doing little to help the environment.

What you need to know: AI progress depends on massive compute buildout, and public resistance to data centres could become a hidden bottleneck on AI development.
Original link: https://www.deeplearning.ai/the-batch/issue-336/

 

CMA targets Google AI overviews in move to loosen search dominance
28 January 2026 | Suzi Ring and Tim Bradshaw, Financial Times

The UK Competition and Markets Authority is proposing measures to limit Google’s control over AI-powered search services. The regulator seeks to allow publishers to opt out of AI-generated overviews and demands clearer attribution of source material. The move follows Google’s designation under the UK’s new digital markets regime and signals closer scrutiny of how AI alters search competition.

What you need to know: AI-generated search summaries are becoming a regulatory flashpoint, with competition authorities stepping in to shape how generative AI integrates into dominant platforms.

Original link: https://www.ft.com/content/5b6881e5-81a6-4497-928e-58b3706bb2eb

 

Agents Go Shopping, Intelligence Redefined, Better Text in Pictures, and more…
30 January 2026 | Andrew Ng, DeepLearning.AI (The Batch)

DeepLearning.AI’s The Batch frames “sovereign AI” as a growing response to shifting geopolitics and export controls, arguing that nations increasingly want AI capability they can’t be cut off from. It highlights how open-source ecosystems and open-weight models are becoming a practical path to resilience, with countries investing in domestic foundation models and compute while still depending on globally interlinked supply chains. The issue also points to momentum behind non-US open-weight models as adoption spreads internationally.

What you need to know: Sovereign AI is becoming a major driver of open-weight model investment and adoption—shaping where frontier capability is built and who controls access.
Original link: https://www.deeplearning.ai/the-batch/issue-338/

 

OpenAI accuses Musk’s xAI of destroying evidence in court fight
3 February 2026 | Madlin Mekelburg, Bloomberg

OpenAI has accused xAI of “systematic and intentional destruction” of evidence in an antitrust case linked to Musk’s lawsuit over Apple’s integration of ChatGPT. The filing alleges xAI employees used auto-deleting messaging tools despite legal obligations to preserve communications. The dispute deepens an already high-profile rivalry between Musk and OpenAI chief Sam Altman.

What you need to know: Highlights intensifying legal and competitive conflict at the frontier of AI, with potential implications for platform partnerships and antitrust scrutiny.

 

The battle of the A.I. super PACs
13 February 2026 | Andrew Ross Sorkin, DealBook (New York Times)

Rival AI companies are now openly fighting over how artificial intelligence should be regulated in Washington, using competing super PACs to influence lawmakers. Anthropic has backed a group pushing for stricter AI safety rules, while OpenAI-linked interests support efforts favouring lighter-touch federal oversight over fragmented state regulation. The clash highlights deep ideological divisions within the AI industry as its political and economic power grows.

What you need to know: AI governance is becoming a political battleground, with industry leaders shaping regulation in ways that could determine who wins—or is constrained—in the next phase of AI development.
Original link: https://www.nytimes.com/2026/02/13/business/dealbook/ai-super-pacs.html

 

India seeks to broker consensus on global ‘AI commons’
15 February 2026 | John Reed and Krishn Kaushik, Financial Times
India is proposing a “global AI commons” approach that would widen access by sharing use cases, datasets, standards, and safety norms—framed as a counterweight to frontier AI being controlled by a small set of US and Chinese firms. Officials argue the commons could help the global south adopt AI in sectors like education, health, and agriculture, and emphasise interoperability and diffusion at scale. The article situates this as India leveraging its “digital public infrastructure” reputation (India Stack) to claim a convening role in global AI governance debates.
What you need to know: Watch “commons” language closely—countries are starting to treat AI capabilities, data, and standards as public infrastructure rather than purely private assets, which could reshape international norms and access.
Original link: https://www.ft.com/content/690c9e4c-3d0d-4337-8755-4391f3e7e843

UK to tighten online safety laws to include AI chatbots
16 February 2026 | Mari Novik, Financial Times

The UK government plans to close a legal loophole so AI chatbots such as Grok, Gemini and ChatGPT fall clearly within the scope of the Online Safety Act, alongside social media platforms. The move follows a deepfake scandal involving Grok and comes with a political warning that “no platform gets a free pass” on illegal content. Proposed amendments would allow the government to require chatbot operators to protect users from illegal content, backed by Ofcom’s enforcement powers, including major fines.
What you need to know: Governments are shifting from regulating “platforms” to regulating “AI systems” directly—chatbots are becoming first-class regulatory targets with compliance obligations similar to social networks.
Original link: https://www.ft.com/content/15917aa4-2d40-49be-85c3-da395b16e7f1

Trump leans on Utah Republicans to scrap AI safety bill
16 February 2026 | Joe Miller, Financial Times

The White House has urged Utah Republicans to drop a bill that would require leading AI developers to publish public safety plans, including cybersecurity mitigation, child-safety planning, and whistleblower protections. The memo escalates a broader clash between federal AI strategy—framed around competitiveness and reducing “burdens”—and state-level efforts (including in conservative states) to impose safeguards. The article situates this within a wider push to deter states from passing their own AI rules, including threats to withhold federal funds and an AI litigation task force.
What you need to know: US AI governance is entering a federal–state power struggle; the regulatory environment for frontier models could diverge sharply depending on whether states are allowed to set their own safety rules.
Original link: https://www.ft.com/content/b04fc3d5-c916-4ac8-ab4f-a65a9f4e60c5

 

India seeks a ‘Delhi Declaration’ on AI
17 February 2026 | Veena Venugopal, Financial Times (India Business Briefing)
At its New Delhi AI summit, India is pushing a “Delhi Declaration” focused on using AI for practical social outcomes (e.g., education, health, agriculture), shifting emphasis away from the prior summit cycle’s heavier safety-and-risk framing. The newsletter positions the summit as a diplomatic success—drawing major leaders and top AI executives—while arguing India still lags in the fundamentals of competing in frontier AI (R&D spend, compute, central coordination, and a scaled skills pipeline). The core tension: convening global AI dialogue versus building the domestic capacity to lead technologically.
What you need to know: Global AI governance is fragmenting into competing narratives—risk/safety, innovation/industrial policy, and “AI for development”—and India is trying to set the agenda for the global south.
Original link: https://www.ft.com/content/f4d6ef7d-9a26-4e43-9cf8-ad3874993e12

 

EU privacy watchdog opens probe into Elon Musk’s X over sexualised AI images
17 February 2026 | Hannah Murphy, Financial Times
Ireland’s Data Protection Commission has opened a large-scale GDPR inquiry into X over the creation and spread of sexualised AI-generated images linked to Grok, amid broader regulatory scrutiny of the platform. The probe focuses on whether X complied with core GDPR obligations around lawful processing, privacy-by-design, and risk assessment for high-risk features. The article situates this alongside parallel investigations and growing political pressure on platforms that enable generative sexual abuse material, especially when real people (including minors) are implicated.
What you need to know: This is a regulatory escalation that turns “AI safety” into concrete privacy and compliance exposure—platform-integrated chatbots are now being judged on governance, risk controls, and user-data handling, not just content moderation.
Original link: https://www.ft.com/content/3c720c9b-d2b6-41b3-a7b7-960b5f1fb94b

 

Mark Zuckerberg Takes the Stand in Landmark Social Media Addiction Trial
18 February 2026 | Eli Tan, The New York Times
Mark Zuckerberg testified in a major trial accusing Meta and other platforms of engineering addictive experiences that harm young users. The reporting highlights allegations that internal documents show awareness of youth risk while product decisions continued to optimise time-spent, alongside Meta’s defence that plaintiffs’ harms stem from broader life circumstances. The case sits within a widening wave of litigation and policy action globally on youth safety, mental health impacts, and platform design choices.
What you need to know: As platforms integrate generative AI (recommendation, chatbots, creation tools), legal scrutiny of “engagement-maximising” design is becoming a core constraint on how AI-driven consumer products evolve.
Original link: https://www.nytimes.com/2026/02/18/technology/mark-zuckerberg-tech-addiction-trial.html

 

India’s AI ambitions hit limits at global summit
22 February 2026 | Krishn Kaushik, Financial Times
India’s push at the Global AI Summit to widen access—pressuring major labs to open-source models for social applications—met resistance from the US and leading tech groups, underscoring the difficulty of building global consensus. The piece highlights a US rejection of global AI governance and notes India’s structural constraints: insufficient large-scale compute, limited frontier model leadership, and the challenge of defending its IT-services sector from automation. Despite that, the summit reportedly secured large investment pledges tied to data-centre buildout, suggesting India is trying to buy its way into the AI era through infrastructure.
What you need to know: This is the “compute geopolitics” story in action—countries can advocate openness, but without chips, power, and data centres, they struggle to influence the frontier’s direction.
Original link: https://www.ft.com/content/5c26f2f6-c857-407c-93fe-7f59aa88c8f4

AI Market and Investment

Big Tech shares slide for third day with AI stocks under pressure
January 2026 | Financial Times

Big Tech stocks extended their losses for a third consecutive session as investor enthusiasm for AI-related companies cooled. Enterprise software and AI infrastructure groups were among the hardest hit, with markets reacting to concerns about stretched valuations and the sustainability of current spending levels. The pullback follows heightened volatility triggered by renewed scrutiny of AI business models and expectations for near-term returns.

What you need to know: Markets are beginning to differentiate between hype-driven valuations and sustainable AI earnings, signalling a more selective phase in the AI investment cycle.

 

Big Tech’s ‘breathtaking’ $660bn spending spree reignites AI bubble fears
January 2026 | Financial Times

The largest US technology groups are projected to spend more than $660bn on capital expenditure and R&D, much of it tied to AI infrastructure, data centres and chip procurement. While executives argue the outlays are essential to secure competitive advantage in generative AI, some investors warn that the scale of investment resembles past tech bubbles. Analysts note that returns remain uncertain, especially as demand visibility for enterprise AI applications remains uneven.

What you need to know: AI competition is now defined by capital intensity; however, escalating spending is reviving fears of overinvestment and a potential valuation correction.

 

Businesses fear blowback from Saudi-UAE rift
January 2026 | Financial Times

Companies operating across the Gulf are reassessing AI and technology partnerships amid rising tensions between Saudi Arabia and the UAE. The rivalry, which spans energy, logistics and emerging technologies, risks complicating cross-border data strategies and sovereign AI ambitions. Firms worry that geopolitical friction could fragment regional tech ecosystems and disrupt joint AI investment initiatives.

What you need to know: Sovereign AI strategies are increasingly shaped by regional geopolitics, making political alignment as critical as technical capability.

 

The Briefing: Amazon’s Capex Shock
January 2026 | Financial Times

Amazon startled investors with the scale of its AI-related capital expenditure plans, signalling tens of billions more in spending on data centres and AI infrastructure. While the company argues the investment is necessary to maintain cloud leadership and support generative AI workloads, markets reacted nervously to the strain on margins and free cash flow. The move reinforces how hyperscalers are committing unprecedented sums to AI capacity amid uncertain demand visibility.

What you need to know: Shows that AI infrastructure spending is reaching levels that meaningfully affect valuations and capital allocation strategies across Big Tech.

 

Tech stocks rally strongly after three days of heavy selling
January 2026 | Financial Times

US technology stocks rebounded sharply after three consecutive days of heavy losses driven by fears that AI would disrupt software and analytics companies. Investors rotated back into hyperscalers and chipmakers following earlier panic over AI-driven automation of professional services. The rally suggests markets remain volatile but highly sensitive to incremental AI developments, with sentiment oscillating between disruption fears and growth optimism.

What you need to know: AI’s second-order effects on software, services and professional industries are now directly influencing equity markets, highlighting the sector’s systemic financial importance.

 

AI Report Nuggets and Commentary Early 2026
20 January 2026 | Michael Spencer, AI Supremacy
This newsletter curates charts and excerpts from recent AI and macro reports (venture, policy, and infrastructure) to argue that the AI boom is increasingly defined by compute buildout, capital allocation, and hard constraints rather than model novelty alone. Key themes include concentration of market value in AI-linked firms, a shift from training-heavy to inference-heavy economics, and looming bottlenecks in power and memory/bandwidth (“memory wall”). It also anticipates consolidation via M&A and strategic equity stakes, while questioning whether AI returns will justify the scale of spending.
What you need to know: The next phase of frontier AI competition is increasingly about infrastructure scarcity and financing capacity—power, chips, bandwidth, and capex cycles will shape what progress is feasible.
Original link: https://www.ai-supremacy.com/p/ai-report-nuggets-and-commentary-2026-ai-trends

 

I Went Through Every Chart On The Credit Cycle – This Won’t End Well
26 January 2026 | Capital Flows

This macroeconomic analysis argues that the credit cycle is entering a compressed late stage, with mounting risks across rates, spreads and equities. The author reviews a comprehensive slide deck of indicators suggesting future financial stress may unfold rapidly. While not AI-specific, the broader context includes heightened risk exposure in technology equities and leveraged markets.

What you need to know: AI-driven market exuberance is unfolding within a fragile macro credit environment — amplifying systemic risk if capital conditions tighten.

Original link: https://www.capitalflowsresearch.com/p/i-went-through-every-chart-on-the

 

FJDynamics’s James Wu: Building robot farmers
28 January 2026 | Zijing Wu, Financial Times
James Wu, founder of FJDynamics and former chief scientist at DJI, argues that practical AI progress will come faster from specialised robots in “forgotten” labour-intensive sectors like agriculture than from near-term humanoids or AGI narratives. He describes how real-world complexity and corner cases make general-purpose robotics far harder than lab demos suggest, and explains FJDynamics’ focus on demand-specific automation that solves concrete farm problems (feeding, cleaning, maintenance). The interview frames robotics as an economic and demographic necessity as labour shortages rise, with AI serving as a tool for targeted mechanisation.
What you need to know: “Physical AI” is bottlenecked by messy reality, not model benchmarks—specialised robotics is likely to scale before general humanoids, reshaping where AI creates measurable productivity.
Original link: https://www.ft.com/content/8e6c58eb-9133-4574-8bbc-d4476289021b

 

SoftBank close to agreeing additional $30bn investment in OpenAI
28 January 2026 | David Keohane and George Hammond, Financial Times

SoftBank is close to agreeing an additional $30bn investment in OpenAI as Masayoshi Son deepens his bet on the ChatGPT maker. The funding would form part of a broader round that could raise up to $100bn and value OpenAI at around $750bn. Despite already being OpenAI’s largest investor, SoftBank continues to expand its AI exposure through infrastructure, chip development and data centre acquisitions. The move comes as OpenAI faces intensifying competition from Anthropic and Google, rising compute costs, and pressure to justify vast infrastructure commitments reportedly totalling more than $1tn over the coming decade.

What you need to know: The scale of capital required for frontier AI is reshaping global investment flows, reinforcing AI as a macroeconomic and geopolitical asset class rather than merely a technology sector.

Original link: https://www.ft.com/content/238a89ce-61c8-445b-98c8-aac0567e3716

 

OpenAI’s too-big-to-fail fundraising
30 January 2026 | FT Due Diligence, Financial Times

OpenAI is seeking to raise up to $100bn at a $750bn valuation, deepening financial ties with Nvidia, Microsoft, Amazon and SoftBank. The funding round underscores how closely intertwined major tech companies have become with OpenAI’s success, from cloud contracts to chip demand. With markets jittery about AI infrastructure spending and bubble risks, any stumble in fundraising could reverberate across tech valuations.

What you need to know: OpenAI’s capital needs now pose systemic risk to parts of the tech ecosystem, making its fundraising outcomes strategically significant beyond the company itself.

 

Short Thoughts: February 2, 2026
2 February 2026 | Michael Burry, Substack

Investor Michael Burry analyses turbulence across crypto, precious metals and AI-related equities, warning that speculative flows and leveraged ETF structures could amplify volatility. He highlights how Bitcoin’s correlation with equities and ETF outflows may trigger forced liquidations, with knock-on effects for tokenised commodities markets. While not exclusively focused on AI, the commentary reflects broader fragility in tech and speculative asset valuations amid AI-driven capital expenditure.

What you need to know: Financial instability in crypto and AI-linked assets could spill into broader tech markets, complicating funding and valuation assumptions underpinning AI expansion.

 

DealBook: Musk’s mega-merger
3 February 2026 | Andrew Ross Sorkin, The New York Times

Elon Musk merged SpaceX and xAI in an all-stock transaction valuing the combined enterprise at $1.25tn, creating what he describes as a vertically integrated AI and space infrastructure giant. Musk argues that space-based data centres are the only long-term solution to terrestrial power and land constraints. Critics question the economic viability, technological hurdles, and potential antitrust implications if Musk controls orbital compute infrastructure.

What you need to know: Represents a bold bet that compute scarcity will drive AI infrastructure into space — reframing AI scaling as an aerospace challenge as much as a software one.

 

SpaceX acquires xAI for $250 billion
3 February 2026 | Jessica E. Lessin, Theo Wayt & Katie Roof, The Information

SpaceX has acquired xAI in a $250bn stock deal, creating a vertically integrated entity combining rockets, satellite internet, AI compute ambitions and real-time data platforms. Elon Musk framed the merger as the foundation for space-based AI data centres, claiming orbital infrastructure could deliver cheaper compute within several years. The deal strengthens Musk’s control over both AI model development and compute infrastructure.

What you need to know: Signals a new phase of vertical integration in AI, linking model development directly to infrastructure and even space-based compute ambitions.

 

US stocks drop on fears AI will hit software and analytics groups
4 February 2026 | George Steer, Daniel Thomas and Philip Stafford, Financial Times

US tech stocks fell sharply after Anthropic launched new legal automation tools within its Claude Cowork platform, sparking concerns that AI could erode the business models of analytics and professional services firms. Companies such as S&P Global, Intuit, Equifax and Gartner suffered double-digit losses, while European information providers including Relx and LSEG also declined. Investors fear that if AI tools can automate legal, consulting and financial workflows, subscription-based analytics providers may face structural disruption.

What you need to know: AI is beginning to move from productivity enhancement to direct substitution in high-margin knowledge sectors, creating ripple effects across financial markets.

Original link: https://www.ft.com/content/48ec5657-c2e7-4111-a236-24a96a8d49e7

 

Shares of private capital giants sink on worries AI risks hitting growth
5 February 2026 | Eric Platt & Antoine Gara, Financial Times

Private equity groups including Ares, KKR and Blue Owl warned that AI-driven volatility in technology markets could delay asset sales and slow fundraising. Investors fear AI disruption may undermine valuations of software businesses—a core investment area for private capital over the past decade. Share prices fell as executives signalled cautious outlooks for 2026.

What you need to know: Indicates that AI disruption risk is beginning to affect not just tech stocks but the private capital ecosystem built around software growth.

 

Digital Capability and the Future of Learning
6 February 2026 | Martin Betts, LinkedIn Newsletter

Martin Betts argues that higher education is shifting from institution-centred credentials to a dynamic “capability economy” focused on verified, portable skills and learner agency. Drawing on recent podcasts and a white paper, the piece suggests AI is accelerating the move toward lifelong learning ecosystems where digital capability, rather than degrees alone, defines employability and value.

What you need to know: Highlights how AI is reshaping education markets — pushing universities toward skills verification, modular credentials and learner-controlled digital identities.

 

A crunchy week for chipmakers
6 February 2026 | Financial Times News Briefing

Semiconductor stocks were swept up in a broader tech sell-off following Amazon’s massive AI spending plans and renewed scrutiny of chip supply chains. The volatility reflects investor anxiety over capital expenditure levels, export controls on AI chips to China and the sustainability of AI-driven demand. Industry leaders have pushed back against fears of overheating, but markets remain sensitive to any signs of imbalance in the AI hardware ecosystem.

What you need to know: AI enthusiasm is now tightly linked to semiconductor market stability, making chipmakers central to both AI growth and financial volatility.

Original link: https://www.ft.com/content/b383cfa2-0fd0-48ed-b623-4e348610ae54

 

Amazon shares sink as it prepares $200bn AI spending blitz
6 February 2026 | Rafe Rosner-Uddin, Financial Times

Amazon’s stock fell sharply after the company unveiled plans to spend $200bn on capital expenditure in 2026, far exceeding analyst expectations. The spending surge is aimed at expanding AI infrastructure, including data centres and custom chips, as the company seeks to compete more aggressively with Microsoft and Google. Although AWS posted strong growth, investors were unsettled by the scale of spending and its potential impact on margins. The announcement triggered broader volatility across tech and chip stocks.

What you need to know: AI leadership now requires unprecedented capital investment, forcing companies to balance infrastructure expansion with shareholder expectations.

Original link: https://www.ft.com/content/a1bd22ec-42cf-46dc-9ff7-ec8a2a89a534

 

DealBook: “There will continue to be more”
6 February 2026 | Andrew Ross Sorkin, The New York Times

Technology stocks and crypto markets endured heavy losses after Amazon disclosed plans for $200bn in AI-related capital expenditure — roughly 50% higher than the previous year and well above analyst expectations. Investors punished not only Amazon but also SaaS firms amid fears that AI agents could disrupt subscription-based software businesses, prompting talk of a “SaaSpocalypse.” OpenAI’s Sam Altman warned that further volatility may follow as AI reshapes software economics.

What you need to know: Markets are recalibrating expectations around AI spending and disruption — signalling that AI enthusiasm is now colliding with capital discipline and profitability concerns.

 

Big Tech to spend $650bn this year as AI race intensifies
6 February 2026 | Matt Day and Annie Bang, Bloomberg

Alphabet, Amazon, Meta and Microsoft are collectively planning around $650bn in capital expenditure in 2026, largely for AI data centres and infrastructure. The scale rivals historic nation-building investments and is reshaping credit markets, energy demand and local economies. Investors worry that such concentrated spending could distort economic indicators while locking in winner-takes-most dynamics.

What you need to know: AI leadership is now inseparable from capital intensity—those who can spend the most on compute may shape the next decade of AI capability.

Original link: https://www.bloomberg.com/news/articles/2026-02-06/how-much-is-big-tech-spending-on-ai-computing-a-staggering-650-billion-in-2026

​

Chart of the Week: A diverging Magnificent Seven
7 February 2026 | Hakyung Kim, Financial Times

The latest FT chart shows widening performance gaps among the “Magnificent Seven” tech stocks. While some AI-exposed companies continue to rally, others have lagged sharply since late 2025, reflecting rising scrutiny over AI profitability and execution risk. The divergence underscores that markets are no longer treating Big Tech as a uniform AI growth proxy.

What you need to know: The AI trade is fragmenting — investors are rewarding credible monetisation pathways while penalising companies reliant solely on narrative-driven growth.

Original link: https://www.ft.com/content/5676a4bd-c7d8-47f1-84f7-82c006cfc2ca

 

Anthropic’s breakout moment: how Claude won business and shook markets
7 February 2026 | George Hammond, Financial Times

Anthropic has emerged as a leading enterprise AI provider, with annualised revenue rising from $1bn to more than $9bn in a year and projections exceeding $30bn. Its Claude Code and agentic tools have become industry benchmarks, driving investor enthusiasm and positioning the company for a potential IPO at a valuation around $350bn. Unlike consumer-focused rivals, Anthropic has concentrated on business users, capturing market share in coding and enterprise automation. Its rapid growth has intensified competition across the AI sector.

What you need to know: The centre of gravity in AI is shifting toward enterprise applications, where revenue scalability and defensibility are strongest.

Original link: https://www.ft.com/content/a75555a6-24c3-4468-aba9-7fe12b5def31

 

The Briefing: Tech’s Earnings This Week
8 February 2026 | Martin Peers, The Information

The Information’s nightly briefing previews earnings for mid-sized consumer and platform companies while tying market attention back to AI-driven narratives—advertising durability, shifting consumer behaviour, and emerging competitive threats from AI chatbots and agents. The piece flags how AI is becoming a background variable in investor expectations across sectors: from ad platforms to marketplaces and gig-economy companies that could be exposed to automation or robotics. It’s less about model releases and more about how AI is changing the story investors tell about growth, margin durability, and disruption risk.

What you need to know: Even when the “news hook” is earnings, AI is now embedded in how markets price business models—expect more sectors to be evaluated through an “AI risk / AI leverage” lens.

Original link: https://www.theinformation.com/newsletters/the-briefing/expect-spotify-lyft-pinterest-earnings-week

 

Big Tech groups race to fund unprecedented $660bn AI spending spree
9 February 2026 | Tim Bradshaw, Financial Times

Big Tech is projecting more than $660bn of combined AI-related capex in a single year, forcing executives to decide between cutting shareholder returns, drawing down cash reserves, or raising new debt and equity. The article links recent market volatility to investor anxiety over the timing and certainty of returns from AI infrastructure, even as leaders frame the buildout as the next internet-scale platform shift. Analysts expect knock-on effects in credit markets as issuance rises to match the spending trajectory.

What you need to know: The pace of AI capability gains is increasingly tied to infrastructure buildout—market sentiment and financing conditions can directly shape how fast models and products improve.

Original link: https://www.ft.com/content/d503afd5-1012-40f0-8f9d-620dcb39a9a2

 

Alphabet lines up 100-year sterling bond sale
10 February 2026 | Euan Healy and Tim Bradshaw, Financial Times

Alphabet is preparing a rare 100-year sterling bond as part of a multi-currency fundraising push, as Big Tech companies seek ever-larger pools of capital to finance AI-driven data centre and chip buildouts. The report describes the move as a way to broaden Alphabet’s investor base beyond repeated US dollar issuance, reflecting the scale and duration of anticipated AI infrastructure spending. Investors are also weighing whether yields adequately compensate for the long-term risks tied to hyperscalers’ AI capex commitments.

What you need to know: AI progress is increasingly constrained by capital intensity—financing strategies like ultra-long bonds are now part of the competitive landscape for compute.

Original link: https://www.ft.com/content/3260bc45-e09e-45a7-ae30-e55effbaf29b

 

The market meltdown in data companies is profoundly wrong
10 February 2026 | Nathan Graf, Financial Times

Evercore’s Nathan Graf argues investors are misreading AI’s impact on B2B information services after a sharp sell-off in firms such as S&P Global, Moody’s, Relx, Wolters Kluwer and Gartner. The core claim is that high-stakes professional users don’t just need answers—they need validated, auditable data embedded in domain-specific workflows, where reliability and provenance matter as much as speed. Rather than disintermediating data providers, AI could increase the value of trusted datasets and integrated tooling, rewarding incumbents that modernise their products.

What you need to know: AI “answer engines” don’t automatically replace trusted data infrastructure—provenance, verification, and workflow integration are becoming competitive moats in the AI era.

Original link: https://www.ft.com/content/da5eef0b-68c0-45ba-946e-68477d0cc103

 

Mistral’s revenues soar over $400mn as Europe seeks AI independence
11 February 2026 | Tim Bradshaw and Leila Abboud, Financial Times

Mistral says its annualised revenue run rate has surged to over $400mn, driven by demand from European businesses and governments seeking alternatives to US tech providers amid rising “sovereign AI” concerns. The company is expanding enterprise customers and investing in vertically integrated infrastructure, including a major data-centre buildout in Sweden, framed as diversifying compute capacity across Europe. The story positions Mistral as a flagship for Europe’s ambition to reduce dependency on US hyperscalers and retain control over data locality and strategic technology.

What you need to know: “AI sovereignty” is turning into real procurement and infrastructure decisions—local models plus local compute are becoming a strategic product, not just a political slogan.

Original link: https://www.ft.com/content/664249e7-e8d5-4425-b397-ad3ed590b305

 

Shares in UK wealth managers hit as AI contagion spreads
11 February 2026 | Emma Dunkley, Mary McDougall and Emily Herbert, Financial Times

UK wealth managers sold off sharply after US fintech Altruist launched an AI planning tool that promises to personalise investment and tax strategies by analysing documents like tax returns, payslips, and meeting notes. The FT reports that the move spooked investors already primed by recent “AI loser” sell-offs in software and analytics, while industry figures argued the reaction was likely an overreach and that high-touch advice will endure — albeit increasingly augmented by AI.

What you need to know: AI’s most immediate impact is often “advisor leverage” — tools that let one professional serve more clients can reprice whole service industries even before full automation arrives.

Original link: https://www.ft.com/content/5904b66f-2144-44d7-af24-66c075677d92

 

The Briefing: Software selloff resumes
11 February 2026 | Martin Peers, The Information

A renewed sell-off in enterprise software stocks reflects investor anxiety that AI will undercut traditional SaaS growth models. Even companies aggressively integrating AI, such as Shopify and Unity, have seen sharp share declines as markets struggle to price AI’s real impact. The piece argues that fear is outpacing fundamentals, but also that AI has made software valuations structurally harder to justify.

What you need to know: Financial markets are signalling uncertainty about who captures value in an AI-driven software economy—and who gets disrupted.

Original link: https://www.theinformation.com/articles/software-selloff-resumes

 

Private equity’s doomsdAI moment
12 February 2026 | Financial Times (Due Diligence)

The FT’s Due Diligence argues that AI is turning private equity’s software playbook into a potential trap: years of leverage-fuelled acquisitions assumed durable pricing power in specialised enterprise software, but investors are now questioning whether AI tools will commoditise key categories. The newsletter warns that highly leveraged, slower-growing portfolio companies may be hard to exit if public markets prefer “pure AI” stories — and if buyers believe AI can replace or undercut what those firms sell.

What you need to know: This is AI’s disruption story moving from theory to capital markets — once financing conditions tighten, “AI risk” shows up as higher discount rates and fewer buyers for legacy cashflows.

 

Unhedged: The software sell-off (part one)
13 February 2026 | Robert Armstrong, Financial Times

Robert Armstrong digs into why listed software names have been sliding, separating valuation “gravity” from the scarier question: whether AI changes the competitive moat for legacy software companies. He notes that margins and long-duration growth expectations had been priced to perfection, and that the market is now repricing the risk that AI-native tools (or AI-accelerated competitors) can erode incumbents’ advantage faster than investors assumed.

What you need to know: Even before AI fully automates jobs, it can still rewrite market structure — by compressing software margins and lowering the cost to build competing products.
 

Anthropic raises $30bn at a $350bn valuation in latest funding round
13 February 2026 | George Hammond, Financial Times

Anthropic raised $30bn from investors including major funds and Nvidia, valuing the company at $350bn pre-money as it expands data-centre capacity and positions for a potential IPO. The piece emphasises Anthropic’s enterprise-heavy strategy, with a large share of its revenue run rate attributed to business customers, and highlights Claude Code’s role in accelerating adoption among software engineers. The funding underscores intensifying competition among frontier model providers as infrastructure needs climb.

What you need to know: The frontier AI race is now a scale game—massive rounds are effectively compute and distribution bets, not just product validation.

Original link: https://www.ft.com/content/d21f4583-a05d-4a94-8404-f1e02a332283

 

Commercial real estate share slide accelerates in latest sell-off driven by AI fears
13 February 2026 | Maxine Kelly and Julie Steinberg, Financial Times

Shares in major commercial real estate services firms fell sharply as investors priced in a new AI-driven threat: fewer office workers could mean weaker long-run demand for office space and related services. The move followed broader “AI disruption” sell-offs in other knowledge-work-adjacent sectors and reflects market anxiety that agentic automation could shrink headcount needs across white-collar industries. The article frames this as the contagion of AI narratives into sectors not typically viewed as tech, with investors rapidly repricing exposure to knowledge work.

What you need to know: Markets are starting to treat AI as a structural labour shock, not a niche tech story—capital is moving in anticipation of second-order effects like office demand and services revenue.

Original link: https://www.ft.com/content/90e8ed28-e44b-48da-b06c-a54bc71f053a

 

Amazon’s Andy Jassy bets on $200bn AI spending drive to revive AWS
14 February 2026 | Rafe Rosner-Uddin, Financial Times

Amazon is launching its biggest-ever capex programme, with Andy Jassy signalling roughly $200bn of spending this year to expand data centres, develop chips, and build AI models—much of it directed at AWS. The strategic reset follows internal concern that AWS was slow to capitalise on the post-ChatGPT boom and has been outpaced by rivals in landing major AI contracts. Amazon is consolidating chip, model, and advanced research teams under a unified structure while also cutting costs and roles elsewhere. Investors remain uneasy about whether the spending will generate returns quickly enough, even as AI infrastructure demand accelerates.

What you need to know: This is the clearest signal that the AI race is now an infrastructure and capital-allocation war—model progress is increasingly gated by data centres, chips, and energy, not just algorithms.

Original link: https://www.ft.com/content/905df663-8c47-4e88-b6ff-24dd4bd46290

 

TSMC’s US investment plans at heart of $250bn puzzle for chip sector
14 February 2026 | Kathrin Hille and Aime Williams, Financial Times

A new US–Taiwan trade agreement aligns tariffs on Taiwanese exports but leaves semiconductors—Taiwan’s most strategic export—at the centre of unresolved negotiations. The deal builds on Taiwan’s pledge that its tech companies would invest $250bn in US chipmaking in exchange for exemptions, but the lack of clarity around what counts toward that figure is fuelling uncertainty for TSMC’s capex and manufacturing footprint. The article highlights how Trump-era tariff politics are pushing TSMC and its US customers (including big AI spenders relying on Nvidia chips manufactured by TSMC) to re-think supply chains, packaging capacity and long-term US production scale.

What you need to know: Frontier AI progress is increasingly constrained by geopolitics and manufacturing capacity—TSMC’s footprint decisions directly shape the availability and cost of the chips powering the AI boom.

Original link: https://www.ft.com/content/b715b003-1d10-46d4-a02d-1c5969d0dbf8

 

AI’s electricity demand is fuelling inflation, crimping consumer spending and slowing economic growth
17 February 2026 | Robin Wigglesworth, Financial Times (Alphaville)

Financial Times Alphaville highlights Goldman Sachs analysis suggesting that rapidly rising electricity demand from AI data centres is becoming a binding constraint—pushing up power prices and creating broader macro spillovers. The piece notes that electricity inflation has been running well above headline inflation, and argues that even if regulators try to shift grid upgrade costs onto data-centre operators, households and non-AI businesses may still bear meaningful price increases. The upshot is a potential drag on consumer spending and economic growth unless AI’s promised productivity gains materialise quickly enough to offset the costs.

What you need to know: Energy is emerging as a core “rate limiter” on frontier AI scaling—affecting everything from data-centre siting and regulation to public backlash, cost-of-living politics, and ultimately AI deployment economics.

Original link: https://www.ft.com/content/a644bdcf-cbbe-427b-883c-3ad034353bbb

 

Souping up data centres could give industrial firms an extra AI boost
17 February 2026 | Lex, Financial Times

As AI workloads drive a step-change in data-centre electricity intensity, the “boiler room” of AI is being redesigned—higher voltages, denser power delivery, and new architectures that free space for compute. The column highlights a coming shift toward 800-volt systems and more modular power and cooling designs, creating opportunities for electrification and power-management suppliers such as ABB and Legrand. Nvidia’s longer-term vision involves deeper rewiring that changes how electricity is distributed inside data centres.

What you need to know: AI scaling is turning power engineering into a strategic bottleneck—and a profit pool—making industrial electrification companies unexpectedly central to AI progress.

Original link: https://www.ft.com/content/2a92d031-9b29-47b5-b001-50bde4c5dcf8

 

Software isn’t dead, but its cosy business model might be
17 February 2026 | Lex, Financial Times

The column argues AI agents will pressure the traditional “per seat” software pricing model, as usage shifts from humans clicking tools to autonomous systems completing tasks. In an agent-heavy world, the natural unit becomes actions, queries, outcomes, or tokens consumed—making revenue less predictable and potentially more cyclical than classic SaaS subscriptions. Some vendors are already moving toward hybrid or consumption-based billing, but the transition is likely to be messy and could reset software valuations that relied on stable recurring revenue.

What you need to know: Agents don’t just change workflows—they change the economics of software, pushing the industry toward outcome/usage pricing and reshaping who wins in enterprise AI.

Original link: https://www.ft.com/content/8784de75-861f-4460-b8c3-6937f626dbd1

 

Nvidia secures multibillion-dollar Meta deal as it battles chip rivals
18 February 2026 | Michael Acton and Hannah Murphy, Financial Times

Meta has agreed to spend billions of dollars on millions of Nvidia chips in a multi-year deal, even as it develops in-house AI hardware and rivals such as AMD push harder into the market. The agreement reportedly includes Meta committing to Nvidia’s next-generation “Vera Rubin” chips and, notably, buying Nvidia CPUs at scale to support inference workloads. The deal underscores how Big Tech is trying to diversify away from Nvidia while still relying on it to meet near-term capacity needs.

What you need to know: The AI infrastructure race is splitting into two battles at once—Big Tech wants independence via custom silicon, but near-term demand keeps reinforcing Nvidia’s dominance and pricing power.

Original link: https://www.ft.com/content/d3b50dfc-31fa-45a8-9184-c5f0476f4504

 

AI fever sparks Raspberry Pi meme stock frenzy
19 February 2026 | Tim Bradshaw and Rachel Rees, Financial Times

Raspberry Pi shares surged amid retail-investor excitement that demand for its low-cost computers is rising among AI hobbyists—driven by the popularity of “OpenClaw,” framed as a locally run personal AI agent. The article suggests the narrative tapped into a broader market anxiety about AI agents disrupting white-collar work, while also pointing to practical appeal: running models locally can reduce cloud costs and contain security risks by isolating agentic software from a main computer. The episode showed classic meme-stock dynamics—viral posts, “optically discounted” shares, and speculative momentum—before the price partially retraced.

What you need to know: The “local AI agent” trend is pushing compute back to edge devices, widening AI’s hardware footprint beyond hyperscale clouds—and influencing both consumer tech demand and security trade-offs.

Original link: https://www.ft.com/content/824aa5e3-e86f-4da4-bde9-bb705d6ba20e

 

Meta cuts staff stock awards for a second straight year
20 February 2026 | Hannah Murphy, Financial Times

Meta has reduced equity refresh grants for most employees again, even as it ramps capital expenditure and compensation packages to recruit top AI talent and fund data-centre expansion. The cuts are framed as part of a broader efficiency push to reassure investors who have yet to see clear returns from Meta’s escalating AI spend, while Meta simultaneously adjusts performance reviews to concentrate rewards among top performers. The move reflects the internal distributional tension created when companies treat frontier AI as a near-existential race.

What you need to know: Frontier AI investment is reshaping how big tech allocates money internally—AI capex and talent bids can crowd out broader workforce compensation, signalling how costly the next phase of competition has become.

Original link: https://www.ft.com/content/071d5503-b3dc-46bc-bc55-28f92dbdd42a

 

Nvidia and OpenAI abandon unfinished $100bn deal in favour of $30bn investment
20 February 2026 | George Hammond and Michael Acton, Financial Times

Nvidia is reportedly close to a large equity investment in OpenAI that replaces a previously announced multi-year framework that never moved from letter-of-intent to a formal agreement. The new structure is portrayed as simpler and more immediate, while still reinforcing the circular dynamic of the AI boom—OpenAI raises enormous capital, then reinvests heavily in Nvidia hardware to build gigawatt-scale compute. The article notes investor jitters about a potential bubble and the increasingly complex web of supplier-customer-investor relationships across the AI stack.

What you need to know: The AI ecosystem is being financed like heavy industry—mega-rounds, infrastructure-scale buildouts, and tight coupling between chip suppliers and model labs are becoming a defining feature of frontier progress.

Original link: https://www.ft.com/content/dea24046-0a73-40b2-8246-5ac7b7a54323

 

Smart glasses give glimpse of how AI threatens physical goods too
22 February 2026 | Lex, Financial Times

Smart glasses are finally gaining traction, with sales of Meta’s Ray-Ban AI glasses reportedly tripling to more than 7mn units last year—yet EssilorLuxottica’s shares have fallen. The column argues this tension reflects a deeper shift: as AI features become the main driver of perceived value, consumers may care less about brand and frame quality and more about the software layer “piped into the wearer’s pupils.” That dynamic mirrors how software and user experience have reshaped other hardware categories, threatening incumbents whose margins historically came from physical design and branding.

What you need to know: AI is not only disrupting digital services—it’s starting to reallocate value inside consumer hardware, shifting pricing power from “physical product makers” to “AI/software ecosystems.”

Original link: https://www.ft.com/content/09679583-4c02-48f0-a4f7-e40857103f56

Further Reading: Find out more from these resources

Resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.
