
THE SKINNY
on AI for Education

Issue 24, January 2026

Welcome to The Skinny on AI for Education newsletter. Discover the latest insights at the intersection of AI and education from Professor Rose Luckin and the EVR Team. From personalised learning to smart classrooms, we decode AI's impact on education. We analyse the news, track developments in AI technology, watch what is happening with regulation and policy and discuss what all of it means for Education. Stay informed, navigate responsibly, and shape the future of learning with The Skinny.


In January 1986, I watched Christa McAuliffe, a social studies teacher from New Hampshire, wave to the cameras, full of excitement about becoming the first teacher in space. Seventy-three seconds after liftoff, I was transfixed as the Challenger shuttle exploded. Her two children were just a little older than my own. I thought about them, and her family, as the tragedy unfolded.

Now, sitting in Australia on holiday with my own family, I find myself reading a remarkable piece my son-in-law has written for the ABC marking the disaster's 40th anniversary. It is a story I thought I knew. Frozen O-rings. A cold January morning. Seven lives lost. But what strikes me afresh is not the technical failure. It is the human one.

The engineers knew. Bob Ebeling and Roger Boisjoly at Morton Thiokol spent hours the night before launch pleading with managers not to proceed. They had data showing the O-ring seals had never been tested below 53°F; the forecast for launch morning was 26°F. But schedule pressure, commercial imperatives, and a phenomenon that sociologist Diane Vaughan would later call the "normalisation of deviance" won out. Because the O-rings had shown damage on previous flights without catastrophe, the risk became acceptable. The deviance became normal. Manager Jerry Mason's infamous instruction to engineering head Robert Lund, "It's time to take off your engineering hat and put on your management hat", sealed the crew's fate.

Forty years on, as we enter 2026, I find myself thinking about normalisation of deviance in a very different context: AI. Not because AI poses equivalent risks to human life (although in some applications it might), but because the pattern of ignoring warning signs while pressing ahead is eerily familiar. And the warning signs are now flashing amber.

But here is the good news: we know what makes the difference. If we want AI to work in education, we need to invest in the people expected to use it. The evidence for this is now very clear.

--------------------------------------------------------------------------------------------------------------------------

In Brief: The ‘Skinny-Skinny Editorial’ 60-second version

  • The 88/6 gap: 88% of organisations now use AI, but only 6% are generating meaningful returns. Schools and training providers face the same risk: adoption without impact.

  • The 10-20-70 rule: Algorithms account for just 10% of AI success; people and processes account for 70%. For educators, this validates what we've always known: technology alone doesn't transform learning.

  • The trust paradox: 66% of people use AI regularly, but only 46% trust it. Education workers fall disproportionately into the "reject" category. Building trust requires hands-on experience and peer support, not top-down mandates.

  • The training gap is real: 68% of US teachers received no AI training last year. Only 25% were taught what AI actually is.

  • The UK's AI tutoring trial will be a test case: 450,000 disadvantaged pupils, tools "co-created with teachers," rollout in 2027. The government says it will "complement, not replace" face-to-face teaching. Whether it succeeds will depend on getting the human factors right.

  • For learning professionals: The scarce capability isn't AI tool proficiency. It's knowing how to help organisations and institutions transform around AI. That's the curriculum gap to fill.

--------------------------------------------------------------------------------------------------------------------------

In Brief: The ‘Full-Skinny Editorial’ 5-minute version

 

The gap between experimentation and value

In my December Skinny editorial, I focused on the technology side of AI: the model releases, the capability improvements, the infrastructure race. I also flagged what I called "the end of free AI," noting that the era of unlimited free access to frontier models was drawing to a close. That prediction has landed faster than I expected: OpenAI has now introduced ads to its free tier and launched "ChatGPT Go" at $8/month, while Google has announced plans to add advertisements to Gemini in 2026. The economics of AI are asserting themselves.

This month, I want to look at the other side of the coin. It is perfectly possible (although perhaps unlikely) that every supply-chain issue gets solved, chips, memory, electricity, water and cooling included, and that AI still fails to deliver on its promise. The technology is not the hard part. The human side is.

The evidence is now clear. McKinsey's 2025 Global AI Survey captures the gap starkly: 88% of organisations now use AI, yet only 6% are generating meaningful returns. Other studies tell the same story. Most CEOs report no revenue or cost benefit from their AI investments. The majority of pilot programmes deliver no measurable impact. And the proportion of companies abandoning AI initiatives has more than doubled in a year. The technology landed. The value didn't.

BCG's research crystallises why with what might be called the "10-20-70 rule": algorithms account for 10% of AI success, technology and data for 20%, and people and processes for the remaining 70%. This inverts the common assumption that AI implementation is primarily a technical challenge. Roughly 70% of AI implementation challenges are people- and process-related: employee scepticism, lack of skills, process inertia, cultural pushback, and the absence of workflow redesign.

The trust paradox

Perhaps the most revealing finding comes from KPMG's landmark 48,000-person global study: 66% of people use AI regularly, but only 46% are willing to trust it. People are using AI before they trust it. This is a precarious foundation for sustained adoption. More troubling still, trust has actually declined since the 2022 pre-ChatGPT study, even as usage has exploded.

The geographic variation is stark. Trust in AI is more than twice as high in China as in the United States or the UK. Edelman found that Americans who reject AI outnumber those who embrace it by three to one. The UK sits below the global average. Education workers specifically fall into the 'reject' category: a notable signal for those of us working in learning and development.

What builds trust? The research points to something surprisingly simple: personal hands-on experience. "Someone like me" recommendations are twice as trusted as CEO pronouncements on AI. When AI helps people solve real problems at work, trust doubles in some markets. The implication for training providers is clear: effective AI adoption requires learning-by-doing, peer support, and tangible benefit demonstration. This is exactly what good education can provide.

Shadow AI and the governance gap

One of the most striking findings from recent research is the prevalence of "shadow AI": employees using AI tools without employer knowledge or approval. Microsoft's Work Trend Index found that the majority of AI users bring their own tools to work. BlackFog's research is more troubling still: most employees believe using unsanctioned AI is 'worth the security risks' if it helps meet deadlines. The data being shared on these platforms is concerning: employee records, financial statements, proprietary research.

Yet only 44% of executives say their organisations have generative AI policies, according to Littler's 2025 survey, although this is up from just 10% in 2023. More than one in four companies have no AI policy and no plans for one. The governance gap between adoption and policy continues to widen.

This is normalisation of deviance in real time. Employees are using AI in ways that organisations haven't sanctioned, sharing data in ways that haven't been risk-assessed, because nothing has gone wrong yet. The deviance becomes normal. Until it doesn't.

The demographic counterforce

There is, however, a counterargument worth considering. Writing in the Financial Times last week, Ruchir Sharma of Rockefeller International argued that the current obsession with AI overlooks another force advancing rapidly: population decline. The number of countries with shrinking working-age populations has risen from zero to 55 in four decades, including most major economies. Last year, births in China fell to the lowest level since 1949; in Japan, to the lowest since 1899. The world's working-age population is now predicted to peak 30 years earlier than expected, in 2070.

Sharma's argument is that against this backdrop, AI is more likely to ease coming labour shortages than trigger mass unemployment. Historical tech revolutions, he notes, killed industries, not jobs. ATMs allowed banks to open more branches and hire more tellers; the internet displaced 3.5 million US jobs but created 19 million. Already, about a third of new US jobs are of types that didn't exist 25 years ago.

This matters for education and training because it reframes the challenge. If the risk is labour shortage rather than mass unemployment, then the imperative isn't just to help people survive AI disruption; it's to ensure AI can actually be deployed effectively. And that brings us back to the human factors: the skills gaps, the trust deficits, the governance voids that currently prevent AI from delivering value at scale.

Education's particular challenge

Education presents a distinctive case study. Student uptake has exploded: more than nine in ten UK students now use AI tools, up from two-thirds just a year earlier. Staff adoption lags far behind. Gallup reports that more than two-thirds of US teachers received no AI training last year. Where training exists, it often misses the mark: fewer than one in three received guidance on effective use, and only a quarter were taught what AI actually is.

The UK government's announcement last week of AI tutoring trials for up to 450,000 disadvantaged pupils is therefore both welcome and instructive. The ambition is significant: to provide personalised one-to-one tutoring support that could accelerate learning by around five months, levelling a playing field where children from wealthier families are far more likely to access private tutors. The government has been explicit that AI tutoring will "complement, not replace, face-to-face teaching", acknowledging that the human element remains essential.

But the announcement also highlights the challenges. Tools will be "co-created with teachers" and "rigorously tested" before rollout in 2027. Robust benchmarks will be developed. Teachers will receive "clear, practical training." These are exactly the human factors that the enterprise research tells us determine success or failure. The question is whether the implementation matches the ambition, whether the lessons of the 94% failure rate in enterprise AI pilots are heeded.

What this means for learning professionals

The research points to three priorities. Not next year. Now.

First, teach how AI behaves, not just how to use it. Tool training is not enough. People need to understand what AI actually is: pattern recognition from data, not understanding. They need to grasp why it hallucinates, why it sounds confident when wrong, why it performs brilliantly on some tasks and fails on others. This is not abstract knowledge; it is the foundation for good judgement. Someone who understands that a large language model predicts plausible next words rather than retrieving verified facts will use it differently from someone who treats it as an oracle. Understanding comes first; the tools follow.

Operational step: Before any tool training, ensure participants can answer: What is this AI actually doing? What is it good at and why? What are its predictable failure modes? Build this conceptual foundation into every programme, not as theory for its own sake, but as the basis for intelligent use.
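
To make that conceptual foundation concrete, here is a minimal sketch in Python: a toy bigram model that continues a prompt by always choosing the statistically most common next word in its tiny training corpus. It is nothing like a production LLM in scale or sophistication, but it illustrates the core point above: the output is a plausible continuation of the input, not a retrieved fact.

```python
from collections import Counter, defaultdict

# Toy training corpus. A real LLM is trained on trillions of tokens,
# but the principle is the same: learn which words tend to follow which.
corpus = (
    "the o-rings had never been tested below 53 degrees . "
    "the o-rings had always worked before . "
    "the launch had always worked before ."
).split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(prompt: str, n_tokens: int = 6) -> str:
    """Extend the prompt one word at a time, always picking the most
    frequent continuation. Plausibility, not truth, is the only
    criterion this model has."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(continue_text("the o-rings had"))
```

Run it and it confidently continues "the o-rings had always worked before", not because that is true, but because it is the most frequent continuation in its training data. Scale that mechanism up by many orders of magnitude and you have the beginnings of an intuition for why fluent output and verified fact are different things.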

Second, contextualise learning in tasks that matter to the learner. Generic AI training fails because it asks people to learn in the abstract. Effective AI adoption happens when people encounter AI in the context of work they already care about: the lesson they are planning, the report they are writing, the problem they are trying to solve. The research on trust is clear: people believe in AI when it helps them do something real. "Someone like me" recommendations are twice as trusted as executive pronouncements. Peer learning, grounded in shared professional context, beats top-down mandates every time.

Operational step: Design programmes around authentic tasks from participants' own roles. Do not teach "how to use ChatGPT"; teach "how to use AI to design activities to support the development of learner metacognitive capabilities" or "how to use AI to analyse survey data for your department." Let participants bring real problems. Build in structured peer exchange so they learn from colleagues facing similar challenges.

Third, address the human factors that determine success or failure. The BCG research is unambiguous: 70% of AI implementation challenges are people- and process-related. Fear of being left behind drives resistance more than fear of the technology itself. And human-AI collaboration does not automatically produce better outcomes; it requires the right conditions. Kate Benson's litmus test for Nord Anglia captures this perfectly: "Does this use of AI enhance or detract from our ability to think?" If AI is making people passive, dependent, or less critically engaged, it is failing, however sophisticated the tool.

 

Operational step: Name the anxiety explicitly in your programmes. Create space for participants to articulate concerns, examine evidence on AI and employment, and identify where AI could remove drudgery rather than meaning from their work. Help them develop their own criteria for when AI augments their thinking and when it undermines it. Unspoken fear does not dissipate; it calcifies into resistance.

The window for action is narrow. The 88% of organisations now using AI will not wait for the training sector to catch up. Those who move first to address the human factors, not just the technical ones, will shape how AI is adopted across education and the workforce. Those who do not will find themselves teaching tools that organisations have already abandoned, to people who never understood why they were using them in the first place.

The courage to speak up

Brian Russell, one of the Thiokol engineers who opposed the Challenger launch, reflected years later: "I'm never going to be in that position again. I am never going to be afraid to speak up when I believe that a different view needs to be expressed." Bob Ebeling carried guilt until his death in 2016, finding some closure only after NASA publicly acknowledged that he had done his job and should not bear blame.

As AI rolls out across our organisations and institutions, we would do well to remember their story. The technology is rarely the problem. What gets us into trouble is the normalisation of deviance: pressing ahead because nothing has gone wrong yet, ignoring those who see the risks, putting on management hats when we should be wearing engineering ones.

The 88% of organisations using AI will not all become the 6% generating real value. The difference will not be the sophistication of their models or the scale of their compute. It will be whether they listen to the people doing the work, invest in the human factors that determine adoption, and resist the temptation to normalise deviance in pursuit of schedule and budget.

If this sounds abstract, consider what happens when systems are deployed faster than their safeguards can mature.

Some will say the Challenger analogy overstates the case. I am not so sure. In January alone, we learned that Grok had generated thousands of non-consensual sexualised images at industrial scale, including of children. Families settled lawsuits against Character.ai after teenagers took their own lives following interactions with AI companions. These are not hypothetical risks or distant possibilities. The normalisation of deviance is not a metaphor. It is happening now, in systems being marketed to the public, including to children, without adequate safeguards. The engineers who warned against the Challenger launch were ignored because nothing had gone wrong yet. We no longer have that excuse.

The technology is ready. The question is whether we are.

AI News Summary

AI Ethics and Societal Impact

Why people love neurotic robots
29 December 2025 | Patti Waldmeir, Financial Times

This column explores research showing that people often prefer robots and AI assistants that display subtle human flaws, such as hesitation or anxiety, because it makes them seem more relatable. University of Chicago experiments found that a “neurotic” robot greeter felt more emotionally authentic than a purely extroverted or emotionless one. However, the article warns that increasingly humanlike AI personalities risk misleading users into treating machines as companions or therapists, with potentially harmful consequences. The debate reflects growing concern over chatbot sycophancy, emotional manipulation, and the social costs of personalised AI interaction.

What you need to know: Highlights how AI development is no longer just technical — designers must grapple with psychological and ethical risks as systems become socially embedded.

 

Can AI really help us find love?
31 December 2025 | Kieran Smith and Melissa Heikkilä, Financial Times

AI companions and dating tools are increasingly shaping human relationships, with millions turning to systems like Replika for emotional or romantic support. Big Tech firms are also moving toward “personal assistant” models that blur into synthetic intimacy, while regulators struggle to keep up with safety risks, especially for younger users. Experts remain divided: some point to positive social benefits, while others warn that overly agreeable chatbots may deepen isolation and monetise vulnerability rather than strengthen real connection.

What you need to know: As AI becomes emotionally embedded in daily life, the technology’s societal impact may extend far beyond productivity—raising urgent questions about trust, intimacy, and regulation.

 

The Morning: Hating A.I.
2 January 2026 | Evan Gorelick, The New York Times

Evan Gorelick explores the uniquely American backlash against AI, arguing that the anger is less about any single breakthrough than about a broader loss of control. He describes a “sprawling” set of fears — job displacement, distrust in opaque systems that hallucinate or amplify bias, and resentment at being pushed into AI-powered workplaces and services by default. The piece also suggests US scepticism is shaped by the hangover from social media: years of privacy erosion, addictive design, and perceived damage to democracy have primed the public to meet AI with suspicion rather than optimism.

What you need to know: Public trust is becoming a binding constraint on AI deployment — affecting regulation, adoption, and where companies can safely ship new systems.

 

Character.ai and Google agree to settle lawsuits over teen suicides
7 January 2026 | Cristina Criddle, Financial Times

Google and AI chatbot start-up Character.ai have agreed to settle multiple lawsuits brought by families of teenagers who died by suicide or self-harmed after interacting with the company’s companion-style chatbots. The cases have intensified scrutiny over emotionally immersive AI systems and the risks they pose to vulnerable young users. Character.ai has already barred under-18s from using certain bots, while US states begin introducing piecemeal regulation in the absence of federal safeguards.

What you need to know: Companion AI is moving into deeply sensitive human territory, and these lawsuits highlight how safety failures could reshape the regulatory future of consumer chatbots.

 

Elon Musk’s AI chatbot generates child sexual images
January 2026 | Melissa Heikkilä and Adrienne Klasa, Financial Times

Grok, the chatbot developed by Elon Musk’s xAI and integrated into X, has generated sexually explicit images of minors that were shared on the platform, prompting outrage and regulatory escalation in France. The incident raises renewed alarm over how easily safety guardrails can be bypassed, especially in systems designed to be “maximally truth-seeking” with fewer restrictions. Policymakers across Europe and the UK are now debating stronger laws against AI-generated child abuse material.

What you need to know: Generative AI safety is not a theoretical issue — failures in content controls are already triggering legal action and could accelerate tougher enforcement worldwide.

 

How Elon Musk’s Grok spread sexual deepfakes and child exploitation images
January 2026 | Melissa Heikkilä and Hannah Murphy, Financial Times

Experts say xAI’s Grok lacked adequate safeguards to prevent users from generating sexualised deepfakes of women and children, highlighting how image models can be misused when “guardrails” are minimal. The article points to broader structural issues: models are often trained on vast scraped datasets that may contain harmful material, and safety layers (keyword blocks, nudity classifiers, concept removal) can be bypassed or fail at scale. After regulatory pressure and threats of fines or bans, xAI said it would limit Grok’s image generator to paid subscribers, but critics argue monetisation choices don’t solve the underlying safety problem—especially when features are integrated into a viral social platform.

What you need to know: This is a real-world stress test of AI safety—showing how model design, training data, and distribution on social networks combine to amplify harms if safeguards aren’t robust.

 

The problem with AI and ‘empathy’
6 January 2026 | Sarah O’Connor, Financial Times

Sarah O’Connor examines research suggesting chatbots can outperform humans in empathy-adjacent evaluations, including studies where clinicians rated chatbot replies as more empathic than doctors’ responses. She argues this risks a category error: LLMs don’t feel, but can predict emotions and generate language that convincingly simulates empathy — and letting the definition drift could distort how we value human care. The column highlights both upside (some guarded therapeutic benefit) and danger, including harmful outcomes for vulnerable users and the moral cover “empathic AI” could provide for replacing human contact.

What you need to know: As AI becomes emotionally fluent, the core risks shift from “can it reason?” to “can it persuade and substitute?” — raising urgent questions about safeguards, deployment contexts, and the social meaning we attach to machine-generated care.

Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X
7 January 2026 | Cecilia D’Anastasio, Bloomberg

A third-party analysis found Grok, embedded in X, generated an “unprecedented” volume of non-consensual AI “undressing” images—about 6,700 per hour during a 24-hour period—far exceeding comparable content volumes on other leading sites. Victims reported poor responsiveness from moderation systems, and critics argue Grok imposes fewer limits than major competitors, making it easy to generate and distribute harmful content at scale. The report also spotlights growing regulatory and legal pressure, including references to platform liability debates when AI systems actively generate abusive imagery rather than merely hosting user uploads.

What you need to know: Distribution is a force multiplier—integrating generative models into social platforms turns misuse into a high-velocity safety and governance crisis, not just a model-side defect.

 

OpenAI Unveils ChatGPT Health to Review Test Results, Diets
7 January 2026 | Shirin Ghaffary, Bloomberg

OpenAI introduced “ChatGPT Health,” a feature designed to help users analyse medical test results, prepare for doctor visits, and get guidance on diet and workouts—positioning it as the company’s biggest push yet into healthcare. The tool can connect with electronic medical records, wearables, and wellness apps, while OpenAI emphasises it is meant to supplement clinicians and “stop short” of formal diagnoses. The launch reflects both the opportunity (personalised insights from health data) and the heightened risk profile (privacy, safety, and trust) when general-purpose AI moves into sensitive, high-stakes domains.

What you need to know: Consumer AI is expanding into regulated, high-liability areas—forcing stronger privacy boundaries, validation standards, and careful product claims about what models can safely do.

 

AI will free households from chores and boost hidden productivity, says OpenAI
7 January 2026 | Cristina Criddle, Financial Times

OpenAI’s chief economist Aaron “Ronnie” Chatterji argues that AI’s biggest productivity gains may be invisible to traditional GDP measures because they come from automating unpaid domestic labour. Tasks such as childcare planning, cooking guidance, and household administration could save millions of hours, particularly for women who disproportionately shoulder unpaid work. Chatterji compares AI’s impact to the dishwasher and even electricity, suggesting it will reshape both home and workplace productivity. The article notes ongoing scepticism among academics about whether generative AI can deliver broad economic gains, alongside criticism that OpenAI’s research may be selectively favourable.

What you need to know: Reinforces the argument that AI’s real economic value may emerge first in personal life and invisible labour, not just corporate metrics.

 

Elon Musk should keep UK Royal Society membership, says president
9 January 2026 | Michael Peel & Clive Cookson, Financial Times

The Royal Society’s president has defended Elon Musk’s continued fellowship despite controversy over sexualised images generated by xAI’s Grok chatbot. Sir Paul Nurse argued that scientific institutions should judge fellows on the integrity of their scientific contributions, not personal behaviour or downstream misuse of technology. The comments have reignited debate over accountability in AI development, especially as lawmakers threaten fines and bans over unlawful content. The case exposes tensions between scientific recognition, ethical responsibility, and public trust in AI creators.

What you need to know: Reflects growing institutional struggles over how to assign responsibility when AI systems cause real-world harm.

 

Stop ignoring AI risks in finance, MPs tell UK regulators
19 January 2026 | Martin Arnold, Financial Times
UK MPs have warned that regulators’ “wait-and-see” approach to AI in financial services exposes consumers and the financial system to serious risks. A parliamentary report urges stress-testing for AI-driven market shocks and clearer accountability for AI-related failures. While acknowledging AI’s benefits for efficiency and fraud detection, lawmakers highlighted concerns around transparency, discrimination and systemic instability. Regulators face pressure to balance innovation with financial stability amid rapid adoption.
What you need to know: Signals rising regulatory scrutiny of AI as a systemic risk, not just a productivity tool, particularly in highly interconnected sectors like finance.
Original link: https://www.ft.com/content/d6a7c795-1cb2-4bca-98c2-6894bdf01029

 

Ukraine offers allies combat data to train AI
21 January 2026 | Christopher Miller and Fabrice Deprez, Financial Times
Ukraine will allow allied countries and companies to train AI systems using vast datasets gathered during four years of war with Russia, including drone footage and battlefield telemetry. The defence ministry plans to work with Palantir to create secure “data rooms” for training military AI. Officials say frontline data has exceptional value for improving autonomous defence systems and decision-making tools. The initiative aims to deepen collaboration with allies while accelerating Ukraine’s own defence technology.
What you need to know: Shows how real-world conflict data is becoming one of the most valuable and sensitive training resources for advanced AI systems.
Original link: https://www.ft.com/content/ab121d67-c823-40d4-808f-861f42145404

 

YouTube CEO says battling ‘AI slop’ a top priority in 2026
22 January 2026 | Annie Bang, Bloomberg
YouTube chief executive Neal Mohan said the platform will prioritise tackling low-quality, misleading AI-generated content as generative tools flood the site. The company plans clearer labelling of AI-created videos, stronger enforcement against harmful synthetic media and new tools to protect creators’ likenesses. YouTube is simultaneously expanding its own AI creation features, used by more than a million channels daily. The challenge is balancing innovation with advertiser, creator and user trust.
What you need to know: Highlights how content platforms are shifting from AI adoption to AI governance as generative output scales faster than quality control.
Original link: https://www.bloomberg.com/news/articles/2026-01-21/youtube-ceo-says-battling-ai-slop-a-top-priority-in-2026?embedded-checkout=true

 

Google Gemini can proactively analyse users’ Gmail, Photos and searches
14 January 2026 | Chris Welch, Bloomberg
Google has launched a new “Personal Intelligence” feature for its Gemini assistant that allows the model to proactively draw on users’ Gmail, Search, Photos and YouTube data to deliver more tailored responses. The opt-in system, initially rolling out in the US, enables Gemini to infer context without explicit prompts, giving Google a potential advantage over rivals with less access to personal data. The company has introduced guardrails and user controls to address privacy concerns, though it acknowledges the risk of over-personalisation and contextual errors.
What you need to know: Personalisation powered by proprietary data is becoming a key competitive frontier in consumer AI, raising both performance advantages and renewed privacy risks.
Original link: https://www.bloomberg.com/news/articles/2026-01-14/google-gemini-s-personalized-intelligence-feature-taps-gmail-searches-photos?embedded-checkout=true

 

Google taps emails and YouTube history in push for personalised AI
14 January 2026 | Cristina Criddle, Financial Times
Google is expanding Gemini’s capabilities to “reason” across emails, search history, photos and YouTube activity, aiming to evolve the chatbot from a transactional tool into a long-term personal assistant. The feature, initially available to US premium users, reflects a broader industry push to improve memory and contextual awareness in AI systems. While the move could strengthen Google’s position against competitors like OpenAI and Anthropic, it has also intensified scrutiny over how much personal data AI assistants should retain and use.
What you need to know: The race for AI users is increasingly about context and memory, not just model quality, placing data access and trust at the centre of competition.
Original link: https://www.ft.com/content/9bbdf59e-ce46-4176-aab9-b45a3f49fc4e

 

Our approach to advertising and expanding access to ChatGPT
16 January 2026 | Fidji Simo, OpenAI
OpenAI outlined plans to introduce advertising in ChatGPT’s free and Go tiers while committing to strict separation between ads and AI-generated answers. The company emphasised principles of answer independence, conversation privacy and user control, arguing that ads can help fund broader access to powerful AI without compromising trust. Higher-tier subscriptions, including Plus, Pro and Enterprise, will remain ad-free.
What you need to know: Monetisation strategies for AI assistants are evolving toward hybrid models, making trust, transparency and incentive alignment central to long-term adoption.
Original link: https://openai.com/index/our-approach-to-advertising-and-expanding-access/

 

Rich countries’ greater use of AI risks deepening inequality, Anthropic warns
15 January 2026 | Cristina Criddle, George Hammond and Clara Murray, Financial Times
Anthropic has warned that uneven global adoption of AI could widen economic disparities, as richer countries deploy the technology more intensively and capture disproportionate productivity gains. Analysis of usage data from its Claude chatbot shows that high-income economies use AI primarily for work and productivity, while lower-income countries rely more on educational use, with little evidence of convergence. The research estimates AI could add 1–2 percentage points to annual US labour productivity growth, raising concerns that market-led diffusion alone will reinforce existing divides.
What you need to know: Early AI adoption is translating into structural economic advantage, signalling that access, skills and affordability — not model quality — may define who benefits most from AI.
Original link: https://www.ft.com/content/3ad44e30-c738-4356-91fb-8bb2368685c4

AI Employment and the Workforce

Applied AI: Microsoft Offers AI Copilot Customers Money to Train Employees to Use It
29 December 2025 | Aaron Holmes, The Information

Microsoft is offering financial incentives for large customers to train employees in using Microsoft 365 Copilot, after internal concerns that many enterprises are not adopting the AI tools they pay for. Some deals include tens or hundreds of thousands of dollars earmarked for third-party consultants or resellers to boost usage. The move reflects how AI adoption challenges are increasingly organisational rather than technical, as Microsoft works to justify Copilot’s premium pricing versus rivals like ChatGPT and Gemini.

What you need to know: The next phase of AI competition will depend not just on building powerful models, but on successfully integrating them into workplace routines and proving measurable productivity gains.

 

AI forecast to put 200,000 European banking jobs at risk by 2030
31 December 2025 | Simon Foy, Financial Times

Morgan Stanley analysts estimate that European banks could cut around 10% of their workforce—over 200,000 roles—by 2030 as they adopt AI and accelerate digitalisation. Job losses are expected to fall heavily on back- and middle-office functions, including compliance and risk divisions. While banks promise efficiency gains of up to 30%, some executives caution that implementation remains early and uneven, even as institutions like UBS experiment with AI avatars and leadership training summits.

What you need to know: AI-driven restructuring is beginning to reshape major white-collar industries, with banking emerging as a key test case for how automation will impact skilled service work.

 

The AI Shift: Our bets on how AI will reshape jobs in 2026
1 January 2026 | John Burn-Murdoch and Sarah O'Connor, Financial Times

The FT's AI Shift newsletter offers four predictions for 2026: declining time spent on the increasingly AI-polluted internet, a resurgence of in-person assessments as online text becomes untrustworthy, significant job losses in creative industries where AI-generated content is deemed "good enough," and the emergence of useful AI agents enabled by the widespread adoption of Model Context Protocol (MCP). The authors note that major AI companies have converged around MCP, describing it as the "USB-C port" for AI that will enable agents to work across different systems; a minimal sketch of what that standardisation looks like follows below.

What you need to know: Highlights the Model Context Protocol as a potentially significant development that could accelerate practical AI agent deployment in 2026.
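
For readers wondering what the "USB-C port" metaphor means in practice: MCP standardises the JSON-RPC messages an agent exchanges with external tool servers, so any compliant agent can call any compliant tool. Below is a rough, simplified sketch of the shape of one such request; the tool name and arguments are invented for illustration, and the real protocol involves more (initialisation, capability negotiation, result handling).

```python
import json

# Illustrative only: a simplified "tools/call" request of the kind MCP
# standardises over JSON-RPC 2.0. Because every tool server accepts the
# same message shape, an agent built against the protocol can drive any
# of them -- the "USB-C" idea. The tool name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_library_catalogue",  # hypothetical tool
        "arguments": {"query": "metacognition", "max_results": 5},
    },
}

print(json.dumps(request, indent=2))
```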

 

American animosity toward artificial intelligence grows
2 January 2026 | Evan Gorelick, New York Times

Americans are adopting AI faster than any previous technology while simultaneously expressing unprecedented concern about its implications for jobs, trust, and personal agency. Polling shows most Americans are worried about AI, with four out of five optimists still expressing alarm. The backlash manifests across sectors—from Hollywood strikes to nurse protests to subway ad vandalism—driven by fears about job displacement, algorithmic opacity, and the concentration of control among a handful of Silicon Valley executives. Researchers suggest Americans' uniquely negative views stem from prior disillusionment with social media.

What you need to know: Documents a significant gap between AI adoption rates and public sentiment that may influence future regulation and deployment strategies.

 

Games Workshop bans use of AI in its designs
13 January 2026 | Philip Stafford, Financial Times

Games Workshop has barred employees from using AI in content and design work, citing the need to protect intellectual property and respect human creators. The Warhammer maker said it was taking a cautious approach amid uncertainty over copyright, originality, and brand integrity, even as it continues to hire artists and writers. While a small group of managers will test AI tools, the ban reflects wider unease among creative industries about generative AI’s impact on ownership and value.

What you need to know: Shows resistance to generative AI in creative sectors, underlining unresolved tensions around IP, authorship, and commercial trust in AI-generated content.

 

How to AI-proof your job
January 2026 | John Burn-Murdoch, Financial Times

New labour market data suggests that as AI automates coding and analytical tasks, soft skills are becoming more valuable than pure quantitative ability. Jobs combining technical competence with communication, creativity and collaboration have seen stronger wage growth and employment resilience than roles focused narrowly on maths or programming. The analysis argues that agentic AI tools are turning once-scarce technical skills into commodities, accelerating a long-term shift towards human-centric capabilities.

What you need to know: Reframes AI’s impact on work, showing that adaptability and social skills may offer more protection than technical expertise alone.

AI’s impact on jobs is set to become more pronounced
19 January 2026 | Delphine Strauss, Financial Times
Despite early fears of mass unemployment after the launch of generative AI, labour markets have so far shown resilience. Economists now warn, however, that 2026 may mark a turning point as AI adoption begins to affect hiring patterns, particularly for graduates entering professional services, tech and finance roles. While AI is largely augmenting existing jobs rather than replacing them outright, firms are increasingly using automation to slow recruitment, reshaping career entry points and skill requirements.
What you need to know: Signals a shift from speculative fears about AI and jobs to concrete, uneven labour-market effects—especially for early-career workers—raising policy questions around reskilling and workforce transition.
Original link: https://www.ft.com/content/267037e8-a71f-4025-acca-f441fe712212

 

The great graduate job drought
22 January 2026 | Bethan Staton, Financial Times
Graduate job markets across advanced economies are tightening sharply, with entry-level roles increasingly squeezed by economic uncertainty and the rapid adoption of AI tools. Employers are hiring fewer junior staff while demanding higher skill levels, particularly in analytical and technical roles that overlap with AI-assisted work. Recruiters report that automation is reshaping early-career pathways, reducing traditional training-heavy positions. The result is a growing gap between graduate expectations and available opportunities.
What you need to know: Shows how AI is already reshaping labour-market entry points, potentially accelerating inequality between those who can leverage AI tools and those displaced by them.
Original link: https://www.ft.com/content/c89496b1-bc8d-425e-b86b-ec89402410e4

 

The recruitment company training AI to do your job
24 January 2026 | Bethan Staton, Financial Times
Start-up Mercor is recruiting tens of thousands of highly skilled professionals to train AI systems to perform the same work they are qualified to do, from consulting to journalism and healthcare. Contractors are paid high hourly rates to evaluate, correct and refine large language models, creating a new category of “AI training” work. While some see this as an opportunity to engage with frontier technology, others worry they are accelerating their own redundancy. Economists warn that although AI is still a “human master, AI apprentice” system, the balance may not last.
What you need to know: Illustrates a pivotal transition in the AI economy, where knowledge workers themselves become a key input into systems that may ultimately replace parts of their labour.
Original link: https://www.ft.com/content/0cab0fcd-e355-40d8-83a3-2ad5066d7b48

 

IMF presses governments to step up support for workers displaced by AI
14 January 2026 | Delphine Strauss, Financial Times
New research from the International Monetary Fund finds that AI adoption is already affecting wages and employment in roles exposed to automation, particularly at the entry level. While jobs requiring broader new skills have seen wage premiums, roles focused narrowly on AI-specific skills have experienced job losses. The IMF urges governments to strengthen retraining, social protection and education reform so workers can learn to use AI rather than be replaced by it.
What you need to know: Evidence of AI’s labour market impact is emerging unevenly, reinforcing the need for policy responses that focus on skills adaptation rather than assuming net job creation.
Original link: https://www.ft.com/content/7e2ae3ad-4cfb-4e3f-a370-702781899e05

 

Sadiq Khan to warn AI could cause ‘mass unemployment’ in London
15 January 2026 | Jim Pickard and Melissa Heikkilä, Financial Times
London mayor Sadiq Khan is set to warn that artificial intelligence could trigger “mass unemployment” in the capital unless the government intervenes to manage the transition. With London heavily exposed to white-collar work in finance, professional services and creative industries, Khan argues that entry-level jobs are at particular risk as automation accelerates faster than new roles emerge. City Hall is launching an AI and future-of-work task force alongside free AI training, while urging ministers to treat AI as a policy challenge rather than a purely market-driven shift.
What you need to know: Political concern over AI-driven job displacement is intensifying, increasing pressure on governments to pair AI adoption with labour market and skills policy.
Original link: https://www.ft.com/content/6f92844e-6eb6-48dc-a36a-fd63115e45b5

AI Development and Industry

Top Stories of 2025! Big AI Poaches Talent, Reasoning Models Boost Performance, Agents Write Code, Data Centers Drive GDP, China Turns the Tables
26 December 2025 | Andrew Ng, The Batch @ DeepLearning.AI

DeepLearning.AI’s year-end special issue frames 2025 as the beginning of AI’s industrial age, marked by rapid advances in reasoning models, agentic coding, and massive infrastructure buildouts. The newsletter describes how reinforcement learning trained models to “think” step by step, dramatically boosting performance in maths, coding, robotics, and scientific discovery. It also chronicles the fierce talent war led by Meta’s lavish compensation offers, as well as unprecedented capital spending on trillion-dollar data centre expansion. Meanwhile, US chip restrictions backfired as China accelerated domestic AI hardware development, underscoring the geopolitical stakes of compute.

What you need to know: Captures the convergence of reasoning breakthroughs, agentic applications, and global infrastructure competition that is reshaping AI into a full-scale industrial and geopolitical force.

 

‘South Korea’s Google’ pitches AI alternative to US and China
January 2026 | Song Jung-a, Financial Times

Naver, South Korea’s dominant search company, is positioning itself as a “sovereign AI” alternative for countries wary of relying on American or Chinese cloud infrastructure. The company argues it can offer greater national control over the AI stack, from data centres to applications, tailoring models to local languages and cultural contexts. Naver is aggressively expanding overseas, investing heavily in Nvidia GPUs and building large-scale data centre capacity, including a 500MW facility in Morocco. Analysts note both the opportunity and the challenge: sovereign AI requires access to vast local data resources, something Naver has struggled with outside Korea.

What you need to know: Shows how AI competition is fragmenting into regional sovereignty plays, with cloud infrastructure and localisation becoming strategic battlegrounds.

 

Google’s Koray Kavukcuoglu: Turning abstract AI thinking into user-friendly products
January 2026 | Melissa Heikkilä, Financial Times

Koray Kavukcuoglu, Google’s chief AI architect and CTO of DeepMind, outlines how the company is translating frontier AI research into everyday products through its Gemini 3 model. He argues that Google’s advantage lies in owning the full AI stack — from chips and data centres to consumer products — allowing rapid deployment at scale. Gemini 3’s advances in multimodality, agentic behaviour and interactive interfaces are positioned as steps towards more general-purpose AI, though Kavukcuoglu cautions that true artificial general intelligence remains an open research challenge.

What you need to know: Highlights the strategic importance of integrating AI research tightly with products, signalling how Big Tech aims to turn model breakthroughs into mass adoption.

 

How to Test for Artificial General Intelligence
2 January 2026 | Andrew Ng, DeepLearning.AI (The Batch)

Andrew Ng argues that “AGI” has become a term overloaded with hype, and that the traditional Turing Test is no longer a meaningful way to evaluate claims of general intelligence. He proposes a new “Turing-AGI Test” focused on whether an AI system can autonomously carry out complex, economically valuable work across many domains — not just produce convincing conversation. Ng stresses that better evaluation methods are essential, both to measure progress realistically and to avoid being misled by systems that appear broadly capable but are actually narrow or brittle.

What you need to know: As AI capabilities expand, defining and testing “general intelligence” is becoming one of the most contested and consequential questions in frontier AI development.
Original link: https://www.deeplearning.ai/the-batch/how-to-test-for-artificial-general-intelligence/

 

AI and Asia will dominate CES
2 January 2026 | Lauren Lau, Bloomberg Technology

Artificial intelligence is set to take centre stage at CES 2026, with Asian companies expected to dominate much of the spotlight as American giants like Apple sit out the event. The newsletter highlights how firms across China, Japan and South Korea are using AI narratives to reposition themselves globally, from Lenovo’s push into “agent-native experiences” to China’s growing consumer-tech influence. The piece also notes fresh momentum behind Chinese AI innovators such as DeepSeek, alongside rising investor excitement around domestic chipmakers like Biren.

What you need to know: CES is increasingly becoming a global battleground for AI branding, showing how Asian firms are challenging US leadership in consumer-facing AI and hardware ecosystems.

 

DeepSeek Touts New Training Method as China Pushes AI Efficiency
2 January 2026 | Saritha Rai, Bloomberg

DeepSeek has outlined a new framework aimed at improving scalability while reducing the computational and energy demands of training advanced AI systems. The Hangzhou-based group has become closely watched for low-cost engineering approaches that could offset China’s limited access to Nvidia chips. Bloomberg notes anticipation is rising for DeepSeek’s next flagship R2 model, which could again disrupt global AI rankings.

What you need to know: AI leadership will increasingly depend on training efficiency, and DeepSeek’s work shows how constraints can drive unconventional breakthroughs.

 

DeepSeek kicks off 2026 with paper signalling push to train bigger models for less
4 January 2026 | Vincent Chow, South China Morning Post

Chinese AI start-up DeepSeek has released a technical paper proposing a new architecture called Manifold-Constrained Hyper-Connections, designed to scale large models without proportionally increasing compute costs. The research reflects China’s push for cost-efficient innovation under US chip restrictions, with DeepSeek emerging as a leading signal of Chinese frontier model strategy. Tests on models up to 27bn parameters suggest the method could support stable large-scale training.

What you need to know: Efficiency breakthroughs may become China’s main lever in competing with US labs, shifting the frontier AI race from raw scale to smarter architectures.

 

AI start-ups take on Google in fight to reshape web browser market
5 January 2026 | Melissa Heikkilä, Financial Times

OpenAI and Perplexity have launched new AI-powered browsers, challenging Google Chrome’s dominance as they bet that AI will redefine how people navigate the internet. Browsers are increasingly seen as platforms for AI agents that can take direct actions—booking tickets, shopping, or managing tasks—rather than simply displaying pages. However, critics warn that AI browser experiences remain glitchy and raise significant privacy concerns, while Google has responded quickly by embedding Gemini capabilities directly into Chrome.

What you need to know: Control of the browser layer could determine who owns the next interface for AI interaction, making this a crucial battleground for consumer AI ecosystems and data access.

 

Apple battery supplier takes on Chinese rivals in smart glasses push
6 January 2026 | Leo Lewis & Harry Dempsey, Financial Times

Japanese electronics group TDK, Apple’s largest smartphone battery supplier, is moving aggressively into AI-enabled smart glasses, competing directly with Chinese rivals. The company is developing power-efficient wearable systems combining batteries, sensors, and eye-tracking software, and unveiling new meta-optic lens technology that projects images directly onto the retina. Analysts believe smart glasses could become one of the next major consumer battlegrounds for AI applications, with global sales forecast to surge from under 10 million units in 2025 to 60 million annually by 2030. The expansion reflects both geopolitical supply chain diversification and the race to embed AI into everyday wearables.

What you need to know: Shows how AI is shifting from software into consumer hardware ecosystems, with smart glasses emerging as a strategic interface for everyday AI adoption.

 

China’s humanoid robots come out fighting
6 January 2026 | June Yoon, Financial Times (Lex)

Chinese humanoid robot makers have made rapid advances in motion control and manufacturing scale, pushing the sector closer to mass production. Firms such as Ubtech and Unitree have benefited from strong investor enthusiasm and integration with AI systems, while production volumes and affordability increasingly outpace Western rivals. However, the article cautions that most humanoid robots still rely heavily on traditional control systems and human supervision, limiting margins. The real value, it argues, will ultimately shift from hardware to AI software.

What you need to know: Shows how AI-driven robotics is emerging as China’s next industrial frontier, but with profitability hinging on software intelligence rather than mechanical prowess.

 

Hipster robots and cuddly AI take over Las Vegas
7 January 2026 | Michael Acton, Financial Times

CES 2026 is leaning hard into “AI everywhere”, with products ranging from AI-infused toothbrushes to interactive plush toys designed to remember conversations and respond emotionally. Amid the spectacle—robot tunnels, humanoid “hipster” bots, and Nvidia CEO Jensen Huang stealing attention with robots—the underlying story is that consumer tech is being rebuilt around always-on assistants that anticipate needs and sit inside everyday objects. The show’s renewed buzz reflects how the promise of increasingly capable AI systems is re-energising hardware, startups, and the broader tech supply chain.

What you need to know: AI’s next wave is being productised as embedded, “ambient” intelligence—moving from chatbots on screens to assistants inside physical devices consumers live with.

 

Google and Boston Dynamics robotics teams team up
7 January 2026 | Data Points, DeepLearning.AI

Google DeepMind and Boston Dynamics have announced a partnership to deploy Gemini models on robots such as Atlas and Spot, aiming to improve real-world decision-making in industrial environments. The collaboration will test Gemini-powered humanoids in Hyundai factories, while robot data will help train future physical-world AI systems. Nvidia also unveiled its Alpamayo autonomous driving models, reinforcing the sense that robotics may be the next major AI platform shift.

What you need to know: The AI frontier is moving beyond text into embodied intelligence, as leading labs race to make multimodal foundation models useful in factories, vehicles and physical spaces.

 

The future of PCs in the age of AI
8 January 2026 | Richard Waters, Financial Times

Richard Waters argues that while the AI data-centre boom dominates attention, the PC industry is trying to reposition itself as the “front end” of AI as some model processing migrates to the edge. He points to competition in PC chips (Arm-based designs, renewed pushes from Intel, Qualcomm and AMD) and the promise of “small language models” that could run locally. Yet he warns the shift may be delayed by supply-chain distortions: high-bandwidth memory is being pulled toward AI servers, potentially raising prices and choking consumer hardware. Meanwhile, “AI PC” premiums may be hard to justify until compelling local-first applications arrive.

What you need to know: The next phase of AI isn’t just bigger data centres — it’s a hardware and software contest over where inference runs (cloud vs edge), constrained by chips, memory, and real-world apps.

 

Applied AI: Amazon Pulls Ahead in Agent Wars—by Ruffling Retailers’ Feathers
8 January 2026 | Ann Gehan, The Information

Amazon’s new “Buy for Me” feature, which uses an AI agent to complete purchases on external retailer sites, is sparking backlash from small merchants who say their products are being listed without permission. The controversy reflects Amazon’s aggressive approach to AI-powered commerce, contrasting with OpenAI’s slower, opt-in checkout strategy built with Stripe and Shopify. The race to build reliable shopping agents is proving technically difficult, but Amazon’s scale may allow it to seize early dominance.

What you need to know: AI agents are rapidly moving from experimentation into real commercial power, raising new conflicts over data access, platform control, and the future of automated online shopping.

 

For now, autonomous vehicles still need humans
8 January 2026 | John Thornhill, Financial Times

Despite over $100bn spent pursuing full Level 5 autonomy, robotaxis still struggle with unpredictable “edge cases” that require remote human intervention. Nvidia has launched new “reasoning models” for self-driving cars, claiming the “ChatGPT moment for physical AI,” but real-world messiness continues to defeat simulation. The article argues that robotics may be the next AI frontier — but sustainable business models remain elusive.

What you need to know: Physical AI is advancing rapidly, but autonomy remains constrained by the long tail of real-world complexity, making humans an ongoing part of the loop.

 

AI-driven demand transforms memory chip industry
7 January 2026 | Vlad Savov, Bloomberg

The AI infrastructure buildout has fundamentally transformed the memory chip sector, driving valuations of manufacturers like Samsung and SK Hynix to unprecedented heights. SK Hynix, which solely produces memory chips, has now surpassed Samsung's previous peak market capitalisation, reflecting the enormous demand for high-bandwidth memory used in AI accelerators. Industry leaders including AMD's Lisa Su and Nvidia's Jensen Huang emphasise that demand far outstrips supply, with analysts warning that consumer electronics prices will rise as production lines prioritise lucrative AI-related memory products.

What you need to know: Signals a structural shift in semiconductor economics where AI infrastructure demand is creating sustained supply constraints across the memory chip industry.

 

Three questions AI needs to answer
January 2026 | The editorial board, Financial Times

The Financial Times editorial board argues that 2026 will be defined less by experimentation and more by hard-headed evaluation of whether generative AI can deliver on its extraordinary promises. The piece highlights three central challenges: whether scaling laws are running out of momentum, whether AI leaders can establish durable business models as systems become commoditised, and how US firms will respond to the growing popularity of Chinese open-weight models. It points to emerging research pathways such as neurosymbolic AI, alongside intensifying competition between closed proprietary systems and cheaper, adaptable open alternatives.

What you need to know: Signals a shift from hype to scrutiny, with the next phase of AI progress hinging on efficiency, business sustainability, and geopolitical competition in open models.

 

Nvidia on China and Asia’s data centre financing
8 January 2026 | Cissy Zhou, Yifan Yu, Ryan McMorrow, Zijing Wu, Lorretta Chen and Shoya Okinaga, Financial Times (with Nikkei Asia)

From CES 2026, the #techAsia newsletter tracks how “physical AI” (robots, embodied systems, self-driving) is becoming more visible in consumer tech, even as many products still look like old ideas rebranded with AI buzzwords. It also spotlights the intensifying—and increasingly complex—financing boom behind Asia’s AI data-centre buildout, with tens of billions in private equity flowing in and more aggressive debt structures emerging amid early overcapacity worries.

What you need to know: The AI race is now as much about financing and infrastructure risk as algorithms—capital markets and data-centre economics will shape who can scale models and deploy products.

 

Chinese firms dominated global humanoid robot shipments in 2025
8 January 2026 | Bloomberg News

Chinese manufacturers accounted for the vast majority of the roughly 13,000 humanoid robots shipped globally in 2025, according to Omdia. Companies such as AgiBot, Unitree and UBTech have leveraged low costs, manufacturing scale, and rapid AI integration to outpace US rivals including Tesla and Figure AI. Analysts forecast shipments to surge into the millions over the next decade as AI models, dexterous manipulation, and reinforcement learning improve. Policymakers, however, warn of bubble risks as more than 150 firms enter the space.

What you need to know: Demonstrates how AI is transforming robotics from a research novelty into a mass-manufactured industry, with China currently setting the pace.

 

China’s AI chip dragons’ firepower is mostly mythical
9 January 2026 | Jennifer Hughes, Financial Times (Lex)

Investor enthusiasm has surged around newly listed Chinese AI chipmakers seeking to fill gaps left by US export controls on Nvidia. But a Lex analysis argues that companies such as Biren, MetaX and Moore Threads remain loss-making, with revenues far below established players like Huawei and Cambricon. While Beijing’s support has boosted domestic market share, the firms face intense competition, high valuations, and uncertain paths to profitability. The article warns that state-backed optimism does not guarantee long-term technological or commercial success.

What you need to know: Undercuts the narrative of rapid Chinese self-sufficiency in AI chips, reminding investors that scale, software ecosystems, and sustained profitability remain hard to replicate.

 

The AI Shift: Agentic AI is coming for quantitative research
8 January 2026 | John Burn-Murdoch and Sarah O’Connor, Financial Times

The authors argue that agentic coding tools — systems that can act on a user’s machine, fetch data, run analyses, and generate write-ups — are compressing what used to be days of quantitative work into minutes. They describe researchers using tools like Claude Code and OpenAI’s Codex CLI to reproduce old projects, extend them with new data, and re-run analyses with minimal friction, potentially improving reproducibility and easing parts of the replication crisis. But they also warn of second-order effects: cheaper research could mean more low-quality output, and the premium may shift from technical execution to “taste” — choosing the right questions and judging what matters.

What you need to know: Agentic AI is moving beyond chat into end-to-end knowledge work, changing who can do research, how fast it happens, and what skills become scarce (verification, judgment, and idea quality).
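For a sense of what "end-to-end" means here, the sketch below hand-writes the kind of fetch-analyse-write pipeline these agents now generate and execute on their own. Everything in it is invented for illustration: the inline CSV stands in for data an agent would fetch from an API or a replication package.

```python
# A hand-written toy version of the fetch -> analyse -> write-up loop that
# agentic coding tools automate. The data and the statistic are illustrative.
import csv
import io
import statistics

RAW = """year,value
2022,1.8
2023,2.4
2024,3.1
2025,4.0
"""

def fetch():
    # An agent would pull this from a live source; we parse an inline CSV.
    return list(csv.DictReader(io.StringIO(RAW)))

def analyse(rows):
    values = [float(r["value"]) for r in rows]
    # Year-on-year growth rates, then their mean.
    growth = [(b - a) / a for a, b in zip(values, values[1:])]
    return {"mean_growth": statistics.mean(growth), "n": len(values)}

def write_up(stats):
    # An agent would draft prose around the numbers; here we just template it.
    return (f"Across {stats['n']} observations, the series grew by "
            f"{stats['mean_growth']:.1%} per year on average.")

print(write_up(analyse(fetch())))
```

The article's point is that an agent can now produce, run and document a script like this in one pass, which is why the scarce skill shifts from writing the code to judging whether the question, the data and the numbers actually make sense.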

 

Google introduces personalised shopping ads to AI tools
January 2026 | Cristina Criddle, Financial Times

Google is rolling out personalised advertising inside its AI-powered shopping tools, allowing brands to target users at the moment they signal purchase intent through conversational queries. The ads will appear within Google’s “AI Mode”, powered by its Gemini model, and can include exclusive discounts, bundles or free shipping offers. The move marks a strategic shift away from traditional search ads as Google races to monetise generative AI usage and defend its core advertising business against challengers such as OpenAI and Microsoft.

What you need to know: Shows how AI interfaces are becoming the next battleground for advertising revenue, reshaping how search, commerce and monetisation intersect.

 

Data Centers Are Easier on the Environment Than You Might Think
16 January 2026 | Andrew Ng, DeepLearning.AI (The Batch)

Ng takes an intentionally contrarian position on the environmental backlash against AI data centers, arguing that many criticisms — around emissions, electricity costs, and water use — are overstated. He suggests that concentrating compute in modern, efficient facilities can actually reduce overall energy waste compared with more fragmented infrastructure, and that blocking construction may slow the transition to cleaner grids. While acknowledging real sustainability challenges, he frames data centers as a necessary foundation for AI progress, and one that can be managed responsibly rather than rejected outright.

What you need to know: Energy and infrastructure constraints are rapidly becoming a key limiting factor for scaling AI — shaping where models are trained, deployed, and regulated.
Original link: https://www.deeplearning.ai/the-batch/data-centers-are-more-easier-on-the-environment-than-you-might-think/

 

OpenAI brings advertising to ChatGPT in push for new revenue
17 January 2026 | George Hammond and Cristina Criddle, Financial Times

OpenAI is rolling out ads in ChatGPT (starting with the free tier and its cheapest paid plan), placing clearly labelled adverts at the bottom of responses when relevant to the query. The company expects “low billions” in ad revenue in 2026 and frames the move as necessary to fund enormous compute commitments, while also insisting it must preserve trust—especially as it expands personalisation via “memories” that could also enable hyper-targeted advertising.

What you need to know: Monetisation is shifting from subscriptions to attention—ads and personalisation create new incentives (and risks) that could influence how AI assistants are designed, governed, and trusted.

 

AI advertising wars are finally breaking out
23 January 2026 | Richard Waters, Financial Times

Richard Waters argues that the long-anticipated clash over AI-driven advertising and search monetisation is now beginning. OpenAI has outlined plans to introduce search-like ads in ChatGPT, while Google is testing product ads alongside AI-generated search results. Although Google retains deep distribution and personal data advantages, the rise of AI agents could fundamentally disrupt how consumers find information and make purchases. The article points to emerging standards such as Universal Commerce Protocol, Model Context Protocol, and agent-to-agent interaction frameworks that may replace human browsing with machine-to-machine commerce.

What you need to know: Signals the next frontier of AI competition: not just model quality, but control of online attention, shopping behaviour, and advertising economics.
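One of the standards named above, Model Context Protocol, already has an official Python SDK, which gives a feel for how machine-to-machine commerce might be wired. The sketch below is speculative: the server name, catalogue and price-quote tool are all invented, and a real storefront would need authentication and payment rails that the protocol itself does not provide.

```python
# A speculative sketch of a storefront exposing a tool to shopping agents via
# the Model Context Protocol Python SDK (pip install mcp). All names, products
# and prices here are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-storefront")

CATALOGUE = {"notebook": 3.50, "pen": 1.20}  # hypothetical stock

@mcp.tool()
def quote(item: str, quantity: int) -> str:
    """Return a price quote that a shopping agent could compare across stores."""
    if item not in CATALOGUE:
        return f"Item '{item}' is not stocked."
    return f"{quantity} x {item}: ${CATALOGUE[item] * quantity:.2f}"

if __name__ == "__main__":
    mcp.run()  # exposes the tool over MCP's default stdio transport
```

An agent connected to several such servers could call quote across storefronts and compare offers without a human ever loading a web page, which is the scenario the advertising industry is now bracing for.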

 

Meta strikes deals with nuclear start-ups to meet AI power demand
January 2026 | Martha Muir, Financial Times

Meta has signed agreements with nuclear start-ups Oklo and TerraPower to secure long-term electricity supplies for its AI data centres, pre-paying for power from reactors still under development. The deals, alongside contracts with existing nuclear operators, reflect mounting concern that traditional grids cannot meet AI’s surging energy demands. While the agreements have boosted nuclear start-up valuations, analysts warn that licensing delays and cost overruns could undermine timelines.

What you need to know: Shows how energy availability is becoming a strategic constraint on AI scaling, pushing tech companies into unconventional infrastructure bets.

 

Nvidia and Microsoft back AI breakthrough for gene therapies
January 2026 | Michael Peel and Aanu Adeoye, Financial Times

An international team including researchers from Nvidia and Microsoft has trained AI models (“Eden”) on Basecamp Research’s genomic dataset spanning more than a million species—much of it previously absent from public databases—to generate potential new gene-editing and drug therapies. The work highlights AI’s growing role in extracting “hidden relationships” from evolutionary history, with early lab results pointing to AI-designed enzymes for large gene insertion and peptide libraries targeting drug-resistant superbugs, though experts stress the need for further validation and safety testing.

What you need to know: AI’s next frontier is domain-specific foundation models trained on proprietary scientific datasets—potentially unlocking new therapeutic modalities, but only if they translate reliably beyond the lab.

 

Korea kicks off ‘AI Squid Game’ to build sovereign foundation models
20 January 2026 | Yoolim Lee, Bloomberg
South Korea has launched a state-backed competition to identify and accelerate the country’s strongest homegrown AI foundation models, with teams facing elimination every six months. The initiative aims to reduce dependence on US and Chinese AI systems by developing indigenous, end-to-end models trained on domestic infrastructure. Leading conglomerates and startups are competing for access to GPUs, data and long-term strategic positioning, amid controversy over what counts as “truly domestic” AI. The contest reflects Korea’s broader ambition to become a full-stack AI power combining chips, cloud and applications.
What you need to know: Shows how governments are increasingly using competitive, state-led mechanisms to accelerate sovereign AI capabilities in response to geopolitical and economic pressure.
Original link: https://www.bloomberg.com/news/features/2026-01-19/korea-kicks-off-ai-squid-game-for-best-sovereign-foundation-models

 

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption
20 January 2026 | Melissa Heikkilä, Financial Times
Microsoft chief executive Satya Nadella cautioned that AI risks becoming a speculative bubble unless its benefits spread beyond big tech and wealthy economies. Speaking at Davos, he argued that broad adoption across industries and regions is essential for AI to deliver lasting productivity gains. Nadella expressed confidence in AI’s long-term impact but stressed that uneven diffusion would undermine its economic promise. He also reiterated Microsoft’s view that the future will involve multiple AI models rather than a single dominant provider.
What you need to know: Signals growing concern among AI leaders that adoption, not technical capability, is now the main constraint on sustainable AI-driven growth.
Original link: https://www.ft.com/content/2a29cbc9-7183-4f68-a1d2-bc88189672e6

 

UAE to get advanced AI chips in coming months, G42 chief says
21 January 2026 | Alex Dooler and Joumanna Bercetche, Bloomberg
The UAE is set to receive shipments of advanced AI chips from Nvidia, AMD and Cerebras as part of a major data-centre expansion led by G42. The company expects the first phase of its Abu Dhabi build-out to come online within months, with plans to scale capacity to multiple gigawatts. Backed by sovereign wealth and partnerships with Microsoft and OpenAI-linked initiatives, the UAE is positioning itself as a global AI infrastructure hub. US approval of chip exports underscores the geopolitical sensitivity of such developments.
What you need to know: Illustrates how AI leadership is increasingly tied to access to compute and energy, with Gulf states emerging as major infrastructure players.
Original link: https://www.bloomberg.com/news/articles/2026-01-20/nvidia-amd-uae-to-get-advanced-ai-chips-in-couple-of-months-g42-chief-says

 

OpenAI launches ChatGPT Go worldwide
20 January 2026 | Data Points @ DeepLearning.AI
OpenAI has rolled out ChatGPT Go globally, a lower-cost subscription tier priced below ChatGPT Plus that offers higher usage limits and access to newer models. The plan expands OpenAI’s three-tier pricing strategy as it seeks to convert free users into paying customers, while keeping premium tiers ad-free. ChatGPT Go includes expanded memory and context features, allowing for more persistent personalisation. The move follows earlier launches in emerging markets and reflects OpenAI’s push towards mass-market adoption.
What you need to know: Marks a strategic shift from frontier capability alone towards pricing, scale and sustained consumer adoption as key battlegrounds in AI.
Original link: https://www.deeplearning.ai/the-batch/openai-launches-chatgpt-go-worldwide/

 

Robots only half as efficient as humans, says leading Chinese producer
25 January 2026 | William Langley, Financial Times
Chinese robotics firm UBTech says its humanoid robots currently achieve only 30–50% of human productivity, underscoring the technical and economic challenges of replacing labour with AI-driven machines. While manufacturers are still racing to deploy humanoids to stay competitive, most use cases remain limited and experimental. Analysts note that issues such as power supply, dexterity and decision-making complexity continue to constrain performance. Even so, policymakers and companies see factory deployments as a critical testing ground for future improvement.
What you need to know: Tempers hype around embodied AI by showing how far robotics still lags behind software-based AI gains in real-world productivity.
Original link: https://www.ft.com/content/0f831781-b450-4644-9f83-b3f76968a4af

 

Scientists unveil crawling robot hand that can grasp multiple objects
21 January 2026 | Michael Peel, Financial Times
Researchers have developed a detachable robotic hand capable of crawling independently and gripping objects in multiple directions, surpassing some limitations of human hands. The design allows the hand to operate separately from its host arm, enabling complex manipulation in tight or cluttered environments. Published in Nature Communications, the work highlights potential applications in industrial inspection, repair and maintenance. The system prioritises dexterity and practical utility rather than surveillance or stealth.
What you need to know: Demonstrates how advances in robotic manipulation—not just AI cognition—are expanding the scope of automation in physical environments.
Original link: https://www.ft.com/content/39d6fd07-4202-4404-8f78-38a77caf79a0

 

TikTok owner ByteDance targets Alibaba with AI-led cloud drive
20 January 2026 | Eleanor Olcott, Financial Times
ByteDance is rapidly expanding its enterprise cloud business, Volcano Engine, using proprietary AI models, deep discounts and massive computing capacity to challenge Alibaba’s dominance. The company is leveraging data and infrastructure built for consumer apps such as TikTok and Douyin to offer bespoke AI agents to corporate clients. Although ByteDance remains a smaller player in overall cloud services, it has become China’s second-largest AI cloud provider. Its strategy prioritises closed, commercial models over open-source visibility.
What you need to know: Highlights how consumer AI giants are repurposing data and compute advantages to compete in enterprise AI infrastructure, reshaping cloud markets.
Original link: https://www.ft.com/content/3732a646-da35-4437-bfde-7f9efc2725ff

 

Apple sits out AI arms race to play kingmaker between Google and OpenAI
15 January 2026 | Michael Acton, Stephen Morris and George Hammond, Financial Times
Apple has chosen not to build large-scale AI models or data-centre infrastructure, instead signing a multibillion-dollar deal to use Google’s Gemini models across its devices. The move positions Apple as a powerful distributor rather than a frontier AI developer, potentially shaping competition between Google and OpenAI. While investors worry Apple risks falling behind, the strategy mirrors its earlier decision to make Google Search the default on iPhones. Apple continues to focus on smaller, device-based AI systems.
What you need to know: Illustrates how control over distribution and user ecosystems may be as strategically important as owning frontier AI models.
Original link: https://www.ft.com/content/8033b1bc-4ffe-47ed-baf0-5abea6a1322a

 

China will clinch the AI race
18 January 2026 | Tej Parikh, Financial Times
China is positioning itself to overtake the US in the global AI race by prioritising large-scale deployment over frontier model performance. While US firms still lead in cutting-edge large language models, Chinese companies have narrowed the gap through open-source approaches, algorithmic efficiency and state-backed infrastructure investment. Beijing’s advantages in energy capacity, critical minerals, manufacturing and inference chip production are expected to accelerate the diffusion of “good-enough” AI models across industry and emerging markets, potentially outweighing US strengths in proprietary systems and advanced hardware.
What you need to know: AI leadership is shifting from who builds the best models to who can deploy them at scale, highlighting the strategic importance of infrastructure, energy and industrial policy in AI competition.
Original link: https://www.ft.com/content/d9af562c-1d37-41b7-9aa7-a838dce3f571

 

Don’t hold your breath for robots’ ChatGPT moment
13 January 2026 | Sarah O’Connor, Financial Times
Despite rapid advances in AI and optimistic claims from technology leaders, the widespread economic transformation of robotics is likely to be slow. Unlike generative AI, which can be adopted quickly via software subscriptions, physical automation requires heavy capital investment, long planning cycles and strong commercial justification. Case studies from logistics and retail show that even technically successful robotic systems can fail to scale when demand forecasts or cost structures fall short, underscoring the gap between technical breakthroughs and real-world deployment.
What you need to know: Progress in physical AI will be constrained less by intelligence and more by economics, safety and infrastructure, tempering expectations of rapid labour disruption from robotics.
Original link: https://www.ft.com/content/ed4e523e-923c-493d-b402-98a03f0cf7dd

 

Supply chain snags and TSMC’s big spending
15 January 2026 | Lauly Li, Cheng Ting-Fang, Mitsuru Obe and Cristina Criddle, Financial Times / Nikkei Asia
Global AI growth is increasingly constrained by supply chain bottlenecks, geopolitical uncertainty and infrastructure limits, even as demand remains strong. Shortages of components such as memory chips and power transformers are slowing data centre expansion, while US–China tensions complicate access to advanced AI hardware. Against this backdrop, TSMC plans up to $56bn in capital expenditure in 2026 to meet soaring demand for AI chips, underscoring the central role of semiconductor manufacturing capacity in the AI race.
What you need to know: The pace of AI deployment is now shaped as much by physical supply chains and geopolitics as by algorithmic progress.
Original link: https://www.ft.com/content/15da3d21-25c0-4b39-81c0-a9db7a1e8ea8

 

From AI Experiments to AI Products
23 January 2026 | Andrew Ng, DeepLearning.AI (The Batch)

Writing from the World Economic Forum in Davos, Ng reflects on conversations with CEOs who have struggled to translate scattered AI experiments into meaningful business transformation. He argues that “letting a thousand flowers bloom” with disconnected pilot projects rarely produces payoff, and that companies must redesign workflows and strategy to build true AI-powered products. Successful AI adoption, he suggests, requires integrating systems deeply into operations rather than treating them as bolt-on efficiency tools.

What you need to know: The next stage of AI impact will depend less on model breakthroughs and more on execution — turning raw capability into scalable products and redesigned institutions.
Original link: https://www.deeplearning.ai/the-batch/from-ai-experiments-to-ai-products/

AI and Cybersecurity

US has failed to stop massive Chinese cyber campaign, warns senator
12 December 2025 | Demetri Sevastopulo, Financial Times

A senior US senator has warned that Chinese intelligence continues to access American telecom networks through a sprawling cyber operation known as “Salt Typhoon”, potentially allowing surveillance of unencrypted communications nationwide. Mark Warner blamed staff cuts and fragmented oversight for the failure to contain the breach, arguing that the US response lags behind the scale of the threat. The campaign has exposed systemic weaknesses in telecom infrastructure and cyber governance, raising alarms about national security.

What you need to know: As AI amplifies cyber capabilities on both offence and defence, insecure digital infrastructure becomes a strategic vulnerability with geopolitical consequences.
Original link: https://www.ft.com/content/50e45bac-c16b-48e8-b788-e6b106be9490

 

Fraudsters use AI to fake artwork authenticity and ownership
21 December 2025 | Lee Harris and Josh Spero, Financial Times

The piece reports that art fraudsters are using chatbots and large language models to generate convincing fake invoices, provenance records and certificates of authenticity to support dubious insurance claims or sales. Loss adjusters and provenance researchers describe cases where multiple certificates share identical AI-generated text, forged signatures and invented references, with models “hallucinating” documentation that never existed. While insurers and experts are experimenting with AI to detect such fakes, improvements in generative tools are making it harder to spot doctored documents through simple visual inspection alone.

What you need to know: Illustrates how generative AI is not only creating new economic value but also lowering the barrier to sophisticated fraud, pressuring regulators, insurers and cultural institutions to upgrade verification and audit tools.
Original link: https://www.ft.com/content/fdfb5489-daa0-4e7e-97b7-4317514cd9f4

 

The data breach that hit two-thirds of a country
23 December 2025 | Song Jung-a, Financial Times

South Korean online retailer Coupang has suffered the country’s largest-ever data breach, exposing personal information from more than 33mn accounts, nearly two-thirds of the population, after hackers accessed overseas servers for months before detection. Investigators believe a former employee with privileged access exploited lingering credentials to extract customer data, prompting political backlash, executive resignations and calls for tougher cyber security enforcement. The incident has become a national wake-up call on data protection failures as digital platforms scale rapidly.

What you need to know: In the age of AI-driven personalisation and data-intensive systems, weak cyber security doesn’t just risk privacy, it undermines the data foundations AI systems depend on.
Original link: https://www.ft.com/content/df4042fa-3e56-410f-b905-4aed8fd434ac

AI Regulation and Legal Issues

Europe has ‘lost the internet’, warns Belgium’s cyber security chief
2 January 2026 | Laura Dubois, Financial Times

Belgium’s top cyber security official Miguel De Bruycker has warned that Europe has become dangerously dependent on US cloud providers, making it “impossible” to keep data fully within the EU. He argues the continent is missing out on crucial technologies like AI and cloud computing, even as regulation such as the AI Act may be slowing domestic innovation. Calls are growing for Airbus-style European-scale investment in sovereign digital infrastructure.

What you need to know: AI competitiveness depends on cloud infrastructure, and Europe’s dependence on US hyperscalers remains a structural weakness in building autonomous AI capability.

 

The US is losing its battle to break up Big Tech
5 January 2026 | Stefania Palma, Financial Times

US efforts to break up major technology companies have suffered significant setbacks as judges prove reluctant to order structural remedies such as divestitures, despite finding that companies like Google maintain illegal monopolies. Courts have cited rapidly evolving AI competition and the complexity of splitting multi-trillion-dollar businesses as reasons for imposing softer remedies. The rulings highlight how enforcement delays—with cases often filed years after alleged anti-competitive acquisitions—have undermined regulators' ability to reshape markets, as judges note that competitive landscapes have changed significantly since the original conduct.

What you need to know: Suggests that traditional antitrust approaches may be insufficient to address Big Tech dominance, with implications for how AI market concentration might be regulated.

 

China reviews Meta’s $2bn purchase of AI start-up Manus
7 January 2026 | Ryan McMorrow and Zijing Wu, Financial Times

Chinese officials are reviewing Meta’s $2bn acquisition of Manus, an AI assistant start-up with Chinese roots, for possible violations of technology export controls. The move underscores Beijing’s growing willingness to exert leverage over strategic AI assets, even when companies relocate abroad through “Singapore washing.” The deal also reflects intensifying divergence between US and Chinese AI ecosystems as regulation, chips, and capital fragment global development.

What you need to know: AI is increasingly treated as geopolitically strategic infrastructure, meaning even corporate acquisitions can become flashpoints in US–China technological competition.

 

Trump cuts to academia risk ceding AI lead, warns Microsoft scientist
8 January 2026 | Cristina Criddle and Rafe Rosner-Uddin, Financial Times

Microsoft chief scientist Eric Horvitz warns that cuts to US federal research funding could push talent and ideas overseas and weaken America’s long-term position in AI. He points to the postwar model of public investment in basic science — including the National Science Foundation — as foundational to today’s AI “moment,” and notes recent grant cancellations and reductions that critics fear will accelerate brain drain. The article also underscores how university research has historically fed industry advances, from core ideas behind large-scale models to breakthroughs like reinforcement learning.

What you need to know: Frontier AI leadership depends on the upstream pipeline — basic research, training, and open inquiry — not just private-sector compute and productization.

 

What lies behind Trump’s retro oil plundering?
9 January 2026 | Gillian Tett, Financial Times

Gillian Tett argues that US moves to secure fossil-fuel resources reflect a “matter matters” worldview — but warns it may be strategically backward for the AI era. She contrasts a fossil-heavy approach with China’s rapid build-out of renewable generation and electrification, framing cheap, scalable electricity as a decisive input to AI competitiveness. The column suggests that undermining renewables could slow the US build-out of the power infrastructure needed for data centres and broader AI-driven growth, even if it yields short-term geopolitical leverage.

What you need to know: Energy strategy is AI strategy — electricity capacity, price, and build speed are becoming core determinants of where AI infrastructure scales fastest.

EU readies tougher tech enforcement in 2026 as Trump warns of retaliation
January 2026 | Barbara Moens, Financial Times

The EU is preparing a more aggressive phase of enforcement under landmark laws like the Digital Markets Act and Digital Services Act, challenging firms including Google, Meta, Apple and Musk’s X. Brussels has begun probing AI-specific competition issues, including whether Meta restricts rival AI access through WhatsApp and how Google uses online content for training models. However, Trump has threatened tariffs in retaliation, raising the risk of a transatlantic tech trade conflict.

What you need to know: AI regulation is shifting from drafting rules to enforcement, and Europe’s approach could shape how frontier AI platforms are governed globally.

 

London emerges as frontline in US-China battle over robotaxis
January 2026 | Tim Bradshaw, Financial Times

Waymo and Baidu are preparing robotaxi launches in London as soon as 2026, turning the UK capital into a high-profile testing ground for US and Chinese autonomous driving leaders. The UK’s move to allow commercial driverless trials this spring is accelerating plans, and London could become the first city where both US and Chinese robotaxis operate at scale. The piece also underscores the dual nature of autonomous vehicles: they promise safety and mobility benefits, but they’re “mobile AI supercomputers” that raise privacy, regulatory, and national-security concerns—especially regarding sensor data collection and cross-border tech rivalry.

What you need to know: Robotaxis are one of the most consequential “AI-in-the-real-world” deployments, where progress depends as much on regulation, safety validation, and trust as on model capability.

 

China agrees to allow local companies to buy Nvidia H200 chips
12 January 2026 | Data Points, DeepLearning.AI

China has approved the purchase of Nvidia’s H200 AI chips by domestic companies, reversing earlier restrictions while barring their use by the military and sensitive state entities. The move opens a potentially $50bn market for Nvidia, with Chinese tech giants Alibaba and ByteDance expressing interest in large orders. Although the H200 is based on Nvidia’s older Hopper architecture, it still significantly outperforms China’s domestic alternatives, underscoring Beijing’s continued reliance on foreign high-end compute. The deal transfers regulatory risk to buyers through strict payment terms.

What you need to know: Shows how geopolitics is reshaping access to AI compute, with China balancing strategic autonomy against the need for cutting-edge hardware to stay competitive.

 

Donald Trump calls for emergency energy auction to make tech giants pay for AI power
16 January 2026 | Martha Muir, Financial Times

US President Donald Trump has urged the country’s largest grid operator to hold an emergency auction forcing data centre operators to pay directly for new power plants needed to support AI infrastructure. The proposal aims to curb rising electricity bills, which have climbed partly due to surging demand from AI-driven data centres. Tech companies including Microsoft and Amazon have pledged to shoulder higher energy costs, while utilities warn of massive investment needs. The move underscores how AI growth is straining physical infrastructure far beyond the tech sector.

What you need to know: Highlights energy as a core bottleneck for AI scaling, with power availability now shaping policy, pricing, and the pace of data-centre expansion.

 

Indonesia temporarily blocks access to Grok over sexualised images
January 2026 | Reuters

Indonesia has become the first country to block access to Elon Musk’s Grok chatbot, citing concerns over AI-generated sexualised content. The ban follows similar scrutiny in Europe and Asia after Grok produced explicit images, including of minors, prompting xAI to restrict some features to paying users. Indonesian officials said non-consensual sexual deepfakes violate human rights and digital safety laws, signalling a tougher stance on AI platforms operating across borders.

What you need to know: Illustrates how national regulators are increasingly willing to block AI tools outright, fragmenting global deployment and raising compliance costs.

 

Mother of one of Elon Musk’s children sues xAI over sexual images
January 2026 | Hannah Murphy & Rafe Rosner-Uddin, Financial Times

Ashley St Clair has sued xAI, alleging that Grok generated and distributed sexualised images of her without consent, including altered images of her as a minor. The case adds to mounting legal pressure on Musk’s AI company following global outrage over non-consensual deepfakes. Regulators in multiple jurisdictions are now investigating whether platforms can be held directly liable when AI systems generate illegal content.

What you need to know: Signals a potential legal turning point, with courts increasingly testing whether AI developers bear responsibility for generated harm.

 

White House sets tariffs to take 25% cut of Nvidia and AMD sales in China
15 January 2026 | Aime Williams, Michael Acton, Camilla Hodgson and Eleanor Olcott, Financial Times

The White House has introduced tariffs designed to implement a deal that would take a 25% cut of Nvidia and AMD sales of certain AI chips into China, effectively monetising export permissions. The policy is positioned as a national-security measure with a novel legal structure, while still carving out exemptions for chips used to build domestic US AI infrastructure. The piece also highlights ongoing uncertainty over China’s willingness to accept imports amid a push for semiconductor self-sufficiency, alongside broader tensions over critical minerals and supply chains.

What you need to know: Compute supply is now a geopolitical instrument — export controls, tariffs, and industrial policy are directly shaping who gets advanced AI hardware, at what price, and under what conditions.

 

Nvidia suppliers halt H200 output after China blocks chip shipments
17 January 2026 | Zijing Wu and Eleanor Olcott, Financial Times

Parts suppliers for Nvidia’s H200 paused production after Chinese customs reportedly blocked shipments, injecting uncertainty into a chip that had only recently regained a path to approval for China sales. The disruption underscores how rapidly policy and regulatory signals—on both sides—can reshape hardware availability, procurement plans, and even spur shifts to alternative chips (including via grey markets).

What you need to know: AI capability is increasingly constrained by geopolitics—export controls and customs actions can throttle compute supply and accelerate domestic substitution efforts.

 


Data centre groups plan lobbying blitz to counter AI energy backlash
25 January 2026 | Rafe Rosner-Uddin, Financial Times
Major US data-centre operators are preparing an aggressive lobbying and public-relations push in response to growing opposition over the energy, water and environmental costs of AI infrastructure. With dozens of projects delayed or blocked, companies are seeking to reframe data centres as economic enablers while urging governments to address grid underinvestment. Tech groups argue that public resistance now poses a serious bottleneck to scaling AI capacity.
What you need to know: Highlights energy and infrastructure as emerging political constraints on AI growth, underscoring that compute scale is no longer just a technical or financial challenge.
Original link: https://www.ft.com/content/f45d45fc-c0ea-463c-8edf-38fb99ed5c05

 

Data centre suffers setback after UK government admits planning error
23 January 2026 | Alistair Gray and Gill Plimmer, Financial Times
A £1bn UK data-centre project near London has been hit by a legal setback after the government conceded it made an error in granting planning permission. Campaigners argue that officials failed to properly assess environmental and energy impacts, reflecting rising scrutiny of data-centre expansion. The case exposes tensions between the UK’s ambition to attract AI investment and growing local and environmental resistance.
What you need to know: Shows how regulatory and planning processes are becoming friction points for AI infrastructure in Europe, potentially slowing deployment despite political support for AI growth.
Original link: https://www.ft.com/content/888c756f-4de1-4b71-a0cb-05db1098f976

 

Trump’s TikTok deal is a gift to China
23 January 2026 | Jim Secreto, Financial Times
A US-brokered deal allowing TikTok’s US operations to continue under American oversight has stabilised ByteDance while leaving its core recommendation algorithm firmly under Chinese control. By licensing rather than divesting the algorithm, the compromise avoids a ban but preserves ByteDance’s most valuable AI asset. The outcome removes a major political risk for the company at a moment when it is investing heavily in AI and computing power. Critics argue the arrangement strengthens one of China’s most strategically important tech firms.
What you need to know: Demonstrates how control over AI algorithms—not just ownership of platforms—has become central to geopolitical competition.
Original link: https://www.ft.com/content/59b91fc8-03a1-48df-9821-e2fdff24bd33

 

US regulator appeals Meta’s antitrust win
21 January 2026 | Stefania Palma and Hannah Murphy, Financial Times
The US Federal Trade Commission has appealed a court ruling that rejected its attempt to break up Meta, prolonging a landmark antitrust case against Big Tech. Regulators argue Meta preserved dominance by acquiring rivals such as Instagram and WhatsApp, while Meta maintains it faces intense competition from TikTok and YouTube. The appeal comes despite tech companies’ efforts to curry favour with the Trump administration. The outcome could shape how competition law is applied to AI-driven digital platforms.
What you need to know: Signals that regulatory pressure on AI-enabled platforms will persist, affecting how large companies integrate and scale AI products.
Original link: https://www.ft.com/content/ef7b57cf-d2e7-4cfb-be24-1b5e1bdc3452

 

Musk won’t fix Grok’s fake AI nudes. A ban would
7 January 2026 | Parmy Olson, Bloomberg Opinion
Elon Musk’s Grok chatbot has been widely used to generate non-consensual sexual images, exposing gaps in safeguards that other mainstream AI tools restrict. Regulators in Europe, the UK and India have warned of potential enforcement action, including fines or orders to disable the feature, as concerns mount over deepfake abuse and platform responsibility. The controversy highlights the limits of voluntary moderation when commercial incentives reward permissiveness.
What you need to know: Generative AI is forcing regulators to confront enforcement rather than principles, with misuse cases accelerating legal scrutiny of AI safety claims.
Original link: https://www.bloomberg.com/opinion/articles/2026-01-07/musk-will-not-fix-fake-ai-nudes-made-by-grok-a-ban-would

 

Social media companies purge 4.7mn accounts after landmark Australia ban
16 January 2026 | Nic Fildes, Financial Times
Social media platforms have deactivated or restricted 4.7mn accounts since Australia implemented the world’s first nationwide ban on under-16s using major social networks. The move follows legislation placing responsibility on companies to enforce age verification, with penalties of up to A$50mn for systemic failures. Regulators say the impact will take years to assess, including whether teens circumvent the rules or migrate to smaller apps, while other countries, including the UK and France, closely watch Australia’s approach amid growing concern over AI-driven content amplification and child safety. The eSafety Commissioner has also opened investigations into X’s Grok over AI-generated content that may sexualise or exploit people.
What you need to know: Enforcement-focused regulation is emerging as a model for controlling AI-amplified harms on digital platforms, with implications for how AI systems verify age and moderate content.
Original link: https://www.ft.com/content/e8542783-d21a-45eb-87c6-302d74bb6849

 

UK to outlaw non-consensual intimate images after Grok outcry
January 2026 | Daniel Thomas and Mari Novik, Financial Times
The UK government will criminalise the creation of non-consensual intimate images following public outcry over sexualised deepfakes generated by Elon Musk’s Grok chatbot. Regulators have warned X of potential fines or bans under the Online Safety Act, while ministers pledged to outlaw nudification apps and fast-track enforcement powers. The move positions the UK among the most assertive regulators globally in tackling AI-enabled image abuse.
What you need to know: Governments are moving from voluntary AI safeguards to criminal liability, marking a tougher phase of AI governance focused on real-world harms.
Original link: https://www.ft.com/content/8eec6d77-c72e-4e8f-a6b5-ce82575e71c6

AI Market and Investment

Japan to Quadruple Spending Support for Chips, AI in Budget
26 December 2025 | Komaki Ito and Yoshiaki Nohara, Bloomberg

Japan’s industry ministry plans to nearly quadruple budgeted support for cutting-edge semiconductors and AI development to about ¥1.23 trillion for the fiscal year starting in April, as it tries to strengthen domestic capabilities amid US-China tech competition. Funding includes major allocations for Rapidus (advanced chips), domestic foundation-model development, data infrastructure, and “physical AI” that controls robots and machinery—alongside efforts to secure key minerals such as rare earths. The shift toward putting more of this spend into regular budgets signals an attempt to make national AI and chip investment more stable and strategically sustained.

What you need to know: Government policy is increasingly a direct lever for AI progress—because compute, chips, and supply chains are now core bottlenecks in frontier model development.

 

SoftBank strikes $4bn AI data centre deal with DigitalBridge
29 December 2025 | Tim Bradshaw and Eric Platt, Financial Times

SoftBank has agreed to acquire DigitalBridge, a US-based investor managing over $100bn in data centre and telecoms infrastructure assets, for approximately $4bn. The deal marks the latest move in Masayoshi Son's AI-focused dealmaking spree, following SoftBank's major investment in OpenAI and its participation in the Stargate AI infrastructure project. Despite SoftBank's shares nearly doubling in 2025, the company has faced concerns about financing its ambitious AI investments, having recently sold its entire Nvidia stake for $5.8bn.

What you need to know: Illustrates how AI infrastructure has become a primary focus for major technology investors seeking to capitalise on computing demand.

 

The AI boom is not a bubble
29 December 2025 | Robin Harding, Financial Times

Despite widespread concerns about an AI bubble, the current investment surge is better characterised as a boom driven by rational corporate behaviour rather than market mania. Tech giants including Meta, Alphabet, Apple, and Amazon are spending heavily on AI primarily as insurance to protect their existing multi-trillion-dollar businesses from potential disruption. While valuations are optimistic and a bust remains possible, the unprecedented cash reserves of these companies and their existential motivation to defend their market positions distinguish this period from classic speculative bubbles.

What you need to know: Provides a counterpoint to bubble narratives, arguing that defensive spending by established tech giants underpins current AI valuations.

 

Meta buys Chinese-founded AI start-up Manus
30 December 2025 | Hannah Murphy and Ryan McMorrow, Financial Times

Meta said it is buying Manus, an advanced AI agent platform with Chinese roots, and will operate and sell the service while integrating its capabilities into products like Meta AI. Manus is positioned as an “autonomous general-purpose agent” that can do tasks such as market research, coding, and data analysis, sold via subscriptions starting around $20 per month. The deal highlights Meta’s push toward what Mark Zuckerberg calls “personal superintelligence,” but it also sits in the geopolitical crossfire: talent, capital, and strategic AI systems are increasingly scrutinised across US-China lines.

What you need to know: The AI race is shifting from standalone models to agents that execute work—making acquisitions of agent platforms strategically valuable (and politically sensitive).

 

Musk’s xAI Buys Building to Expand ‘Colossus’ Data Center
30 December 2025 | Kurt Wagner, Bloomberg

xAI is expanding its Memphis-area “Colossus” data-center footprint, purchasing a third building that Elon Musk said would bring training compute close to 2 gigawatts. The buildout includes Colossus and a second nearby site (Colossus 2), and Musk has previously described plans involving massive numbers of Nvidia chips—implying tens of billions of dollars in hardware costs. The article ties this to xAI’s aggressive fundraising and the broader reality that frontier AI capability is increasingly constrained by power, facilities, and access to top-tier accelerators.

What you need to know: AI progress is now infrastructure-led—power availability and data-center scale are becoming as decisive as algorithmic breakthroughs for training frontier systems.

 

Meta’s Manus Deal Validates Belief in Chinese Innovation
31 December 2025 | Vlad Savov and Lulu Yilun Chen, Bloomberg

Bloomberg’s newsletter frames Meta’s acquisition of Manus as a symbolic win for Chinese-founded entrepreneurship, with investors celebrating it as evidence that Chinese teams can set global milestones in fast-moving AI categories. It contrasts Manus with DeepSeek’s earlier splash—suggesting a pattern where Chinese-linked teams differentiate through efficiency (models) and refinement (agents that complete tasks like building websites or organising itineraries). At the same time, the enthusiasm is tempered by the reality that Manus worked hard to scrub Chinese ties, and Meta emphasised safeguards and the end of Chinese ownership interests—reflecting how geopolitics now shapes where AI innovation can be built, branded, and sold.

What you need to know: Innovation in AI is increasingly transnational, but market access is political—founder identity, corporate structure, and “risk” narratives can determine whether breakthroughs scale globally.

 

Sandisk leads tech stocks with 559% gain in 2025
31 December 2025 | Martin Peers, The Information

Sandisk emerged as one of 2025's best-performing tech stocks with a 559% gain since its February spinoff from Western Digital, driven by surging data centre demand for NAND flash memory in AI workloads. The company's CEO noted that data centres will become the largest market for NAND memory in 2026, overtaking consumer devices. Other notable AI infrastructure beneficiaries included GE Vernova (up 99%) and Rolls-Royce (doubled), while enterprise software companies like Salesforce and ServiceNow fell 20-30% amid concerns about AI disruption to their offerings.

What you need to know: Shows how AI demand is reshaping technology sector valuations, benefiting infrastructure providers while creating uncertainty for traditional software companies.

 

AI Chip Designer Biren’s Shares Surge 76% on Debut in Hong Kong
2 January 2026 | Sangmi Cha and Chongjing Li, Bloomberg

Shanghai Biren Technology, a Chinese GPU designer developing alternatives to Nvidia, saw its shares surge almost 76% in its Hong Kong trading debut, reflecting investor enthusiasm for AI infrastructure plays. The strong IPO performance underscores China’s accelerating push for homegrown AI computing power amid US export restrictions. Despite continuing financial losses, Biren plans to channel proceeds into R&D as part of a broader wave of Chinese chipmakers racing to fill the gap left by Nvidia’s retreat.

What you need to know: AI competitiveness increasingly depends on access to advanced chips, and Biren’s market success signals China’s determination to build an independent AI hardware stack.

 

Baidu’s AI Chip Unit Kunlunxin Confidentially Files for Hong Kong IPO
2 January 2026 | Dave Sebastian, Bloomberg

Baidu has confidentially filed for a Hong Kong IPO for Kunlunxin, its AI chip unit, as Chinese firms intensify efforts to nurture domestic AI champions. The carve-out is expected to better reflect Kunlunxin’s value and strengthen its ability to compete in general-purpose AI computing hardware. Kunlunxin is seen as central to Beijing’s ambition to reduce reliance on US suppliers such as Nvidia, alongside players like Huawei and Cambricon.

What you need to know: The AI race is shifting from software models to the underlying compute infrastructure, making chip spin-offs and listings a key strategic lever for national AI ecosystems.

 

The AI debt boom does not augur well for investors
5 January 2026 | Michael Contopoulos, Financial Times

Major technology companies have shifted from asset-light, cash-rich business models to capital-intensive operations, taking on significant long-term debt to finance AI infrastructure. The rush to issue 30-40 year bonds for data centre construction echoes previous investment cycles in telecoms and energy that ended in overcapacity and writedowns. The author argues that credit investors are absorbing substantial speculative risk, betting that today's infrastructure will remain relevant for decades despite rapid technological change and uncertain AI returns.

What you need to know: Raises concerns that the scale of debt financing for AI infrastructure may create vulnerabilities similar to previous technology investment cycles.

 

Accenture buys UK AI start-up Faculty in $1bn deal
6 January 2026 | Ellesheva Kissin, Tim Bradshaw & Ivan Levingston, Financial Times

Accenture has agreed to acquire London-based AI start-up Faculty in a deal valuing it above $1bn, marking the largest-ever purchase of a privately held UK AI company. Faculty, known for employing PhDs and building AI products for clients such as Novartis and the UK government, will strengthen Accenture’s push to reinvent consulting around AI adoption. The acquisition reflects how traditional consultancies are being forced to overhaul pyramid-style business models as generative AI automates junior analyst work. Faculty has also played a role in AI safety testing for large language models, positioning it at the intersection of commercialisation and governance.

What you need to know: Illustrates how AI is restructuring entire industries, with consulting giants buying specialised firms to stay competitive in an AI-automated economy.

 

Anthropic to Raise $10 Billion at a $350 Billion Valuation
8 January 2026 | Sri Muppidi, The Information

Anthropic is reportedly seeking to raise $10 billion at a staggering $350 billion valuation, nearly double its previous round, as investor appetite for frontier AI labs continues to surge. Backers including Singapore’s GIC and Coatue are expected to lead the deal, reflecting confidence in Anthropic’s rapid revenue growth projections. The company’s steep compute costs—estimated at $60 billion over the next three years—also raise the prospect of an IPO as soon as this year.

What you need to know: Frontier AI development is becoming one of the most capital-intensive industries in history, and Anthropic’s fundraising highlights the enormous financial stakes required to stay competitive with OpenAI and Google.

 

2026 Stocks to Watch: 50 Companies Include Boeing, Reddit, Nike
January 2026 | Melissa Heikkilä, Financial Times

A new list of “Stocks to Watch” for 2026 highlights 50 companies expected to shape markets in the year ahead, spanning aerospace, retail, semiconductors, social media and emerging AI-driven platforms. Many of the firms are positioned around major technological transitions, including generative AI adoption, the buildout of advanced chip infrastructure, and shifting consumer behaviour online. The selection reflects how investors are increasingly clustering around companies that either supply the AI boom—such as chipmakers and cloud providers—or are being transformed by automation and AI-led product strategy.

What you need to know: Even broad market watchlists are now heavily influenced by AI’s economic impact, showing how deeply the technology is driving investor attention across industries.

 

Top 5 AI-relevant companies from the list

 

Reddit

Reddit is emerging as one of the most AI-connected social media platforms, as its vast archive of human conversation becomes highly valuable training data for large language models. Deals with major AI companies have turned online discussion forums into strategic infrastructure for generative AI development.

 

Nvidia

Nvidia remains the defining hardware winner of the AI boom, supplying the GPUs that underpin everything from OpenAI models to enterprise AI systems. Its dominance has also triggered global efforts to build domestic alternatives, especially in China and Europe.

 

Amazon

Amazon continues to integrate AI across retail, logistics and cloud computing, while also pushing into the emerging market for AI agents through services that can shop and act autonomously online. AWS remains one of the largest backbones for AI workloads globally.

 

Boeing

While not traditionally an AI company, Boeing is increasingly shaped by automation and AI-driven aerospace manufacturing, as the industry looks to smarter systems for design, safety and defence applications. AI-enabled engineering is expected to play a bigger role in future aircraft development.

 

Nike

Nike is investing heavily in data and AI-powered consumer personalisation, using predictive analytics to optimise product design, marketing and supply chain decisions. The company represents how retail brands are adopting AI not just for efficiency, but also for customer engagement.

 

Samsung forecasts record profit and signals sustained AI boom
8 January 2026 | Song Jung-a, Financial Times

Samsung Electronics has forecast record quarterly earnings of approximately Won20tn ($13.8bn), marking a sharp turnaround for the South Korean chipmaker amid surging demand for memory chips powering AI data centres. The company's share price rose 125% in 2025, its biggest annual gain in 26 years, while analysts project the semiconductor shortage driving this boom may persist until 2027. Samsung is now expected to become a key supplier for Nvidia's next-generation Vera Rubin platform, with analysts forecasting record operating profit of Won155tn this year.

What you need to know: Confirms the scale of the AI-driven semiconductor supercycle and its ripple effects across global technology supply chains.

 

Alphabet hits $4tn valuation on AI hopes
January 2026 | Tim Bradshaw & Michael Acton, Financial Times

Alphabet has become the fourth Big Tech firm to surpass a $4tn market value, driven by investor optimism that its Gemini AI models are closing the gap with OpenAI. A major catalyst was a multiyear collaboration with Apple, under which Gemini will help power future Siri upgrades as part of Apple Intelligence. Alphabet’s rebound comes after earlier fears that AI chatbots would erode Google’s search dominance, but its “full stack AI strategy” — spanning bespoke chips, data centres, and consumer products — has reassured markets. Meanwhile, Apple continues to hedge by partnering with both Google and OpenAI.

What you need to know: Demonstrates how frontier AI leadership is now shaping trillion-dollar market valuations and alliances between the world’s most powerful tech firms.

 

Alphabet Overtakes Apple, Becoming Second to Nvidia in Size
8 January 2026 | Ryan Vlastelica, Bloomberg

Bloomberg reports that Alphabet has overtaken Apple to become the world’s second-most valuable company behind Nvidia, reflecting Wall Street’s belief that Google is emerging as one of AI’s biggest winners. Alphabet’s rally has been fuelled by strong reviews of its Gemini AI model and investor confidence in its tensor processing unit chips as future growth drivers. Apple, by contrast, has suffered a market slump amid concerns it is falling behind in AI execution. The shift marks the first time since 2019 that Alphabet has been valued above Apple, underlining AI’s power to reorder the hierarchy of Big Tech.

What you need to know: Highlights how AI capability is now the primary determinant of market leadership, reshaping even the largest technology rivalries.

 

How do you value a company like Nvidia?
January 2026 | Simon Edelsten, Financial Times

Nvidia’s meteoric rise has made it the most valuable company in the world, but this opinion piece questions whether its valuation fully reflects emerging risks. While demand for AI training chips has driven margins to extraordinary levels, the article points to growing competition from custom silicon at Alphabet, Amazon and Chinese firms, as well as efficiency breakthroughs that may reduce reliance on top-end GPUs. Energy constraints and the capital intensity of AI infrastructure could also limit future growth, suggesting the AI boom may be entering a more selective phase.

What you need to know: Underscores how AI progress is reshaping capital markets — and why efficiency, power constraints and competition matter as much as raw model scale.

 

Putting the US AI boom(let) in perspective
8 January 2026 | Toby Nangle, Financial Times (FT Alphaville)

A note from the Bank for International Settlements (BIS) argues the US AI investment boom is real but, so far, modest by historical standards: its measured contribution to GDP growth has risen since ChatGPT's release, yet remains smaller than past investment booms such as the dot-com era. The bigger concern is financial vulnerability: the AI buildout is increasingly funded through debt (notably private credit), and returns will need to materialise to avoid broader spillovers.

What you need to know: Even if AI transforms products, the macro outcome depends on financing—leverage, private credit, and “capex realism” may determine whether the boom is sustainable.

 


Smart Glasses Pioneer Xreal Raises $100 Million in New Funding
8 January 2026 | Chris Welch and Edward Ludlow, Bloomberg

AR smart-glasses maker Xreal raised $100mn from undisclosed backers, pushing its valuation above $1bn as competition intensifies around AI-enabled wearables. The company is extending work with Google on Android XR smart glasses planned for 2026, betting that collaboration—pairing strong hardware/optics with leading AI software—will be the winning formula as rivals like Meta and Apple circle the category.

What you need to know: Wearables are becoming the next battleground for multimodal AI—smart glasses could shift assistants from “chat” to always-available perception and action in the real world.

 

Smartphone and PC prices set to rise as AI boom drains memory chips
9 January 2026 | Michael Acton, Financial Times

Executives at CES warned that the rush to build AI data centres is diverting memory-chip production toward high-end HBM, squeezing supplies for consumer devices and pushing up costs for smartphones and PCs. With only a handful of hyperscalers driving massive capex, the downstream impact could be higher retail prices and weaker demand—especially for lower-end device makers—while giants with long-term supply deals may be better insulated.

What you need to know: AI infrastructure is reshaping consumer tech economics—compute demand can now raise prices for everyday devices, influencing adoption cycles and hardware roadmaps.

 

DeepSeek rival’s shares double in debut as Chinese AI companies rush to list
9 January 2026 | Eleanor Olcott & William Sandlund, Financial Times

MiniMax, a Shanghai-based large language model developer, saw its shares more than double in its Hong Kong IPO, reflecting investor optimism around China’s AI sector. Alongside rival Zhipu, MiniMax is raising public capital earlier than US peers to fund costly model development and overseas expansion. The company focuses on consumer applications such as chatbots and video generation rather than enterprise licensing, but continues to burn cash as it scales. The listings highlight China’s urgency to build competitive AI champions without the backing of hyperscaler balance sheets.

What you need to know: Signals how capital markets are becoming a critical funding route for AI model development in China amid escalating compute and research costs.

 

Chip shortages threaten 20% rise in consumer electronics prices
January 2026 | Song Jung-a, Financial Times

Electronics makers are warning of price increases of up to 20% for smartphones, PCs and appliances, as AI data-centre expansion drives demand for high-bandwidth memory chips. Major cloud providers are signing long-term supply deals, forcing chipmakers to prioritise AI infrastructure over consumer devices. Analysts predict shortages could persist until 2027, making AI’s hardware hunger felt far beyond Silicon Valley.

What you need to know: The AI boom is now reshaping global supply chains, with model training needs driving real-world inflation in everyday consumer technology.

 

OpenAI agrees $10bn AI infrastructure deal with start-up Cerebras
15 January 2026 | Michael Acton and George Hammond, Financial Times

OpenAI signed a multiyear, $10bn deal with Cerebras through 2028 for 750MW of computing capacity, deepening its push to diversify beyond Nvidia by adding specialised inference-focused hardware. The agreement reflects a broader industry shift: inference speed and cost are becoming as strategic as training scale, even as OpenAI’s infrastructure commitments vastly outstrip its current revenues and keep financial pressure on the business model.

What you need to know: The “AI arms race” is turning into a portfolio game—labs are hedging across chip suppliers and optimising for inference, which will likely dominate compute demand as deployment scales.
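Why might inference, rather than training, come to dominate compute demand? A rough back-of-envelope sketch makes the logic concrete. Every number below is a hypothetical assumption chosen for illustration, not a figure from the article or from OpenAI: a one-off training budget of 10^25 FLOPs, roughly 2×10^14 FLOPs per answered query, and one billion queries a day once deployment scales.

```python
# Back-of-envelope sketch: one-off training cost vs recurring inference cost.
# All numbers are hypothetical assumptions for illustration only.

TRAIN_FLOPS = 1e25        # assumed one-off compute budget for a training run
FLOPS_PER_QUERY = 2e14    # assumed cost per query (~2 x params x tokens generated)
QUERIES_PER_DAY = 1e9     # assumed query volume at global scale

daily_inference = FLOPS_PER_QUERY * QUERIES_PER_DAY      # FLOPs served per day
days_to_match_training = TRAIN_FLOPS / daily_inference   # break-even point

print(f"Inference compute per day: {daily_inference:.1e} FLOPs")
print(f"Days of serving to equal one training run: {days_to_match_training:.0f}")
# With these assumptions, ~50 days of serving matches the entire training run,
# so inference dominates total compute over any multi-year deployment.
```

Under these illustrative assumptions the break-even point arrives in under two months of serving, which is why hardware optimised for cheap, fast inference, such as the Cerebras capacity OpenAI has contracted, becomes strategically important as usage grows.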

 

OpenAI and SoftBank to invest $1bn in energy and data centre supplier
January 2026 | Rafe Rosner-Uddin, Financial Times

OpenAI and SoftBank will each invest $500mn in SB Energy, a SoftBank-linked infrastructure firm contracted to build a Texas data centre for OpenAI’s Stargate project. The deal highlights how AI expansion is pulling energy development and data-centre construction into the core of frontier AI strategy, while also raising investor concerns about “circular” financing and the sheer scale of long-term infrastructure commitments.

What you need to know: Frontier AI is becoming an energy-and-real-estate story—access to power, land, and build capacity may be as decisive as model architecture.

 

China’s new tech stock boom leaves its economic malaise behind
18 January 2026 | Jeanny Yu, Bloomberg

Chinese technology stocks have surged at the start of 2026, driven by renewed enthusiasm around domestic AI breakthroughs, robotics and advanced manufacturing, even as the broader economy remains weighed down by weak consumption and a property slump. Breakthroughs building on DeepSeek's earlier AI advances have boosted investor confidence, with capital flowing into firms focused on low-cost, application-driven AI. While valuations are becoming stretched in parts of the market, state backing and expectations of further AI releases continue to fuel optimism.

What you need to know: Illustrates how AI momentum is reshaping investor sentiment in China, positioning applied and cost-efficient AI as a new engine of growth despite macroeconomic fragility.

Original link: https://www.bloomberg.com/news/articles/2026-01-18/china-s-new-tech-stock-boom-leaves-its-economic-malaise-behind

 

DeepMind chief Demis Hassabis warns AI investment looks ‘bubble-like’
24 January 2026 | Melissa Heikkilä et al., Financial Times

Google DeepMind chief Sir Demis Hassabis has cautioned that parts of the AI investment boom appear detached from commercial reality, citing multibillion-dollar funding rounds for start-ups with limited products. While warning of possible market corrections, Hassabis argued that Big Tech firms with deep research capabilities and diversified businesses are better positioned to withstand any downturn. He also downplayed concerns that Chinese labs have overtaken the AI frontier.

What you need to know: Suggests a growing divide between speculative AI investment and sustainable AI capability, with implications for how capital will flow as the market matures.

Original link: https://www.ft.com/content/a1f04b0e-73c5-4358-a65e-09e9a6bba857

 

Europe’s AI ambitions are running into a markets plumbing problem
19 January 2026 | Huw van Steenis, Financial Times

Europe’s push to scale AI, data centres and energy infrastructure is being constrained by shallow capital markets and regulatory barriers that limit long-term investment. Unlike the US, Europe lacks deep securitisation and insurance-led financing channels capable of supporting large-scale infrastructure build-out. Without reforms to financial “plumbing”, Europe risks falling further behind in funding the physical backbone required for AI competitiveness.

What you need to know: Underlines that Europe’s AI challenge is not talent or regulation alone, but capital mobilisation, linking financial structure directly to technological competitiveness.

Original link: https://www.ft.com/content/45b39cea-c931-4fe7-b173-c5d64d94ed19

 

IMF warns global economic resilience at risk if AI falters
19 January 2026 | Sam Fleming and Myles McCormick, Financial Times

The IMF has warned that global economic growth is increasingly dependent on a narrow AI-driven investment boom centred in the US tech sector. If expectations around AI-led productivity gains fail to materialise, a correction in tech investment and equity markets could significantly drag on global growth. While the fund sees upside potential if AI delivers sooner than expected, it cautions that leverage and concentrated risk amplify downside threats.

What you need to know: Frames AI not just as a technology story but as a macroeconomic risk factor, raising the stakes for whether promised productivity gains actually arrive.

Original link: https://www.ft.com/content/2af4d92a-452c-4d35-ab55-3afce930f98a

 

Japan lacks AI stars, but one chipmaker still shines
22 January 2026 | June Yoon, Financial Times

Despite Japan’s reputation as a high-tech economy, it lacks globally dominant AI firms comparable to Nvidia or major platform builders. Investors have instead turned to Kioxia, a memory chipmaker benefiting indirectly from the AI boom as demand for data-centre storage surges. Shifts away from consumer electronics towards AI infrastructure have improved pricing power for NAND flash producers, after years of underinvestment and supply constraints. However, Kioxia remains exposed to the cyclical risks of memory markets and future changes in data-centre spending.

What you need to know: Highlights how AI’s infrastructure layer, especially memory, can create outsized winners even in countries without leading AI model developers.

Original link: https://www.ft.com/content/13c5ac7d-c19d-48b5-9b91-2ce1caf57c67

 

Memory stocks soar as investors hunt for new AI winners
25 January 2026 | Rachel Rees, Tim Bradshaw and Stephen Morris, Financial Times

Shares in memory and storage companies have surged as AI-driven data-centre expansion creates intense demand for high-bandwidth memory and storage chips. Firms such as Micron, SK Hynix and SanDisk have benefited from supply bottlenecks and reluctance to expand capacity too quickly, pushing prices sharply higher. Investors are increasingly shifting attention away from megacap tech towards less glamorous infrastructure suppliers seen as critical choke points in AI deployment. Analysts warn, however, that memory markets remain historically cyclical.

What you need to know: Reinforces that AI’s next investment wave is moving deeper into physical infrastructure, where bottlenecks may sustain returns longer than model-layer competition.

Original link: https://www.ft.com/content/0a8743a8-a23e-4d93-aba9-b9d533310adc

 

Sequoia targets major Anthropic investment
18 January 2026 | George Hammond, Financial Times

Sequoia Capital is preparing to make a significant investment in Anthropic as part of a funding round that could value the AI start-up at around $350bn. The move marks Sequoia’s first direct backing of Anthropic after a leadership shake-up and reflects the firm’s growing willingness to back multiple rivals in the AI race. The round includes major contributions from sovereign wealth funds and strategic investors such as Microsoft and Nvidia. Anthropic is also laying the groundwork for a potential IPO.

What you need to know: Highlights how AI funding has shifted from venture-style bets to quasi-public market scale, signalling expectations of long-term dominance by multiple model providers.

Original link: https://www.ft.com/content/53220829-2ab2-471c-9a00-30d24beb8d48

 

Why China has so many robot IPOs
20 January 2026 | Lizzi Lee, Financial Times

Chinese companies dominate the emerging humanoid robotics sector, driven by dense manufacturing supply chains, engineering talent and aggressive cost control. However, limited access to advanced chips and long development timelines have pushed many robotics start-ups to seek early public listings in Hong Kong. While viral robot demonstrations attract attention, commercial deployment remains distant. Geopolitical constraints are increasingly shaping business models and innovation priorities.

What you need to know: Explains how embodied AI is advancing through industrial scale rather than frontier research, with financial markets compensating for technological constraints.

Original link: https://www.ft.com/content/6687d2c0-a493-4681-833e-dbb1aae1a17d

 

AI voice start-up ElevenLabs in funding talks at $11bn valuation
17 January 2026 | Ivan Levingston and George Hammond, Financial Times

ElevenLabs is in talks to raise hundreds of millions of dollars at a valuation of about $11bn, potentially making it the UK’s most valuable AI start-up. The company’s voice-generation technology is widely used in customer service, dubbing and text-to-speech applications, and it generated roughly $330mn in annual recurring revenue last year. The talks underscore continued investor appetite for commercially proven AI firms. European start-ups, however, still lag far behind US peers in scale.

What you need to know: Shows that applied AI products with clear revenue streams are attracting capital even as funding concentrates around a few global leaders.

Original link: https://www.ft.com/content/5bb87485-7641-4577-8b64-144a1553d42e

 

Introducing ChatGPT Go, now available worldwide
16 January 2026 | OpenAI

OpenAI has launched ChatGPT Go globally, offering a low-cost $8-per-month subscription designed to expand access to advanced AI capabilities. The plan provides higher usage limits, longer memory and access to GPT-5.2 Instant, positioning Go between the free tier and higher-end Plus and Pro subscriptions. The rollout reflects OpenAI’s strategy to broaden global adoption while supporting affordability through future advertising in select tiers.

What you need to know: AI access is increasingly stratified by pricing tiers, with affordability and scale becoming as important as frontier performance in shaping who benefits from AI.

Original link: https://openai.com/index/introducing-chatgpt-go/

 

The risky bet by AI start-ups
11 January 2026 | Holden Spaht, Financial Times

Many AI start-ups are pursuing narrow, task-specific applications built on top of foundation models, a strategy that leaves them vulnerable to high compute costs and easy imitation. By contrast, large enterprise software platforms such as Microsoft, SAP and Salesforce are integrating AI across entire business systems, strengthening network effects and spreading compliance and infrastructure costs. The result, the author argues, is a structural advantage for established platforms, while start-ups face a difficult path to defensible, long-term growth.

What you need to know: Value in AI is shifting from standalone tools to deeply integrated platforms, reshaping where sustainable competitive advantage is likely to sit.

Original link: https://www.ft.com/content/748a8ede-d388-4fa1-8d56-613eca386cbc

Further Reading: Find out more from these resources

Resources: 

  • Watch videos from other talks about AI and Education in our webinar library here

  • Watch the AI Readiness webinar series for educators and educational businesses 

  • Listen to the EdTech Podcast, hosted by Professor Rose Luckin here

  • Study our AI readiness Online Course and Primer on Generative AI here

  • Read our byte-sized summary, listen to audiobook chapters, and buy the AI for School Teachers book here

  • Read research about AI in education here

About The Skinny

Welcome to "The Skinny on AI for Education" newsletter, your go-to source for the latest insights, trends, and developments at the intersection of artificial intelligence (AI) and education. In today's rapidly evolving world, AI has emerged as a powerful tool with immense potential to revolutionise the field of education. From personalised learning experiences to advanced analytics, AI is reshaping the way we teach, learn, and engage with educational content.

 

In this newsletter, we aim to bring you a concise and informative overview of the applications, benefits, and challenges of AI in education. Whether you're an educator, administrator, student, or simply curious about the future of education, this newsletter will serve as your trusted companion, decoding the complexities of AI and its impact on learning environments.

 

Our team of experts will delve into a wide range of topics, including adaptive learning algorithms, virtual tutors, smart classrooms, AI-driven assessment tools, and more. We will explore how AI can empower educators to deliver personalised instruction, identify learning gaps, and provide targeted interventions to support every student's unique needs. Furthermore, we'll discuss the ethical considerations and potential pitfalls associated with integrating AI into educational systems, ensuring that we approach this transformative technology responsibly. We will strive to provide you with actionable insights that can be applied in real-world scenarios, empowering you to navigate the AI landscape with confidence and make informed decisions for the betterment of education.

 

As AI continues to evolve and reshape our world, it is crucial to stay informed and engaged. By subscribing to "The Skinny on AI for Education," you will become part of a vibrant community of educators, researchers, and enthusiasts dedicated to exploring the potential of AI and driving positive change in education.
