Summary
Governments have thrown caution to the wind for economic gains, giving up plans for safety measures to let AI firms watch themselves. Will the world re-awaken to AI’s dangers only after disaster strikes?
Former British Prime Minister Rishi Sunak once thought artificial intelligence (AI) so risky that in 2023 he organized the world’s first AI Safety Summit, inviting policymakers and longtime AI doomer Elon Musk to talk up guardrails for the boom sparked by ChatGPT. Two years on, his view has softened considerably.
“The right thing to do here is not to regulate,” he said last month, noting that companies like OpenAI were “working really well” with security researchers in London who tested their models for potential harms. Those firms volunteered to be audited. When I said they might change their minds in the future, Sunak replied, “So far we haven’t reached that point, which is positive.” But what happens when we do?
Sunak’s U-turn from once saying Britain should be the “home of AI safety regulation” to wanting no legislation at all reflects a broader shift happening in governments around the world. Behind it is an urge to capitalize on tech that could revitalize stagnant economies and a sense that strict rules aren’t needed without clear evidence of widespread harm.
But waiting for catastrophe before regulating is a gamble when new technology is spreading so quickly. ChatGPT may be the fastest-growing software of all time, regularly used by 10% of the global population just three years after launch. It may also be reshaping our brains.
Its owner, OpenAI, has been sued by the families of multiple people who fell into delusional spirals or became suicidal after spending hours on ChatGPT. AI is meanwhile wreaking havoc on school homework, entrenching stereotypes, sparking a novel kind of dependency and engaging in artistic theft.
All of this has faded into the background amid a tech-hype cycle that even former safety advocates have jumped on. Sunak, for one, has taken advisory roles at AI companies Anthropic and Microsoft, and while he has pledged his salary to charity, those relationships will be valuable should he leave politics. Musk, who once fretted about AI’s existential risks, has gone quiet on the subject since founding xAI, the firm behind the chatbot Grok. But throwing caution aside in a chase for uncertain economic benefits may come back to haunt governments.
Both the West and Asia seem to have entered this age of regulatory leniency. The US went from issuing an executive order under President Joe Biden to build safer AI in 2023 to revoking that order under Donald Trump.
The current administration is fast-tracking data centres and chip exports to beat China, and trying to block state-level AI laws so tech businesses can thrive. Silicon Valley billionaires such as Marc Andreessen have committed tens of millions of dollars to lobbying against any future AI restrictions.
The UK has a track record of creating quick and sensible tech regulation, but it looks unlikely to crack down on generative AI. Europe’s digital privacy rules were once a template for other governments, yet the so-called Brussels effect seems unlikely to trouble AI.
China is no exception to this laissez-faire trend. Its Communist Party has rolled out policies to help domestic AI companies flourish. While strict rules require social media firms to register their algorithms to prevent social unrest, similar standards apply only to chatbots and AI tools that generate images or videos. These businesses must label deepfakes and test their tools to make sure they don’t generate illegal or politically sensitive content.
But mass-market consumer chatbots are only a slice of China’s AI market. The biggest AI sectors are industrial automation, logistics, e-commerce and AI infrastructure. Companies working in these areas get generous R&D tax deductions, VAT rebates and lower corporate tax, according to a 2025 research paper by Angela Zhang, a professor of law at the University of Southern California and an authority on Chinese tech regulation.
China’s softer approach to AI firms is down to the Communist Party also being a major customer for their tools, particularly surveillance tech like facial recognition. Beijing has too much invested in AI to smother its development, and US export restrictions on chips and a nationwide economic slump have pushed China towards growth over regulation. That approach “offers little protective value to the Chinese public,” Zhang argues.
She and others have warned of AI-enabled disasters sparked by China’s historically lax approach to hazards, from AI-designed pathogens to disruptions of electrical grids and oil pipelines.
The prevailing wisdom among governments is that AI companies should be left to self-govern.
But unintended consequences often arise when technologists start off with the best intentions for humanity. Self-regulation works, until it doesn’t. ©Bloomberg
The author is a Bloomberg Opinion columnist covering technology.
