AI has gotten away without the self-restraint that scientists exercise in the face of risk. How come?

Examples like the pausing of mirror-life research cannot be used as a model for AI policy. (REUTERS)

Summary

In 2024, the scientific community chose to halt mirror-life research as soon as its dangers became clear. Are the artificial intelligence (AI) stakes too high for this field of technology to take a cue from that episode of conscience-led self-restraint?

In December 2024, a number of scientists from around the world signed a four-page paper in Science that urged their own community to ensure that an entire class of organisms is never created. Their efforts managed to halt progress in a field that many of them had themselves been pursuing.

This marked a rare moment in science when an entire field voluntarily abandoned the path it had been on because of the risks scientists saw ahead of them.

The science in question was mirror-life, and the fact that such a moratorium was achieved raises the question of why similar outcomes were not achieved in the field of artificial intelligence (AI). How was a small band of scientists able to pause progress in mirror-life when some of the most powerful voices in technology failed to do so in AI?

For this, we need to first understand what mirror-life is.

All complex naturally occurring biological molecules exist in one of two mirror-image forms, or chiralities. These forms are identical in every respect except that one cannot be superimposed on the other, much as our left hand cannot be superimposed on our right.

DNA, for example, is built from right-handed nucleotides, and the proteins it encodes are made up of left-handed amino acids. This pattern repeats in every microbe, plant and animal, and has done so for four billion years.

A mirror organism would be identical to its natural counterpart, except that its orientation would be reversed. It would have left-handed nucleotides encoding right-handed proteins, and its every molecule would be a mirror image of its natural counterpart.

Nothing like this exists in nature, but until the paper in Science, efforts had been underway to create such an organism. It was a demanding undertaking, but not beyond the realm of the possible.

The concern is not whether we can create these molecules, but what happens once we do. Our immune systems, the bacteriophages and other elements of our biome that keep us healthy, as well as the antibiotics we use to fight disease, all rely on chirality-specific bindings. None of them would be able to protect us from mirror-microbes that exist under an entirely different set of rules.

Even benign mirror bacteria would breeze through our defences—because nothing in nature can check their spread.

What is truly remarkable about the 2024 paper is that its lead author, Kate Adamala, had been one of four principal investigators on a major US National Science Foundation-funded grant for mirror cell research. Since then, the Mirror Biology Dialogues Fund has convened follow-on meetings in Manchester and at the Pasteur Institute, with another planned at the National University of Singapore. So far, the moratorium has held.

Five conditions made it stick: the field itself agreed that the harm would be uncontainable; no commercial capital had flowed into this dangerous capability; the community was small enough to coordinate; no state had staked its competitive position on getting there first; and the pioneers could be persuaded to abandon their own work. Had all five not held simultaneously, there might still be scientists going down this path.

This is not the first time the scientific community has chosen to pause its own progress. When recombinant DNA became a reality, Paul Berg, Stanley Cohen and Herbert Boyer—each a pioneer in the field—were central to the Asilomar conference and the moratorium that followed. The safety guidelines that emerged became guardrails for the research that came after. The mirror life community is following in their footsteps.

The AI community attempted the same Asilomar approach when, in March 2023, the Future of Life Institute released an open letter calling for a six-month pause on frontier AI development. Thousands signed, including a luminary in the field, Yoshua Bengio. But unlike recombinant DNA and mirror-life, AI development did not stop and continues at a blistering pace to this day.

As it happens, AI fails on every one of those five conditions. There is no consensus that its harms are uncontainable; most AI labs believe they will be able to constrain the technology enough to ensure no harm results.

By the time the Future of Life Institute got its act together, AI labs were already worth tens of billions of dollars and had committed to spending more. The community was large, multinational and fractured along ideological and ethical lines, and the US and China had already staked their national strategies on AI dominance. Even though Geoffrey Hinton and Yoshua Bengio advised caution, it was the founders of AI labs who really mattered, and they were going hell for leather.

But the deeper asymmetry is in the stakes. Mirror life threatened only the biological commons. As serious as that is, frontier AI accelerates the very disciplines that produced the mirror-life moratorium in the first place: biology, immunology and ecology. A broad pause risks slowing entire fields that depend on it to do their work.

The mirror-life example cannot serve as a model for AI policy. While there may be narrow verticals where the same diagnostic applies (autonomous biological design assistance may be one), these are specialist sub-problems whose answers cannot drive the governance of AI as a whole.

For that, we must build governance capacity—through post-deployment transparency, capability evaluations and the steady augmentation of regulatory skills that can meet the demands of a new and dynamic technology. While mirror life is a precedent worth admiring, it’s not one we can follow.

The author is a partner at Trilegal and the author of ‘The Third Way: India’s Revolutionary Approach to Data Governance’. His X handle is @matthan.
