How global AI governance could work

LONDON – Ahead of the AI Impact Summit in India in February, it is clear that most countries still lack a workable model for governing the technology. The United States leaves matters largely to market forces, the European Union relies on extensive regulatory compliance, and China on concentrated state authority. But none of these is a realistic option for the many countries that must govern AI without large regulatory structures or massive computing capacity.

Instead, we need a different framework, one that embeds transparency, consent, and accountability directly into digital infrastructure. This approach treats governance as a design choice that can be built into the very foundations of digital systems. When safeguards are part of the architecture, responsible behavior becomes the default: regulators gain immediate insight into how data and automated systems behave, and users retain clear control over their information. It is a far more scalable and inclusive method than one that relies on regulation alone.

But what should this look like in practice? India’s ex