A common framework to build trust in AI in Asia
Artificial intelligence (AI) holds extraordinary promise for addressing persistent global challenges — improving public health outcomes, expanding access to education, and boosting productivity — while respecting human rights. However, across South Asia, Southeast Asia, and the wider Asia-Pacific, AI-driven transformation is unfolding unevenly. Decisions about safety, bias, accountability, and social impact are often made far from the communities most affected by them. Given these contrasting developments, the gains of AI will be realised only if AI systems are trusted by users, developers, regulators, and society at large.
Without trusted AI ecosystems, even the most advanced AI systems risk rejection by societies, resistance from governments, and misuse by malicious actors. Enabling trust is difficult because AI ecosystems are transnational: data flows across borders, hardware depends on global interdependencies, infrastructure supply chains are dispersed, the supply of talent is skewed, and there are no common cybersecurity practices for securing AI systems. For many developing countries, particularly in South and Southeast Asia, this means becoming consumers of AI systems over which they exercise little influence.
Differing agendas
Recognising this, countries have drawn up national AI policies that attempt to create a conducive ecosystem for responsible AI development. However, their agendas differ, reflecting their technological capabilities and resources. South Korea wants to retain its memory chip dominance within the AI supply chain. Singapore, as stated in its policy, aims to become the “pace-setter” for AI governance. China aims to lead global AI governance efforts while upholding the sovereign control of the state within its borders. India looks to upskill its IT workforce and take advantage of its expanding digital market, while Nepal aims to establish itself as a provider of energy-efficient compute infrastructure.
Amidst these differing objectives, the AI policy and governance frameworks of Asian economies emphasise one common principle: building trust among stakeholders. For example, India announced its AI Governance Guidelines last November, anchoring trust as the foundation of AI development and adoption. South Korea’s AI Basic Act, which came into force on January 22, 2026, aims to establish a foundation for trustworthiness. The UN Secretary-General’s AI Advisory Body has called for shared understanding, common ground, and common benefits in AI governance.
To meet that objective, a common framework is required that measures and strengthens trust in AI ecosystems across Asia. Such a framework should reflect regional realities while remaining interoperable with global norms, encompassing cybersecurity practices, bias and risk mitigation, institutional accountability, and policy preparedness.
A trusted AI ecosystem in Asia rests on the interaction of several foundational layers. At its core are trusted datasets — a real-time, high-quality, and representative data infrastructure that reflects Asia’s linguistic, cultural, and social diversity — increasingly anchored in Digital Public Infrastructure. This must be complemented by resilient AI infrastructure, including reliable access to compute, energy, and cloud resources that can withstand geopolitical and supply-side disruptions without undermining broader socioeconomic activity. Equally critical are AI skills and public awareness, encompassing both advanced technical talent pipelines and widespread societal literacy that enables responsible adoption and productivity gains.
Trust is further shaped by a country’s leverage in the global AI value chain, particularly its access to semiconductors, critical minerals, and manufacturing capabilities that determine the stability and predictability of AI development. Proportionate AI governance is essential to balance innovation with accountability — tackling risks such as misinformation, deepfakes, and liability — without disrupting data flows, hindering AI development, or deterring investment. Such governance institutions must operate within global frameworks, such as UNESCO’s Recommendation on the Ethics of AI, and consider the ISO/IEC 42001 and 42005 standards for AI management. Finally, cybersecurity underpins the entire ecosystem, safeguarding AI systems against both AI-enabled threats and conventional attacks. Together, these layers provide a foundation for measuring trust in AI ecosystems and for guiding policy choices.
India’s opportunity
As AI adoption accelerates across Asia, the region faces a choice. One pathway is to accept fragmented governance that reinforces existing asymmetries. Another is to establish a shared framework that ensures technological progress translates into inclusive human development. With an AI value chain that is global and interdependent, India is particularly well positioned to lead this effort through its approach to AI governance, whose techno-legal solutions simplify compliance and help establish governance mechanisms that balance AI innovation with safeguards for individuals and society. India’s AI Impact Summit offers an opportunity to advance a shared framework that measures the trust of AI ecosystems in Asia. The aim is not merely to minimise AI’s risks, but to build the trusted ecosystems necessary to realise its promise.
Arun Teja, JSW Science and Technology Fellow, Asia Society Policy Institute, New Delhi
Published – February 16, 2026 02:12 am IST