Sovereign AI: Disentangling Rhetoric from Reality (The Chip Era and Digital Governance Forum 12)
2026-04-27
Event Report
The Chip Era and Digital Governance Forum 12
Organizer: International Center for Cultural Studies (ICCS), National Yang Ming Chiao Tung University
Speakers: Dr. Megha Shrivastava, Assistant Professor, PES University, Bengaluru, India
Ms. Nistha Kumari Singh, PhD scholar, Manipal Academy of Higher Education, India
Moderator: Dr. Dolma Tsering (ICCS NYCU)
Date: April 14, 2026 (10:00–12:00, GMT+8, Online)
Report writer: Lan-Hanh T. Nguyen
On April 14, 2026, the International Center for Cultural Studies (ICCS) at National Yang Ming Chiao Tung University hosted the twelfth session of The Chip Era and Digital Governance Forum, titled “Sovereign AI: Disentangling Rhetoric from Reality.” The event brought together scholars to examine one of the most pressing questions in contemporary digital politics: whether AI sovereignty represents a genuine pathway toward national resilience or merely a form of political rhetoric in an increasingly fragmented technological landscape.
Moderated by Dr. Dolma Tsering (ICCS NYCU), the forum featured two speakers from India: Dr. Megha Shrivastava, Assistant Professor at PES University, and Ms. Nistha Kumari Singh, a PhD scholar at Manipal Academy of Higher Education. Together, their presentations provided complementary analytical lenses—one grounded in geopolitical political economy and the other in cyber governance and data politics—through which to interrogate the concept of sovereignty in the age of artificial intelligence.
The event was situated within the broader rise of AI as a central domain of geopolitical competition. As outlined in the event description, AI sovereignty refers to a nation’s ability to develop, control, and govern its own AI systems, infrastructure, and data without excessive reliance on foreign technology providers. This includes building domestic data centers, securing semiconductor supply chains, developing local models, and ensuring that AI systems reflect national laws and cultural values.
However, as the event description also emphasized, this concept remains contested. While proponents argue that AI sovereignty enhances national security, economic resilience, and technological independence, critics warn that it may increase costs, reduce international collaboration, and fragment the global AI ecosystem. The forum thus aimed to critically evaluate whether AI sovereignty is materially achievable or primarily rhetorical.
Dr. Megha Shrivastava’s presentation offered a rigorous conceptual framework for analyzing AI sovereignty. She began by noting the rapid proliferation of national AI strategies since 2017, many of which emphasize self-reliance and independence. Yet, she argued, these strategies often overstate national capabilities and obscure underlying dependencies.
A central contribution of her talk was the identification of three recurring misconceptions in AI sovereignty discourse: the assumption that infrastructure investment equates to sovereignty, the conflation of model customization with original innovation, and the tendency to treat regulatory authority as a substitute for technological independence. These “rhetorical moves,” she suggested, obscure the structural realities of the global AI ecosystem.
To move beyond these misconceptions, Shrivastava proposed a four-layer model of AI sovereignty, consisting of compute, data, models, and deployment. True sovereignty would require control across all these layers—from semiconductor production and energy infrastructure to data governance, algorithm development, and system integration. However, such full-stack autonomy remains largely unattainable given the deeply globalized and interdependent nature of AI supply chains.
Her comparative analysis of major global actors illustrated this point. The United States holds a dominant position in compute and model development, driven by its powerful private tech sector. However, it remains dependent on external manufacturing, particularly Taiwan’s semiconductor industry, revealing a structural vulnerability. The European Union, by contrast, leads in regulatory governance through frameworks such as the AI Act and GDPR but lacks strong capabilities in compute and frontier models.
China, in Shrivastava’s analysis, demonstrates significant strength in data generation and deployment, supported by its large population and controlled digital ecosystem. However, it still faces constraints in advanced semiconductor manufacturing. India, meanwhile, represents a case of “aspirational sovereignty,” focusing on infrastructure, governance, and deployment rather than attempting to compete in frontier model development.
One of the most significant insights from her presentation was the argument that AI sovereignty should be understood as a spectrum rather than a binary condition. Countries achieve varying degrees of autonomy across different layers, and full sovereignty is unlikely in the foreseeable future. Instead, states pursue selective strategies, focusing on areas where they can realistically build capacity while remaining dependent in others.
Building on this structural analysis, Ms. Nistha Kumari Singh expanded the discussion to encompass cyber sovereignty and data governance. Her approach emphasized the qualitative, historical, and ideological dimensions of sovereignty, highlighting its variability across national contexts.
Singh argued that cyber sovereignty is not a uniform model but a dynamic and subjective construct shaped by national interests, political systems, and historical experiences. She traced the evolution of cyberspace governance from its early conception as a global commons to its current status as a securitized and contested domain. Over time, states have increasingly sought to assert control over digital space, leading to the emergence of competing governance models.
She identified two dominant paradigms: the multistakeholder model associated with the United States, which emphasizes openness and private sector participation, and the state-centric model associated with China and Russia, which prioritizes sovereignty, control, and security. The European Union occupies an intermediate position, focusing on regulatory frameworks and data protection.
Importantly, Singh highlighted the role of middle powers such as India and Taiwan, which adopt hybrid approaches tailored to their specific contexts. India, for example, balances openness with data localization and digital sovereignty concerns, influenced by its postcolonial history. Taiwan emphasizes democratic governance and rights-based approaches.
A key focus of her presentation was China’s strategy in shaping global digital governance. Through initiatives such as the Digital Silk Road and investments in smart city infrastructure, China is exporting not only technology but also governance models and standards. These efforts are closely tied to data sovereignty, which Singh described as central to securing critical infrastructure and enabling AI systems.
She also emphasized the unique nature of data as a resource. Unlike traditional commodities, data is dynamic, context-dependent, and difficult to standardize. Its value depends on how it is collected, processed, and applied, making governance a complex challenge that extends beyond technical considerations.
Both presentations underscored the fragmented nature of global AI and cyber governance. Rather than converging toward a single model, the world is witnessing the emergence of multiple competing frameworks, shaped by geopolitical rivalry and ideological differences.
Singh further extended the discussion to emerging technologies, particularly quantum computing. She argued that quantum technologies represent a future frontier of sovereignty, with the potential to transform data security, encryption, and global power dynamics. China’s investments in quantum research reflect a long-term strategy to secure technological leadership and address current vulnerabilities in cyber infrastructure.
At the same time, the discussion highlighted the limitations of sovereignty as a solution to digital insecurity. Despite strong state control, cyberattacks and data breaches continue to occur, demonstrating that sovereignty alone cannot guarantee security. This raises important questions about the balance between control, openness, and resilience in digital governance.
The discussion session significantly extended the scope of the presentations by introducing a critical perspective that moved beyond state-centric understandings of sovereignty. While both speakers had focused primarily on national strategies and geopolitical competition, participants raised questions about the role of non-state actors – particularly private technology companies – and the implications for what might be termed “people’s sovereignty.”
A central issue raised during the discussion concerned the growing tension between governments and large AI firms. One participant pointed to recent conflicts between state authorities and private companies over the use of AI technologies for military and surveillance purposes. These cases illustrate a fundamental contradiction: while states invoke sovereignty to justify control over data and AI systems in the name of national security, private firms often retain significant power over the development and deployment of these technologies. In some instances, companies have resisted state demands, citing ethical concerns – such as the use of AI in autonomous warfare or mass surveillance. This raises an important question: if states do not fully control AI infrastructure, can they genuinely claim sovereignty?
At the same time, the discussion emphasized that neither states nor corporations adequately represent the interests of individuals and communities. Professor Joyce Liu introduced the concept of “people’s sovereignty” to highlight the absence of meaningful public agency in current AI governance frameworks. She noted that individuals – particularly marginalized groups, migrants, and undocumented populations – are often the most affected by AI-driven systems such as predictive policing, border control technologies, and automated decision-making, yet they have little influence over how these systems are designed or governed.
This concern is especially relevant in the context of data governance. While national strategies often emphasize data localization and control, these measures do not necessarily translate into greater protection for citizens. Instead, they may enable more extensive state surveillance or reinforce existing inequalities. The discussion thus challenged the assumption that national sovereignty automatically aligns with public interest, suggesting instead that it can sometimes conflict with individual rights and freedoms.
Another key theme was the ethical dimension of AI sovereignty. Participants questioned where the boundaries should be drawn between legitimate state use of AI and potential abuses. The example of AI deployment in military contexts was particularly contentious, as it highlights the risks of automation in warfare and the potential erosion of accountability. Similarly, the use of AI in domestic surveillance raises concerns about civil liberties and the normalization of intrusive monitoring practices.
Building on these points, the discussion also touched on the possibility of alternative governance models. Some participants suggested that a more inclusive framework—incorporating civil society, academia, and local communities—could help address the limitations of both state-centric and corporate-driven approaches. This aligns with broader debates about multistakeholder governance, although the feasibility of such models remains uncertain in a fragmented geopolitical environment.
Finally, the discussion returned to the global dimension of sovereignty, particularly in relation to digital inequality. Participants noted that countries in the Global South often adopt technologies and infrastructures developed elsewhere, which can embed external governance norms and dependencies. In this context, sovereignty becomes not only a matter of national control but also of negotiating asymmetrical power relations in the global digital economy.
Overall, the discussion underscored that AI sovereignty cannot be understood solely in terms of state capacity or technological infrastructure. It is equally a question of power distribution, ethical responsibility, and social justice. By foregrounding the perspectives of individuals and communities, the session highlighted the need to rethink sovereignty in more plural and participatory terms—an approach that may be essential for addressing the challenges of AI governance in the years ahead.
The forum “Sovereign AI: Disentangling Rhetoric from Reality” provided a timely and nuanced examination of one of the central concepts shaping contemporary digital politics. By combining structural, geopolitical, and socio-political perspectives, the speakers demonstrated that AI sovereignty is neither a straightforward policy goal nor a purely rhetorical construct. Rather, it is a complex, layered, and contested phenomenon.
Shrivastava’s framework highlighted the structural constraints and interdependencies that limit the feasibility of full AI sovereignty, while Singh’s analysis emphasized the diversity of governance models and the importance of historical and ideological context. Together, their insights suggest that the future of AI governance will be characterized by pluralism, competition, and ongoing negotiation.
Ultimately, the event underscored that the key question is not whether sovereignty can be achieved in absolute terms, but how it is defined, practiced, and contested across different domains and actors. As AI continues to reshape global power relations, understanding these dynamics will be essential for navigating the challenges of digital governance in the twenty-first century.