
Blog Special: Taming the Beast: On the Global Regulation of AI for a Safe Future – Part II


By Prof. (Dr.) Bharat H. Desai


A flurry of global initiatives and processes has moved forward on the emerging challenge of Artificial Intelligence (AI) since this author wrote Part I of the SIS Blog Special article Taming the Beast (July 24, 2023). The UN Secretary-General António Guterres, in his November 02, 2023 address to the AI Safety Summit (Bletchley Park, London), emphasized that "the gap between AI and its governance is wide and growing. AI-associated risks are many and varied. Like AI itself, they are still emerging, and they demand new solutions". Interestingly, notwithstanding these new technology-driven challenges, the basic architecture of International Law is capable of addressing such hitherto unforeseen challenges. As a corollary, the governance of AI will have to take place within the established tenets and realm of International Law. "The principles for AI governance should be based on the United Nations Charter and the Universal Declaration of Human Rights. We urgently need to incorporate those principles into AI safety", the UNSG said.

AI Safety Summit, Bletchley Park Summit, United Kingdom, 1-2 November 2023

The Regulatory Blitzkrieg


Within the limits of time and space, this author has sought to review and place under the scanner three global processes: (i) the US President's Executive Order (October 30, 2023), (ii) the UK-led AI Safety Summit (November 1-2, 2023) and (iii) the forthcoming India-chaired Global Partnership on AI Summit (December 12-14, 2023).


(i) The US Executive Order on AI 2023


Since the US President Joe Biden hosted a White House meeting on July 21, 2023 with a group of seven leading AI behemoths (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI), there has been a heightened urgency to 'tame the beast'. It was a sequel to the adoption of the "Blueprint for an AI Bill of Rights" (October 2022) for making automated systems work for the people. In the aftermath of the "voluntary commitments to the White House" (July 21, 2023), the US President issued an Executive Order on October 30, 2023 entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In calibrated moves, the US Executive Order came just two days ahead of the UK-initiated AI Safety Summit, which was attended by the US Vice-President Kamala Harris.


The main thrust of the detailed 12-section White House Order is to provide guardrails on the most advanced forms of the emerging AI technology that would impinge upon the planetary future. Mandated by the Executive Order, the White House Artificial Intelligence Council, chaired by the Deputy Chief of Staff for Policy, appears to be all-powerful, as it comprises the entire Washington power structure with 28-plus members. Primarily intended to secure the US, the Executive Order aims to "address cross-border and global AI risks to critical infrastructure", yet it also seeks to play a role in "ensuring the safe, responsible, beneficial, and sustainable global development and adoption of AI". This opening up and prioritization of AI is taking shape even as brutal wars rage in conflict zones wherein one fourth (2 billion) of the global population lives. It nonetheless provides us a beacon of hope, with enormous potential to make things work by marshalling unfathomable human ingenuity for our safe future.


The US Executive Order underscores the inherent predicament of 'taming the beast' of AI. It emphatically states: "Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society" (Section 1). It sums up the predicament of grappling with a technology that can be put to good or bad use – akin to the proverbial riding of a tiger.


(ii) AI Safety Summit 2023

Quick on the heels of the White House Executive Order (October 30, 2023), the US Vice-President chose to personally attend British Prime Minister Rishi Sunak's initiative, the AI Safety Summit. Regarded as a major "diplomatic coup", the event showcased political and commercial heft. Invoking the pioneering role of Alan Turing, who cracked the Enigma cipher at Bletchley Park during World War II, Rishi Sunak reflected upon the global concerns and observed that there is "nothing in our foreseeable future that will be more transformative for our economies, our societies and all our lives…than the development of technologies like Artificial Intelligence". Articulating his initiative for the AI Safety Summit, the British PM described it as a "conversation" to "tip the balance in favor of humanity" by bringing together "CEOs of world-leading AI companies…with countries most advanced in using it…and representatives from across academia and civil society". It speaks volumes about the UK's institutionalized University-based research traditions and societal value placed on innovation that the Summit brought onboard academia alongside the AI corporate honchos. It was not surprising that, immediately after his Coronation, King Charles chose to visit a University (Cambridge) that annually contributes almost £30 billion to the UK economy. In order to cement the UK's position as a "world leader" in AI safety, Rishi Sunak promptly announced the setting up of the AI Safety Institute by placing the Frontier AI Taskforce on a permanent footing.


The Bletchley Park Declaration (November 01, 2023), as an outcome of the AI Safety Summit, resolved that "AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community's efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential". As a corollary, the 28 participating countries and the European Union affirmed their resolve to "sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024". In his address (Nov. 02, 2023) to the AI Safety Summit, the Secretary-General of the 193-member UN, António Guterres, sounded a note of caution that any "global oversight of emerging artificial intelligence (AI) technology should be based on the UN Charter's core principles and ensure full respect for human rights". This encapsulates the essence of the future pathway and raison d'être of the proposed global instrument on AI (which may carry the nomenclature of a compact, treaty, convention, agreement, covenant or charter).


(iii) Global Partnership on AI Summit 2023


GPAI is a multi-stakeholder, 25-country initiative whose Secretariat is provided by the Organization for Economic Cooperation and Development (OECD). It comprises "leading experts from science, industry, civil society, international organizations and government" and aims to "bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities". GPAI emanated from the OECD Recommendation of the Council on Artificial Intelligence. The Recommendation is premised on "value-based principles for the responsible stewardship of trustworthy AI", articulated as follows: (i) inclusive growth, sustainable development and well-being; (ii) human-centred values and fairness; (iii) transparency and explainability; (iv) robustness, security and safety; and (v) accountability. Regarded as an OECD legal instrument (adopted on May 22, 2019 and amended on November 08, 2023), the Recommendation can be construed as having binding effect on the OECD member countries. Drawing from these recommendations, the G20 Osaka Summit (June 28-29, 2019) gave shape to its own AI Principles. Significantly, the G7 Digital & Tech Ministers meeting (September 07, 2023), held during the Japanese Presidency of the G7 within the Hiroshima Artificial Intelligence (AI) Process, decided to collaborate with prominent international organizations and actors including GPAI. One of the important AI technological aspects under the GPAI radar is the new generation of "foundational AI models", such as ChatGPT and MidJourney, that would require "detection mechanisms" (GPAI; July 2023) as a condition for their public release. In this context, India (whose economy/GDP could gain USD 450–500 billion from AI by 2025), as the Council Chair of GPAI and host of the forthcoming New Delhi Ministerial Council (December 13, 2023), could potentially play a pivotal role in giving a big push for a concrete blueprint and a global regulatory instrument for responsible AI.

AI generated Illustration, European Parliament Website, June 14, 2023

The Big Unknown and Beyond


In view of the "world we live in" as well as the opportunities (addressing national security threats), risks (cognitive behavioral manipulation of people, impersonation and deep fakes) and effects arising from the public release of AI products such as generative AI, and the "Big Unknown" (the future of some 9%, or 281 million, of the global workforce), there is an urgent need for concerted global ideational work to address ethical issues and regulatory guardrails in time. It is refreshing that several major players such as China, the European Union, India, the UK and the USA have already taken steps or are working on prospective regulatory tools. Ironically, even the designers of AI do not seem to have an idea as to where it will lead us in the future. Still, it is estimated that by 2030 AI could contribute US $10–15 trillion to the global economy. Cumulatively, AI presents a big challenge for the global knowledge architecture, especially the Universities. As envisioned by this author (Indian Express, December 11, 2008), the School of International Studies, as a think tank, needs to gear up for AI's ideational research challenge. In the words of the eminent theoretical physicist, the late Prof. Stephen Hawking, "AI is likely to be the best or worst thing to happen to humanity." In these times of a planetary crisis, the coming together of right-thinking peoples, nations, AI behemoths, international organizations and civil society provides a ray of hope that there are no limits to human ingenuity. Hence, we can audaciously hope and pray that AI will turn out to be the best thing for humankind and the future of the planet Earth.



This Article is a sequel to Taming the Beast: AI – Part I


This Article is an Original Contribution to the SIS Blog.


Prof. (Dr.) Bharat H. Desai is Professor of International Law and Chairperson of the Centre for International Legal Studies (SIS, JNU). He served as a member of the official Indian delegations to various multilateral negotiations (2002-2008), coordinated the knowledge initiatives for Making SIS Visible (2008-2013) and the Inter-University Consortium: JNU; Jammu; Kashmir; Sikkim (2012-2020), and contributes as the Editor-in-Chief of Environmental Policy and Law (IOS Press: Amsterdam).
