
Blog Special: Taming the Beast: On the Global Regulation of Artificial Intelligence for a Safe Future


By Prof. Bharat H. Desai


On July 21, 2023, US President Joe Biden hosted a White House meeting of seven leading companies that develop Artificial Intelligence (AI) products: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. These companies made “voluntary commitments to the White House” to implement measures such as watermarking AI-generated content to help make the technology safer. "We must be clear-eyed and vigilant about the threats from emerging technologies", President Biden announced. This groundwork by the US Administration came on the heels of the “Blueprint for an AI Bill of Rights” (October 2022) for making automated systems work for the people. It appears to be a prelude to an Executive Order and US legislation to regulate the rapidly growing AI technology. It assumes significance especially in view of the widespread usage of generative AI, such as ChatGPT, which uses data to create new content such as human-sounding prose. Ostensibly, since AI technology is here to stay, it needs to be safe, secure and beneficial to the people.


Image By Author, Xiamen, July 12, 2023

AI and Global Security


The White House meeting (July 21) took place as a sequel to the Security Council’s first special debate on AI, “Opportunities and Risks for International Peace and Security”, held on July 18, 2023 and organized under the UK Presidency (July 2023). Its focus was to explore the UN’s efforts “to improve conflict analysis and early warning, monitor ceasefires, and support mediation efforts” as well as to guard against “a serious risk if misused by states and non-state actors to contribute to instability and exacerbate conflict situations, including through the spread of online disinformation and hate speech”. In his address to the UNSC, Secretary-General Antonio Guterres observed: “I have been shocked and impressed by the newest form of AI, generative AI, which is a radical advance in its capabilities. The speed and reach of this new technology in all its forms are utterly unprecedented”.


The UNSC sought to examine how AI can be used to enhance the UN’s peace and security toolkit. It is feared that AI could be used and abused by states and non-state actors in ways that cause instability and exacerbate conflict situations. It can also be invoked as a powerful propaganda tool for the spread of online disinformation and hate speech. In recent times, democratic societies have become more vulnerable, since rogue actors can use AI tools as powerful weapons to launch cyber-attacks that disrupt financial markets and banking operations, national elections, and high-security nuclear and space command centers. Even the designers of AI do not seem to know where it will lead in the future. Still, it is estimated that by 2030 AI could contribute between US$10 trillion and US$15 trillion to the global economy.

AI-generated Illustration, European Parliament Website, June 14, 2023

Need for Going ‘Artificial’


In the digital and cyber age, the role of machines has been debated over the years. However, the idea of assigning cognitive tasks to ‘humanoids’ has now caught attention. Robotics has been an area of scientific research for some time, and automation has become integral to many high-risk industries. Machines are employed for multiple reasons, including hazardous jobs, high-security apparatus, cost savings and sheer convenience. The term AI refers prima facie to the ‘intelligence’ of machines, yet the primary source of the attributes assigned to machines is humans alone. AI comprises various tools, techniques and usages, including web search engines (Google; Bing), entertainment recommendation services (Amazon; Netflix; YouTube), human speech recognition tools (Siri; Alexa) and generative tools (ChatGPT).


AI holds enormous educative possibilities for making the world a better place (AI for Good Global Summit; July 6, 2023). Yet it equally presents grave risks to humans and the environment. These risks stem primarily from software designed by outsourcing human intellect to an object so that it mimics exactly what humans would do. Therein lies the catch. Machines are machines. Isn’t it inherently risky to build a general AI system that claims to be “smarter than humans”? This author saw it first hand during a recent stay at the Xiamen Millennium Hotel while delivering the 2023 Summer Course of the Xiamen Academy of International Law. In the hotel’s 22-storied building, the co-passenger in the lift would often be a cutely designed robot that moves around like a supervisor! As I saw, the ‘humanoid’ would perform tasks with discipline but lacked crucial human feelings, voice and touch. What was hitherto predicted in science fiction and novels has come true with AI. The human inventions designed to conquer Nature have only brought misery on planet Earth. Hence, AI-driven devices must not be allowed in any way to control Nature’s precious creation – humans. We need to find risk-free and timely answers to the new vistas opened up by AI for the human race in what it does on planet Earth as well as on the deep seabed and in outer space.


Taming the Beast: Regulating AI


Turning this innovation into a boon instead of a bane constitutes a new ideational challenge that must counter deeply entrenched mindsets. The 67th session of the UN Commission on the Status of Women (New York; March 6-17, 2023) issued an alert that women face a graver risk from ICT. UN data shows that women are 27 times more likely than men to face online harassment or hate speech. In view of this grave risk, the voluntary commitments (July 21, 2023) by the seven AI corporations, based on three principles (safety, security and trust), are a good beginning for taming the beast.


Notwithstanding the above, the possibilities of use and abuse of AI call for a robust global regulatory instrument, a watchdog and a verification regime. Several major players, such as China, the European Union, the UK and the USA, are considering oversight and regulatory options. This year Britain will host a global summit on AI safety. In February 2021, India’s NITI Aayog issued an approach document on Responsible AI for All. It will need to be augmented by parliamentary legislation and a regulatory authority. The key challenges lie in averting threats to citizens’ privacy, countering misinformation as a weapon of war, and ensuring the safety of vital national interests and societal order.


The UNSC debate of July 18 showed the gravity of the threat AI poses to international peace and security. Ironically, in spite of all efforts, a transparent, equitable and democratic regulation of the internet has not yet materialized. Who controls the internet? That remains the big question. Therefore, AI technology requires an urgent international legal instrument to tame the beast by laying down precautionary and preventive mechanisms, crisis management processes, accountability of private players, state responsibility and a dispute settlement mechanism. AI automation poses a grave peril to humanity, by accident or design, and the ethical dilemma of making the right choices for the larger societal good weighs heavily. It is not possible to stop scientific and technological innovation. Yet, as seen in the cases of human cloning, surrogacy and stem cell research, any technology that causes human misery and societal havoc will need to be discouraged or reined in. This requires a higher call of duty, going beyond legalese to safeguard the interests of future generations.


UNSG addressing the UNSC Debate on AI, New York, July 18, 2023

The Future: Oppenheimer Moment


The rapid growth of AI leads us to the Oppenheimer Moment (Christopher Nolan interview; July 16, 2023), wherein the main protagonist of the Hollywood movie is haunted by the recitation of Lord Krishna’s words in the Bhagavad Gita (Chapter 11, Verse 32): “I am become death, the destroyer of worlds”. Though invoked ominously in the film’s context, in essence Krishna’s utterances underscore that “death is merely an illusion, that we’re not born and we don’t die”. However, AI technology cannot be allowed to become a cause of pain, suffering, death or destruction for its creator. The explosion of social media has taught us that we cannot afford more deleterious changes in human existence, heightened societal chasms, human greed and violence, harmful distractions and fatal consequences for human empathy and the value systems around which the lives of millions revolve. We have a much-troubled world population of 8 billion that already faces a crisis of planetary survival. We need global leaders to take a timely call on AI before it is too late. The UN General Assembly-mandated 2024 Summit of the Future (New York; September 22-23) would provide an ideal platform for a concrete plan of action on AI regulation. Who shall bell the cat?



Dr. Bharat H. Desai is Professor of International Law and Chairperson of the Centre for International Legal Studies (SIS, JNU). He served as a member of the Official Indian Delegations to various multilateral negotiations (2002-2008), coordinated the knowledge initiatives for Making SIS Visible (2008-2013) and the Inter-University Consortium: JNU; Jammu; Kashmir; Sikkim (2012-2020), and contributes as the Editor-in-Chief of Environmental Policy and Law (IOS Press: Amsterdam).
