SEOUL, South Korea — South Korea is hosting a mini-summit this week on AI risks and governance, following the first event of its kind held in the United Kingdom last year. The summit seeks to advance the dialogue on countering the dangers posed by advanced AI systems.
AI Safety Initiatives and Actions Around the World
The Seoul summit is one of a series of events aimed at developing AI regulation that addresses problems such as biased algorithms and other risks the technology might pose. In November of last year, the first AI Safety Summit at Bletchley Park in the United Kingdom brought together technology giants and government representatives to discuss AI safety. Leaders of more than 20 nations, including the United States and China, endorsed the Bletchley Declaration, which pledged to manage and mitigate the risks posed by AI. This was followed in March by a U.N. General Assembly resolution backing efforts to ensure AI benefits all member countries while upholding human rights and safeguards. Last week, senior U.S. and Chinese officials met in Geneva to discuss AI risks and possible common approaches to managing them.
Seoul Summit: The Plan
The two-day event, co-chaired by the Republic of Korea and the United Kingdom, runs May 21-22. On the first day, South Korean President Yoon Suk Yeol and U.K. Prime Minister Rishi Sunak will meet with leaders of major AI companies to discuss the development of safety measures. Day two will bring together digital ministers from countries including the United States, China, Germany, France, and Spain, along with AI industry leaders such as OpenAI, Google, Microsoft, and Anthropic. The discussions will center on concrete measures and approaches for addressing the negative social impacts of AI, for instance in energy consumption, employment, and the management of information.
AI Safety: Current State and Further Development
The declaration signed at the first summit hosted by the United Kingdom did not contain concrete measures on potential regulation. Lee Seong-yeob of Korea University added that uniting nations will be difficult, since each pursues its own interests and stands at a different stage of AI development. Nevertheless, officials and industry figures working in the field are determined to agree on common safety guidelines. An interim report by an expert panel identified threats such as the malicious use of AI for fraud and propaganda, as well as systemic risks to the labor market. South Korea intends to use the summit as a springboard to position itself as a leader in international AI governance, even though its capacity to play that role has drawn criticism.