Picture: Kirsty Wigglesworth / POOL / AFP / Taken on November 2, 2023 – Britain’s Prime Minister Rishi Sunak, left, shakes hands with X (formerly Twitter) CEO Elon Musk after an in-conversation event held in London on November 2, following the UK Artificial Intelligence (AI) Safety Summit.
By David Monyae
At the beginning of November, the United Kingdom (UK) hosted the Artificial Intelligence (AI) Safety Summit, which brought together countries including China, the US and the European Union, as well as corporations and civil society organisations.
The summit came against the backdrop of the rapid evolution and widespread deployment of AI technology endowed with ever-increasing capabilities, with the potential to do a lot of good and to cause serious harm.
AI, the ability of machines to develop cognitive abilities and functions hitherto believed to be exclusively human attributes, such as reasoning, perceiving, learning, reading the environment, solving problems and being creative, through manipulating and interpreting vast and complex data sets, is fast becoming a global phenomenon with far-reaching implications. A key feature of AI-powered machines is their capacity to make autonomous decisions based on new data, with important economic, political, ethical, and social consequences.
AI applications, from Google Assistant and ChatGPT to Google Maps and Fitness AI, have had a profound and highly disruptive impact on the everyday lives of billions. It is no wonder that the AI market, currently valued at just over US$500 billion, is expected to grow to surpass US$2 trillion by 2030.
Governments, businesses, industries, militaries, health, education, and other institutions are awake to the capabilities of AI in shaping public policy, improving efficiency, creating effective weapon systems, producing new medical knowledge, and improving education outcomes.
The impact of AI is already being felt in the global security domain through the development of autonomous weapons with highly destructive power, controlled not just by a few major states but by a wider array of state and non-state actors, thus complicating the global security architecture.
It is also observed in the disruption of labour markets, as businesses increasingly rely on robots to raise productivity, resulting in massive retrenchments and worsening inequality. The Cambridge Analytica scandal, which came to light in 2018, and the accusations of Russian interference in the 2016 US elections demonstrated how AI can be used to unduly influence the outcomes of political processes, with huge implications for global geopolitics.
Thus, on the one hand, AI can make significant contributions towards addressing some of the world's major problems, such as climate change, promoting sustainable development, improving public security, and spurring economic development. On the other hand, AI tools can be used to violate privacy, discriminate against groups of people, commit cybercrime, and develop lethal weapons systems, among other things.
As AI operates through a global and borderless cyberspace, its impact will inevitably be felt at the global level. Because of the opportunities and challenges presented by the emergence of AI, there have been attempts to develop regulatory frameworks and norms to govern it at the national, regional, and global levels.
Consequently, there are now over 700 known AI regulatory initiatives at the national level as countries try to control the use of the new technology. International organisations have also intensified efforts to develop international regulations for AI, with about 210 initiatives agreed at the international level between 2015 and 2022.
As such, the AI Safety Summit is the latest in a long series of diplomatic endeavours aimed at developing an equitable global governance framework for AI technologies that would maximise benefits while minimising and mitigating risks. It was an important meeting as it brought together the countries at the forefront of AI development, China and the US and, to a lesser extent, the EU, which will certainly help narrow the differences between their respective visions for the future of AI.
The major points of agreement reached at the meeting include identifying AI safety risks and building a shared scientific understanding of those risks, adopting risk-based policies at the national level, and promoting transparency among the private sector players leading the AI sector. Most importantly, the parties at the Summit signed a joint commitment on pre-deployment testing of AI models to ensure that they meet safety standards.
Moreover, the US and the UK announced the establishment of AI Safety Institutes in their respective jurisdictions to work on improved safety regimes to govern AI applications. The Summit was important in terms of laying the groundwork for future dialogue between major countries on the use of AI and potential agreement on the global norms and ideals that should govern AI technologies.
It will go a long way in reducing mistrust and averting an AI arms race, which is already adding to current geopolitical tensions, especially between the US and its Western allies on the one hand and China and Russia on the other. Platforms such as the AI Safety Summit may also be the beginning of a journey towards a binding international treaty on AI, which would help bring certainty around its use.
Developing regions like Africa must be part of this conversation as the implications of AI development will be felt there too.
Prof David Monyae is Associate Professor of International Relations and Political Science, and Director of the Africa-China Studies Centre at the University of Johannesburg