Experts at Chatham House discuss AI risks and regulation ahead of AI Safety Summit


A 14-year-old has been accused by his school of cheating in an exam by using ChatGPT. This is just one of the everyday harms Artificial Intelligence (AI) is presenting, and we’re not even talking about killer robots yet.

Next week the UK Government is making a bold play to join the regulatory and geopolitical discussions related to AI technologies. The AI Safety Summit at the historic Bletchley Park will recognize the transformative impact that AI could have on our economy, society, and international affairs.

Chatham House ran an ‘on the record’ event to dissect and discuss the UK AI Safety Summit with some of the world’s leading experts:

  • Professor Yoshua Bengio, Department of Computer Science and Operations Research at Université de Montréal
  • Jean Innes, CEO, The Alan Turing Institute
  • Katie O’Donovan, Director of Public Policy, Google UK
  • Francine Bennet, Interim Director, Ada Lovelace Institute
  • Zoe Kleinman, Technology Editor, BBC

The summit comes at a time when international cooperation appears limited, geopolitical tensions may harm collaboration, and competing regulatory structures are evident. As Chatham House says,

“From the US and UK’s self-regulatory, decentralised model to Europe’s risk-based, prescriptive approach enshrined in the EU AI Act. There is also the state-led, information control model adopted by China.”

It was an eye-opening conversation, revealing the diverse spectrum of risks that AI presents. Foremost among these is the potential for misuse by bad actors, who could employ AI in highly concerning ways. Through my work with Kekst CNC Intelligence, I’m all too aware of such applications, as private companies already face AI-driven coordinated attacks.

The development and regulation of AI technologies is complicated, but I’ve captured the main points of discussion:

The summit will primarily focus on ‘frontier risks’. These are the risks that arise from the training and development of the most advanced AI models, rather than the risks arising from specific applications. This was an immediate reflection point, as publicly available technologies are already having a tangible impact: from 14-year-olds being accused of cheating in exams to open-source language models that can design deadly chemical compounds for bioweapons. It’s not just the frontier that matters.

AI models do not always act as intended. One glaring challenge that emerged during the discussions was the inability to build AI systems that consistently behave as intended, adhering to values, norms, and cultural expectations. The current AI race, marked by intense competition and rapid advancements, has heightened the urgency of focusing on safety.

Threats to democratic processes, characterised by threat actors using disinformation campaigns to manipulate them. This is already happening today, but the availability of new technologies means such campaigns can achieve greater scale, better language translation for mimicry, and fake content to drive agendas. The very essence of democracy is to prevent the concentration of power, and as AI evolves, democracies need to put the right safeguards in place.

Transformation of the job market as AI systems take on tasks previously performed by humans. Earlier this month, Microsoft began previewing its Copilot technology to select organisations, showcasing the tool’s capability to draft emails, reply to email chains, and create PowerPoint presentations from notes. It’s just one example of how certain jobs will change drastically – once a task has been automated, what task replaces it? New jobs will come in time, but there will be a difficult period of transformation in between.

It’s not all doom and gloom though, as discussions eventually turned to positive real-world applications of AI. From healthcare to public services, AI is being employed to read eye scans and predict the onset of cancers, and it aided UK hospital management during the pandemic. It also has the potential to alleviate the administrative burden on stressed public services. Projects such as Google Translate help make the internet accessible to diverse cultures by translating local dialects.

Tracking and addressing AI-related harms is not yet happening in any systematic way, which is a crucial gap given that AI systems don’t reliably follow commonly accepted morals and values. Experts acknowledged the need to develop capabilities to monitor and mitigate such harms, and to convince countries to invest in those capabilities.

One key need identified was swift and responsive regulation. AI develops faster than democratic processes can legislate, let alone coordinate internationally. To counter this, ideas were shared on how to approach regulation:

The UK’s ambition to coordinate a global approach. Whilst the UK does not have the deep pockets of the tech giants, it is home to some of the best minds in the world. It’s a bold move to seek the involvement of delegates from China, but such collaboration and diversity of thought is needed to foster change. Experts argued for the inclusion of all perspectives, even those from countries with which the West has strained relationships.

A call for licensing and compute monitoring, as only a handful of companies possess the capability to build the chips required for advanced AI systems. This, in turn, invites greater government scrutiny and raises the question of whether governments should have the right to monitor these developments. Should the largest AI models be registered? Companies and developers must be held accountable.

Governance of AI emerged as a monumental challenge, especially given its breakneck speed of development. As the reputational and economic impact of AI technology continues to grow, particularly its role in the dissemination of disinformation, the need for regulation becomes increasingly apparent. Could the UN’s Intergovernmental Panel on Climate Change provide inspiration for an AI equivalent?

Geopolitical diversity matters, as the panel underscored the importance of diversity in the geopolitical control of AI. To prevent AI from being captured for political or military advantage, the powers steering its development must be varied and should prioritise principles that align with human rights and peaceful purposes rather than offensive military applications. In addition, the experts stressed the need for a global and public voice in shaping the conversation around AI governance.

Private sector engagement in AI governance was also highlighted as crucial. Companies play a pivotal role in AI development and should be actively involved in shaping the regulatory framework. The rapid pace of AI development poses challenges for regulators, and the need for private sector input was emphasised, given the inadequacies of self-regulation seen in social media platforms.

The need for standards in AI is something The Alan Turing Institute has begun working on with the launch of the AI Standards Hub, a UK initiative dedicated to standardising AI technologies and facilitating knowledge sharing.

The call for inclusive global conversations, swift but thoughtful regulation, and diverse representation in AI development and governance highlights the urgency and complexity of managing AI responsibly. All eyes are now on next week’s AI Safety Summit.

About the author

Michael White
