67th Commonwealth Parliamentary Conference

Conference Programme: AI, Disinformation and Parliament

Programme Overview

The proposed themes for the conference are:

  • Day 1: Core Concepts: Disinformation, Artificial Intelligence and Synthetic Media
  • Day 2: The Threat Landscape: Threats, Vulnerabilities and Combatting Disinformation
  • Day 3: Individual Protection: Deepfakes, Disinformation and Reputational Threats

Browse the full programme below to find details including timings and session titles.

Each day of the Conference will complement the others, and we hope to see you at as many sessions as possible, though we understand if you are unable to join for the entire programme!

How to register

Each day has a unique Zoom link available in the Programme section. Select the registration link and complete the form via Zoom. Once approved, you will receive a personalised link to attend.

If you wish to attend multiple days, complete a separate form for each day, using the specific link for that day. 

Deadline: Ensure you register by 1 December 2023 as spaces are limited.

Day 1 | 4 December 2023

Core Concepts: Disinformation, Artificial Intelligence and Synthetic Media

Time zones:

13:00 - 16:00 AST (UTC -4) | 17:00 - 20:00 GMT (UTC +0) | 22:30 - 01:30 IST (UTC +5:30) | 06:00 - 09:00 NZDT (UTC +13)

All times below are GMT (UTC + 0).

Register for Day 1:

https://us06web.zoom.us/webinar/register/WN_nidbYbvUQ3yfakNA1VPqCQ

17:00 - 17:30 | Opening Remarks and Keynote Address
Session Overview

Artificial intelligence (AI) has the potential to revolutionise our lives and societies in many ways. It can help us make better decisions, improve our productivity, and even save lives. AI can also be used to automate mundane tasks, freeing up time for more substantive, creative and innovative work.

However, AI also poses significant risks to our democracies and public discourse, especially when it comes to AI-generated disinformation and synthetic media. Artificial intelligence and synthetic media, such as deepfakes, can be used to spread false information and manipulate public opinion. Their existence can also blur the lines between objective fact and subjective opinion. It is therefore crucial that legislators keep up with developments in these technologies and work towards creating a legislative environment that ensures the integrity of our increasingly digital democracies.

Parliamentarians, who may have previously contended with fake accounts bearing their names, may have to fight digitally created 'doppelgangers' who use offensive language, brief against their own party and appear to partake in activities in which they never actually engaged.

This keynote address will highlight why it is important that Parliamentarians and other stakeholders understand the threats posed by AI and synthetic media, as well as what a good legislative environment could look like. The address will outline the key actors who can help shape the conversation around AI and disinformation, as well as the importance of developing policies that promote transparency, authenticity and accountability of actors in this space.

Speakers

Opening remarks: Stephen Twigg, CPA Secretary-General

Keynote address: The Right Honourable Chloe Smith MP, former Secretary of State for Science, Innovation and Technology and Member of Parliament for Norwich North

17:30 - 18:00 | A History of Disinformation in Democracy
Session Overview

This session addresses the historical challenge of disinformation in democratic societies. This issue has evolved in conjunction with advancements in communication technologies. Historically, disinformation has been a tool to influence public opinion and political events, adapting to changes in media and technology. From early propaganda in print media to today's digital disinformation campaigns, it has played a role in shaping public narratives and perceptions.

This session will offer an overview of the historical development of disinformation. It will trace its origins and highlight moments where disinformation has impacted democratic processes. The session will also look at the shift from traditional media to the digital age, examining how this change has affected the dynamics and spread of disinformation.

The aim is to enhance attendees' understanding of disinformation's historical context, its evolution, and its persistent influence on democratic systems. While highlighting the importance of historical knowledge, the session will also acknowledge the limitations of historical parallels in fully addressing contemporary challenges. It will emphasise where historical insights are invaluable and where they may fall short in navigating the complexities of disinformation in today's digital landscape.

Speakers

Dr Heidi Tworek, Canada Research Chair, Director, Centre for the Study of Democratic Institutions, Associate Professor, University of British Columbia

18:00 - 19:00 | Generative AI and Synthetic Media: Present and Future Perspectives
Session Overview

This session will aim to provide attendees with an appreciation of the current capabilities of generative Artificial Intelligence, and its ability to generate content and media at speed and scale, as well as the potential trajectory of these technologies. The session will also encourage attendees to consider AI through an ethical lens and to start to think about what ethical AI might look like in practice.

Within the wider programme, this session will follow a session looking at disinformation and precede a session on deepfakes and disinformation. Therefore, it is hoped that the schedule of discussions throughout the three sessions will allow attendees to gradually build their knowledge of 1) disinformation; then 2) the current AI landscape; then 3) the threat of disinformation/misinformation/hallucinations etc. in the face of AI.

Speakers

Dr Carolyn Ashurst, Turing Research Fellow, The Alan Turing Institute

Cassidy Bereskin, Director and Founder, OxGen AI, and author of the forthcoming CPA Handbook on Disinformation, AI and Synthetic Media

Tommy Shaffer Shane, AI Policy Advisor, CLTR

19:00 - 20:00 | Disinformation, Deepfakes and Elections
Session Overview

‘AI’s Rapid Growth Threatens to Flood 2024 Campaigns with Fake Videos’
‘Deepfakes worrying threat to democracy in Africa, says report’
‘AI content is meddling in Turkey’s election. Experts warn it’s just the beginning’

In 2023, the global media has been awash with headlines warning how artificial intelligence and synthetic media, including deepfakes, might influence elections around the world.

Synthetic media – content that has been created or altered using artificial intelligence – and its underlying generative AI technologies have significantly lowered the barrier to entry for producing and disseminating disinformation, threatening to turbocharge the scale, reach and potential impact of disinformation during election periods.

Alarming headlines and the sudden emergence of artificial intelligence at the forefront of public discourse have left many voters, candidates and governments deeply concerned about the dangers that artificial intelligence poses to the electoral process and the integrity of our democracies.

But what actually happens when ‘synthetic disinformation’ – defined as disinformation generated or enhanced via synthetic media techniques – is deployed during an election?

This session will offer real-life stories about the impact of synthetic disinformation on 2023 elections in Turkey, Slovakia and Nigeria. Our panel have recent, first-hand experience of election campaigns in the era of artificial intelligence. Their stories will examine the practical implications of disinformation and deepfakes for people at the heart of the election process, answering key questions such as:

  • Are voters really deceived by deepfakes?
  • Can AI-generated disinformation influence how people vote?
  • Who is spreading disinformation and creating deepfakes?
  • How do candidates, fact-checkers and social media companies respond to incidents of synthetic disinformation?
  • Are there positive uses for AI during elections?

Join us to hear stories from the frontlines of elections, told by people who are amongst the first in the world to deal with live examples of synthetic disinformation during election campaigns.

Speakers

Esra Özgür, Head of Education, Teyit (Turkey)

Veronika Hincová Frankovská, Analyst and Project Manager, Demagog (Slovakia)

Silas Jonathan, Researcher/Factchecker, Dubawa (Nigeria)

Day 2 | 5 December 2023

The Threat Landscape: Threats, Vulnerabilities and Combatting Disinformation

Time zones:

02:00 - 05:00 AST (UTC -4) | 06:00 - 09:00 GMT (UTC +0) | 11:30 - 14:30 IST (UTC +5:30) | 19:00 - 22:00 NZDT (UTC +13)

All times below are GMT (UTC + 0).

Register for Day 2:

https://us06web.zoom.us/webinar/register/WN_ZXJeST8QTUOCr-HeinpMwg

06:00 - 06:45 | AI-Generated Disinformation - Actors, Motivations and Tactics
Session Overview

This session will examine the increasingly sophisticated landscape of AI and its role in the creation and propagation of disinformation. In an age where AI technologies are becoming more advanced and accessible, understanding the nuances of AI-generated disinformation is crucial for maintaining the integrity of public discourse and democratic processes.

This session will aim to unravel the web of different actors involved in AI-generated disinformation. It will explore the different entities, from state actors to individuals, who utilise AI for creating and spreading false information. The motivations driving these actors are diverse, encompassing political, economic, and social aims, and their tactics are continually evolving, making it a challenging landscape to navigate.

An important focus will be on the technical aspects and methodologies employed in AI-generated disinformation. This includes the use of deepfakes, automated bots, and algorithmically generated false narratives. The session will provide insights into how these technologies are developed and manipulated to create convincing disinformation campaigns.

Moreover, the session will address the broader implications of AI-generated disinformation. It will explore how disinformation impacts public opinion and erodes trust in institutions and democracy.

Speakers

Speakers for this session will be announced in due course.

06:45 - 07:15 | The Role of Parliament and Parliamentarians in Combatting AI-Generated Disinformation
Session Overview

This session will provide an overview of the following seven 'generally accepted principles' of ethical AI, and the role of parliamentarians in promoting and upholding them to combat the misuse of AI and the spread of disinformation and misinformation:

  • Safety;
  • Transparency; 
  • Accountability;
  • Privacy;
  • Equity;
  • Openness and Digital Literacy; and,
  • Responsibility and the Public Good.

This session will also emphasise the specific relationship between disinformation and the democratic process (including elections) and the integrity of democratic norms more generally.  

Speakers

Hon. Marco Mendicino MP, Member of Parliament for Eglinton—Lawrence, Parliament of Canada

07:15 - 08:15 | Mitigating AI-Generated Disinformation - The Role of Provenance
Session Overview

Attempts to combat and mitigate synthetic disinformation are multi-layered and, more recently, embedded in larger processes to regulate artificial intelligence. Spanning international processes, national regulations, industry initiatives and platform-specific actions, these efforts vary widely but coalesce around addressing the issues while also preserving democratic values and human rights.

The Coalition for Content Provenance and Authenticity (C2PA) includes companies such as Adobe, Arm, Intel, Microsoft and Truepic, and aims to combat misinformation and deepfakes by standardising the verification of the origins, circulation and trajectory of digital and synthetic media. C2PA's key innovation, built on advances in cryptography, is a specification for embedding reliable provenance data into images so that any subsequent tampering can be detected. This standard for fingerprinting digital content has been adopted in photojournalism initiatives, including prototypes in Ukraine that digitally sign images showing the effects of the current conflict.
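To make the underlying mechanism concrete, the sketch below illustrates the general principle behind provenance signing: hashing the exact bytes of an image and binding a signed manifest to that hash, so that any edit to the image (or to the manifest) breaks verification. This is a toy illustration of the idea only, not the C2PA specification itself; real implementations use asymmetric, certificate-based signatures and embed the manifest within the media file, and every name here (SIGNING_KEY, make_manifest, verify_manifest) is hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real provenance systems use asymmetric key pairs
# tied to a publisher's certificate.
SIGNING_KEY = b"publisher-secret-key"


def make_manifest(image_bytes: bytes, creator: str) -> dict:
    """Bind a provenance manifest to the exact bytes of an image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    manifest = {"creator": creator, "sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any alteration breaks verification."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed.get("sha256"):
        return False  # the image bytes no longer match the manifest
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


image = b"...raw image bytes..."
m = make_manifest(image, creator="Example News Agency")
print(verify_manifest(image, m))         # True: image is untouched
print(verify_manifest(image + b"x", m))  # False: image was altered
```

The design point is that the signature covers the content hash, so trust attaches to the specific bytes rather than to a filename or caption: a consumer can confirm who signed an image and that it has not changed since signing, though provenance alone cannot say whether the original content was truthful.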

This session will look to reassure attendees that there are innovative initiatives to combat the potentially subversive nature of AI-generated disinformation and synthetic media, including during elections and other periods of political sensitivity.

Speakers

Andy Parsons, Senior Director of Content Authenticity Initiative, Adobe

Bruce MacCormack, Senior Advisor (External) on Disinformation Defence initiatives, CBC/Radio-Canada, and Principal, Neural Transform

08:15 - 09:00 | AI Governance: Balancing Freedom of Expression & Effective Regulation
Session Overview

This session will address the delicate balance between fostering innovation in AI and ensuring robust regulatory frameworks that protect fundamental freedoms, particularly freedom of expression.

AI technologies, with their potential for widespread impact, pose unique challenges to traditional regulatory approaches. The rapid pace of AI development often outstrips the ability of legislation and policy to keep up, leading to a regulatory gap. This session will explore how governments, international bodies, and other stakeholders are responding to these challenges, seeking to establish governance structures that are both effective and respectful of human rights.

A key focus of the session will be on the implications of AI for freedom of expression. AI-driven tools, such as content moderation algorithms and automated decision-making systems, have significant consequences for how information is shared and accessed. These technologies hold the power to shape public discourse, raise concerns about censorship, and impact the diversity of viewpoints in the digital space.

The session will also delve into the complexities of creating regulatory frameworks that can adapt to the evolving nature of AI. It will examine best practices and innovative approaches to regulation that aim to protect public interests while encouraging technological advancement. Discussions will include the role of ethical guidelines, international cooperation, and stakeholder engagement in shaping the future of AI governance.

Speakers

Professor Chris Marsden, Professor of Artificial Intelligence, Technology and the Law, Monash University

Maria Luisa Stasi, Head of Law & Policy, ARTICLE 19

Day 3 | 6 December 2023

Individual Protection: Deepfakes, Disinformation and Reputational Threats

Time zones:

07:00 - 10:00 AST (UTC -4) | 11:00 - 14:00 GMT (UTC +0) | 16:30 - 19:30 IST (UTC +5:30) | 00:00 - 03:00 NZDT (UTC +13)

All times below are GMT (UTC +0).

Register for Day 3:

https://us06web.zoom.us/webinar/register/WN_Oa2jEvH_Tm683gGZ0dYLZA

11:00 - 11:45 | Synthetic Media and Parliamentarians: Understanding the Threats as Public Figures
Session Overview

In a recent (13 November 2023) New Yorker article titled ‘What The Doomsayers Get Wrong About Deepfakes’, Daniel Immerwahr writes:

"If by “deepfakes” we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren’t deep, and the deeps aren’t fake. In worrying about deepfakes’ potential to supercharge political lies and to unleash the infocalypse, moreover, we appear to be miscategorizing them. A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones. Manipulated media is far from harmless, but its harms have not been epistemic."

Meanwhile, in a recent (7 August 2023) article in the New York Times titled ‘What Can You Do When A.I. Lies About You?’, Tiffany Hsu highlights:

“The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.

One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment claim that he said had never been made, which supposedly took place on a trip that he had never taken for a school where he was not employed, citing a non-existent newspaper article as evidence. High school students in New York created a deepfake, or manipulated, video of a local principal that portrayed him in a racist, profanity-laced rant.”

Whilst Immerwahr argues that the current use of synthetic media and AI-generated deepfakery is a crude new form of political humour, reflective of the state of democratic institutions rather than a threat to them, Hsu points to dangerous, reality-altering technologies existing adjacent to an inadequate and lagging regulatory framework.

A murky picture therefore emerges about the nature of the threats posed by synthetic media and generative AI. It is also clear that there are not always malicious (or even human) intentions behind disinformation, with hallucinations and 'frankenpeople' often emerging from algorithms and incomplete information. How, therefore, should those in the public eye understand and prioritise the threats (or lack thereof)?

This session will aim to provide some guidance on how to categorise and conceptualise the threats posed by generative AI and synthetic media to public figures in democracies and, in particular, during the spate of high-profile elections scheduled for 2024. Speakers will be invited to give their own particular perspectives on the severity and immediacy of the threats as well as future threats that may be on the horizon.

Speakers

Hon. Ray Abela MP, Member of the Parliament of Malta

Professor Alexander Evans OBE, Professor in Practice at the London School of Economics

Ms Marietje Schaake (pre-recorded), International Policy Director at the Stanford University Cyber Policy Center and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence

11:45 - 12:30 | Synthetic Media and Human Rights: Building a Rights-Based Approach
Session Overview
“The continued narrative of AI as so highly technical and inscrutable that it escapes the grasp of human control and effective regulation in a human rights-compliant manner, […] dominates the debate.”

'Human rights by design: future-proofing human rights protection in the era of AI', Office of the Commissioner for Human Rights.

Whilst the importance of human rights in relation to artificial intelligence has been considered in specific areas, such as data protection and privacy, there has been less consideration of the more holistic, overarching relationship between the two, and even less attention to what a rights-based, human-centric approach might look like in practice.

This session will invite attendees to consider the value of taking a rights-based approach to regulating generative AI and the specific risks to individual rights posed by increasingly capable and available generative AI, including synthetic media such as deepfakes. 

The panel will consider the importance of embedding rights across the development of AI tools, rather than a strictly risk-based approach, as well as the importance of regulation and legislation in ensuring effective frameworks are in place.

The session will also note the potentially positive applications of AI towards protecting human rights, particularly as they might relate to political rights.

Speakers

Loui Mainga, Program Communications Coordinator, Africa, WITNESS

Professor Nnenna Ifeanyi-Ajufo, Professor of Law and Technology, Leeds Law School, Leeds Beckett University

12:30 - 13:30 | Deepfakes: A New Form of Online Gender-Based Violence?
Session Overview

This session will be a collaboration with the Commonwealth Women Parliamentarians network, which is hosting its 2023 Workshop on Gender Champions in Dar es Salaam, Tanzania, from 6 - 8 December 2023.

The session will bring together attendees at both events to consider the gendered impacts of deepfakes.

Speakers

Varaidzo Faith Magodo-Matimba, Grants and Growth Coordinator at Pollicy

Kiran Hassan, Coordinator of Freedom of Expression and Digital Rights at the Institute of Commonwealth Studies

Suzie Dunn, Senior Fellow at CIGI, a Ph.D. candidate at the University of Ottawa and an Assistant Professor of Law & Technology at Dalhousie University

13:30 - 14:00 | Closing Remarks
Session Overview

In place of the regular vote of thanks and formalities, the closing remarks will be delivered by Sophie Compton, Director of Another Body, a documentary film following a student's search for justice after discovering deepfake pornography of herself online. Sophie's remarks will share research into the cultures and communities that promote deepfake abuse and the pathways that enable bad actors, illustrate the harms through survivor testimonies, and discuss the legislative and regulatory landscape specific to deepfakes.

Speakers

Sophie Compton, Director, Another Body (2023)