Deepfakes, Synthetic Media and Generative AI

WITNESS helps people use video and technology to protect and defend human rights. Our Technology Threats and Opportunities Team engages early with emerging technologies that have the potential to enhance or undermine trust in audiovisual content. Building upon years of foundational research and global advocacy on synthetic media, we’ve been preparing for the impact of AI on our ability to discern the truth. In consultation with human rights defenders, journalists, and technologists on four continents, we’ve identified the most pressing concerns and recommendations on what we must do now.

Featured work

This forward-looking report investigates the evolving relationship between synthetic media and the information landscape in situations of armed conflict and widespread violence, with a particular focus on implications for conflict resolution and peace processes. It is available in English and Arabic.

With the progress of generative AI technologies, synthetic media is becoming more realistic. We therefore see growing demand for AI detection tools that can determine whether a piece of audio or visual content has been generated or edited using AI. This piece discusses some of the limitations of detection tools and how to decide when to use them.

WITNESS’ Raquel Vazquez Llorente joined an expert panel at the 2023 Obama Foundation Democracy Forum to discuss the challenges and opportunities of AI for a healthy democracy. Fellow panelists included Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study; Anna Makanju, OpenAI’s Vice President of Global Affairs; Hany Farid, Professor at the University of California, Berkeley; and Terah Lyons, Founding Executive Director of the Partnership on AI.

We’re fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality. Watch WITNESS Executive Director Sam Gregory’s TED Democracy talk in which he highlights three key steps to protecting our ability to distinguish human from synthetic — and why fortifying our perception of truth is crucial to our AI-infused future.

WITNESS’ Raquel Vazquez Llorente argues that democracy faces ‘old, new, and borrowed’ challenges in the deepfake era, where existing inequalities may be exacerbated and inconvenient truths dismissed by those in power. An inclusive, human rights-based approach to AI development, she contends, can help us overcome past mistakes and safeguard democracy.

While generative AI and synthetic media have creative and commercial benefits, these tools are connected to a range of harms that disproportionately impact vulnerable communities. This article via TechPolicy, recently featured in President Obama’s AI reading list, explores how legislators can center human rights by drawing from the thinking of human rights organizations and professionals with regard to transparency, privacy, and provenance in audiovisual content.

On September 12, WITNESS’ Executive Director, Sam Gregory, presented testimony to the US Senate Subcommittee on Consumer Protection, Product Safety & Data Security on “The Need for Transparency in Artificial Intelligence.” The testimony, which can be read here, focused on how to maximize the benefits and minimize the harms and risks of multimodal audiovisual generative AI.

Our 2023 and 2024 consultations

WITNESS regularly convenes journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from different parts of the globe to discuss the threats and opportunities that deepfakes, generative AI and synthetic media bring to audiovisual witnessing. We seek to identify and prioritize collective responses that can have a positive impact on the information landscape.

Fortifying the Truth in Asia-Pacific

Read our blog outlining the key threats, opportunities and responses identified, or read the full report of the workshop.

Fortifying the Truth in Brazil

[Photo: Participants at the Brazil workshop pose for a picture, covering their faces.]

Read our blog outlining the key threats, opportunities and responses identified, or read the full report of the workshop (in Portuguese).

Fortifying the Truth in Africa

Read our blog outlining the key threats, opportunities and responses identified, or read the full report of the workshop.

Fortifying the Truth in Latin America and the Caribbean (LAC)

Read our blog outlining the key threats, opportunities and responses identified (also available in Spanish), or read the full report of the workshop (in Spanish).

Previous consultations (2019-2020)

Reports and articles

As generative AI technology evolves, so do the tools designed to detect it. Our blog discusses the need for standards to evaluate the effectiveness of AI detection tools informed by their application in real-world scenarios.
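To make “effectiveness” concrete, evaluation standards typically require reporting more than a single accuracy number. The sketch below is a minimal illustration in Python; the `score_fn` callable, the threshold, and the benchmark data are hypothetical stand-ins rather than any specific tool’s API. It shows how precision, recall, and false positive rate can be computed from a labeled test set, since each error type carries a different real-world cost.

```python
from typing import Callable, Sequence


def evaluate_detector(score_fn: Callable[[bytes], float],
                      items: Sequence[bytes],
                      labels: Sequence[bool],  # True = AI-generated
                      threshold: float = 0.5) -> dict:
    """Score each item with a hypothetical detector and tally outcomes."""
    tp = fp = tn = fn = 0
    for item, is_synthetic in zip(items, labels):
        flagged = score_fn(item) >= threshold
        if flagged and is_synthetic:
            tp += 1
        elif flagged and not is_synthetic:
            fp += 1
        elif is_synthetic:
            fn += 1
        else:
            tn += 1
    return {
        # Of everything flagged as AI-made, how much really was?
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        # Of all AI-made items, how many did the tool catch?
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        # How often is genuine footage wrongly flagged?
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }
```

In human rights contexts the false positive rate matters as much as recall: wrongly flagging genuine footage as synthetic can be used to discredit real evidence.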

In this co-authored piece for Just Security, Raquel Vazquez Llorente shares high-level findings from work by WITNESS and the TRUE project, exploring how synthetic media impacts trust in the information ecosystem. 

This guide is intended to assist judges and other decision makers in their assessment of open source information, by explaining some of the most common open source investigative techniques.

This article discusses the need to ‘fortify the truth’ by fostering resilient witnessing practices that can ensure trustworthy videos and strengthen the narratives of vulnerable communities. It identifies and speculates on actions at the tactical, strategic, tooling, technological, and policy levels, drawing upon WITNESS’s work on proactive preparation for emerging technologies and technical infrastructures.

Generative AI can protect witnesses’ identities, visualize survivors’ testimonies, reconstruct places, and create political satire. Check out our blog about using generative AI and synthetic media for human rights advocacy, the ethical challenges it poses, and the questions that organizations can ask.

How do we ensure technical solutions for enhancing confidence in media help rather than harm? In this article Sam Gregory discusses some core issues in pursuit of this goal.


Warning labels on AI-generated media give viewers little context. Artists and human rights advocates have forged a more effective—and creative—path. Read more here.

Around the world, deepfakes are becoming a powerful tool for artists, satirists and activists. But what happens when vulnerable people are not “in on the joke,” or when malign intentions are disguised as humor? Read this report that focuses on the fast-growing intersections between deepfakes and satire. Who decides what’s funny, what’s fair, and who is accountable?

This report presents 14 dilemmas that touch upon individual, technical and societal concerns around assessing and tracking the authenticity of multimedia. It focuses on the impact, opportunities, and challenges this technology holds for activists, human rights defenders and journalists, as well as the implications for society at large if verified-at-capture technology were to be introduced at a larger scale. Read the full report.

This article examines the various issues which arise when using technology to document crimes, and posits that the communities affected by conflict should be at the centre of how documentation tools are developed and deployed.  

How do we best prepare for, and not panic over, generative AI? WITNESS’ Sam Gregory discusses one area of preparation: authenticity and provenance infrastructure, which shows how media was made, where it came from, how it was edited, and how it was distributed.


This article discusses the use of provenance technologies as a way to trust video in the age of generative AI. It touches on the myths and realities that should be considered by those developing these technologies. Read the full article in Commonplace.

Dive deeper

What are the ethics of using deepfakes to anonymize sources in non-fiction media? What are the layers of consent that require consideration? What are the futures, the risks, and the opportunities of these types of manipulations? What strategies can non-fiction media makers (journalists, documentarians, and artists) implement to navigate the complex landscape of these technologies? See this conversation featuring WITNESS’ Raquel Vazquez Llorente.

The Partnership on AI’s Glossary for Synthetic Media Transparency Methods provides definitions for a number of key synthetic media transparency terms. WITNESS took part in a series of workshops that PAI ran and directly fed into the creation of this glossary.

WITNESS co-chairs the Threats and Harms task force of the C2PA, where it leads the harm assessment of these specifications, which are designed to track the source and history of multimedia across devices and platforms. WITNESS has influenced this and related initiatives at an early stage to empower critical voices globally and bolster a human rights framework. Read our blog.

Provenance and authenticity tools would enable you to show a range of information about how, where and by whom a piece of media was created, and how it was subsequently edited, changed and distributed. Check out this video series to learn more about provenance and authenticity, the C2PA standards, and how we may fortify the truth for accountability and awareness.
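As a rough intuition for how signed provenance metadata works, here is a minimal sketch only: the real C2PA specification defines a much richer, standardized manifest format, and every name below is a hypothetical simplification. The Python example binds a set of claims to a media file via a cryptographic hash and an Ed25519 signature, so any change to either the media or the claims makes verification fail.

```python
# Illustrative sketch of signed provenance metadata (NOT the C2PA format).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_manifest(media_bytes: bytes, claims: dict,
                  signer: ed25519.Ed25519PrivateKey) -> dict:
    """Bind capture/edit claims to the media via a hash, then sign them."""
    payload = {"media_sha256": hashlib.sha256(media_bytes).hexdigest(),
               "claims": claims}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": signer.sign(body).hex()}


def verify_manifest(media_bytes: bytes, manifest: dict,
                    signer_public: ed25519.Ed25519PublicKey) -> bool:
    """Check the signature AND that the media itself is unchanged."""
    body = json.dumps(manifest["payload"], sort_keys=True).encode()
    try:
        signer_public.verify(bytes.fromhex(manifest["signature"]), body)
    except InvalidSignature:
        return False
    return (manifest["payload"]["media_sha256"]
            == hashlib.sha256(media_bytes).hexdigest())


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    media = b"...raw video bytes..."
    manifest = make_manifest(media, {"captured_by": "camera-app",
                                     "edits": ["crop"]}, key)
    print(verify_manifest(media, manifest, key.public_key()))         # True
    print(verify_manifest(media + b"x", manifest, key.public_key()))  # False
```

This is the basic mechanism that lets provenance travel with media: verification needs nothing but the file, the manifest, and the signer’s public key.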

Deepfakery is a series of critical conversations exploring the intersection of satire, art, human rights, disinformation, and journalism. Join WITNESS and the Co-Creation Studio at MIT Open Documentary Lab for interdisciplinary discussions with leading artists, activists, academics, filmmakers and journalists. See the full series here.

Policy submissions & advisory opinions

Our submission covers a number of risks with current approaches to AI transparency, including indirect disclosure mechanisms such as watermarking, fingerprinting, and signed metadata. We also highlight the importance of centering the experience of those on the frontlines of human rights and democracy, outline the risks and limitations of current AI detection tools, and share what we have learned from working with leading AI detection experts.

WITNESS evidence to the UK Communications and Digital Committee, Lords Select Committee Inquiry into Large Language Models. This submission responds to questions about opportunities and risks over the next three years, and how to address the risk of unintended consequences. It also puts forward a set of recommendations, based on our long-standing work with industry, academia and civil society, to guard against the risks of large language models.

Our submission to the US President’s Council of Advisors on Science and Technology (PCAST) Working Group on Generative AI puts forward a set of recommendations on how to identify and promote the beneficial deployment of generative AI, as well as how best to mitigate its risks.

Our submission to the US Office of Science and Technology Policy Request for Information focuses on WITNESS’ recommendations to ensure that global human rights laws and standards are baked into the design, development and deployment of generative AI in societies across the globe.

Our submission to the US National Telecommunications and Information Administration (NTIA) focuses on our guiding principles for developing AI accountability mechanisms, in particular in relation to synthetic media. We provide examples of how watermarking, labelling and provenance technologies can help inform people about how AI tools are operating, and why these approaches need to be grounded in internationally recognized human rights laws and standards.

Large social media platforms are developing tools to detect synthetic media. For these detection tools to be effective, they need to be trained on data that reflects real situations as closely as possible. In this advisory opinion to the European Commission, we outline how the DSA can help study and mitigate social media-related risks in human rights crises and conflicts.

This submission shares WITNESS’s views on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies. It is shaped by our three decades of experience helping communities advocate for human rights change, create trustworthy information, protect themselves against the misuse of their content, and challenge misinformation that targets at-risk groups and individuals.

Panel discussions

WITNESS’ Raquel Vazquez Llorente addressed the OSCE in Vienna on April 22, pointing to areas where intervention by member states, as well as their cooperation with civil society organizations and the private sector, is key to ensuring that the documentation of rights violations remains resilient to evolving technologies.

In this event at the International Journalism Festival in Perugia, Italy, WITNESS’ Sam Gregory provides an overview of the current trends and future of synthetic media in the context of 2024 as a major election year.

In the news...

[Aspen Digital] Reporting on AI Hall of Fame: 2023 Winners from WITNESS

[Al Jazeera] “Inflection point”: AI meme wars hit India election, test social platforms

[PBS: Amanpour and Company] “Take on Fake:” How AI-Generated Content Is Impacting Elections

[The Economic Times] Detecting deepfakes should not be the sole responsibility of platforms: Sam Gregory

[Carnegie Council] Prepare, Don’t Panic: Navigating the Digital Rights Landscape, with Sam Gregory

[Daily Dot] Explicit AI images of Taylor Swift got 22 million views before X cracked down

[WIRED] Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs

[WIRED] If Taylor Swift Can’t Defeat Deepfake Porn, No One Can

[WIRED] The Biden Deepfake Robocall Is Only the Beginning

[cybernews] AI-faked Biden robocall told voters to skip New Hampshire primary

[Bloomberg] Taylor Swift, Joe Biden, Dead Kids: Fake AI Content Floods In

[WIRED] Worried About Political Deepfakes? Beware the Spread of ‘Cheapfakes’

[WIRED] Generative AI Learned Nothing From Web 2.0

[Bloomberg] Facebook’s Tolerance for Audio Deepfakes Is Absurd

[The Messenger] ‘It’s an Arms Race’: How We’re Already Losing The Battle to Stop Harmful AI Fakes

[FT] Deepfakes for $24 a month: how AI is disrupting Bangladesh’s election

[NYT] The 2024 Election Will Be Unlike Any Other. Is the Media Ready?

[WIRED] Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy

[THE VERGE] Watermarks aren’t the silver bullet for AI misinformation

[WashPost] AI fake nudes are booming. It’s ruining real teens’ lives

[WIRED] A Doctored Biden Video Is a Test Case for Facebook’s Deepfake Policies

[rest of world] An Indian politician says scandalous audio clips are AI deepfakes.

[The New Yorker] Will Biden’s Meetings with A.I. Companies Make Any Difference?

[The Atlantic] The AI Crackdown Is Coming

[NPR] AI-generated images are everywhere. Here’s how to spot them

[The Hill] To battle deepfakes, our technologies must track their transformations

[FT] TikTok urged to preserve Ukraine content for war crime investigations

[BBC Radio 4] An end to deepfakes? (Positive Thinking podcast)

[WSJ] China, a Pioneer in Regulating Algorithms, Turns Its Focus to Deepfakes

[The Economist] Proving a photo is fake is one thing. Proving it isn’t is another

[Washington Post] Fake images of Trump arrest show ‘giant step’ for AI’s disruptive power

[Opinio Juris] Coding Justice? The Tradeoffs of Using Technology for Documenting Crimes in Ukraine

Our Video Archive

Since 2018, WITNESS has been raising awareness about how emerging technologies can impact people’s trust in audiovisual content. Check out our video archive to watch years’ worth of our video interviews, panels, and presentations on deepfakes, synthetic media, and more.

Our next consultations will be in São Paulo, Brazil, and in South-East Asia (location TBC) in the first quarter of 2024.

If you want to be part of the conversation, or have suggestions for collaborations, please get in touch with us via email.