WITNESS helps people use video and technology to protect and defend human rights. Our Technology Threats and Opportunities Team engages early on with emerging technologies that have the potential to enhance or undermine our trust in audiovisual content. Building upon years of foundational research and global advocacy on synthetic media, we’ve been preparing for the impact of AI on our ability to discern the truth. In consultation with human rights defenders, journalists, and technologists on four continents, we’ve identified the most pressing concerns and recommendations on what we must do now.
WITNESS recently hosted over 20 journalists, fact-checkers, technologists, policy advocates, creators, human rights defenders and community activists from different parts of Africa in Nairobi, Kenya. In our two-day workshop, we discussed the threats and opportunities that deepfakes, generative AI and synthetic media bring to audiovisual witnessing, identifying and prioritising collective responses that can have a positive impact on the information landscape.
Read our blog and the full report of the workshop.
Media literacy campaigns should inform the public about what synthetic media is, and what is (and is not) possible with new forms of multimedia manipulation. These initiatives can help prepare the public to view and consume media more critically without adding to the hype around generative AI. Moreover, media literacy should also be a vehicle for empowering individuals and communities to engage with governments, civil society and companies to develop responses and solutions that reflect their needs, circumstances and aspirations. In this regard, media literacy campaigns acquire critical importance and are a precursor to effective and inclusive public policy making.
Responses and solutions should not place the burden of responsibility on the end-users of synthetic media and generative AI tools, or consumers of digital content. Instead, these responses should set expectations across the pipeline, including foundational model researchers, tool makers, distribution platforms and other upstream stakeholders such as legislators and regulators. These actors should bear responsibility to guarantee transparency in how a piece of media is created or manipulated as it is circulated online, ensuring that media consumers are effectively informed about the nature of the content they are consuming.
More importantly, any solution should include input from global stakeholders, with an eye towards defending human rights and protecting privacy.
Alliances can help civil society organisations ‘punch higher’. Participants discussed that, despite targeted advocacy and some efforts to leverage existing networks, they have not been able to influence legislation and policy. Well-organised networks can help digital advocates, communities and activists gain the credibility and the resources that are often required to get ‘into the room’.
One specific strategy for these networks to influence these spaces is to fill gaps by producing evidence-led, foundational research, and by communicating these findings effectively, for example via policy briefs. Similarly, regionally led networks would be well placed to monitor the tendency to copy legislation from Europe or the United States without proper consideration of the local context, and could also take note of China’s influence in the African digital space.
If you want to be part of the conversation, or have suggestions for collaborations, please get in touch with us via email.
WITNESS evidence to the UK Communications and Digital Committee, Lords Select Committee Inquiry into Large Language Models. This submission responds to questions about opportunities and risks over the next three years, and how to address the risk of unintended consequences. It also puts forward a set of recommendations, based on our long-standing work with industry, academia and civil society, to guard against the risks of large language models.
Our submission to the US President’s Council of Advisors on Science and Technology (PCAST) Working Group on Generative AI puts forward a set of recommendations on how to identify and promote the beneficial deployment of generative AI, as well as how to best mitigate its risks.
Our submission to the US Office of Science and Technology Policy Request For Information focuses on WITNESS’ recommendations to ensure that global human rights laws and standards are baked into the design, development and deployment of generative AI into societies across the globe.
Our submission to the US National Telecommunications and Information Administration (NTIA) focuses on our guiding principles for developing AI accountability mechanisms, in particular in relation to synthetic media. We provide examples of how watermarking, labelling and provenance technologies can help inform people about how AI tools are operating, and why these approaches need to be grounded in internationally recognized human rights laws and standards.
Large social media platforms are developing tools to detect synthetic media. For these detection tools to be effective, they need to be trained on data that reflects real-world situations as closely as possible. In this advisory opinion to the European Commission, we outline how the DSA can help study and mitigate social media-related risks in human rights crises and conflicts.
WITNESS co-chairs the Threats and Harms task force of the Coalition for Content Provenance and Authenticity (C2PA), where it leads the harm assessment of the C2PA specifications, which are designed to track the source and history of multimedia across devices and platforms. WITNESS has influenced this and related initiatives at an early stage to empower critical voices globally and bolster a human rights framework. Read our blog.
Provenance and authenticity tools enable you to show a range of information about how, where and by whom a piece of media was created, and how it was subsequently edited, changed and distributed. Check out this video series to learn more about provenance and authenticity, the C2PA standards, and how we may fortify truth for accountability and awareness.
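The C2PA specifications define their own signed manifest format; as a loose illustration of the underlying idea only (not the actual C2PA data model), the sketch below shows a hash-chained edit history in which each record commits to the one before it, so tampering with any step breaks verification of everything after it. All record fields and actor names here are invented for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def record_step(prev_hash, action, actor):
    """Create one edit record that commits to the previous record's hash."""
    record = {"prev": prev_hash, "action": action, "actor": actor}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

def verify(chain):
    """Recompute every digest in order; False if any record was altered."""
    prev = GENESIS
    for record, digest in chain:
        expected = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or expected != digest:
            return False
        prev = digest
    return True

# Build a small chain: capture, then an edit, then publication.
chain = []
h = GENESIS
for action, actor in [("captured", "camera-app"),
                      ("cropped", "editor"),
                      ("published", "newsroom")]:
    record, h = record_step(h, action, actor)
    chain.append((record, h))
```

An intact chain verifies; altering any earlier record (say, changing "cropped" to something else) makes `verify` return `False`, which is the basic property that real provenance standards build on, alongside cryptographic signatures that bind records to identities.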
Deepfakery is a series of critical conversations exploring the intersection of satire, art, human rights, disinformation, and journalism. Join WITNESS and the Co-Creation Studio at MIT Open Documentary Lab for interdisciplinary discussions with leading artists, activists, academics, film-makers and journalists. See the full series here.
Generative AI can protect witnesses’ identities, visualize survivors’ testimonies, reconstruct places and create political satire. Check out our blog about using generative AI and synthetic media for human rights advocacy, the ethical challenges it poses, and the questions that organizations can ask.
This article discusses the use of provenance technologies as a way to trust video in the age of generative AI. It touches on the myths and realities that should be considered by those developing these technologies. Read the full article in Commonplace.
Around the world, deepfakes are becoming a powerful tool for artists, satirists and activists. But what happens when vulnerable people are not “in on the joke,” or when malign intentions are disguised as humor? Read this report that focuses on the fast-growing intersections between deepfakes and satire. Who decides what’s funny, what’s fair, and who is accountable?
Partnership on AI, in collaboration with WITNESS and other key allies, has published the Responsible Practices for Synthetic Media Framework, which offers guidelines for developing, creating, sharing, and publishing synthetic media ethically and responsibly. Read our blog.
This report focuses on 14 dilemmas that touch upon individual, technical and societal concerns around assessing and tracking the authenticity of multimedia. It focuses on the impact, opportunities, and challenges this technology holds for activists, human rights defenders and journalists, as well as the implications for society-at-large if verified-at-capture technology were to be introduced at a larger scale. Read the full report.
[FT] TikTok urged to preserve Ukraine content for war crime investigations
[BBC Radio 4] An end to deepfakes? (Positive Thinking podcast)
[Nieman Lab] Synthetic media forces us to understand how media gets made
[WSJ] China, a Pioneer in Regulating Algorithms, Turns Its Focus to Deepfakes
[The Economist] Proving a photo is fake is one thing. Proving it isn’t is another
[Washington Post] Fake images of Trump arrest show ‘giant step’ for AI’s disruptive power
[Opinio Juris] Coding Justice? The Tradeoffs of Using Technology for Documenting Crimes in Ukraine
Since 2018, WITNESS has been raising awareness about how emerging technologies can affect people’s trust in audiovisual content. Check out our video archive to watch years’ worth of our video interviews, panels, and presentations on deepfakes, synthetic media, and more.