Our Work on AI

Policy submissions & advisory opinions

For over twenty-five years, WITNESS has pioneered efforts to ensure that human rights and human rights defenders benefit from major technology shifts. We are a trusted voice on protecting authentic footage and addressing the challenges of deceptive AI, grounded in our research, expertise, and cross-regional networks.

We actively engage with policymakers and tech companies in key geographies to ensure that human rights considerations are integrated into regulations, laws, and policies, with a focus on transparency, equity, and the safety of at-risk communities.

See below for our most recent policy submissions and advisory opinions.

Submitted on: October 14, 2025

WITNESS submitted input to the European Commission's Call for Evidence on the Digital Omnibus (Digital Package on Simplification) on October 14, 2025. Our input focuses on the implementation of the AI Act and sets out recommendations for the EU Commission to take into account.

Submitted in 2025

WITNESS submitted a response to the European Commission consultation to develop guidelines and a code of practice on AI transparency obligations, based on the provisions of the Artificial Intelligence Act (AI Act). The submission highlights technical standards, such as the C2PA Technical Specifications, as a potential mechanism for fulfilling transparency obligations.

Submitted on: September 5, 2025

WITNESS submitted evidence to the UK Parliament's Joint Committee on Human Rights as part of its inquiry on Human Rights and the Regulation of Artificial Intelligence. Drawing on our Technology, Threats and Opportunities (TTO) programme, the submission highlights how AI affects trust, authenticity, and accountability in audiovisual content, and sets out concrete recommendations to safeguard human rights.

Submitted on: May 30, 2025

In this submission to NIST’s AI Standards “Zero Drafts” Pilot Project, WITNESS calls for the integration of sociotechnical evaluation frameworks—specifically the TRIED (Truly Innovative and Effective Detection) Benchmark—into U.S. standards for AI detection and risk mitigation. The submission argues that current detection metrics focus too narrowly on technical accuracy and fail to reflect the realities faced by journalists, fact-checkers, and human rights defenders operating in high-risk information ecosystems.

Submitted on: May 1, 2025

This joint submission by WITNESS, the Co-Creation Studio at MIT, and the Archival Producers Alliance contributes to the UN Human Rights Council’s study on Artificial Intelligence and Creativity. Drawing from years of collaborative research on deepfakes, synthetic media, and creative integrity, it examines how AI reshapes authorship, consent, and cultural equity across art, documentary, and human rights advocacy. 

Submitted on: February 28, 2025

Our submission highlighted three issues arising in the context of technology-facilitated gender-based violence (TFGBV): ethical documentation of sexual and gender-based violence (SGBV); AI-driven SGBV (particularly the impact of synthetic media, deepfakes, and multimodal generative AI); and the impact of AI-driven SGBV during elections and conflict. We also discussed gaps in existing human rights responses to TFGBV and presented recommendations on how to address them.

Submitted on: February 2, 2024

Our submission covers a number of risks in current approaches to AI transparency, including indirect disclosure mechanisms such as watermarking, fingerprinting, and signed metadata. We also highlight the importance of centering the experience of those on the frontlines of human rights and democracy, outline the risks and limitations of current AI detection tools, and share what we have learned from working with leading AI detection experts.

Submitted on: September 5, 2023

WITNESS submitted evidence to the UK House of Lords Communications and Digital Committee's inquiry into Large Language Models. This submission responds to questions about opportunities and risks over the next three years, and how to address the risk of unintended consequences. It also puts forward a set of recommendations, based on our long-standing work with industry, academia, and civil society, to guard against the risks of large language models.

Submitted on: July 31, 2023

Our submission to the US President's Council of Advisors on Science and Technology (PCAST) Working Group on Generative AI puts forward a set of recommendations on how to identify and promote the beneficial deployment of generative AI, as well as how to best mitigate its risks.

Submitted on: July 7, 2023

Our submission to the US Office of Science and Technology Policy Request for Information focuses on WITNESS' recommendations to ensure that global human rights laws and standards are baked into the design, development, and deployment of generative AI in societies across the globe.

Submitted on: June 15, 2023

Our submission to the US National Telecommunications and Information Administration (NTIA) focuses on our guiding principles for developing AI accountability mechanisms, in particular in relation to synthetic media. We provide examples of how watermarking, labelling and provenance technologies can help inform people about how AI tools are operating, and why these approaches need to be grounded in internationally recognized human rights laws and standards.

Submitted on: May 31, 2023

Large social media platforms are developing tools to detect synthetic media. For these detection tools to be effective, they need to be trained on data that reflects real situations as closely as possible. In this advisory opinion to the European Commission, we outline how the DSA can help study and mitigate social media-related risks in human rights crises and conflict.

Submitted on: March 3, 2023

This submission shares WITNESS's views on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies. This document is shaped by our three decades of experience helping communities advocate for human rights change to create trustworthy information, to protect themselves against the misuse of their content, and to challenge misinformation that targets at-risk groups and individuals.