Abstract
This report assesses two questions: how might language models change influence operations, and what steps can be taken to mitigate these threats? Any such assessment is inherently speculative, as both AI and influence operations are changing quickly.
Many of the ideas in this report were informed by a workshop convened by the authors in October 2021, which brought together 30 experts in AI, influence operations, and policy analysis to discuss the potential impact of language models on influence operations. The report does not represent the consensus of workshop participants, and any mistakes are our own.
We hope this report is useful to disinformation researchers interested in the impact of emerging technologies, to AI developers setting their policies and investments, and to policymakers preparing for challenges at the intersection of technology and society.