Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk
OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models could be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report drawing on more than a year of research. The report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns, and introduces a framework for analyzing potential mitigations. Read the full report here.