Reducing the Risk of the Exponential Growth of Automated Influence Operations

Of the research outlets we have discovered since the launch of OODALoop.com, the Center for Security and Emerging Technology (CSET), OpenAI, and the Stanford Internet Observatory are best-in-class sources on topics of vital interest. A new report – “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” – is the result of a partnership between these three organizations “to explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

“The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts and culminated in a co-authored report building on more than a year of research. Their report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing mitigation strategies.” (1)

We have been following this research since its inception with great interest. We also want to applaud OpenAI for participating in this research over the past year. The press and the AI marketplace have put the cart before the horse with their overly positivist, groupthink-ish, techno-utopian initial response to OpenAI’s recently released ChatGPT platform. It is encouraging to see that OpenAI’s management and researchers are more clear-eyed and have dedicated resources to understanding the potential implications and impact – for good and for ill – of their technology.

Included here is a summary of this unique research collaboration’s findings on the potential unintended consequences of generative language models, especially their use in disinformation campaigns—and how to reduce that risk.

Summary

As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education, and science. But, as with any new technology, it is worth considering how they can be misused. Against the backdrop of recurring online influence operations—covert or deceptive efforts to influence the opinions of a target audience—the paper asks:

How might language models change influence operations, and what steps can be taken to mitigate this threat?

Our work brought together different backgrounds and expertise—researchers with a grounding in the tactics, techniques, and procedures of online disinformation campaigns, as well as machine learning experts in the generative artificial intelligence field—to base our analysis on trends in both domains.

We believe that it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers who are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.

How Could AI Affect Influence Operations?

When researchers evaluate influence operations, they consider the actors, behaviors, and content. The widespread availability of technology powered by language models has the potential to impact all three facets:

  • Actors: Language models could drive down the cost of running influence operations, placing them within reach of new actors and actor types. Likewise, propagandists-for-hire that automate production of text may gain new competitive advantages.
  • Behavior: Influence operations with language models will become easier to scale, and tactics that are currently expensive (e.g., generating personalized content) may become cheaper. Language models may also enable new tactics to emerge—like real-time content generation in chatbots.
  • Content: Text creation tools powered by language models may generate more impactful or persuasive messaging compared to propagandists, especially those who lack the requisite linguistic or cultural knowledge of their target. They may also make influence operations less discoverable, since they can repeatedly create new content without resorting to copy-pasting and other noticeable time-saving behaviors (see the sketch after this list).

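To make that discoverability point concrete, consider the kind of near-duplicate (“copypasta”) detection that platforms and researchers commonly rely on to flag copy-pasted campaign text. The sketch below is illustrative only and not drawn from the report; the shingle size and sample posts are our own assumptions. It scores overlap between posts with word-shingle Jaccard similarity, which catches verbatim reuse but rates a freshly generated paraphrase as unrelated.

```python
# Minimal sketch of near-duplicate ("copypasta") detection, illustrative only.
# The shingle size (3 words) and the sample posts are arbitrary assumptions.

def shingles(text, n=3):
    """Return the set of n-word shingles in a lowercased post."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "Support the new bridge project and tell your neighbors to vote yes."
repost   = "Support the new bridge project and tell your neighbors to vote yes!"
rewrite  = "Tell everyone you know to back the bridge plan at the polls."

print(jaccard(shingles(original), shingles(repost)))   # high score: copy-paste is easy to flag
print(jaccard(shingles(original), shingles(rewrite)))  # near zero: a fresh paraphrase slips past
```

Because a language model can emit a new paraphrase for every post, surface-level signals like this one lose their bite—which is exactly the discoverability concern raised above.
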
Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations. Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation-states may invest in the technology themselves.
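
The open-source point is also worth grounding: freely downloadable models can already be run locally with a few lines of code, entirely outside any provider’s API controls. The snippet below is a generic illustration of our own, not something taken from the report; it uses the Hugging Face transformers library and the small, openly available GPT-2 checkpoint simply to show how low the technical barrier to local text generation already is.

```python
# Illustrative only: local text generation with a small open-source model.
# Once the weights are downloaded, no API key or hosted service is involved.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any open checkpoint would do
result = generator(
    "The town council meets on Tuesday to discuss the new park, and",
    max_new_tokens=60,
    do_sample=True,
)
print(result[0]["generated_text"])
```

None of this requires a frontier model or restricted API access, which is why controls on the most advanced, API-gated systems do not settle the question on their own.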

READ MORE HERE


About the Author: Patriotman

Patriotman currently ekes out a survivalist lifestyle in a suburban northeastern state as best as he can. He has varied experience in political science, public policy, biological sciences, and higher education. Proudly Catholic and an Eagle Scout, he has no military experience and thus offers a relatable perspective for the average suburban prepper who is preparing for troubled times on the horizon with less than ideal teams and in less than ideal locations. Brushbeater Store Page: http://bit.ly/BrushbeaterStore
