Like Google, Meta and Microsoft, OpenAI offers online chatbots and other A.I. tools that can write social media posts, generate photorealistic images and write computer programs. In its report, the company said its tools had been used in influence campaigns that researchers had tracked for years, including a Russian campaign called Doppelganger and a Chinese campaign called Spamouflage.
The Doppelganger campaign used OpenAI’s technology to generate anti-Ukraine comments that were posted on X in English, French, German, Italian and Polish, OpenAI said. The company’s tools were also used to translate and edit pro-Russia articles about the war in Ukraine into English and French, and to turn anti-Ukraine news articles into Facebook posts.
OpenAI’s tools were also used in a previously unknown Russian campaign that targeted people in Ukraine, Moldova, the Baltic States and the United States, mostly via the Telegram messaging service, the company said. The campaign used A.I. to generate comments in Russian and English about the war in Ukraine, the political situation in Moldova and American politics. The effort also used OpenAI tools to debug computer code that was apparently designed to post information to Telegram automatically.
The political comments received few replies and “likes,” OpenAI said. The effort was also unsophisticated at times. At one point, the campaign posted text that had obviously been generated by A.I.: “As an A.I. language model, I am here to assist and provide the desired comment,” one post read. At other points, it posted in poor English, leading OpenAI to call the effort “Bad Grammar.”
Spamouflage, which has long been attributed to China, used OpenAI’s technology to debug code, to seek advice on how to analyze social media and to research current events, OpenAI said. Its tools were also used to generate social media posts disparaging people who had been critical of the Chinese government.